CN108182183B - Picture character translation method, application and computer equipment - Google Patents
Picture character translation method, application and computer equipment Download PDFInfo
- Publication number
- CN108182183B (application CN201711447783.8A)
- Authority
- CN
- China
- Prior art keywords
- picture
- translated
- translation
- paragraph
- text
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/42—Data-driven translation
- G06F40/45—Example-based machine translation; Alignment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/103—Formatting, i.e. changing of presentation of documents
- G06F40/106—Display of layout of documents; Previewing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/58—Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/148—Segmentation of character regions
- G06V30/153—Segmentation of character regions using recognition of characters or words
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Machine Translation (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention provides a picture character translation method, application, and computer device. The method includes: acquiring a picture translation request, where the request includes a picture to be translated and a target language type; if the current translation mode is determined to be contrast translation, performing text recognition and paragraph division on the picture to be translated to determine each original text paragraph it contains; translating each original text paragraph to generate the corresponding translated text paragraph in the target language; and displaying the original text paragraphs and translated text paragraphs in sequence, side by side, in a preset style. Displaying each original paragraph alongside its translation in a preset style improves the readability of the translation result, reduces the time the user spends consulting it, and improves the user experience.
Description
Technical Field
The invention relates to the field of computer technology, and in particular to a picture character translation method, application, and computer device.
Background
With the rapid development of digital technology, terminal devices such as mobile phones are equipped with high-performance digital cameras. When readers encounter unfamiliar foreign-language text, they can photograph it with the terminal device at any time, recognize the characters in the photographed picture using the device's character recognition technology, and then translate the recognition result.
In the prior art, when the text in a picture is translated, only full-text translation can be performed and only the full-text translation result is displayed. As a result, the translation result is hard to read, the user spends extra time consulting it, and the user experience is poor.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, the invention provides a picture character translation method that displays each original text paragraph alongside its translated text paragraph in sequence, in a preset style, thereby improving the readability of the translation result, reducing the time the user spends consulting it, and improving the user experience.
The invention also provides a picture character translation application.
The invention also provides a computer device.
The invention also provides a computer readable storage medium.
An embodiment of the first aspect of the invention provides a picture character translation method, including: acquiring a picture translation request, where the request includes a picture to be translated and a target language type; if the current translation mode is determined to be contrast translation, performing text recognition and paragraph division on the picture to be translated to determine each original text paragraph it contains; translating each original text paragraph to generate the corresponding translated text paragraph in the target language; and displaying the original text paragraphs and translated text paragraphs in sequence, side by side, in a preset style.
According to the picture character translation method of the embodiment of the invention, when a picture translation request is obtained and the current translation mode is determined to be contrast translation, text recognition and paragraph division can be performed on the picture to be translated to determine each original text paragraph it contains, and each original text paragraph is translated to generate the corresponding translated text paragraph in the target language, so that the original and translated paragraphs can be displayed in sequence, side by side, in a preset style. Displaying each original paragraph alongside its translation in this way improves the readability of the translation result, reduces the time the user spends consulting it, and improves the user experience.
An embodiment of the second aspect of the invention provides a picture character translation application, including: a first acquisition module configured to acquire a picture translation request, where the request includes a picture to be translated and a target language type; a first determination module configured to, when the current translation mode is determined to be contrast translation, perform text recognition and paragraph division on the picture to be translated and determine each original text paragraph it contains; a translation module configured to translate each original text paragraph and generate the corresponding translated text paragraph in the target language; and a first display module configured to display the original text paragraphs and translated text paragraphs in sequence, side by side, in a preset style.
In the picture character translation application of the embodiment of the invention, when a picture translation request is obtained and the current translation mode is determined to be contrast translation, text recognition and paragraph division may be performed on the picture to be translated to determine each original text paragraph it contains, and each original text paragraph is translated to generate the corresponding translated text paragraph in the target language, so that the original and translated paragraphs are displayed in sequence, side by side, in a preset style. Displaying each original paragraph alongside its translation in this way improves the readability of the translation result, reduces the time the user spends consulting it, and improves the user experience.
An embodiment of the third aspect of the invention provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the picture character translation method of the first aspect.
An embodiment of the fourth aspect of the invention provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the picture character translation method of the first aspect.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart of a method for text translation of a picture according to an embodiment of the present invention;
FIGS. 1A-1F are exemplary diagrams of a display interface according to one embodiment of the invention;
FIG. 2 is a flowchart of a method for translating picture text according to another embodiment of the present invention;
FIGS. 2A-2E are exemplary diagrams of a display interface according to another embodiment of the invention;
FIG. 3 is a block diagram of a picture text translation application according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer throughout to elements that are the same or similar or that have the same or similar functions. The embodiments described below with reference to the drawings are illustrative, are intended to explain the invention, and are not to be construed as limiting the invention.
Specifically, the embodiments of the present invention provide a picture character translation method that addresses the prior-art problem that, when the characters in a picture are translated, only full-text translation can be performed and only the full-text result displayed, so that the translation result is hard to read, the user spends extra time consulting it, and the user experience is poor.
According to the picture character translation method of the embodiment of the invention, when a picture translation request is acquired and the current translation mode is determined to be contrast translation, text recognition and paragraph division can be performed on the picture to be translated to determine each original text paragraph it contains, and each original text paragraph is translated to generate the corresponding translated text paragraph in the target language, so that the original and translated paragraphs are displayed in sequence, side by side, in a preset style. Displaying each original paragraph alongside its translation in this way improves the readability of the translation result, reduces the time the user spends consulting it, and improves the user experience.
The following describes a picture character translation method, an application, and a computer device according to an embodiment of the present invention in detail with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for translating picture text according to an embodiment of the present invention.
As shown in fig. 1, the picture character translation method includes:
Step 101, acquiring a picture translation request, where the request includes a picture to be translated and a target language type.
Specifically, the picture character translation method of the embodiment of the present invention is executed by the picture character translation application of the embodiment of the present invention. The application can be deployed in any computer device with a display screen, such as a mobile phone or a computer, to translate the picture to be translated. The embodiment of the present invention is described taking as an example an application deployed in a mobile phone with a touch screen.
The picture to be translated may be a picture stored at a preset location in the computer device, or a picture taken directly by the user with a camera of the computer device; no limitation is imposed here.
The target language type can be any language, such as Chinese or English. The embodiment of the invention is described taking as an example a picture whose text is in English and a target language of Chinese.
In a specific implementation, buttons for uploading the picture to be translated in different ways can be arranged on the display interface of the picture character translation function (the picture translation function interface) of the computer device, and the user can upload the picture in the corresponding way by touching the corresponding button.
For example, referring to fig. 1A and fig. 1B, after entering the picture translation function interface by touching area 1 in fig. 1A, the user can select a picture to upload from a preset location of the computer device by touching button 1 in fig. 1B, or take a picture with a camera of the computer device by touching button 2 in fig. 1B. The picture character translation application thus acquires the picture to be translated, either uploaded from the preset location or taken with the camera.
It should be noted that, when the user takes the picture to be translated with a camera of the computer device, a text alignment reference line may be displayed in the preview image, as shown in fig. 1B, so that the user can align the text in the picture along the reference line while shooting. This produces a better picture and therefore a better translation result.
In addition, in the picture translation function interface, a language direction toolbar shown in a black area in the upper part of fig. 1B may be displayed, and the user may select a target language type by operating in the language direction toolbar.
Specifically, the picture translation request can be triggered when the user touches a button with a picture translation function by clicking, long-pressing, sliding, or the like. For example, the user may touch button 1 in fig. 1B to upload a picture to be translated from a preset location of the computer device and thereby trigger a picture translation request; alternatively, the user may touch button 2 in fig. 1B to take a picture to be translated with a camera of the computer device and thereby trigger a picture translation request.
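As a concrete illustration of step 101 and the mode check that follows it, the request handling above can be sketched in Python. All names here (the request fields, the mode values, the `dispatch` function) are illustrative assumptions for clarity, not part of the patent.

```python
# Hypothetical sketch of the picture translation request (step 101) and the
# translation-mode check; field and mode names are assumptions.
from dataclasses import dataclass
from enum import Enum

class TranslationMode(Enum):
    FULL_TEXT = "full_text"  # translate and display the whole page at once
    CONTRAST = "contrast"    # display original and translated paragraphs in pairs

@dataclass
class PictureTranslationRequest:
    picture: bytes           # picture to be translated (uploaded or freshly taken)
    target_language: str     # e.g. "zh" when translating English text to Chinese
    mode: TranslationMode = TranslationMode.FULL_TEXT

def dispatch(request: PictureTranslationRequest) -> str:
    # route the request to the processing pipeline matching its mode
    if request.mode is TranslationMode.CONTRAST:
        return "recognize text, divide into paragraphs, translate per paragraph"
    return "recognize and translate the full text"
```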
Step 102, if the current translation mode is determined to be contrast translation, performing text recognition and paragraph division on the picture to be translated, and determining each original text paragraph included in the picture to be translated.
Specifically, the picture text translation application may determine that the current translation mode is the contrast translation when the contrast translation instruction sent by the user is acquired. That is, before step 102, the method may further include:
and acquiring a contrast type translation instruction sent by a user.
The contrasting translation instruction may be triggered by the user under various conditions.
For example, when the picture translation request is acquired, the picture to be translated can be displayed on the display interface, so that the user can issue a contrast translation instruction directly on that interface. Alternatively, after the picture translation request is obtained, the picture character translation application may perform full-text recognition and translation on the picture to be translated and display the full-text translation result on the display interface; when the user wishes to see the original text and the translated text at the same time, the user can issue a contrast translation instruction on the translation result display interface corresponding to the picture to be translated.
That is, obtaining the contrasting translation instruction sent by the user may include:
acquiring a contrast type translation instruction sent by a user on a picture display interface to be translated;
or,
and acquiring a contrast type translation instruction sent by a user on a translation result display interface corresponding to the picture to be translated.
Correspondingly, when the picture character translation application acquires the picture translation request, it can first display the picture to be translated on the display interface, together with buttons such as "contrast translation" and "full-text translation", so that the user can touch the corresponding button as required to trigger the corresponding translation mode instruction. If the user touches the contrast translation button, the application determines that the current translation mode is contrast translation, and can then perform text recognition and paragraph division on the picture to be translated to determine each original text paragraph it contains.
Alternatively, after the picture translation request is obtained, the picture character translation application may perform full-text recognition and translation on the picture to be translated and then, as shown in fig. 1C, display the full-text translation result on the display interface together with a button with a contrast translation function, such as button 3 in fig. 1C. When the user wants to see the original text and the translated text at the same time, the contrast translation instruction can be triggered by touching this button. After acquiring the contrast translation instruction, the application performs text recognition and paragraph division on the picture to be translated and determines each original text paragraph it contains.
In a specific implementation, the text recognition and paragraph division processing may be performed on the picture to be translated in the following manner shown in steps 102a to 102c, so as to determine each original text paragraph included in the picture to be translated.
Step 102a, performing text recognition on the picture to be translated, and determining the type, the interval between characters, the size and the style of each character in the picture to be translated.
The type of a character may be, for example, a punctuation mark or a letter. The spacing between characters may be, for example, the line spacing. The character size may be the specific font size or dimensions of the character. The character style may include whether the character is bold, italic, and so on.
And step 102b, determining a paragraph relationship between words and sentences in the picture to be translated according to the type of each character, the space between the characters, the size of the characters and the style in the picture to be translated.
And 102c, according to the paragraph relation among words and sentences, paragraph division is carried out on the picture to be translated.
A word or sentence here refers to a sentence containing one or more words.
The paragraph relationship may be the degree of paragraph correlation between words and sentences.
It will be appreciated that an article typically contains various types of characters, each playing a different role. For example, within a sentence, pauses are marked by commas, and the sentence ends with a period, question mark, or exclamation mark. In addition, when an article is composed, the character size, character style, and character spacing generally differ between the title, the abstract, and the body paragraphs.
In the embodiment of the present invention, the picture to be translated may be segmented into a plurality of words and sentences according to the type of each character in the picture to be translated, and then, a paragraph relationship between the words and sentences in the picture to be translated is determined according to the space between the characters, the size of the characters, and the style, so that the picture to be translated is segmented according to the paragraph relationship between the words and sentences.
For example, suppose the picture to be translated contains an article, and 7 sentences are identified in it from the character types. The line spacing among sentences 1-4 is uniform, the line spacing among sentences 5-7 is uniform, and the spacing between sentences 4 and 5 is larger than the rest. It can then be determined that sentences 1-4 are strongly correlated and sentences 5-7 are strongly correlated, so sentences 1-4 are divided into one paragraph and sentences 5-7 into another.
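The spacing-based division of steps 102b and 102c can be sketched as follows. The `Line` record and the 1.5x-of-line-height threshold are illustrative assumptions (the patent also weighs character type, size, and style); the sketch simply reproduces the 7-sentence example above, starting a new paragraph wherever the vertical gap is much larger than the usual line height.

```python
# Minimal sketch: group recognized text lines into paragraphs by line spacing.
# `Line` and the gap_factor threshold are illustrative assumptions, not the
# patent's exact criteria.
from dataclasses import dataclass

@dataclass
class Line:
    text: str
    top: int      # y-coordinate of the line's top edge, in pixels
    height: int   # line height, in pixels

def split_paragraphs(lines, gap_factor=1.5):
    """Start a new paragraph wherever the vertical gap between two consecutive
    lines exceeds gap_factor times the typical line height."""
    if not lines:
        return []
    typical = sum(l.height for l in lines) / len(lines)
    paragraphs, current = [], [lines[0]]
    for prev, cur in zip(lines, lines[1:]):
        gap = cur.top - (prev.top + prev.height)
        if gap > gap_factor * typical:
            paragraphs.append(" ".join(l.text for l in current))
            current = [cur]
        else:
            current.append(cur)
    paragraphs.append(" ".join(l.text for l in current))
    return paragraphs
```

Applied to seven lines where lines 1-4 and 5-7 are evenly spaced but a large gap separates lines 4 and 5, this yields exactly two paragraphs, matching the example.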
Step 103, translating each original text paragraph to generate the corresponding translated text paragraph in the target language.
Specifically, after text recognition and paragraph division have been performed on the picture to be translated and each original text paragraph it contains has been determined, each original text paragraph can be translated to generate the corresponding translated text paragraph in the target language.
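Step 103 reduces to a loop over the original paragraphs. In the sketch below, `translate` is a stand-in for whatever machine-translation backend is used; the patent does not specify one, so both function names are assumptions.

```python
# Hypothetical sketch of step 103: translate each original paragraph into the
# target language, one translated paragraph per original, order preserved.
def translate(text: str, target_language: str) -> str:
    # stand-in for a real machine-translation call
    return f"[{target_language}] {text}"

def translate_paragraphs(original_paragraphs, target_language):
    # the i-th translated paragraph corresponds to the i-th original paragraph
    return [translate(p, target_language) for p in original_paragraphs]
```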
Step 104, displaying the original text paragraphs and translated text paragraphs in sequence, side by side, in a preset style.
The preset style is a predefined style for displaying the original text paragraphs and translated text paragraphs in sequence, side by side.
In a specific implementation, the display style of the original and translated paragraphs may be preset, so that once the original paragraphs contained in the picture to be translated have been translated and the corresponding translated paragraphs determined, the original and translated paragraphs can be displayed in sequence, side by side, in the preset style. Displaying each original paragraph directly alongside its translation makes the correspondence between them clear, which improves the readability of the translation result, reduces the time the user spends consulting it, and improves the user experience.
Furthermore, when the original and translated paragraphs are displayed in sequence in the preset style, consecutive paragraph pairs may additionally be distinguished by different colors, fonts, or background colors. That is, step 104 may include:
and sequentially contrasting and displaying the original text paragraphs and the translated text paragraphs according to different colors, fonts and/or background colors.
For example, assume that original paragraph a corresponds to translation paragraph a and original paragraph B corresponds to translation paragraph B. The original paragraph a and the translated paragraph a can be contrastingly displayed according to one color, font and background color, and the original paragraph B and the translated paragraph B can be contrastingly displayed according to another color, font and background color.
Displaying the original and translated paragraphs in different colors, fonts, and/or background colors makes it easier for the user to distinguish the different original paragraphs and their corresponding translations, which shortens the time spent consulting the translation result and improves the user's visual experience.
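One way to realize this alternating presentation is to pair each original paragraph with its translation and cycle through a small set of preset styles. The sketch below assumes two styles; the style dictionaries and names are illustrative, not part of the patent.

```python
# Sketch of the contrast display of step 104: interleave original and
# translated paragraphs and alternate between two preset styles so that
# consecutive pairs are visually distinct. Style values are assumptions.
STYLES = [
    {"color": "#222222", "background": "#ffffff"},
    {"color": "#222222", "background": "#f0f4ff"},
]

def contrast_layout(originals, translations):
    """Return (original, translation, style) triples in paragraph order,
    alternating the style between consecutive pairs."""
    assert len(originals) == len(translations)
    return [
        (o, t, STYLES[i % len(STYLES)])
        for i, (o, t) in enumerate(zip(originals, translations))
    ]
```

A renderer would then draw each triple as one block, original above translation, with the pair's shared style.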
In addition, when each original paragraph and each translated paragraph are displayed, they may be displayed in the form of pictures or characters, and the display is not limited herein. When each original text paragraph and each translated text paragraph are displayed in the form of a picture, the display style of the picture can be determined according to the style of the picture to be translated.
It should be noted that, in the embodiment of the present invention, if the contrast translation instruction is issued by the user on the translation result display interface corresponding to the picture to be translated (that is, the picture character translation application has already performed full-text recognition and translation after obtaining the picture translation request), then, when performing contrast translation according to the user's instruction, the original paragraphs need not be translated again after they have been determined. Instead, the translated paragraph corresponding to each original paragraph can be obtained directly from the full-text translation result, and the original and translated paragraphs can then be displayed in sequence, side by side, in the preset style, which improves the efficiency of picture character translation.
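The reuse path just described amounts to a cache keyed by original paragraph: paragraphs already translated during the full-text pass are looked up rather than re-translated. A minimal sketch, assuming an in-memory dictionary (the class and method names are illustrative):

```python
# Hypothetical sketch: reuse translations produced by the full-text pass
# instead of calling the translator again in contrast mode.
class TranslationCache:
    def __init__(self):
        self._cache = {}

    def put(self, original: str, translated: str):
        # record a paragraph translated during the full-text pass
        self._cache[original] = translated

    def get_or_translate(self, original: str, translate_fn):
        # reuse the cached translation when available; translate otherwise
        if original not in self._cache:
            self._cache[original] = translate_fn(original)
        return self._cache[original]
```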
For example, assuming that the display interface of the picture to be translated is as shown in fig. 1D, the picture to be translated is subjected to text recognition and paragraph division, and it is determined that the picture to be translated includes 3 original paragraphs, i.e., the original paragraphs a, b, and c shown in fig. 1D. The full text translation result display interface is shown in FIG. 1E, in which each translation section A, B, C corresponds to the original text sections a, b, and c in FIG. 1D, respectively. The original paragraphs and the translated paragraphs may be displayed in a manner similar to that shown in fig. 1F. Wherein, different paragraphs can be divided by using a dividing line.
It should be noted that, when the original and translated paragraphs are displayed in sequence in the preset style, the size of the display interface may make it impossible to show the entire translation result clearly at once. In the embodiment of the present invention, only some of the original and translated paragraphs may be displayed first, and the remaining paragraphs displayed in response to user operations on the display interface, such as sliding up and down.
According to the picture character translation method of the embodiment of the invention, when a picture translation request is obtained and the current translation mode is determined to be contrast translation, text recognition and paragraph division can be performed on the picture to be translated to determine each original text paragraph it contains, and each original text paragraph is translated to generate the corresponding translated text paragraph in the target language, so that the original and translated paragraphs are displayed in sequence, side by side, in a preset style. Displaying each original paragraph alongside its translation in this way improves the readability of the translation result, reduces the time the user spends consulting it, and improves the user experience.
As the above analysis shows, when a picture translation request is obtained and the current translation mode is determined to be contrast translation, text recognition and paragraph division can be performed on the picture to be translated to determine each original text paragraph it contains, and each original text paragraph is translated to generate the corresponding translated text paragraph in the target language, so that the original and translated paragraphs are displayed in sequence, side by side, in a preset style. In practical applications, after the original and translated paragraphs have been displayed in this way, various functions may be provided in response to user operations on the contrast display interface, as described in detail below with reference to fig. 2.
Fig. 2 is a flowchart of a method for translating picture text according to another embodiment of the present invention.
As shown in fig. 2, the method for translating picture text provided by the embodiment of the present invention may include:
The recognition result may include the characters in the picture to be translated, the types of the characters, the spacing between the characters, the sizes of the characters, the styles of the characters, and the like.
Step 203: displaying a translation result corresponding to the picture to be translated on a display interface.
Step 204: acquiring a contrast type translation instruction sent by the user on the translation result display interface corresponding to the picture to be translated.
Specifically, after the picture translation request is obtained, the picture text translation application may perform full-text recognition and translation on the picture to be translated and then display the full-text translation result on the display interface. When the user wishes to display the original text and the translated text at the same time, the user may send a contrast type translation instruction on the translation result display interface corresponding to the picture to be translated.
For example, after the picture translation request is obtained, the picture text translation application may perform full-text recognition and translation on the picture to be translated and then, as shown in fig. 1C, display the full-text translation result on the display interface; meanwhile, a button with a contrast translation function, such as the button 3 in fig. 1C, may be displayed on the translation result display interface. Therefore, when a user wants to display the original text and the translated text simultaneously, the contrast type translation instruction can be triggered by touching the button with the contrast translation function.
When the full-text translation result is displayed on the display interface, the translation result may be displayed in a simple text form or in a picture form. When it is displayed in the form of a picture, the display style of the picture can be determined according to the style of the picture to be translated. Correspondingly, before the translation result is displayed, the style of the picture to be translated can be determined, and the display style of the full-text translation result can then be determined. That is, before step 204, the method may further include:
identifying the picture to be translated, and determining the pattern of the picture to be translated and the pattern of characters in the picture to be translated;
and determining the display style of the translation result according to the style of the picture to be translated and the style of characters in the picture to be translated.
The style of the picture to be translated may include a background color of the picture to be translated, a pattern in the picture, and the like. The style of the characters in the picture to be translated may include the size, color, font, etc. of the characters in the picture to be translated.
The display style of the translation result may include a picture style such as a picture background color of the translation result, and a character style such as a character size, a color, and a font in the translation result.
Specifically, after the picture to be translated is identified, the determined style of the picture to be translated can be used as the picture style of the translation result, and the style of the characters in the picture to be translated can be used as the character style of the translation result, so that the translation result is displayed in the determined display style. That is, from the user's perspective, when the translation result is displayed, only the original text in the picture to be translated is converted into the translated text, and nothing else is changed, so as to improve the visual experience of the user.
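Reusing the source picture's style for the translation result, as described above, might be sketched as follows. The specific style fields and default values here are assumptions made for the illustration, not details specified by the patent.

```python
# Hypothetical sketch: carry the style of the picture to be translated
# over to the translation result, so only the text content changes.
# The style fields and defaults chosen here are assumptions.

def extract_style(picture_info):
    return {
        "background_color": picture_info.get("background_color", "#FFFFFF"),
        "font": picture_info.get("font", "default"),
        "font_size": picture_info.get("font_size", 14),
        "font_color": picture_info.get("font_color", "#000000"),
    }

def render_translation(picture_info, translated_text):
    style = extract_style(picture_info)
    # Only the original text is replaced by the translation;
    # every style attribute of the source picture is preserved.
    return {"text": translated_text, **style}
```

The effect is that the rendered translation result inherits the background color, font, size, and color of the picture to be translated.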
Further, after the translation result is displayed in the determined display style, the user can also operate on the translation result display interface to switch the translation result display interface to the picture to be translated, or to store or share the translation result display interface, and the like. That is, after determining the display style of the translation result, the method may further include:
displaying the translation result in a determined display style;
and switching the translation result display interface into a picture to be translated or storing or sharing the translation result display interface according to the operation of the user on the translation result display interface.
Specifically, different processing modes corresponding to different operations can be preset, so that when the user operates on the translation result display interface, the picture text translation application can switch, store, or share the translation result display interface according to the user's operation mode.
For example, when the user taps the area 2 of the translation result display interface shown in fig. 1C, the translation result display interface may be switched to the picture to be translated shown in fig. 2A. When the user long-presses the translation result display interface shown in fig. 1C, as shown in fig. 2B, save, share, and cancel buttons (gamma in the figure) are displayed on the upper layer of the translation result display interface, and the user can touch the corresponding button as needed, so that the picture text translation application can store or share the translation result display interface, or close the upper-layer interface of the translation result display interface, according to the user's operation. For instance, if the user long-presses the translation result display interface shown in fig. 1C and touches the save button in the upper-layer interface shown in fig. 2B, the translation result display interface can be saved.
It should be noted that, after the user taps the area 2 shown in fig. 1C so that the translation result display interface is switched to the picture to be translated shown in fig. 2A, the user may also switch the picture to be translated back to the translation result display interface by tapping the area 3 shown in fig. 2A. By switching back and forth between the translation result display interface and the picture to be translated according to the user's operation, the user can conveniently check the translation result.
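The mapping from user gestures to processing modes described above could be sketched as a simple dispatch. The gesture names and handler behavior below are hypothetical assumptions for illustration, not the patented implementation.

```python
# Hypothetical dispatch of user gestures on the translation result
# display interface. Gesture names and handler behavior are assumptions.

def handle_gesture(gesture, state):
    if gesture == "tap":
        # Tap toggles between the translation result and the original picture.
        state["view"] = "original" if state["view"] == "translation" else "translation"
    elif gesture == "long_press":
        # Long press shows the save / share / cancel layer.
        state["overlay"] = ["save", "share", "cancel"]
    return state
```

A preset table of this kind lets the application decide, for each operation mode, whether to switch, store, or share the translation result display interface.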
Step 207: sequentially displaying the original text paragraphs and the translated text paragraphs in contrast according to a preset pattern.
Specifically, after the contrast translation instruction sent by the user on the translation result display interface corresponding to the picture to be translated is obtained, it can be determined that the current translation mode is the contrast translation mode. Paragraph division processing can then be performed on the picture to be translated according to the recognition result corresponding to the picture to be translated to determine each original text paragraph included in the picture to be translated; each translated text paragraph corresponding to each original text paragraph and the target language type is determined according to the translation result corresponding to the picture to be translated; and the original text paragraphs and the translated text paragraphs are sequentially displayed in contrast according to the preset pattern.
The detailed implementation process and principle of steps 205-207 may refer to the detailed description of the above embodiments, and are not described herein again.
Step 208: according to the operation of the user, performing voice playing on the target translation paragraph or the target original text paragraph, editing the target translation paragraph or the target original text paragraph, and/or displaying a specific explanation of a preset word in the target translation paragraph or the target original text paragraph in the floating layer.
The number of the preset words may be one or more, and is not limited herein.
In a specific implementation, a plurality of buttons can be arranged on the comparison display interface, each realizing a different function, so that the corresponding function can be realized according to the user's touch operation on each button.
For example, as shown in fig. 2C, a voice playing button (buttons 4 and 5) may be provided at the beginning of each original text paragraph. When the user wishes to play the original text paragraph 1 on the comparison display interface, the user may touch the button 4, so that the picture text translation application plays the original text paragraph 1 (the target original text paragraph) in a voice manner according to the user's operation. Similarly, a voice playing button may be provided at the beginning of each translation paragraph, so that the target translation paragraph is played according to the user's operation on each button.
In addition, a translation editing button can be set on the comparison display interface. When a user considers that the translation result of a certain original text paragraph produced by the picture text translation application is inaccurate and wants to modify the corresponding translation paragraph, the user can touch the translation editing button and then touch the target translation paragraph to be edited, so that the picture text translation application can edit the target translation paragraph according to the user's operation. Likewise, an edit original text button, such as the button 6 in fig. 2C, may also be provided on the comparison display interface. When a user considers that the recognition result of the picture to be translated is inaccurate and wants to modify the original text paragraphs, the user can touch the button 6 and then touch the target original text paragraph to be edited, so that the picture text translation application displays the interface shown in fig. 2D according to the user's operation, allowing the user to edit the target original text paragraph. After the user finishes editing, the user can touch the finish button, so that the picture text translation application updates the original text paragraph on the comparison display interface. By editing the target original text paragraphs or target translation paragraphs, the display result of the comparison display interface better meets the user's requirements.
In addition, a view detailed paraphrase button corresponding to each original text paragraph and translation paragraph can be set on the comparison display interface. When a user wants to view a specific explanation of a word in a certain original text paragraph (the target original text paragraph) or translation paragraph (the target translation paragraph), the user can touch the view detailed paraphrase button corresponding to that paragraph. Alternatively, the user may view the specific explanation of a word in the target original text paragraph or target translation paragraph by touching the paragraph itself. The picture text translation application can then display the specific explanation of the preset word in the target original text paragraph or target translation paragraph in the floating layer according to the user's operation.
For example, a button 7 and a button 8 may be provided on the comparison display interface shown in fig. 2C, which are respectively used to trigger the display of specific explanations of the preset words in the original text paragraphs 1 and 2. After the user touches the button 7, as shown in fig. 2E, a specific explanation of the preset word in the original text paragraph 1 may be displayed in the floating layer in the lower region of the comparison display interface.
It should be noted that, in the embodiment of the present invention, the transparency of the floating layer may also be set according to needs. For example, the floating layer displaying the specific explanation of the preset word may be configured to be semi-transparent, so that the floating layer does not block the contrast display interface when the floating layer is displayed on the contrast display interface.
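The semi-transparent floating layer mentioned above can be expressed as a background color with an alpha channel. The RGBA values and the bottom-of-screen position in this sketch are illustrative assumptions, not values specified by the patent.

```python
# Hypothetical sketch: a semi-transparent floating layer for word
# explanations, expressed as an RGBA background. Values are illustrative.

def floating_layer_style(opacity=0.5):
    # opacity 0.0 = fully transparent, 1.0 = fully opaque
    alpha = int(round(opacity * 255))
    return {"background_rgba": (255, 255, 255, alpha), "position": "bottom"}
```

With an opacity below 1.0, the comparison display interface remains partially visible behind the floating layer.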
In the embodiment of the present invention, the preset word may be a word specified by the user, or a word determined by the picture text translation application, which is not limited herein.
Correspondingly, before displaying the specific explanation of the preset word in the target translation paragraph or the target original paragraph in the floating layer, the method may further include:
determining preset words according to the operation of a user;
or,
and determining preset words according to the historical translation records of the user and the difficulty of each character in the target original text paragraph.
Specifically, the picture text translation application may determine in advance the specific interpretations of the words in each translation paragraph and each original text paragraph. After the user touches the view detailed paraphrase button, when the user wants to view the specific interpretation of a certain word, the user may touch that word, so that the picture text translation application can determine the preset word according to the user's operation and display its specific interpretation in the floating layer.
Alternatively, the picture text translation application may determine, according to the user's historical translation records and the difficulty of each word in the target original text paragraph, a high-difficulty word that the user has not translated or has translated infrequently as a word the user may not know well, that is, as a preset word, so that after the user touches the view detailed paraphrase button, the specific explanation of the preset word can be displayed in the floating layer.
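Selecting preset words from the user's translation history and per-word difficulty, as described above, might look like the following. The difficulty scores, thresholds, and history format are all assumptions made for the illustration.

```python
# Hypothetical sketch: pick "preset words" a user may not know, based on
# a per-word difficulty score and how often the user has looked it up.
# Difficulty values, the threshold, and the history format are assumptions.

def select_preset_words(paragraph_words, difficulty, history,
                        difficulty_threshold=0.7, familiar_lookups=3):
    preset = []
    for word in paragraph_words:
        hard = difficulty.get(word, 0.0) >= difficulty_threshold
        unfamiliar = history.get(word, 0) < familiar_lookups
        if hard and unfamiliar:
            preset.append(word)
    return preset
```

Words that are both difficult and rarely looked up by this user become the preset words whose explanations are shown in the floating layer.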
According to the picture character translation method, after the picture translation request is obtained, text recognition and translation are performed on the picture to be translated, and the recognition result and the translation result corresponding to the picture to be translated are determined, so that the translation result can be displayed on the display interface. After the contrast type translation instruction sent by the user on the translation result display interface corresponding to the picture to be translated is obtained, paragraph division processing is performed on the picture to be translated according to the recognition result corresponding to the picture to be translated, each original text paragraph included in the picture to be translated is determined, and each translation paragraph corresponding to each original text paragraph and the target language type is determined according to the translation result corresponding to the picture to be translated, so that the original text paragraphs and the translation paragraphs are sequentially displayed in contrast according to the preset pattern. In this way, the readability of the translation result is improved and the time the user spends looking up the translation result is reduced; moreover, because the target translation paragraph or target original text paragraph can be played by voice, edited, and so on according to the user's operation, more learning operations become possible on the picture translation result, which improves the user experience.
Fig. 3 is a schematic structural diagram of a picture-to-text translation application according to an embodiment of the present invention.
As shown in fig. 3, the picture text translation application includes:
the first obtaining module 31 is configured to obtain a picture translation request, where the translation request includes a picture to be translated and a target language type;
the first determining module 32 is configured to, when it is determined that the current translation mode is the contrast translation, perform text recognition and paragraph division processing on the picture to be translated, and determine each original text paragraph included in the picture to be translated;
the translation module 33 is configured to translate each original paragraph to generate each translated paragraph corresponding to the target language type;
the first display module 34 is configured to compare and display the original paragraphs and the translated paragraphs in sequence according to a preset style.
Specifically, the picture text translation application provided by the embodiment of the present invention may execute the picture text translation method provided by the embodiment of the present invention, and the application may be configured in any computer device with a display screen, such as a mobile phone or a computer, to translate a picture to be translated.
In a possible implementation form, the first determining module is specifically configured to:
performing text recognition on the picture to be translated, and determining the type, the interval between characters, the size and the style of each character in the picture to be translated;
determining a paragraph relationship between words and sentences in the picture to be translated according to the type of each character, the space between the characters, the size of the character and the style of the character in the picture to be translated;
and according to the paragraph relation among the words and sentences, paragraph division is carried out on the picture to be translated.
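A minimal sketch of the paragraph-division idea above, using only the vertical spacing between recognized lines, is given below. The line format and the gap threshold are assumptions for the illustration; the method described here also weighs character type, size, and style when determining paragraph relations.

```python
# Hypothetical sketch: group OCR lines into paragraphs by vertical gap.
# Each line is (text, top_y, bottom_y); the gap threshold is an assumption.
# The described method also considers character type, size, and style.

def divide_paragraphs(lines, gap_threshold=10):
    paragraphs, current = [], []
    prev_bottom = None
    for text, top, bottom in lines:
        if prev_bottom is not None and top - prev_bottom > gap_threshold:
            # A large vertical gap starts a new paragraph.
            paragraphs.append(" ".join(current))
            current = []
        current.append(text)
        prev_bottom = bottom
    if current:
        paragraphs.append(" ".join(current))
    return paragraphs
```

Each resulting paragraph would then be translated and paired with its translation on the contrast display interface.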
In another possible implementation form, the first display module is specifically configured to:
and sequentially contrasting and displaying the original text paragraphs and the translated text paragraphs according to different colors, fonts and/or background colors.
In another possible implementation form, the apparatus further includes:
the first processing module is used for performing voice playing on the target translation paragraph or the target original paragraph, editing the target translation paragraph or the target original paragraph, and/or displaying a specific explanation of a preset word in the target translation paragraph or the target original paragraph in the floating layer according to the operation of a user.
In another possible implementation form, the first processing module is further configured to:
determining the preset words according to the operation of a user;
or,
and determining the preset words according to the historical translation records of the user and the difficulty of each character in the target text paragraph.
In another possible implementation form, the apparatus further includes:
and the second acquisition module is used for acquiring the contrast type translation instruction sent by the user.
In another possible implementation form, the second obtaining module is specifically configured to:
acquiring a contrast type translation instruction sent by a user on a picture display interface to be translated;
or,
and acquiring a contrast type translation instruction sent by a user on a translation result display interface corresponding to the picture to be translated.
In another possible implementation form, the apparatus further includes:
the recognition module is used for recognizing the picture to be translated and determining the pattern of the picture to be translated and the pattern of characters in the picture to be translated;
and the second determining module is used for determining the display style of the translation result according to the style of the picture to be translated and the style of characters in the picture to be translated.
In another possible implementation form, the apparatus further includes:
the second display module is used for displaying the translation result in a determined display mode;
and the second processing module is used for switching the translation result display interface into a picture to be translated or storing or sharing the translation result display interface according to the operation of a user on the translation result display interface.
It should be noted that the explanation of the embodiment of the image text translation method is also applicable to the image text translation application of the embodiment, and is not repeated here.
In the picture character translation application provided by the embodiment of the present invention, when the picture translation request is obtained and it is determined that the current translation mode is the contrast translation, text recognition and paragraph division processing may be performed on the picture to be translated to determine each original text paragraph included in the picture to be translated, and each original text paragraph may be translated to generate each translation paragraph corresponding to the target language type, so that the original text paragraphs and the translation paragraphs are sequentially displayed in contrast according to a preset pattern. Displaying the original text paragraphs and the translation paragraphs in contrast in this way improves the readability of the translation result, reduces the time the user spends looking up the translation result, and improves the user experience.
Fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
As shown in fig. 4, the computer apparatus includes:
a memory 41, a processor 42, and a computer program stored on the memory 41 and executable on the processor 42.
The processor 42 implements the picture-text translation method provided in the above embodiments when executing the program.
The computer device can be a computer, a mobile phone, a wearable device and the like.
Further, the computer device further comprises:
a communication interface 43 for communication between the memory 41 and the processor 42.
A memory 41 for storing a computer program operable on the processor 42.
The memory 41 may comprise high-speed RAM memory, and may also include non-volatile memory, such as at least one disk memory.
The processor 42 is configured to implement the picture and text translation method according to the foregoing embodiment when executing the program.
If the memory 41, the processor 42 and the communication interface 43 are implemented independently, the communication interface 43, the memory 41 and the processor 42 may be connected to each other through a bus and perform communication with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 4, but it is not intended that there be only one bus or one type of bus.
Alternatively, in practical implementation, if the memory 41, the processor 42 and the communication interface 43 are integrated on one chip, the memory 41, the processor 42 and the communication interface 43 may complete communication with each other through an internal interface.
An embodiment of a fourth aspect of the present invention provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method for translating picture texts as in the foregoing embodiments.
An embodiment of a fifth aspect of the present invention provides a computer program product, wherein when the instructions in the computer program product are executed by a processor, the method for translating picture and text as in the foregoing embodiments is performed.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
Claims (10)
1. A picture character translation method is characterized by comprising the following steps:
acquiring a picture translation request, wherein the translation request comprises a picture to be translated and a target language type;
performing text recognition and translation on a picture to be translated, and determining a recognition result and a translation result corresponding to the picture to be translated;
determining the display style of the translation result according to the display style of the picture to be translated, and displaying the translation result in the determined display style;
acquiring a contrast type translation instruction sent by a user on a translation result display interface corresponding to the picture to be translated;
according to the recognition result corresponding to the picture to be translated, carrying out paragraph division processing on the picture to be translated, and determining each original text paragraph included in the picture to be translated, wherein the picture to be translated is divided into a plurality of words and sentences, and the picture to be translated is subjected to paragraph division according to the paragraph relation among the words and sentences;
determining each translated text paragraph corresponding to each original text paragraph and the target language type according to the translation result corresponding to the picture to be translated;
and sequentially contrasting and displaying the original text paragraphs and the translated text paragraphs according to a preset pattern.
2. The method of claim 1, wherein the paragraph splitting processing the picture to be translated comprises:
performing text recognition on the picture to be translated, and determining the type of each character, the spacing between characters, and the size and style of the characters in the picture to be translated;
determining the paragraph relationship among the words and sentences in the picture to be translated according to the type of each character, the spacing between characters, and the size and style of the characters in the picture to be translated;
and performing paragraph division on the picture to be translated according to the paragraph relationship among the words and sentences.
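One possible reading of the paragraph-division heuristic in claim 2 is sketched below, assuming each recognized sentence carries its font size, style, and the vertical gap to the previous sentence. The dict field names and the size-relative gap threshold of 1.5× are illustrative assumptions, not values taken from the patent.

```python
def split_into_paragraphs(sentences):
    """Group recognized words/sentences into paragraphs (claim 2):
    start a new paragraph when character size or style changes, or when
    the gap to the previous sentence exceeds a size-relative threshold."""
    paragraphs, current = [], []
    previous = None
    for sentence in sentences:
        breaks_paragraph = previous is not None and (
            sentence["size"] != previous["size"]
            or sentence["style"] != previous["style"]
            or sentence["gap"] > 1.5 * previous["size"]
        )
        if breaks_paragraph:
            paragraphs.append(" ".join(s["text"] for s in current))
            current = []
        current.append(sentence)
        previous = sentence
    if current:
        paragraphs.append(" ".join(s["text"] for s in current))
    return paragraphs
```

A style change (e.g. a bold heading) or an unusually large vertical gap both start a new paragraph, matching the claim's use of character type, spacing, size, and style as segmentation cues.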
3. The method of claim 1, wherein sequentially displaying the original text paragraphs and the translated text paragraphs in contrast according to the preset style comprises:
and sequentially displaying the original text paragraphs and the translated text paragraphs in contrast using different colors, fonts and/or background colors.
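The color/font differentiation of claim 3 could be realized, for example, by attaching contrasting style attributes to each paragraph pair before rendering; the particular hex colors, fonts, and dict keys below are assumptions for illustration only.

```python
# Illustrative style attributes; the claim only requires that original and
# translated paragraphs be visually distinguished by color, font and/or background.
ORIGINAL_STYLE = {"color": "#202124", "font": "serif", "background": "#ffffff"}
TRANSLATED_STYLE = {"color": "#1a73e8", "font": "sans-serif", "background": "#f1f3f4"}

def style_paragraph_pairs(paragraph_pairs):
    """Return a flat display list in which each original/translated
    paragraph carries the contrasting style used for rendering (claim 3)."""
    styled = []
    for original, translated in paragraph_pairs:
        styled.append({"text": original, **ORIGINAL_STYLE})
        styled.append({"text": translated, **TRANSLATED_STYLE})
    return styled
```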
4. The method of claim 1, wherein after sequentially displaying the original text paragraphs and the translated text paragraphs in contrast according to the preset style, the method further comprises:
according to a user operation, playing a target translated paragraph or a target original paragraph as speech, editing the target translated paragraph or the target original paragraph, and/or displaying a specific explanation of a preset word in the target translated paragraph or the target original paragraph in a floating layer.
5. The method of claim 4, wherein before displaying the specific explanation of the preset word in the target translated paragraph in the floating layer, the method further comprises:
determining the preset word according to a user operation;
or,
determining the preset word according to the user's historical translation records and the difficulty of each word in the target translated paragraph.
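The second branch of claim 5, selecting preset words from the user's lookup history and per-word difficulty, might look like the following sketch; the difficulty scale in [0, 1] and the 0.7 threshold are illustrative assumptions not specified by the patent.

```python
def select_preset_words(paragraph_words, history_lookups, difficulty, threshold=0.7):
    """Choose which words in the target paragraph receive a floating-layer
    explanation (claim 5): words the user has previously looked up, plus
    words whose difficulty score meets the threshold."""
    previously_looked_up = set(paragraph_words) & set(history_lookups)
    difficult = {w for w in paragraph_words if difficulty.get(w, 0.0) >= threshold}
    return sorted(previously_looked_up | difficult)
```

Combining the two signals means a word is annotated either because the user has struggled with it before or because it is objectively rated hard, which matches the "or" structure of the claim.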
6. The method of claim 1, wherein before acquiring the contrastive translation instruction issued by the user on the translation result display interface corresponding to the picture to be translated, the method further comprises:
identifying the picture to be translated, and determining the style of the picture to be translated and the style of the characters in the picture to be translated;
and determining the display style of the translation result according to the style of the picture to be translated and the style of the characters in the picture to be translated.
7. The method of claim 1, wherein after displaying the translation result in the determined display style, the method further comprises:
switching the translation result display interface back to the picture to be translated, or saving or sharing the translation result display interface, according to a user operation on the translation result display interface.
8. A picture character translation application, comprising:
the first acquisition module is used for acquiring a picture translation request, wherein the translation request comprises a picture to be translated and a target language type;
the second display module is used for performing text recognition and translation on the picture to be translated, determining a recognition result and a translation result corresponding to the picture to be translated, determining the display style of the translation result according to the display style of the picture to be translated, and displaying the translation result in the determined display style;
the second acquisition module is used for acquiring a contrastive translation instruction issued by a user on a translation result display interface corresponding to the picture to be translated;
the first determining module is used for performing paragraph division processing on the picture to be translated according to the recognition result corresponding to the picture to be translated, and determining each original text paragraph comprised in the picture to be translated, wherein the text recognized from the picture to be translated is divided into words and sentences, and the paragraph division is performed according to the paragraph relationship among the words and sentences;
the translation module is used for determining each translated text paragraph corresponding to each original text paragraph and the target language type according to the translation result corresponding to the picture to be translated;
and the first display module is used for sequentially displaying the original text paragraphs and the translated text paragraphs in contrast according to a preset style.
9. A computer device, comprising:
a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the picture character translation method according to any one of claims 1 to 7 when executing the program.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, carries out the picture character translation method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711447783.8A CN108182183B (en) | 2017-12-27 | 2017-12-27 | Picture character translation method, application and computer equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108182183A CN108182183A (en) | 2018-06-19 |
CN108182183B true CN108182183B (en) | 2021-09-17 |
Family
ID=62547717
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711447783.8A Active CN108182183B (en) | 2017-12-27 | 2017-12-27 | Picture character translation method, application and computer equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108182183B (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108985201A (en) * | 2018-06-29 | 2018-12-11 | 网易有道信息技术(北京)有限公司 | Image processing method, medium, device and calculating equipment |
CN109388810A (en) * | 2018-08-31 | 2019-02-26 | 北京搜狗科技发展有限公司 | A kind of data processing method, device and the device for data processing |
CN111104805A (en) * | 2018-10-26 | 2020-05-05 | 广州金山移动科技有限公司 | Translation processing method and device, computer storage medium and terminal |
CN109766304A (en) * | 2018-12-11 | 2019-05-17 | 中新金桥数字科技(北京)有限公司 | The method and its system read about the bilingual speech control of Epub books based on iPad |
CN109657619A (en) * | 2018-12-20 | 2019-04-19 | 江苏省舜禹信息技术有限公司 | A kind of attached drawing interpretation method, device and storage medium |
CN111414768A (en) * | 2019-01-07 | 2020-07-14 | 搜狗(杭州)智能科技有限公司 | Information display method and device and electronic equipment |
CN112329480A (en) * | 2019-07-19 | 2021-02-05 | 搜狗(杭州)智能科技有限公司 | Area adjustment method and device and electronic equipment |
CN112584252B (en) * | 2019-09-29 | 2022-02-22 | 深圳市万普拉斯科技有限公司 | Instant translation display method and device, mobile terminal and computer storage medium |
CN110969029A (en) * | 2019-12-16 | 2020-04-07 | 北京明略软件系统有限公司 | Text conversion processing method and device and electronic equipment |
CN111191470A (en) * | 2019-12-25 | 2020-05-22 | 语联网(武汉)信息技术有限公司 | Document translation method and device |
CN111368562B (en) * | 2020-02-28 | 2024-02-27 | 北京字节跳动网络技术有限公司 | Method and device for translating characters in picture, electronic equipment and storage medium |
CN113298912B (en) * | 2020-04-26 | 2024-07-12 | 阿里巴巴新加坡控股有限公司 | Commodity picture processing method, commodity picture processing device and commodity picture processing server |
CN111985255A (en) * | 2020-09-01 | 2020-11-24 | 北京中科凡语科技有限公司 | Translation method, translation device, electronic device and storage medium |
CN112328348A (en) * | 2020-11-05 | 2021-02-05 | 深圳壹账通智能科技有限公司 | Application program multi-language support method and device, computer equipment and storage medium |
CN112711954B (en) * | 2020-12-31 | 2024-03-22 | 维沃软件技术有限公司 | Translation method, translation device, electronic equipment and storage medium |
CN114237468B (en) * | 2021-12-08 | 2024-01-16 | 文思海辉智科科技有限公司 | Text and picture translation method and device, electronic equipment and readable storage medium |
CN115131791A (en) * | 2022-04-28 | 2022-09-30 | 广东小天才科技有限公司 | Translation method and device, wearable device and storage medium |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1553364A (en) * | 2003-05-26 | 2004-12-08 | 北京永邦专利技术开发有限责任公司 | Interactive words and phrase studying system |
CN102073628A (en) * | 2009-11-24 | 2011-05-25 | 英业达股份有限公司 | Translation window display system and method |
CN102737238A (en) * | 2011-04-01 | 2012-10-17 | 洛阳磊石软件科技有限公司 | Gesture motion-based character recognition system and character recognition method, and application thereof |
CN102930263A (en) * | 2012-09-27 | 2013-02-13 | 百度国际科技(深圳)有限公司 | Information processing method and device |
CN103678290A (en) * | 2013-12-12 | 2014-03-26 | 苏州市峰之火数码科技有限公司 | Electronic foreign language reader |
CN103823796A (en) * | 2014-02-25 | 2014-05-28 | 武汉传神信息技术有限公司 | System and method for translation |
CN103914539A (en) * | 2014-04-01 | 2014-07-09 | 百度在线网络技术(北京)有限公司 | Information search method and device |
CN104090871A (en) * | 2014-07-18 | 2014-10-08 | 百度在线网络技术(北京)有限公司 | Picture translation method and system |
CN104714944A (en) * | 2015-04-14 | 2015-06-17 | 语联网(武汉)信息技术有限公司 | Document translation method and document translation system |
CN105573969A (en) * | 2006-10-02 | 2016-05-11 | 谷歌公司 | Displaying original text in a user interface with translated text |
CN106649295A (en) * | 2017-01-04 | 2017-05-10 | 携程旅游网络技术(上海)有限公司 | Text translation method for mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||