CN108182184B - Picture character translation method, application and computer equipment - Google Patents
Picture character translation method, application and computer equipment
- Publication number
- CN108182184B (application CN201711449311.6A)
- Authority
- CN
- China
- Prior art keywords
- picture
- translated
- translation
- text
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/58—Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- User Interface Of Digital Computer (AREA)
- Machine Translation (AREA)
Abstract
The invention provides a picture character translation method, application and computer equipment, wherein the method comprises the following steps: acquiring a picture translation request, wherein the translation request comprises a picture to be translated and a target language type; if the current translation mode is determined to be segment-type translation, displaying the picture to be translated and a smearing function editing area on a display interface; determining a current target segment to be translated according to the operation of a user in the smearing function editing area and on the picture to be translated; performing character recognition on the target segment to determine an original text to be translated; and translating the original text to generate a target text corresponding to the target language type. The method translates partial segments of the picture according to the user's operation, so the translation mode is flexible, the user's need to freely select the segment to be translated is met, and the user experience is improved.
Description
Technical Field
The invention relates to the technical field of computers, in particular to a picture character translation method, application and computer equipment.
Background
With the rapid development of digital technology, terminal devices such as mobile phones are equipped with high-performance digital cameras. When reading, a user who encounters unfamiliar foreign-language words can photograph them with the terminal device at any time; the characters in the captured picture are recognized by the character recognition technology of the terminal device, and the recognition result is then translated.
In practice, however, a user may only need to translate certain segments of a picture. In the prior art, translating characters in a picture supports only full-text recognition and translation of the whole picture, so the translation mode is inflexible and the user experience is poor.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, the invention provides a picture character translation method, which translates partial segments of a picture according to the user's operation, offers a flexible translation mode, meets the user's need to freely select the segment to be translated, and improves the user experience.
The invention also provides a picture character translation application.
The invention also provides computer equipment.
The invention also provides a computer readable storage medium.
The embodiment of the first aspect of the invention provides a picture character translation method, which comprises the following steps: acquiring a picture translation request, wherein the translation request comprises a picture to be translated and a target language type; if the current translation mode is determined to be segment-type translation, displaying the picture to be translated and a smearing function editing area on a display interface; determining a current target segment to be translated according to the operation of a user in the smearing function editing area and on the picture to be translated; performing character recognition on the target segment to determine an original text to be translated; and translating the original text to generate a target text corresponding to the target language type.
According to the picture character translation method provided by the embodiment of the invention, when a picture translation request is acquired, if the current translation mode is determined to be segment-type translation, the picture to be translated and a smearing function editing area are displayed on the display interface; the current target segment to be translated is then determined according to the operation of the user in the smearing function editing area and on the picture to be translated, and character recognition and translation are performed on the target segment. In this way, partial segments of the picture are translated according to the user's operation, the translation mode is flexible, the user's need to freely select the segment to be translated is met, and the user experience is improved.
An embodiment of a second aspect of the present invention provides a picture character translation application, including: a first acquisition module, configured to acquire a picture translation request, wherein the translation request comprises a picture to be translated and a target language type; a first display module, configured to display the picture to be translated and a smearing function editing area on a display interface when the current translation mode is determined to be segment-type translation; a first determining module, configured to determine a current target segment to be translated according to the operation of a user in the smearing function editing area and on the picture to be translated; a first recognition module, configured to perform character recognition on the target segment and determine an original text to be translated; and a translation module, configured to translate the original text to generate a target text corresponding to the target language type.
In the picture character translation application of the embodiment of the invention, when a picture translation request is acquired, if the current translation mode is determined to be segment-type translation, the picture to be translated and a smearing function editing area are displayed on the display interface; the current target segment to be translated is then determined according to the operation of the user in the smearing function editing area and on the picture to be translated, and character recognition and translation are performed on the target segment. In this way, partial segments of the picture are translated according to the user's operation, the translation mode is flexible, the user's need to freely select the segment to be translated is met, and the user experience is improved.
An embodiment of a third aspect of the present invention provides a computer device, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the picture text translation method according to the first aspect when executing the program.
A fourth aspect of the present invention provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the method for translating picture texts according to the first aspect.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart of a method for text translation of a picture according to an embodiment of the present invention;
FIGS. 1A-1G are exemplary diagrams of a display interface according to one embodiment of the invention;
FIG. 2 is a flowchart of a method for translating picture text according to another embodiment of the present invention;
FIGS. 2A-2C are exemplary diagrams of a display interface according to another embodiment of the invention;
FIG. 3 is a block diagram of a picture text translation application according to an embodiment of the present invention;
- FIG. 4 is a schematic diagram of a picture text translation application according to another embodiment of the present invention;
- FIG. 5 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
Specifically, the embodiments of the present invention provide a picture character translation method, aiming at the following problems: in practical application, a user may need to translate only certain segments of a picture, while in the prior art, translating characters in a picture supports only full-text recognition and translation, so the translation mode is inflexible and the user experience is poor.
According to the picture character translation method provided by the embodiment of the invention, when a picture translation request is acquired, if the current translation mode is determined to be segment-type translation, the picture to be translated and a smearing function editing area are displayed on the display interface; the current target segment to be translated is then determined according to the operation of the user in the smearing function editing area and on the picture to be translated, and character recognition and translation are performed on the target segment. In this way, partial segments of the picture are translated according to the user's operation, the translation mode is flexible, the user's need to freely select the segment to be translated is met, and the user experience is improved.
The following describes a picture character translation method, an application, and a computer device according to an embodiment of the present invention in detail with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for translating picture text according to an embodiment of the present invention.
As shown in fig. 1, the picture character translation method includes:
Step 101, acquiring a picture translation request, wherein the translation request comprises a picture to be translated and a target language type.
Specifically, the execution subject of the picture character translation method provided by the embodiment of the present invention is the picture character translation application provided by the embodiment of the present invention. The application can be configured in any computer device with a display screen, such as a mobile phone or a computer, so as to flexibly translate the picture to be translated. The embodiment of the present invention is described by taking as an example an application configured in a mobile phone having a touch screen.
The picture to be translated may be a picture stored in a preset position in the computer device, or a picture directly taken by a user through a camera in the computer device, which is not limited herein.
The target language type can be any language type, such as Chinese or English. The embodiment of the invention is described by taking as an example a picture to be translated whose characters are in English and a target language type of Chinese.
In specific implementation, different buttons for uploading the picture to be translated in different ways can be arranged on the display interface (picture translation function interface) of the picture character translation function in the computer device, and the user can choose to upload the picture to be translated in the corresponding way by touching the corresponding button.
For example, referring to fig. 1A and fig. 1B, after the user enters the picture translation function interface by touching area 1 in fig. 1A, the user can select a picture to upload from a preset position of the computer device by touching button 1 in fig. 1B, or take a picture with the camera of the computer device by touching button 2 in fig. 1B. The picture character translation application can thus acquire the picture to be translated, whether uploaded by the user from a preset position of the computer device or taken with the camera.
It should be noted that, when the user takes the picture to be translated with the camera of the computer device, as shown in fig. 1B, a text alignment reference line may be displayed in the preview image, so that the user can align the text in the picture along the direction of the reference line when shooting; the captured picture is thereby of better quality and the translation effect is better.
In addition, a language direction toolbar, shown as the black area in the upper part of fig. 1B, may be displayed in the picture translation function interface, and the user can select the target language type by operating in the language direction toolbar.
Specifically, when the user touches a button with the picture translation function by clicking, long-pressing, sliding, or the like, the picture translation request is triggered. For example, the user may touch button 1 in fig. 1B to upload a picture to be translated from a preset position of the computer device, triggering a picture translation request; alternatively, the user may touch button 2 in fig. 1B to take a picture to be translated with the camera of the computer device, triggering a picture translation request.
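The patent leaves the structure of the picture translation request unspecified; the following Python sketch (all names hypothetical) illustrates one way a request carrying the picture to be translated and the target language type might be modeled:

```python
from dataclasses import dataclass


@dataclass
class PictureTranslationRequest:
    """Hypothetical request payload; field names are illustrative, not from the patent."""
    picture: bytes        # the picture to be translated
    target_language: str  # the target language type, e.g. "zh" for Chinese


def trigger_request(picture: bytes, target_language: str) -> PictureTranslationRequest:
    """Build the request triggered when the user touches button 1 or button 2."""
    if not picture:
        raise ValueError("a picture to be translated is required")
    return PictureTranslationRequest(picture=picture, target_language=target_language)


req = trigger_request(b"\x89PNG...", "zh")
print(req.target_language)  # -> zh
```

The request object then carries everything the later steps need, whichever upload path produced the picture.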
And step 102, if the current translation mode is determined to be segment-type translation, displaying the picture to be translated and a smearing function editing area on a display interface.
Segment-type translation refers to translating a picture by taking a segment as the unit. It should be noted that a segment may include one word or multiple words, and may also include one or more paragraphs, which is not limited herein.
In specific implementation, when the picture character translation application acquires a segment-type translation instruction sent by the user, the current translation mode is determined to be segment-type translation. That is, before step 102, the method may further include:
acquiring a segment-type translation instruction sent by the user.
Specifically, the segment-type translation instruction may be triggered by the user in a variety of situations.
For example, when the picture translation request is acquired, the picture to be translated can be displayed on the display interface, so that the user can send a segment-type translation instruction directly on the display interface of the picture to be translated. Alternatively, after the picture translation request is acquired, the picture character translation application may perform full-text recognition and translation on the picture to be translated and display the full-text translation result on the display interface; when the user wishes to translate a certain segment, the user sends a segment-type translation instruction on the translation result display interface corresponding to the picture to be translated.
That is, acquiring the segment-type translation instruction sent by the user may include:
acquiring a segment-type translation instruction sent by the user on the display interface of the picture to be translated;
or,
acquiring a segment-type translation instruction sent by the user on the translation result display interface corresponding to the picture to be translated.
Correspondingly, when the picture character translation application acquires the picture translation request, the picture to be translated can be displayed on the display interface, and buttons such as "segment-type translation" and "full-text translation" can be displayed directly on the display interface of the picture to be translated, so that the user can touch the corresponding button as required to trigger the corresponding translation mode instruction. If the user touches the "segment-type translation" button, the picture character translation application determines that the current translation mode is segment-type translation, and can therefore display the picture to be translated and the smearing function editing area on the display interface.
Alternatively, after the picture translation request is acquired, the picture character translation application may perform full-text recognition and translation on the picture to be translated and then, as shown in fig. 1C, display the full-text translation result on the display interface, while displaying a button with the segment-type translation function, such as the button 3 in fig. 1C, on the translation result display interface. Thus, when the user wants to translate a certain segment, the user can trigger the segment-type translation instruction by touching the button with the segment-type translation function. After the picture character translation application acquires the segment-type translation instruction sent by the user, the picture to be translated and the smearing function editing area are displayed on the display interface.
And step 103, determining the current target segment to be translated according to the operation of the user in the smearing function editing area and on the picture to be translated.
The smearing function editing area may include at least one of the following function buttons: cancel, smearing line edit, clear, and start translation.
Specifically, the cancel button realizes the function of canceling the segment-type translation request; the smearing line editing button realizes the function of adjusting the smearing line style; the clear button realizes a one-touch clearing function for the smeared content when the user is not satisfied with it or wants to modify it; and the start translation button allows the translation request to be triggered after the user finishes smearing the picture to be translated.
In specific implementation, the user can smear the content to be translated in the picture to be translated with the smearing line; after smearing is completed, the user can touch the start translation button, so that the picture character translation application determines the content smeared by the user as the target segment to be translated according to the operation of the user in the smearing function editing area and on the picture to be translated.
For example, after the user taps the button 3 with the segment-type translation function shown in fig. 1C, the picture to be translated shown in fig. 1D and the smearing function editing area shown as area 3 in fig. 1D may be displayed on the display interface. The buttons 4, 5, 6, and 7 in fig. 1D are the cancel, smearing line edit, clear, and start translation buttons, respectively. The user may touch area 4 in fig. 1D with a finger and move the finger to smear the segment to be translated, thereby highlighting it in the picture to be translated, as shown in fig. 1E.
When the user is not satisfied with the smeared content or wants to modify it, the user can touch the button 6 shown in fig. 1D to erase the smeared content and then smear again. With the clear button, the display interface can quickly return to the un-smeared state, which saves operation steps and improves the efficiency of picture character translation. After the user finishes smearing, the user touches the button 7 shown in fig. 1D to trigger the translation request, and the picture character translation application determines the current target segment to be translated according to the highlighted content in the picture to be translated.
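The patent does not describe how the highlighted content is mapped back to recognizable text. A common approach, sketched below under that assumption with hypothetical names, is to intersect the points sampled along the smear stroke with the word bounding boxes produced by OCR layout analysis:

```python
def select_target_segment(word_boxes, stroke_points):
    """Return the words whose bounding boxes are touched by the smear stroke.

    word_boxes:    list of (word, (x0, y0, x1, y1)) tuples from OCR layout analysis
    stroke_points: list of (x, y) points sampled along the user's smearing line
    """
    def hit(box, pt):
        x0, y0, x1, y1 = box
        x, y = pt
        return x0 <= x <= x1 and y0 <= y <= y1

    selected = []
    for word, box in word_boxes:
        # A word belongs to the target segment if any stroke point falls in its box.
        if any(hit(box, p) for p in stroke_points):
            selected.append(word)
    return " ".join(selected)


boxes = [("Hello", (0, 0, 40, 10)), ("world", (45, 0, 85, 10)), ("again", (0, 15, 40, 25))]
print(select_target_segment(boxes, [(10, 5), (50, 5)]))  # -> Hello world
```

Only the smeared words on the first line are selected; the un-smeared word on the second line is ignored.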
It should be noted that, when the picture to be translated is smeared, the style of the smearing line may be preset by the picture character translation application, or may be determined after the user adjusts it as needed, which is not limited herein.
Correspondingly, in the embodiment of the present invention, the current style of the smearing line may also be adjusted according to the operation of the user. That is, the picture character translation method provided by the embodiment of the present invention may further include:
acquiring a smearing line editing instruction;
displaying each attribute adjusting button in the third floating layer;
and determining the current style of the smearing line according to the operation of the user on each attribute adjusting button.
The attributes may include color, thickness, and any other attribute associated with the smearing line.
Specifically, the user may trigger the smearing line editing instruction by touching the smearing line editing function button of the smearing function editing area, such as the button 5 in fig. 1D. After the picture character translation application acquires the smearing line editing instruction, the attribute adjusting buttons can be displayed in the third floating layer. The user can adjust each attribute of the smearing line by touching the corresponding attribute adjusting button as needed, and the picture character translation application determines the current style of the smearing line according to the user's operation of the attribute adjusting buttons.
Further, while adjusting a certain attribute of the smearing line through an attribute adjusting button, the user may wish to see intuitively how the attribute is displayed during the adjustment process, to judge whether it meets the desired effect. Correspondingly, in the embodiment of the invention, after the target attribute to be adjusted is determined, the adjustment mode of the target attribute and the display style corresponding to the current attribute can be displayed through a floating layer.
That is, after the attribute adjustment buttons are displayed in the third floating layer, the method may further include:
determining a target attribute to be adjusted according to the operation of a user;
and displaying the adjustment mode of the target attribute and the display style corresponding to the current attribute of the smearing line in the fourth floating layer.
Different smearing line attributes can correspond to different adjustment modes. Therefore, after the target attribute to be adjusted is determined according to the user's touch operation on the attribute adjusting buttons, the adjustment mode corresponding to the target attribute and the display style corresponding to the current attribute of the smearing line can be displayed in the fourth floating layer.
For example, after the user touches the color adjustment button, buttons of different colors may be displayed in the fourth floating layer, and the user may adjust the color of the smearing line to a corresponding color by touching the button of a certain color.
Alternatively, after the user touches the thickness adjustment button, as shown in fig. 1F, a scroll axis and a circular slider are displayed in the fourth floating layer, and the user can adjust the thickness of the smearing line by dragging the circular slider along the scroll axis. Specifically, dragging the slider from left to right adjusts the smearing line from thin to thick, and dragging it from right to left adjusts the smearing line from thick to thin. While the user drags the circular slider, as shown in the middle area of fig. 1G, the current display style of the smearing line, that is, a preview of its thickness adjustment, is displayed in the fourth floating layer, so that the user can judge from the preview whether the current display style of the smearing line meets the desired effect. Circles 1 and 2 in fig. 1G represent the display styles of the thinnest and thickest smearing line, respectively; circle 3 in fig. 1G indicates the current display style of the smearing line.
By displaying the adjustment mode of the target attribute and the display style corresponding to the current attribute of the smearing line in the fourth floating layer, a user can quickly adjust the style of the smearing line so as to better adapt to the size of the content in the picture to be translated.
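As an illustration of the thin-to-thick slider described above, a minimal sketch of the mapping from slider position to smearing line width; the pixel range and the linear mapping are assumptions for illustration, not taken from the patent:

```python
def slider_to_thickness(position, min_px=2.0, max_px=24.0):
    """Map the circular slider's position along the scroll axis to a stroke
    width in pixels: 0.0 (leftmost) is thinnest, 1.0 (rightmost) is thickest.
    The pixel bounds are illustrative assumptions."""
    position = max(0.0, min(1.0, position))  # clamp drags past either end of the axis
    return min_px + position * (max_px - min_px)


print(slider_to_thickness(0.0))  # -> 2.0
print(slider_to_thickness(0.5))  # -> 13.0
print(slider_to_thickness(1.0))  # -> 24.0
```

The same mapping also drives the preview circle (circle 3 in fig. 1G), so the preview always matches the width that will actually be painted.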
It should be noted that the above examples of the adjustment mode of the target attribute and of the display style corresponding to the current attribute of the smearing line displayed in the fourth floating layer are merely illustrative and do not limit the technical solution of the present application.
And step 104, performing character recognition on the target segment, and determining an original text to be translated.
And step 105, translating the original text to generate a target text corresponding to the target language type.
Specifically, after the picture character translation application determines the current target segment to be translated, it can perform character recognition on the target segment to determine the original text to be translated, translate the original text, and generate the target text corresponding to the target language type.
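Steps 104 and 105 can be sketched as a two-stage pipeline. The recognition and translation engines are passed in as plain callables here because the patent does not name any concrete OCR or translation API; the toy engines are for illustration only:

```python
def translate_segment(segment_image, target_language, recognize, translate):
    """Step 104 + step 105: recognize the original text in the target segment,
    then translate it into the target language. `recognize` and `translate`
    stand in for whatever engines the device actually uses (hypothetical)."""
    original_text = recognize(segment_image)
    if not original_text.strip():
        return ""  # nothing recognized, so nothing to translate
    return translate(original_text, target_language)


# Toy engines for illustration only.
def fake_recognize(image):
    return "hello"


def fake_translate(text, lang):
    return "你好" if (text, lang) == ("hello", "zh") else text


print(translate_segment(object(), "zh", fake_recognize, fake_translate))  # -> 你好
```

Keeping the two stages separate mirrors the patent's structure: the original text determined in step 104 is the sole input to the translation of step 105.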
It should be noted that, in the embodiment of the present invention, if the segment-type translation instruction is sent by the user on the translation result display interface corresponding to the picture to be translated, that is, if the picture character translation application has already performed full-text recognition and translation after acquiring the picture translation request, then when segment-type translation is performed according to the user's instruction, after the target segment to be translated is determined according to the operation of the user in the smearing function editing area and on the picture to be translated, the target segment need not be recognized and translated again; instead, the target text corresponding to the target segment can be obtained directly from the full-text translation result, which improves the efficiency of picture character translation.
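The reuse of the full-text result described above can be sketched as a simple cache lookup. Representing the full-text translation result as a mapping from recognized source fragments to their translations is an assumed representation for illustration, not taken from the patent:

```python
def lookup_from_full_text(target_segment, full_text_result):
    """Reuse the existing full-text translation instead of re-running
    recognition and translation on the target segment.

    full_text_result: dict mapping each recognized source fragment to its
    translation (an assumed representation, not from the patent)."""
    if target_segment in full_text_result:
        return full_text_result[target_segment]
    return None  # caller falls back to re-recognizing and re-translating


full = {"hello": "你好", "world": "世界"}
print(lookup_from_full_text("hello", full))  # -> 你好
print(lookup_from_full_text("unknown", full))  # -> None
```

A `None` result signals the fallback path: the segment is recognized and translated from scratch, as in the alternative discussed next.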
Alternatively, because the result of full-text translation may differ slightly from that of segment-by-segment translation, when segment-type translation is performed after the picture has already been fully recognized and translated, the target segment to be translated can also be recognized again after it is determined according to the operation of the user in the smearing function editing area and on the picture to be translated, and the recognized original text is then translated. Recognizing and translating the target segment afresh in this case, once the current translation mode is determined to be segment-type translation and the target segment is determined, can improve the accuracy of the translation of the target segment.
According to the picture character translation method provided by the embodiment of the invention, when a picture translation request is acquired, if the current translation mode is determined to be segment-type translation, the picture to be translated and a smearing function editing area are displayed on the display interface; the current target segment to be translated is then determined according to the operation of the user in the smearing function editing area and on the picture to be translated, and character recognition and translation are performed on the target segment. In this way, partial segments of the picture are translated according to the user's operation, the translation mode is flexible, the user's need to freely select the segment to be translated is met, and the user experience is improved.
As can be seen from the above analysis, after the picture translation request is acquired, if the current translation mode is determined to be segment-type translation, the picture to be translated and the smearing function editing area can be displayed on the display interface, so that the current target segment to be translated is determined according to the operation of the user in the smearing function editing area and on the picture to be translated, and the target segment is recognized and translated to generate the target text corresponding to the target language type. In one possible implementation form, after the target text corresponding to the target language type is generated, the target text may be displayed in a floating layer, which is described in detail below with reference to fig. 2.
Fig. 2 is a flowchart of a method for translating picture text according to another embodiment of the present invention.
As shown in fig. 2, the method for translating picture words provided by the embodiment of the present invention may include:
Step 203: displaying a translation result corresponding to the picture to be translated on a display interface.
Step 204: acquiring a segment-type translation instruction sent by a user on the translation result display interface corresponding to the picture to be translated.
The segment-based translation refers to a method for translating a picture by taking a segment as a unit. It should be noted that a segment may include one word or multiple words, and may also include one or multiple paragraphs, which are not limited herein.
Specifically, after the picture translation request is obtained, the picture text translation application may perform full-text recognition and translation on the picture to be translated, and then display a full-text translation result on the display interface, and when the user desires to translate a certain segment, the user may send a segment-type translation instruction on the translation result display interface corresponding to the picture to be translated.
For example, after the picture translation request is obtained, the picture text translation application may perform full-text recognition and translation on the picture to be translated, and then as shown in fig. 1C, a full-text translation result is displayed on the display interface, and meanwhile, a button having a segment-type translation function, such as the button 3 in fig. 1C, may be displayed on the translation result display interface. Therefore, when a user wants to translate a certain segment, the user can trigger the segment-type translation instruction by touching the button with the segment-type translation function.
When the full-text translation result is displayed on the display interface, the translation result may be displayed in simple text form or in picture form. When it is displayed in picture form, the display style of the result picture can be determined according to the style of the picture to be translated. Correspondingly, before the translation result is displayed, the style of the picture to be translated can be determined, and the display style of the full-text translation result determined from it. That is, before step 204, the method may further include:
identifying the picture to be translated, and determining the style of the picture to be translated and the style of the characters in the picture to be translated;
and determining the display style of the translation result according to the style of the picture to be translated and the style of the characters in the picture to be translated.
The style of the picture to be translated may include a background color of the picture to be translated, a pattern in the picture, and the like. The style of the characters in the picture to be translated may include the size, color, font, etc. of the characters in the picture to be translated.
The display style of the translation result may include a picture style such as a picture background color of the translation result, and a character style such as a character size, a color, and a font in the translation result.
Specifically, after the picture to be translated is identified, the determined style of the picture to be translated can be used as the picture style of the translation result, and the style of the characters in the picture to be translated can be used as the character style of the translation result, so that the translation result is displayed in the determined display style. That is, from the user's point of view, when the translation result is displayed, only the target segment in the picture to be translated is converted into the target text, and nothing else is changed, which improves the user's visual experience.
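The style-carryover described above can be sketched as follows. This is a minimal illustration under assumed names (the `Style` record and its fields are not the patent's API): the recognized style of the source picture is simply reused as the display style of the rendered translation result.

```python
from dataclasses import dataclass

@dataclass
class Style:
    background_color: str   # e.g. "#FFFFFF"
    font: str               # e.g. "SimSun"
    font_size: int          # in points
    font_color: str         # e.g. "#000000"

def display_style_for_result(picture_style: Style) -> Style:
    # The translation result reuses the recognized style of the picture to
    # be translated, so to the user only the text content appears to change.
    return Style(
        background_color=picture_style.background_color,
        font=picture_style.font,
        font_size=picture_style.font_size,
        font_color=picture_style.font_color,
    )
```

In a real application the recognized character style would come from the OCR step; here it is passed in directly.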
Further, after the translation result is displayed in the determined display style, the user can also perform operation on the translation result display interface to switch the translation result display interface into a picture to be translated, or store or share the translation result display interface, and the like. That is, after determining the display style of the translation result, the method may further include:
displaying the translation result in a determined display style;
and switching the translation result display interface into a picture to be translated or storing or sharing the translation result display interface according to the operation of the user on the translation result display interface.
Specifically, different processing modes corresponding to different operations can be preset, so that when a user operates the translation result display interface, the picture and text translation application can switch, store or share the translation result display interface according to the operation mode of the user.
For example, when the user taps the area 2 of the translation result display interface shown in fig. 1C, the translation result display interface may be switched to the picture to be translated shown in fig. 2A; when the user long-presses the translation result display interface shown in fig. 1C, as shown in fig. 2B, save, share, and cancel buttons (as shown in the figure) are displayed on the upper layer of the translation result display interface, and the user can touch the corresponding button as needed, so that the picture text translation application can save or share the translation result display interface, or close its upper-layer interface, according to the operation of the user. If the user long-presses the translation result display interface shown in fig. 1C and touches the save button in the upper-layer interface shown in fig. 2B, the translation result display interface is saved.
It should be noted that, after the user touches the area 2 shown in fig. 1C, so that the translation result display interface is switched to the picture to be translated shown in fig. 2A, the user may also touch the area 5 shown in fig. 2A, so as to switch the picture to be translated back to the translation result display interface. Through switching back and forth between the translation result display interface and the picture to be translated according to the operation of the user, the user can check the translation result.
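The preset correspondence between user operations and processing modes described above can be sketched as a simple dispatch table. The gesture and action identifiers below are illustrative assumptions, not names from the patent:

```python
# Preset mapping from gestures on the translation-result interface to
# processing actions (switch, save, share, close), per the example above.
GESTURE_ACTIONS = {
    "tap_area_2": "switch_to_original_picture",
    "long_press": "show_save_share_cancel_buttons",
    "tap_save": "save_result_interface",
    "tap_share": "share_result_interface",
    "tap_cancel": "close_overlay",
    "tap_area_5": "switch_to_translation_result",
}

def handle_gesture(gesture: str) -> str:
    # Unknown gestures are ignored rather than raising, so stray touches
    # do not disturb the display interface.
    return GESTURE_ACTIONS.get(gesture, "ignore")
```

A table like this makes the "different processing modes corresponding to different operations" trivially extensible: adding a gesture is one new entry.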
Step 205: displaying the picture to be translated and the smearing function editing area on the display interface.
Step 206: determining the current target segment to be translated according to the operation of the user in the smearing function editing area and on the picture to be translated.
Step 207: determining the target text corresponding to the target segment and the target language type according to the translation result corresponding to the picture to be translated.
The detailed implementation process and principle of steps 205-207 may refer to the detailed description of the above embodiments, and are not repeated here.
Step 208: displaying the target text in the form of a first floating layer on the upper layer of the picture to be translated, at a position that does not block the target segment.
Specifically, after the picture character translation application determines the current target segment to be translated, the target text corresponding to the target segment and the target language type can be determined according to the translation result corresponding to the picture to be translated. After the target text is determined, the target text can be displayed on the upper layer of the picture to be translated in the form of the first floating layer, and the position of the target segment is not shielded.
For example, when the user smears the content in the lower part of the picture to be translated in fig. 1E, the picture text translation application may, after determining the target segment according to the highlighted region in fig. 1E, display the target text in the form of the first floating layer on the upper layer of the picture to be translated, as shown in fig. 2C. The first floating layer fills the upper part of the display page and does not block the region smeared by the user in the picture to be translated. In this way, the user can see the target segment and the target text at the same time, which makes it possible to proofread the translation result of the target segment, meets the user's need to proofread the picture translation result, and improves the user experience.
When the target text is displayed in the form of the first floating layer, the target text may be in the form of a picture or a character, and is not limited herein. When the target text is in the form of a picture, the display style of the picture can be determined according to the style of the picture to be translated.
In addition, when the target text is displayed in the form of the first floating layer, due to the limitation of the size of the display interface, it may not be possible to display all the target text while ensuring clarity. In the embodiment of the present invention, only a part of the target text may be displayed, and then other target texts may be displayed according to the operation of the user on the first floating layer, such as sliding up, sliding down, and the like.
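Showing only part of the target text and revealing more as the user slides can be sketched as a clamped window over the text lines. The window size of 3 lines is an arbitrary assumption for illustration:

```python
def visible_lines(lines, offset, window=3):
    # Clamp the scroll offset so sliding past either end shows the first
    # or last full window instead of an empty floating layer.
    offset = max(0, min(offset, max(0, len(lines) - window)))
    return lines[offset:offset + window]
```

Sliding up or down on the first floating layer would simply increment or decrement `offset` before re-rendering.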
Furthermore, after the target text is displayed in the form of the first floating layer, the user can also operate the first floating layer, so that the picture and character translation application can realize corresponding functions according to the operation of the user.
That is, after step 208, the method may further include: according to the operation of the user on the first floating layer, moving the position of the floating layer, performing voice playing on the target text or the target segment, editing the original text or the target text, and/or displaying a specific explanation of a preset word in the target text in a second floating layer.
The number of the preset words may be one or more, and may even be the whole target text, which is not limited herein.
It should be noted that the preset word may be a word specified by a user, or a word determined by a picture and text translation application, and is not limited herein.
Specifically, a plurality of buttons can be arranged in the first floating layer, and each button can realize different functions, so that corresponding functions can be realized according to touch operation of a user on each button.
For example, as shown in fig. 2C, a moving floating layer button (button 8) may be disposed at the bottom of the first floating layer, and when the user wishes to move the first floating layer, the user may press the button 8 to drag the first floating layer, so that the picture and text translation application may move the first floating layer to a corresponding position according to the user operation. By moving the position of the floating layer according to the operation of the user, the user can better correct the translation result.
In addition, a voice playing button (button 9) may be provided at the beginning of the target text; when the user wants to hear the target text, the user can touch button 9, so that the picture text translation application plays the target text aloud according to the operation of the user. Similarly, a voice playing button may be provided at the beginning of the target segment, so that the target segment is played aloud in response to the user operating that button.
In addition, a view detailed explanation button (button 10) may be further disposed in the first floating layer, and when a user wishes to view a detailed explanation of a certain word in the target text, the user may touch the button 10, so that the picture and word translation application may display a specific explanation of a preset word in the target text in the second floating layer according to an operation of the user.
In addition, a translation editing button can be arranged in the first floating layer, and when a user thinks that the translation result of the picture and word translation application to the target segment is inaccurate and wants to modify the target text, the user can touch the translation editing button, so that the picture and word translation application can edit the target text according to the operation of the user. Or, the user can also be set to edit the target text by touching the area where the target text of the first floating layer is located. By editing the target text, the display result of the target text is more in line with the requirements of the user.
Similarly, an original text editing button, such as the button 11 in fig. 2C, may be further set in the first floating layer, and when the user considers that the recognition result of the picture text translation application on the target segment is not accurate and wants to modify the original text, the user may touch the button 11, so that the picture text translation application may edit the original text according to the operation of the user.
Correspondingly, before the original text is edited, the original text may be displayed on a display interface, that is, the method for translating the picture text provided by the embodiment of the present invention may further include:
analyzing the picture to be translated, and determining the definition and background color of the picture to be translated;
determining a display style of an original text corresponding to the target segment according to the definition and the background color of the picture to be translated;
and displaying the original text in a determined display style at a position which is not shielded by the target text and is positioned below the first floating layer.
The display style of the original text may include a font, a color, and the like of the original text. In addition, the original text may be displayed in a floating layer form, and accordingly, the display style of the original text may further include a floating layer color, a floating layer size, and the like corresponding to the original text.
Specifically, the definition and the background color of the picture to be translated and the corresponding relationship between the display style of the original text can be preset, so that after the definition and the background color of the picture to be translated are determined, the display style of the original text corresponding to the target segment can be determined according to the preset corresponding relationship, and the original text is displayed on the lower layer of the first floating layer in the determined display style at a position which is not shielded by the target text.
For example, when the definition of the picture to be translated is less than 50% and the background color is light color, the original text is a large font, and the floating layer corresponding to the original text is dark in color and large in size; when the definition of the picture to be translated is more than 50% and the background color is dark, the original text is in a small font, and the floating layer corresponding to the original text is light in color and small in size. Then, the picture to be translated is analyzed, the definition of the picture to be translated is determined to be 40%, and when the background color is white, the display style of the original text can be determined to be: the font is larger, the floating layer is darker in color, and the floating layer is larger in size, so that the original text is displayed at the position which is not shielded by the target text and is at the lower layer of the first floating layer in the determined display style.
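The preset correspondence in the example above can be written out directly. This sketch encodes only the two cases the example gives (clarity below 50% with a light background, and above 50% with a dark background); the dictionary keys and the treatment of other combinations are illustrative assumptions:

```python
def original_text_style(clarity: float, background: str) -> dict:
    # Low clarity on a light background: large font, dark and large
    # floating layer, per the example correspondence above.
    if clarity < 0.5 and background == "light":
        return {"font": "large", "layer_color": "dark", "layer_size": "large"}
    # Otherwise (e.g. high clarity on a dark background): small font,
    # light and small floating layer.
    return {"font": "small", "layer_color": "light", "layer_size": "small"}
```

With the document's example inputs (clarity 40%, white background), this yields the large-font, dark, large-layer style.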
In addition, when the original text is displayed in a floating layer form, the original text may be in a picture form or a character form, and is not limited herein. When the original text is in the form of a picture, the display style of the picture can be determined according to the style of the picture to be translated.
By editing the original text, the display result of the original text is more in line with the requirements of users.
It should be noted that, in the embodiment of the present invention, the transparency of each floating layer may also be set according to needs. For example, when the second floating layer is displayed on the upper layer of the first floating layer, the first floating layer can be displayed in a non-transparent mode, and the second floating layer can be displayed in a semi-transparent mode, so that the first floating layer cannot be shielded when the second floating layer is displayed on the upper layer of the first floating layer.
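The per-layer transparency rule above (first floating layer opaque, second semi-transparent so the lower layer remains visible) can be expressed as a tiny helper. The alpha values are illustrative assumptions, not values from the patent:

```python
def layer_alpha(layer_index: int) -> float:
    # 1.0 = fully opaque; values below 1.0 let lower layers show through.
    # The first floating layer is non-transparent; the second is shown
    # semi-transparently so it does not fully occlude the first.
    return 1.0 if layer_index == 1 else 0.5
```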
According to the picture character translation method of this embodiment, after the picture translation request is obtained and a segment-type translation instruction sent by the user on the translation result display interface corresponding to the picture to be translated is acquired, the picture to be translated and the smearing function editing area can be displayed on the display interface. The current target segment to be translated is then determined according to the operation of the user in the smearing function editing area and on the picture to be translated, character recognition is performed on the target segment to determine the original text to be translated, and the original text is translated to generate the target text corresponding to the target language type. The target text is then displayed in the form of the first floating layer on the upper layer of the picture to be translated, without blocking the target segment. In this way, partial segments in the picture are translated according to the operation of the user, the translation mode is flexible, and the user's need to freely select the segments to be translated is met; moreover, displaying the generated target text in the first floating layer without blocking the target segment meets the user's need to proofread the picture translation result and improves the user experience.
Fig. 3 is a schematic structural diagram of a picture-to-text translation application according to an embodiment of the present invention.
As shown in fig. 3, the picture text translation application includes:
the first obtaining module 31 is configured to obtain a picture translation request, where the translation request includes a picture to be translated and a target language type;
the first display module 32 is configured to display the picture to be translated and the smearing function editing area on a display interface when it is determined that the current translation mode is segment-type translation;
the first determining module 33 is configured to determine a current target segment to be translated according to an operation of a user in the smearing function editing area and the picture to be translated;
the first recognition module 34 is configured to perform character recognition on the target segment, and determine an original text to be translated;
and the translation module 35 is configured to translate the original text to generate a target text corresponding to the target language type.
Specifically, the picture text translation application provided by the embodiment of the present invention can execute the picture text translation method provided by the embodiment of the present invention, and the application can be configured in any computer device with a display screen, such as a mobile phone or a computer, to flexibly translate a picture to be translated.
In one possible implementation form, the smearing function editing area includes at least one of the following function buttons: cancel, smear line edit, clear, and start translation.
It should be noted that the explanation of the embodiment of the image text translation method is also applicable to the image text translation application of the embodiment, and is not repeated here.
In the picture character translation application of the embodiment of the invention, when the picture translation request is acquired, if the current translation mode is determined to be the segment-type translation, the picture to be translated and the smearing function editing region can be displayed on the display interface, and then the current target segment to be translated is determined according to the operation of the user in the smearing function editing region and the picture to be translated, so that the character recognition and translation are carried out on the target segment. Therefore, partial segments in the picture are translated according to the operation of the user, the translation mode is flexible, the requirement of the user for freely selecting the segments to be translated is met, and the user experience is improved.
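The cooperation of modules 33-35 above can be sketched as a short pipeline. The function names and the stand-in callables below are illustrative assumptions; real OCR and translation back-ends would replace them:

```python
def translate_picture(picture, target_lang, get_segment, recognize, translate):
    # First determining module: the target segment selected by the user's
    # smearing operation on the picture to be translated.
    segment = get_segment(picture)
    # First recognition module: character recognition on the segment only.
    original_text = recognize(segment)
    # Translation module: produce the target text in the target language.
    return translate(original_text, target_lang)
```

Because each stage is passed in as a callable, the segment selector, recognizer, and translator can be swapped independently, mirroring the modular structure of fig. 3.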
Fig. 4 is a block diagram of a picture-to-text translation application according to another embodiment of the present invention.
As shown in fig. 4, on the basis of fig. 3, the picture text translation application may further include:
and the second display module 41 is configured to display the target text in the form of a first floating layer at a position on the upper layer of the picture to be translated, where the target segment is not occluded.
The first processing module 42 is configured to, according to an operation of a user, move the position of the floating layer, perform voice playing on the target text, perform voice playing on the target segment, edit the original text or the target text, and/or display a specific explanation of a preset word in the target text in the second floating layer.
In one possible implementation form, the application further includes:
the analysis module is used for analyzing the picture to be translated and determining the definition and the background color of the picture to be translated;
the second determining module is used for determining the display style of the original text corresponding to the target segment according to the definition and the background color of the picture to be translated;
and the third display module is used for displaying the original text in a determined display style at a position which is not shielded by the target text and is positioned at the lower layer of the first floating layer.
In another possible implementation form, the application further includes:
the second acquisition module is used for acquiring a smearing line editing instruction;
the fourth display module is used for displaying each attribute adjusting button in the third floating layer;
and the third determining module is used for determining the current style of the smearing line according to the operation of the user on each attribute adjusting button.
In another possible implementation form, the application further includes:
the fourth determining module is used for determining the target attribute to be adjusted according to the operation of the user;
and the fifth display module is used for displaying the adjustment mode of the target attribute and the display style corresponding to the current attribute of the smearing line in a fourth floating layer.
In another possible implementation form, the application further includes:
and the third acquisition module is used for acquiring the segment-type translation instruction sent by the user.
In another possible implementation form, the third obtaining module is specifically configured to:
acquiring a segment type translation instruction sent by a user on a picture display interface to be translated;
or,
and acquiring a segment type translation instruction sent by a user on a translation result display interface corresponding to the picture to be translated.
In another possible implementation form, the application further includes:
the second identification module is used for identifying the picture to be translated and determining the style of the picture to be translated and the style of the characters in the picture to be translated;
and the fifth determining module is used for determining the display style of the translation result according to the style of the picture to be translated and the style of characters in the picture to be translated.
In another possible implementation form, the application further includes:
the sixth display module is used for displaying the translation result in a determined display mode;
and the second processing module is used for switching the translation result display interface into a picture to be translated or storing or sharing the translation result display interface according to the operation of a user on the translation result display interface.
After acquiring the picture translation request, the picture character translation application provided by the embodiment of the present invention can, upon acquiring a segment-type translation instruction sent by the user on the translation result display interface corresponding to the picture to be translated, display the picture to be translated and the smearing function editing area on the display interface. It then determines the current target segment to be translated according to the operation of the user in the smearing function editing area and on the picture to be translated, performs character recognition on the target segment to determine the original text to be translated, and translates the original text to generate the target text corresponding to the target language type, so that the target text is displayed in the form of the first floating layer on the upper layer of the picture to be translated without blocking the target segment. In this way, partial fragments in the picture are translated according to the operation of the user, the translation mode is flexible, and the user's need to freely select the fragments to be translated is met; displaying the generated target text in the first floating layer without blocking the target fragment meets the user's need to proofread the picture translation result and improves the user experience.
Fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
As shown in fig. 5, the computer apparatus includes:
a memory 51, a processor 52 and a computer program stored on the memory 51 and executable on the processor 52.
The processor 52 implements the picture-text translation method provided in the above embodiments when executing the program.
The computer device can be a computer, a mobile phone, a wearable device and the like.
Further, the computer device further comprises:
a communication interface 53 for communication between the memory 51 and the processor 52.
A memory 51 for storing a computer program operable on the processor 52.
The memory 51 may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The processor 52 is configured to implement the picture and text translation method according to the foregoing embodiment when executing the program.
If the memory 51, the processor 52 and the communication interface 53 are implemented independently, the communication interface 53, the memory 51 and the processor 52 may be connected to each other through a bus and perform communication with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 5, but this does not mean only one bus or one type of bus.
Alternatively, in practical implementation, if the memory 51, the processor 52 and the communication interface 53 are integrated on one chip, the memory 51, the processor 52 and the communication interface 53 may complete communication with each other through an internal interface.
An embodiment of a fourth aspect of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the picture text translation method of the foregoing embodiments.
An embodiment of a fifth aspect of the present invention provides a computer program product, wherein when the instructions in the computer program product are executed by a processor, the method for translating picture and text as in the foregoing embodiments is performed.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
Claims (11)
1. A picture character translation method is characterized by comprising the following steps:
acquiring a picture translation request, wherein the translation request comprises a picture to be translated and a target language type;
performing full-text recognition and translation on the picture to be translated, and displaying a full-text translation result on a translation result display interface; wherein displaying the full-text translation result comprises: recognizing the picture to be translated, determining a style of the picture to be translated and a style of characters in the picture to be translated, determining a display style of the translation result according to the style of the picture to be translated and the style of the characters in the picture to be translated, and displaying the translation result in the determined display style;
acquiring a segment type translation instruction sent by a user on the translation result display interface;
if the current translation mode is determined to be segment-type translation, displaying the picture to be translated and a smearing function editing area on a display interface;
determining a current target segment to be translated according to the operation of a user in the smearing function editing area and the picture to be translated;
and generating a target text corresponding to the target segment and the target language type according to the full-text translation result.
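For illustration only, the steps of claim 1 can be sketched roughly as follows; `ocr` and `translate` are hypothetical stand-in functions (the patent does not name a specific recognition or translation engine), and bounding boxes stand in for the user's smeared target segments.

```python
from dataclasses import dataclass

@dataclass
class TranslationRequest:
    picture: bytes        # the picture to be translated
    target_language: str  # the target language type, e.g. "en"

def full_text_translate(request, ocr, translate):
    # Full-text recognition: the OCR callback returns (text, bounding_box) pairs.
    segments = ocr(request.picture)
    # Translate every recognised segment into the target language and keep
    # the result keyed by position, so segments can be looked up later.
    return {box: translate(text, request.target_language)
            for text, box in segments}

def segment_translate(full_result, smeared_boxes):
    # Segment-type translation: reuse the cached full-text result for the
    # boxes the user smeared over, instead of re-running recognition.
    return {box: full_result[box]
            for box in smeared_boxes if box in full_result}
```

The design point reflected here is that segment-type translation generates the target text *from the full-text translation result*, rather than re-running recognition and translation on the smeared region.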
2. The method of claim 1, wherein after generating the target text corresponding to the target language type, further comprising:
and displaying the target text in a first floating layer on the upper layer of the picture to be translated, at a position where the target segment is not blocked.
3. The method of claim 2, wherein after displaying the target text in a first floating layer, further comprising:
according to an operation of the user: moving the position of the first floating layer, playing the target text as voice, playing the target segment as voice, editing the original text or the target text, and/or displaying a specific explanation of a preset word in the target text in a second floating layer.
4. The method of claim 3, wherein prior to editing the original text, further comprising:
analyzing the picture to be translated, and determining the definition and the background color of the picture to be translated;
determining a display style of an original text corresponding to the target segment according to the definition and the background color of the picture to be translated;
and displaying the original text in a determined display style at a position which is not shielded by the target text and is positioned below the first floating layer.
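Claim 4's rule, choosing a display style from the picture's definition (clarity) and background color, could be sketched as below; the luminance formula is the standard ITU-R BT.601 weighting, but the threshold, font sizes, and clarity cutoff are illustrative assumptions, not values given in the patent.

```python
def pick_display_style(clarity, background_rgb):
    # Choose dark text on light backgrounds and light text on dark ones,
    # using perceived luminance of the background color (BT.601 weights).
    r, g, b = background_rgb
    luminance = 0.299 * r + 0.587 * g + 0.114 * b
    text_color = "black" if luminance > 128 else "white"
    # Use a larger font when the picture is blurry, so the original text
    # remains legible below the first floating layer.
    font_size = 18 if clarity >= 0.5 else 24
    return {"color": text_color, "font_size": font_size}
```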
5. The method of any of claims 1-4, wherein the smearing function editing area includes at least one of the following function buttons: cancel, smearing line editing, clear, and start translation.
6. The method of claim 5, further comprising:
acquiring a smearing line editing instruction;
displaying each attribute adjusting button in a third floating layer;
and determining the current style of the smearing line according to the operation of the user on each attribute adjusting button.
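A minimal sketch of the smearing-line attribute adjustment of claims 6-7; the attribute names, default values, and width range below are assumptions for illustration only, not specified by the patent.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SmearLineStyle:
    color: str = "#FFD400"  # assumed default highlight color
    width: int = 12         # stroke width in pixels
    opacity: float = 0.6    # so the smeared text stays visible underneath

def adjust_style(style, **changes):
    # Apply the user's adjustments from the attribute adjusting buttons,
    # returning a new style and rejecting out-of-range widths.
    new = replace(style, **changes)
    if not 1 <= new.width <= 64:
        raise ValueError("smearing line width out of range")
    return new
```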
7. The method of claim 6, wherein after displaying the property adjustment buttons in the third floating layer, further comprising:
determining a target attribute to be adjusted according to the operation of a user;
and displaying the adjustment mode of the target attribute and the display style corresponding to the current attribute of the smearing line in a fourth floating layer.
8. The method of claim 1, wherein after displaying the translation result in the determined display style, further comprising:
and according to an operation of the user on the translation result display interface: switching the translation result display interface to the picture to be translated, or saving or sharing the translation result display interface.
9. A picture character translation application, comprising:
the first acquisition module is used for acquiring a picture translation request, wherein the translation request comprises a picture to be translated and a target language type;
the third obtaining module is used for performing full-text recognition and translation on the picture to be translated, and for obtaining a segment-type translation instruction sent by a user on a translation result display interface after a full-text translation result is displayed on the translation result display interface; wherein displaying the full-text translation result on the translation result display interface comprises: recognizing the picture to be translated, determining a style of the picture to be translated and a style of characters in the picture to be translated, determining a display style of the translation result according to the style of the picture to be translated and the style of the characters in the picture to be translated, and displaying the translation result in the determined display style;
the first display module is used for displaying the picture to be translated and smearing the function editing area on a display interface when the current translation mode is determined to be segment-type translation;
the first determining module is used for determining a current target segment to be translated according to the operation of a user in the smearing function editing area and the picture to be translated;
and the translation module is used for generating a target text corresponding to the target segment and the target language type according to the full-text translation result.
10. A computer device, comprising:
a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the picture character translation method according to any one of claims 1-8 when executing the program.
11. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the picture character translation method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711449311.6A CN108182184B (en) | 2017-12-27 | 2017-12-27 | Picture character translation method, application and computer equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108182184A CN108182184A (en) | 2018-06-19 |
CN108182184B true CN108182184B (en) | 2021-11-02 |
Family
ID=62547829
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711449311.6A Active CN108182184B (en) | 2017-12-27 | 2017-12-27 | Picture character translation method, application and computer equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108182184B (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108536686B (en) * | 2018-04-11 | 2022-05-24 | 百度在线网络技术(北京)有限公司 | Picture translation method, device, terminal and storage medium |
CN109388810A (en) * | 2018-08-31 | 2019-02-26 | 北京搜狗科技发展有限公司 | A kind of data processing method, device and the device for data processing |
CN109462689B (en) * | 2018-09-30 | 2022-01-04 | 深圳壹账通智能科技有限公司 | Voice broadcasting method and device, electronic device and computer readable storage medium |
CN109657619A (en) * | 2018-12-20 | 2019-04-19 | 江苏省舜禹信息技术有限公司 | A kind of attached drawing interpretation method, device and storage medium |
CN110502300A (en) * | 2019-08-14 | 2019-11-26 | 上海掌门科技有限公司 | Speech playing method, equipment and computer-readable medium |
CN110674814A (en) * | 2019-09-25 | 2020-01-10 | 深圳传音控股股份有限公司 | Picture identification and translation method, terminal and medium |
CN111126301B (en) * | 2019-12-26 | 2022-01-11 | 腾讯科技(深圳)有限公司 | Image processing method and device, computer equipment and storage medium |
CN111310482A (en) * | 2020-01-20 | 2020-06-19 | 北京无限光场科技有限公司 | Real-time translation method, device, terminal and storage medium |
CN111368562B (en) * | 2020-02-28 | 2024-02-27 | 北京字节跳动网络技术有限公司 | Method and device for translating characters in picture, electronic equipment and storage medium |
CN111553172A (en) * | 2020-04-02 | 2020-08-18 | 支付宝实验室(新加坡)有限公司 | Translation document display method, device, system and storage medium |
CN112269467A (en) * | 2020-08-04 | 2021-01-26 | 深圳市弘祥光电科技有限公司 | Translation method based on AR and AR equipment |
CN112989846B (en) * | 2021-03-10 | 2023-06-16 | 深圳创维-Rgb电子有限公司 | Text translation method, text translation device, text translation apparatus, and storage medium |
CN114237468B (en) * | 2021-12-08 | 2024-01-16 | 文思海辉智科科技有限公司 | Text and picture translation method and device, electronic equipment and readable storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101620595A (en) * | 2009-08-11 | 2010-01-06 | 上海合合信息科技发展有限公司 | Method and system for translating text of electronic equipment |
CN102737238A (en) * | 2011-04-01 | 2012-10-17 | 洛阳磊石软件科技有限公司 | Gesture motion-based character recognition system and character recognition method, and application thereof |
US8965129B2 (en) * | 2013-03-15 | 2015-02-24 | Translate Abroad, Inc. | Systems and methods for determining and displaying multi-line foreign language translations in real time on mobile devices |
CN105573969A (en) * | 2006-10-02 | 2016-05-11 | 谷歌公司 | Displaying original text in a user interface with translated text |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030200078A1 (en) * | 2002-04-19 | 2003-10-23 | Huitao Luo | System and method for language translation of character strings occurring in captured image data |
- 2017-12-27: CN application CN201711449311.6A filed; granted as patent CN108182184B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN108182184A (en) | 2018-06-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108182184B (en) | Picture character translation method, application and computer equipment | |
CN108182183B (en) | Picture character translation method, application and computer equipment | |
US10599921B2 (en) | Visual language interpretation system and user interface | |
US11119635B2 (en) | Fanning user interface controls for a media editing application | |
US10516830B2 (en) | Guided image composition on mobile devices | |
CN107392933B (en) | Image segmentation method and mobile terminal | |
US20140333585A1 (en) | Electronic apparatus, information processing method, and storage medium | |
CN104221358A (en) | Unified slider control for modifying multiple image properties | |
US9990740B2 (en) | Camera-based brush creation | |
US11734805B2 (en) | Utilizing context-aware sensors and multi-dimensional gesture inputs to efficiently generate enhanced digital images | |
EP3751448B1 (en) | Text detecting method, reading assisting device and medium | |
CN105045504A (en) | Image content extraction method and apparatus | |
CN109062490A (en) | Take down notes delet method, electronic equipment and computer storage medium | |
US10552015B2 (en) | Setting multiple properties of an art tool in artwork application based on a user interaction | |
CN109102865A (en) | A kind of image processing method and device, equipment, storage medium | |
CN109218522A (en) | Function area processing method and device in application, electronic equipment and storage medium | |
WO2018049603A1 (en) | Control method, control apparatus and electronic apparatus | |
CN111724361A (en) | Method and device for displaying focus in real time, electronic equipment and storage medium | |
KR20220027081A (en) | Text detection method, reading support device and medium | |
Evening | Adobe Photoshop CS5 for Photographers: a professional image editor's guide to the creative use of Photoshop for the Macintosh and PC | |
WO2013114817A1 (en) | Image processing apparatus, image processing system, image processing method, and program | |
US8964128B1 (en) | Image data processing method and apparatus | |
US20150212721A1 (en) | Information processing apparatus capable of being operated by multi-touch | |
CN110737417B (en) | Demonstration equipment and display control method and device of marking line of demonstration equipment | |
JP5741660B2 (en) | Image processing apparatus, image processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |