CN110188365B - Word-taking translation method and device
- Publication number
- CN110188365B (application CN201910450604.9A)
- Authority
- CN
- China
- Prior art keywords
- translation
- picture data
- characters
- word
- language
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Machine Translation (AREA)
- Document Processing Apparatus (AREA)
- Character Input (AREA)
Abstract
The invention discloses a word-taking translation method and device. The method comprises the following steps: reading displayed picture data; recognizing characters in the picture data through optical character recognition; translating the characters into a preset language; and displaying the translation result in a floating manner above the picture data. By reading picture data, recognizing characters in it, translating them into a preset language, and floating the translation result above the picture data, the method can perform word-taking translation on picture data from various sources, complete the whole flow from picture acquisition to translation directly, omit manual switching and calling among tools, and simplify the translation operation.
Description
Technical Field
Embodiments of the invention relate to the field of image processing, and in particular to a word-taking translation method and device.
Background
At present, many mobile terminals can invoke a camera to capture the image of an area to be translated, recognize the characters in it, and translate the recognized characters promptly.
However, existing schemes require the camera to be invoked first for scanning and recognition. The camera can only capture content outside the plane of the device, so picture content already stored on the phone cannot be captured and recognized, and the need to translate text contained in local pictures is not met.
Disclosure of Invention
The invention provides a word-taking translation method and device. By reading displayed picture data, recognizing characters in it, translating the characters into a preset language, and then floating the translation result above the picture data, word-taking translation can be performed on picture data from various sources, the whole flow from picture acquisition to translation is completed directly, and the translation operation is simplified.
In order to realize this design, the invention adopts the following technical solutions:
In one aspect, a word-taking translation method is provided, the method comprising:
reading displayed picture data;
recognizing characters in the picture data through optical character recognition;
translating the characters into a preset language; and
displaying the translation result in a floating manner above the picture data.
In another aspect, a word-taking translation apparatus is provided, the apparatus comprising:
a picture data reading unit, configured to read the displayed picture data;
a character recognition unit, configured to recognize characters in the picture data through optical character recognition;
a character translation unit, configured to translate the characters into a preset language; and
a translation display unit, configured to display the translation result in a floating manner above the picture data.
The beneficial effects of the invention are as follows: by reading the displayed picture data, recognizing characters in it, translating the characters into a preset language, and then floating the translation result above the picture data, word-taking translation can be performed on picture data from various sources, the whole flow from picture acquisition to translation is completed directly, manual switching and calling among tools is omitted, and the translation operation is simplified.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings used in describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a word-taking translation method according to a first embodiment of the present invention.
Fig. 2 is a flowchart of a word-taking translation method according to a second embodiment of the present invention.
Fig. 3 is a flowchart of a word-taking translation method according to a third embodiment of the present invention.
Fig. 4 is a flowchart of a word-taking translation method according to a fourth embodiment of the present invention.
Fig. 5 is a structural block diagram of a word-taking translation apparatus according to a first embodiment of the present invention.
Fig. 6 is a structural block diagram of a word-taking translation apparatus according to a second embodiment of the present invention.
Fig. 7 is a structural block diagram of a word-taking translation apparatus according to a third embodiment of the present invention.
Fig. 8 is a structural block diagram of a word-taking translation apparatus according to a fourth embodiment of the present invention.
Fig. 9 is a block diagram of an apparatus according to an embodiment of the present invention.
Detailed Description
To make the technical problems solved, the technical solutions adopted, and the technical effects achieved by the present invention clearer, the technical solutions of the embodiments are described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
Please refer to fig. 1, a flowchart of a word-taking translation method according to a first embodiment of the present invention. The word-taking translation method of this embodiment applies mainly to intelligent mobile terminals such as tablet computers and smartphones.
As shown in fig. 1, the method for word-taking translation includes:
step S101: and reading the display picture data.
The translation of the text data is simple for the current translation tool, but the translation object does not exist in the form of the text data in many times, and even is not stored in a data form which can be identified by a computer, and the scheme is used for processing the existing picture data or the actually existing data capable of converting the picture data.
Step S102: recognizing characters in the picture data through optical character recognition.
Optical Character Recognition (OCR) refers to the process in which an electronic device (e.g., a scanner or digital camera) examines characters printed on paper, determines their shapes by detecting patterns of dark and light, and then translates the shapes into computer-readable characters using a character recognition method; in other words, the process of scanning text material and then analyzing and processing the image file to obtain character and layout information. A translation tool cannot obtain the characters to be translated directly from picture data, so in this step the characters in the picture data are recognized through optical character recognition, converting the picture data into editable, machine-readable text data that the translation tool can subsequently translate.
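A minimal sketch of this step, assuming the open-source Tesseract engine and its pytesseract binding (the patent does not name a specific OCR engine):

```python
from PIL import Image
import pytesseract

def recognize_characters(picture_path: str, lang: str = "eng") -> str:
    """Recognize the characters in picture data through OCR (step S102)."""
    picture = Image.open(picture_path)
    # Grayscale emphasizes the dark/light patterns that shape detection relies on.
    return pytesseract.image_to_string(picture.convert("L"), lang=lang)
```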
Step S103: translating the characters into a preset language.
Because there are many languages and the operating area on the picture data is limited, the translation process only needs to translate the characters into the preset language rather than obtain translation results in every language. On the one hand, translation results in all languages are largely unnecessary for the user; on the other hand, they are hard to present on the picture data and inconvenient to browse.
Step S104: displaying the translation result in a floating manner above the picture data.
Generally, the screen of a mobile terminal displays a single task, so the translation result is floated above the picture data where the user can view it conveniently.
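A desktop sketch of the floating display, using tkinter as a stand-in for the mobile terminal's overlay mechanism (the patent does not prescribe a UI toolkit):

```python
import tkinter as tk

def float_translation_result(result: str, x: int = 100, y: int = 100) -> None:
    """Float the translation result above the displayed picture data (step S104)."""
    overlay = tk.Tk()
    overlay.overrideredirect(True)        # borderless window, like a floating layer
    overlay.attributes("-topmost", True)  # keep it above the picture display
    overlay.geometry(f"+{x}+{y}")         # position it over the picture data
    tk.Label(overlay, text=result, bg="lightyellow", padx=8, pady=4).pack()
    overlay.after(5000, overlay.destroy)  # dismiss after five seconds
    overlay.mainloop()
```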
In summary, in this embodiment, picture data is read, characters are recognized in it, the characters are translated into a preset language, and the translation result is floated above the picture data. Word-taking translation can thus be performed on picture data from various sources, the whole flow from picture acquisition to translation is completed directly, manual switching among tools is omitted, and the translation operation is simplified.
Please refer to fig. 2, a flowchart of a word-taking translation method according to a second embodiment of the present invention. The main difference from the first method embodiment is that the specific process of reading the picture data and the specific process of online translation are described further.
As shown in fig. 2, the method for word-taking translation includes:
step S201: and opening the camera, and caching the image data acquired by the camera into the memory.
In the embodiment of the invention, after the camera is opened, the camera acquires the picture data and caches the picture data in the memory, that is, the embodiment of the invention does not generate the picture data in the local storage like the photographing and storing operation, and only needs to directly cache the picture data in the memory temporarily.
Step S202: reading and displaying the picture data in memory.
The picture data cached in the previous step is presented. Intuitively, steps S201 and S202 appear to happen at the same time: once the camera is opened, the picture data it collects is displayed directly. In the machine's data processing, however, this is completed in the two steps S201 and S202. The whole process resembles aiming a code-scanning tool at a two-dimensional code or bar code, except that the camera is aimed at characters, such as a web page or journal page shown on the screen of an electronic product, and the subsequent step differs from decoding a scanned code.
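A sketch of steps S201 and S202, assuming OpenCV for camera access; the frame stays in memory and no picture file is written to local storage:

```python
import cv2

def capture_picture_data(camera_index: int = 0):
    """Open the camera and cache one frame in memory (steps S201 and S202)."""
    camera = cv2.VideoCapture(camera_index)
    try:
        ok, frame = camera.read()          # the frame is an in-memory array
        if not ok:
            raise RuntimeError("camera returned no picture data")
        cv2.imshow("picture data", frame)  # read and display from memory
        cv2.waitKey(0)                     # nothing is saved to local storage
        return frame
    finally:
        camera.release()
        cv2.destroyAllWindows()
```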
Step S203: displaying, in a floating manner on the picture data, a rectangular frame resized by touch; after adjustment is finished, setting the part inside the rectangular frame as the range to be recognized in the picture data.
Generally speaking, word-taking translation is highly targeted, mainly at words or phrases, so the range of the picture data to be recognized needs to be adjustable. Selecting a range with a rectangular frame is common in the prior art; for example, screenshots in various image processing tools use a rectangular frame.
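A sketch of restricting recognition to the adjusted frame; the box coordinates stand in for the touch-resized rectangle, and PIL's crop is an illustrative choice rather than the patent's mechanism:

```python
from PIL import Image

def set_recognition_range(picture: Image.Image, left: int, top: int,
                          right: int, bottom: int) -> Image.Image:
    """Keep only the part of the picture data inside the rectangular frame."""
    # The crop box mirrors the frame the user resized by touch (step S203).
    return picture.crop((left, top, right, bottom))
```

The earlier OCR sketch can then be applied to the returned crop instead of the whole picture.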
Step S204: recognizing, through optical character recognition, the characters within the range of the picture data to be recognized.
Optical character recognition is used in many applications, each with its own underlying processing method, and much effort goes into improving those methods to raise recognition accuracy. The specific recognition techniques are not the focus of this scheme and are not repeated here.
Step S205: detecting the preset language and identifying the language of the characters.
Languages are numerous, but for a single word unit the translation process generally runs from one language to one other, or at most to a few others; translating from one language into all other languages essentially never arises. The language of the characters can be confirmed directly once they are recognized; the target language must be preset, and only the preset languages need to be detected, generally no more than three.
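A sketch of step S205; the Unicode-range heuristic is an assumption, since the patent does not specify a detection method:

```python
PRESET_LANGUAGES = ("en", "zh")  # user-preset target languages, rarely more than three

def identify_language(characters: str) -> str:
    """Identify the language of the recognized characters (step S205)."""
    # Heuristic stand-in: classify by dominant Unicode script range.
    if any("\u4e00" <= ch <= "\u9fff" for ch in characters):
        return "zh"
    if any("\u3040" <= ch <= "\u30ff" for ch in characters):
        return "ja"
    return "en"  # default for Latin-script characters
```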
Step S206: entering an online translation entry in the background, and inputting the characters and the preset language.
In step S206, the online translation process need not be displayed: the online translation entry is reached directly from the background, and the recognized characters and the preset language are fed into it. Online translation portals already have powerful translation capabilities, with rich language coverage and translation processing methods, and return a translation result for the characters quickly.
Step S207: reading, from the remote server for online translation, the translation result of the characters in the preset language.
The translation result is read directly from the remote server for online translation; neither the server's access interface nor its result page needs to be displayed on the local terminal. The target content is obtained by reading the translation result directly.
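A sketch of steps S206 and S207; the endpoint URL, parameter names, and response field are hypothetical, since each online translation portal defines its own API:

```python
import requests

def translate_online(characters: str, preset_language: str) -> str:
    """Enter an online translation entry in the background and read the result."""
    response = requests.post(
        "https://translate.example.com/api/v1/translate",  # hypothetical endpoint
        json={"text": characters, "target": preset_language},
        timeout=10,
    )
    response.raise_for_status()
    # The result is read directly; no server interface is shown on the terminal.
    return response.json()["translation"]
```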
Step S208: displaying the translation result in a floating manner above the picture data.
In the picture data, regions outside the range to be recognized can be used to display the translation result; in general it is preferable to display it below the range, so that it contrasts clearly with the displayed picture data.
In summary, this embodiment further describes the specific process of reading picture data and the specific process of online translation. Overall, by reading picture data, recognizing characters in it, translating them into a preset language, and floating the translation result above the picture data, word-taking translation can be performed on picture data from various sources, the whole flow from picture acquisition to translation is completed directly, manual switching and calling among tools is omitted, and the translation operation is simplified. Adjusting the recognition range also makes recognition and translation more targeted, and online translation makes the translation result more reasonable.
Please refer to fig. 3, a flowchart of a word-taking translation method according to a third embodiment of the present invention. The main differences from the second method embodiment are that another specific process of reading picture data, the process of adjusting the recognition range, and the process of translating with a local translation tool are described further.
As shown in fig. 3, the method for word-taking translation includes:
step S301: and reading and displaying locally stored picture data.
The embodiment is suitable for word-taking translation on the basis of the existing stored picture data, and when the picture data recorded before being browsed (such as picture data obtained by scanning, picture data obtained by photographing and picture data stored in a network) is browsed, the locally stored picture data can be read, and word-taking translation is realized on the basis of the local picture data.
Step S302: displaying, in a floating manner on the picture data, a rectangular frame resized by touch; after adjustment is finished, setting the part inside the rectangular frame as the range to be recognized in the picture data.
Generally speaking, word-taking translation is highly targeted, mainly at words or phrases, so a rectangular frame is needed to adjust the range within the displayed picture data, placing the content to be recognized inside it and reducing redundant recognition and translation.
Step S303: recognizing, through optical character recognition, the characters within the range to be recognized in the picture data.
Step S304: detecting the preset language and identifying the language of the characters.
Step S305: confirming a local translation tool from the language of the characters to the preset language.
Generally, a local translation tool only implements translation between two languages, and translation among multiple languages requires multiple local translation tools; local translation tools therefore usually cover only major languages, such as the six working languages of the United Nations. A local translation tool works offline, which makes it more adaptable.
Step S306: calling the local translation tool in the background and obtaining its translation result.
The local translation tool is called directly in the background to obtain the translation result.
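A sketch of steps S305 and S306; the tool registry and its lookup are assumptions, and in practice each local translation tool would wrap an offline dictionary or engine:

```python
from typing import Callable, Dict, Optional, Tuple

# Hypothetical registry mapping (source, target) language pairs to local tools.
LOCAL_TOOLS: Dict[Tuple[str, str], Callable[[str], str]] = {
    ("en", "zh"): lambda characters: f"<en-to-zh translation of {characters!r}>",
}

def translate_locally(characters: str, source: str, target: str) -> Optional[str]:
    """Confirm a local tool for the language pair and call it in the background."""
    tool = LOCAL_TOOLS.get((source, target))
    if tool is None:
        return None          # no local tool from this language to the preset one
    return tool(characters)  # step S306: obtain the local tool's result
```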
Step S307: displaying the translation result in a floating manner above the picture data.
The translation result contains at least a paraphrase, and may further include phonetic symbols, pronunciation playback, usage examples, and the like.
In summary, this embodiment further illustrates another specific process of reading picture data, the process of adjusting the recognition range, and the process of translating with a local translation tool. Overall, by reading picture data, recognizing characters in it, translating them into the preset language, and floating the translation result above the picture data, word-taking translation can be performed on picture data from various sources, the whole flow from picture acquisition to translation is completed directly, manual switching and calling among tools is omitted, and the translation operation is simplified. In addition, reading picture data locally and translating with a local translation tool further broadens the application range of the technical solution.
Please refer to fig. 4, a flowchart of a word-taking translation method according to a fourth embodiment of the present invention. The main difference from the third method embodiment is that different ways of translating are illustrated further. As shown in fig. 4, the word-taking translation method includes:
step S401: and reading the display picture data.
The picture data is bmp picture data, JPEG picture data, GIF picture data, TIFF picture data or PNG picture data.
For intelligent terminal equipment, general picture data can be easily read, and only for the subsequent step of character recognition, the optical character recognition technology determines the shape of the intelligent terminal equipment by detecting dark and light modes, and then translates the shape into computer characters by a character recognition method, so that black and white photos are more advantageous for character recognition.
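A sketch of reading these formats and preparing them for recognition; PIL opens all five formats, and the binarization threshold of 128 is an assumed midpoint rather than a value from the patent:

```python
from PIL import Image

def load_for_recognition(picture_path: str) -> Image.Image:
    """Read BMP/JPEG/GIF/TIFF/PNG picture data and sharpen dark/light contrast."""
    picture = Image.open(picture_path)  # the format is inferred from the file
    grayscale = picture.convert("L")
    # Black and white favors character recognition, per the note above.
    return grayscale.point(lambda level: 255 if level > 128 else 0)
```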
Step S402: displaying, in a floating manner on the picture data, a rectangular frame resized by touch; after adjustment is finished, setting the part inside the rectangular frame as the range to be recognized in the picture data.
Step S403: recognizing, through optical character recognition, the characters within the range of the picture data to be recognized.
Step S404: detecting the preset language and identifying the language of the characters.
Step S405: confirming a local translation tool from the language of the characters to the preset language; calling the local translation tool in the background and obtaining its translation result.
Step S406: confirming that no local translation tool exists from the language of the characters to the preset language; entering an online translation entry in the background and inputting the characters and the preset language; and reading, from the remote server for online translation, the translation result of the characters in the preset language.
In step S405 and step S406, the translating includes:
carrying out grammar detection on the characters;
when no grammar error is found in the grammar detection of the characters, generating a translation result of the whole sentence meaning of the characters;
and when grammar errors are found in the grammar detection of the characters, generating a translation result of the word meaning of each word unit of the characters.
Grammar detection is not performed on all recognized characters as a whole, but on each sentence unit. Because the adjusted recognition range is basically rectangular, the characters inside it may contain only part of the first and last sentences of the corresponding picture. The period is therefore used as the criterion for a sentence: if the content between two periods, the content before the first period, or the content after the last period conforms to the grammatical rules, it is regarded as a whole sentence.
When grammar detection passes, a translation result of the whole-sentence meaning is produced. Especially in online translation, to ensure the accuracy of whole-sentence meaning, many translations are now produced by sentence matching rather than by translating each word and stitching all the pieces together in order. In sentence matching, the online translation tool searches for sentences similar or identical to the given one and generates the result from the most widely used translation, so the result is closer to the linguistic habits of the preset language. If grammar detection finds an error, a word-sense translation result is generated for each word unit. The specific text translation process can be implemented in many ways, which are not repeated here.
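A sketch of the grammar-gated dispatch; the grammar checker is a stub, since the patent only requires that period-delimited units conform to grammatical rules:

```python
from typing import Callable

def passes_grammar(sentence: str) -> bool:
    """Stub grammar check; a real checker would apply full grammatical rules."""
    return len(sentence.split()) > 1  # placeholder criterion only

def translate_with_grammar_gate(characters: str,
                                translate_sentence: Callable[[str], str],
                                translate_word: Callable[[str], str]) -> str:
    """Whole-sentence translation when grammar passes, else per-word translation."""
    # The period is the criterion for a sentence: content between two periods,
    # before the first, or after the last is a candidate whole sentence.
    sentences = [s.strip() for s in characters.split(".") if s.strip()]
    if sentences and all(passes_grammar(s) for s in sentences):
        return " ".join(translate_sentence(s) for s in sentences)
    return " ".join(translate_word(w) for w in characters.split())
```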
Step S407: displaying the translated result in a floating manner above the picture data.
Unlike the previous two embodiments, the translation in this embodiment relies neither on local translation tools alone nor on online translation alone, but on a combination of the two: online translation is selected only when no local translation tool exists for the language pair.
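A sketch of this combined dispatch, built on the translate_locally and translate_online sketches given earlier:

```python
def translate(characters: str, source: str, preset_language: str) -> str:
    """Prefer a local translation tool; fall back to online translation."""
    result = translate_locally(characters, source, preset_language)
    if result is not None:
        return result  # step S405: a local tool exists for the language pair
    # Step S406: no local tool, so enter the online translation entry instead.
    return translate_online(characters, preset_language)
```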
In fact, since its operation flow is somewhat similar to scanning a two-dimensional code or bar code, the scheme may be integrated into two-dimensional-code or bar-code scanning software and share that software's code-scanning interface.
In summary, this embodiment further illustrates different ways of translating. Overall, by reading picture data, recognizing characters in it, translating them into the preset language, and floating the translation result above the picture data, word-taking translation can be performed on picture data from various sources, the whole flow from picture acquisition to translation is completed directly, manual switching and calling among tools is omitted, and the translation operation is simplified. Selecting between the ways of translating also makes the translation as comprehensive and adaptable as possible.
The following are embodiments of the word-taking translation apparatus of the present invention. The apparatus embodiments and the method embodiments belong to the same concept; for details not described in the apparatus embodiments, refer to the method embodiments.
Please refer to fig. 5, a structural block diagram of a word-taking translation apparatus according to a first embodiment of the present invention. The word-taking translation apparatus includes:
a picture data reading unit 510, configured to read the displayed picture data;
a character recognition unit 520, configured to recognize characters in the picture data through optical character recognition;
a character translation unit 530, configured to translate the characters into a preset language; and
a translation display unit 540, configured to display the translation result in a floating manner above the picture data.
In summary, through the cooperation of the above units, picture data is read, characters are recognized in it, the characters are translated into the preset language, and the translation result is floated above the picture data. Word-taking translation can thus be performed on picture data from various sources, the whole flow from picture acquisition to translation is completed directly, manual switching and calling among tools is omitted, and the translation operation is simplified.
Please refer to fig. 6, a structural block diagram of a word-taking translation apparatus according to a second embodiment of the present invention. The main difference from the first apparatus embodiment is that the specific working modules of the picture data reading unit 510 and of the character translation unit 530 are described further. The word-taking translation apparatus includes:
a picture data reading unit 510, configured to read the displayed picture data;
a character recognition unit 520, configured to recognize characters in the picture data through optical character recognition;
a character translation unit 530, configured to translate the characters into a preset language; and
a translation display unit 540, configured to display the translation result in a floating manner above the picture data.
The picture data reading unit 510 includes:
a picture data caching module 511, configured to open the camera and cache the picture data acquired by the camera in memory; and
a memory reading module 512, configured to read and display the picture data in memory.
The character translation unit 530 includes:
a language confirmation module 531, configured to detect the preset language and identify the language of the characters;
an online translation module 532, configured to enter an online translation entry in the background and input the characters and the preset language; and
a result reading module 533, configured to read, from the remote server for online translation, the translation result of the characters in the preset language.
In summary, this embodiment further illustrates the specific working modules of the picture data reading unit 510 and of the character translation unit 530. Overall, by reading picture data, recognizing characters in it, translating them into the preset language, and floating the translation result above the picture data, word-taking translation can be performed on picture data from various sources, the whole flow from picture acquisition to translation is completed directly, manual switching and calling among tools is omitted, and the translation operation is simplified. Adjusting the recognition range also makes recognition and translation more targeted, and online translation makes the translation result more reasonable.
Please refer to fig. 7, a structural block diagram of a word-taking translation apparatus according to a third embodiment of the present invention. This embodiment further illustrates the specific work content of the picture data reading unit 510 and the specific working modules of the character translation unit 530. The word-taking translation apparatus includes:
a picture data reading unit 510, configured to read the displayed picture data;
a character recognition unit 520, configured to recognize characters in the picture data through optical character recognition;
a character translation unit 530, configured to translate the characters into a preset language; and
a translation display unit 540, configured to display the translation result in a floating manner above the picture data.
The picture data reading unit 510 is specifically configured to read and display locally stored picture data.
The apparatus further includes:
a range adjustment unit 550, configured to display, in a floating manner on the picture data, a rectangular frame resized by touch, and, after adjustment is finished, to set the part inside the rectangular frame as the range to be recognized in the picture data.
The character recognition unit 520 is configured to recognize, through optical character recognition, the characters within the range to be recognized in the picture data.
The character translation unit 530 includes:
a language confirmation module 531, configured to detect the preset language and identify the language of the characters;
a local translation confirmation module 534, configured to confirm a local translation tool from the language of the characters to the preset language; and
a result obtaining module 535, configured to call the local translation tool in the background and obtain its translation result.
In summary, this embodiment further illustrates the specific work content of the picture data reading unit 510 and the specific working modules of the character translation unit 530. Overall, by reading picture data, recognizing characters in it, translating them into the preset language, and floating the translation result above the picture data, word-taking translation can be performed on picture data from various sources, the whole flow from picture acquisition to translation is completed directly, manual switching and calling among tools is omitted, and the translation operation is simplified. Reading picture data locally and translating with a local translation tool further broadens the application range of the technical solution.
Please refer to fig. 8, a structural block diagram of a word-taking translation apparatus according to a fourth embodiment of the present invention. This embodiment further illustrates the more comprehensive working modules of the character translation unit 530. As shown in fig. 8, the word-taking translation apparatus includes:
a picture data reading unit 510, configured to read the displayed picture data;
The picture data may be BMP, JPEG, GIF, TIFF, or PNG picture data.
a character recognition unit 520, configured to recognize characters in the picture data through optical character recognition;
a character translation unit 530, configured to translate the characters into a preset language; and
a translation display unit 540, configured to display the translation result in a floating manner above the picture data.
The apparatus further includes:
a range adjustment unit 550, configured to display, in a floating manner on the picture data, a rectangular frame resized by touch, and, after adjustment is finished, to set the part inside the rectangular frame as the range to be recognized in the picture data; and
a character recognition unit 520, configured to recognize, through optical character recognition, the characters within the range to be recognized in the picture data.
The character translation unit 530 includes:
a language confirmation module 531, configured to detect the preset language and identify the language of the characters;
a first translation module 536, configured to confirm that a local translation tool exists from the language of the characters to the preset language, call the local translation tool in the background, and obtain its translation result; and
a second translation module 537, configured to confirm that no local translation tool exists from the language of the characters to the preset language, enter an online translation entry in the background, input the characters and the preset language, and read, from the remote server for online translation, the translation result of the characters in the preset language.
The translation includes:
performing grammar detection on the characters;
when no grammar error is found in the grammar detection of the characters, generating a translation result of the whole-sentence meaning of the characters; and
when a grammar error is found in the grammar detection of the characters, generating a translation result of the word meaning of each word unit of the characters.
In summary, this embodiment further illustrates the more comprehensive working modules of the character translation unit 530. Overall, by reading picture data, recognizing characters in it, translating them into the preset language, and floating the translation result above the picture data, word-taking translation can be performed on picture data from various sources, the whole flow from picture acquisition to translation is completed directly, manual switching and calling among tools is omitted, and the translation operation is simplified. Selecting between the ways of translating also makes the translation as comprehensive and adaptable as possible.
Please refer to fig. 9, a structural block diagram of a device according to an embodiment of the present invention; the device may be used to implement the word-taking translation method and apparatus of the foregoing embodiments. Specifically:
The device may include a processor 110, a display screen 120, a storage device 130, and a camera 140. Those skilled in the art will understand that the structure in fig. 9 is not limiting: for example, the power supply, a necessary component by common knowledge, is not illustrated. The device may include more or fewer components than shown, combine certain components, or arrange the components differently. Wherein:
The processor 110 is configured to cache the picture data and to handle character recognition in the picture data and the obtaining of translation results.
The display screen 120 is configured to present the picture data and receive operations on the picture data.
For example, when setting the recognition range: if the display screen 120 is an ordinary display screen, the range may be set through an external device; if the display screen 120 is a touch display screen, the range is set by touch operation. Strictly speaking, touch and display are realized by two different structures, but they are not distinguished in detail here; the display screen is treated directly as a functional complex of touch and display.
The storage device 130 is configured to provide pre-stored picture data and to store local translation tools.
The camera 140 is configured to provide picture data acquired on the spot.
Those skilled in the art will appreciate that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium such as a memory, magnetic disk, or optical disk.
The above description covers only preferred embodiments of the present invention and, for those skilled in the art, should not be interpreted as limiting the present invention.
Claims (11)
1. A word-taking translation method, comprising:
displaying, in a code scanning interface, a mode of acquiring picture data in real time and a mode of reading locally stored picture data;
when locally stored picture data is read, displaying the read picture data in the code scanning interface;
when picture data is acquired in real time, displaying the picture data acquired in real time in the code scanning interface;
displaying a resizable rectangular frame in a floating manner on the picture data;
recognizing the characters in the rectangular frame in the picture data;
translating the characters into a preset language, wherein the translating comprises: performing grammar detection on the characters in the rectangular frame, generating a translation result of the whole-sentence meaning of the characters when no grammar error is found in the grammar detection of the characters, and generating a translation result of the word meaning of each word unit of the characters when a grammar error is found in the grammar detection of the characters; the grammar detection of the characters in the rectangular frame comprising: taking the period as the criterion for a sentence, and regarding the content between two periods, the content before the first period, or the content after the last period as a whole sentence if it conforms to the grammatical rules;
and displaying, in a floating manner in the code scanning interface, the paraphrase corresponding to the characters, and displaying in a floating manner at least one of a phonetic symbol, a pronunciation playback interface, and an application example.
2. The method for word-taking translation according to claim 1, wherein the collecting picture data in real time comprises:
opening a camera, and caching picture data acquired by the camera into a memory;
and reading and displaying the picture data in the memory.
3. The word-taking translation method according to claim 1, wherein the recognizing the characters in the rectangular frame in the picture data comprises:
recognizing the characters in the rectangular frame in the picture data through optical character recognition.
4. The word-taking translation method according to claim 1, wherein displaying the paraphrase corresponding to the characters in a floating manner in the code scanning interface comprises:
and displaying the paraphrase in a floating mode outside the rectangular frame in the picture data displayed by the code scanning interface.
5. The method for word-taking translation according to claim 1, wherein said translating said word into a predetermined language comprises:
recognizing the language of the characters;
entering an online translation entry at the background, and inputting the characters and the preset language;
and reading a translation result of the characters corresponding to the preset language from a remote server for online translation.
6. The method for word-taking translation according to claim 1, wherein said translating said word into a predetermined language comprises:
recognizing the language of the characters;
acquiring a local translation tool from the language of the character to a preset language;
and calling the local translation tool to translate the characters, and acquiring a translation result of the local translation tool.
7. The method for word-taking translation according to claim 1, wherein said translating said word into a predetermined language comprises:
recognizing the language of the characters;
when a local translation tool from the language of the characters to a preset language is detected, calling the local translation tool to translate the characters, and acquiring a translation result of the local translation tool;
entering an online translation entry at the background when a local translation tool from the language of the characters to a preset language is not detected, and inputting the characters and the preset language; and reading a translation result of the preset language corresponding to the characters from a remote server for online translation.
8. The method for word-taking translation according to claim 1, wherein said translating said word into a predetermined language comprises:
and translating the characters into one or more preset languages.
9. A word-taking translation apparatus, comprising:
a picture data reading unit, configured to display, in a code scanning interface, a mode of acquiring picture data in real time and a mode of reading locally stored picture data,
to display the read picture data in the code scanning interface when locally stored picture data is read, and
to display the picture data acquired in real time in the code scanning interface when picture data is acquired in real time;
a character recognition unit, configured to display, in a floating manner on the picture data, a rectangular frame of adjustable size, and to recognize the characters in the rectangular frame in the picture data;
a character translation unit, configured to translate the characters into a preset language, wherein the translating comprises: performing grammar detection on the characters in the rectangular frame, generating a translation result of the whole-sentence meaning of the characters when no grammar error is found in the grammar detection of the characters, and generating a translation result of the word meaning of each word unit of the characters when a grammar error is found in the grammar detection of the characters; the grammar detection of the characters in the rectangular frame comprising: taking the period as the criterion for a sentence, and regarding the content between two periods, the content before the first period, or the content after the last period as a whole sentence if it conforms to the grammatical rules; and
a translation display unit, configured to display, in a floating manner in the code scanning interface, the paraphrase corresponding to the characters, and to display in a floating manner at least one of a phonetic symbol, a pronunciation playback interface, and an application example.
10. A terminal device, comprising:
a processor;
a memory; and
a program stored in the memory and configured to be executed by the processor, the program comprising instructions for performing the method of any of claims 1-8.
11. A storage medium storing a program comprising instructions that, when executed by a processor, cause the processor to perform the method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910450604.9A CN110188365B (en) | 2014-06-24 | 2014-06-24 | Word-taking translation method and device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410289042.1A CN105279152B (en) | 2014-06-24 | 2014-06-24 | A kind of method and apparatus for taking word to translate |
CN201910450604.9A CN110188365B (en) | 2014-06-24 | 2014-06-24 | Word-taking translation method and device |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410289042.1A Division CN105279152B (en) | 2014-06-24 | 2014-06-24 | A kind of method and apparatus for taking word to translate |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110188365A (en) | 2019-08-30
CN110188365B (en) | 2022-12-02
Family
ID=55148181
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410289042.1A Active CN105279152B (en) | 2014-06-24 | 2014-06-24 | A kind of method and apparatus for taking word to translate |
CN201910450604.9A Active CN110188365B (en) | 2014-06-24 | 2014-06-24 | Word-taking translation method and device |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410289042.1A Active CN105279152B (en) | 2014-06-24 | 2014-06-24 | A kind of method and apparatus for taking word to translate |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN105279152B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106227398A (en) * | 2016-06-29 | 2016-12-14 | 宇龙计算机通信科技(深圳)有限公司 | A kind of camera image character displaying method and device |
CN106156794B (en) * | 2016-07-01 | 2020-12-25 | 北京旷视科技有限公司 | Character recognition method and device based on character style recognition |
CN106250374B (en) * | 2016-08-05 | 2021-05-18 | Tcl科技集团股份有限公司 | Word-taking translation method and system |
CN108062301B (en) * | 2016-11-08 | 2021-11-05 | 希思特兰国际 | Character translation method and device |
CN106599888A (en) * | 2016-12-13 | 2017-04-26 | 广东小天才科技有限公司 | Translation method, translation device and mobile terminal |
CN107832311A (en) * | 2017-11-30 | 2018-03-23 | 珠海市魅族科技有限公司 | A kind of interpretation method, device, terminal and readable storage device |
CN108319592B (en) * | 2018-02-08 | 2022-04-19 | 广东小天才科技有限公司 | Translation method and device and intelligent terminal |
CN111368562B (en) | 2020-02-28 | 2024-02-27 | 北京字节跳动网络技术有限公司 | Method and device for translating characters in picture, electronic equipment and storage medium |
CN112183122A (en) * | 2020-10-22 | 2021-01-05 | 腾讯科技(深圳)有限公司 | Character recognition method and device, storage medium and electronic equipment |
CN112507736A (en) * | 2020-12-21 | 2021-03-16 | 蜂后网络科技(深圳)有限公司 | Real-time online social translation application system |
CN113625919A (en) * | 2021-08-11 | 2021-11-09 | 掌阅科技股份有限公司 | Method for translating book contents, computing device and computer storage medium |
CN115455981B (en) * | 2022-11-11 | 2024-03-19 | 合肥智能语音创新发展有限公司 | Semantic understanding method, device and equipment for multilingual sentences and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1761343A (en) * | 2005-09-28 | 2006-04-19 | 王永民 | Unit for recognizing, reading, learning characters added for handste to extend functions of recognizing scanned characters, paraphrasing display, and sounding |
CN101059839A (en) * | 2006-04-17 | 2007-10-24 | 宋柏君 | Optical character identification and translation method based on shooting mobile phones |
CN101127036A (en) * | 2006-08-14 | 2008-02-20 | 英华达股份有限公司 | Image identification translating device and method |
US8699819B1 (en) * | 2012-05-10 | 2014-04-15 | Google Inc. | Mosaicing documents for translation using video streams |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101408874A (en) * | 2007-10-09 | 2009-04-15 | 深圳富泰宏精密工业有限公司 | Apparatus and method for translating image and character |
CN101714139A (en) * | 2008-10-07 | 2010-05-26 | 英业达股份有限公司 | System for storing real-time translation data according to cursor positions and method thereof |
CN103309854A (en) * | 2013-06-08 | 2013-09-18 | 开平市中铝实业有限公司 | Translator system for taxis |
CN103488630B (en) * | 2013-09-29 | 2016-06-08 | 小米科技有限责任公司 | The processing method of a kind of image, device and terminal |
2014-06-24: application CN201410289042.1A filed, granted as CN105279152B (active)
2014-06-24: application CN201910450604.9A filed, granted as CN110188365B (active)
Also Published As
Publication number | Publication date |
---|---|
CN105279152B (en) | 2019-04-19 |
CN110188365A (en) | 2019-08-30 |
CN105279152A (en) | 2016-01-27 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||