CN110188365A - Word-capture translation method and apparatus - Google Patents
Word-capture translation method and apparatus
- Publication number: CN110188365A
- Application number: CN201910450604.9A
- Authority: CN (China)
- Prior art keywords: image data, translation, text, languages, translate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention discloses a word-capture translation method and apparatus. The method comprises: reading displayed image data; recognizing text in the image data by optical character recognition; translating the text into a preset language; and displaying the translation result in a floating layer above the image data. By reading image data, recognizing text in it, translating the text into a preset language, and then floating the translation result above the image data, word-capture translation can be applied to image data from any source. The whole flow from obtaining the picture to producing the translation is completed in one pass, the manual switching between separate tools is eliminated, and the translation operation is simplified.
Description
Technical field
Embodiments of the present invention relate to the field of image processing, and in particular to a word-capture translation method and apparatus.
Background art
Many mobile terminals can now invoke a camera to capture an image of the region to be translated, then perform character recognition on the captured image, and finally translate the recognized characters in real time.
However, existing schemes must first invoke the camera for scanning and recognition, and the camera can only capture content outside the device. Images already saved inside the phone cannot be photographed for recognition, so the need to translate text contained in local pictures cannot be met.
Summary of the invention
The invention proposes a word-capture translation method and apparatus. By reading displayed image data, recognizing text in the image data, translating the text into a preset language, and then floating the translation result above the image data, word-capture translation can be applied to image data from any source; the whole flow from obtaining the picture to producing the translation is completed in one pass, and the translation operation is simplified.
To this end, the invention adopts the following technical scheme:
In one aspect, a word-capture translation method is provided, the method comprising:
reading displayed image data;
recognizing text in the image data by optical character recognition;
translating the text into a preset language;
displaying the translation result in a floating layer above the image data.
In another aspect, a word-capture translation apparatus is provided, the apparatus comprising:
an image-data reading unit for reading displayed image data;
a text recognition unit for recognizing text in the image data by optical character recognition;
a text translation unit for translating the text into a preset language;
a translation display unit for floating the translation result above the image data.
The beneficial effects of the invention are: by reading displayed image data, recognizing text in it, translating the text into a preset language, and then floating the translation result above the image data, word-capture translation can be applied to image data from any source; the whole flow from obtaining the picture to producing the translation is completed in one pass, the manual switching between separate tools is eliminated, and the translation operation is simplified.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from the content of the embodiments and these drawings without creative effort.
Fig. 1 is a flowchart of a first embodiment of the word-capture translation method provided by embodiments of the present invention.
Fig. 2 is a flowchart of a second embodiment of the method.
Fig. 3 is a flowchart of a third embodiment of the method.
Fig. 4 is a flowchart of a fourth embodiment of the method.
Fig. 5 is a structural block diagram of a first embodiment of the word-capture translation apparatus provided by embodiments of the present invention.
Fig. 6 is a structural block diagram of a second embodiment of the apparatus.
Fig. 7 is a structural block diagram of a third embodiment of the apparatus.
Fig. 8 is a structural block diagram of a fourth embodiment of the apparatus.
Fig. 9 is a structural block diagram of a device involved in embodiments of the present invention.
Detailed description of the embodiments
To make the problems solved, the technical scheme adopted, and the technical effects achieved by the invention clearer, the technical scheme of the embodiments of the invention is described in further detail below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art, based on the embodiments of the invention and without creative effort, fall within the protection scope of the invention.
Referring to Fig. 1, which is a flowchart of a first embodiment of the word-capture translation method provided by an embodiment of the present invention. The method is mainly applied to intelligent mobile terminals such as tablet computers and smartphones.
As shown in Fig. 1, the word-capture translation method comprises:
Step S101: reading displayed image data.
For current translation tools, translating text data is fairly simple; but in many cases the object to be translated does not exist as text data, or has not even been saved in a data form the computer can parse. This scheme handles data that exists as image data, or that physically exists and can be converted into image data.
Step S102: recognizing text in the image data by optical character recognition.
Optical character recognition (OCR) refers to the process in which an electronic device (such as a scanner or digital camera) examines characters printed on paper, determines their shapes by detecting dark and bright patterns, and translates those shapes into computer-readable text by character-recognition methods; that is, the process of scanning textual material and analyzing the image file to obtain the text and layout information. A translation tool cannot obtain translatable text directly from image data, so in this step the text in the image data is recognized by optical character recognition, converting the image data into machine-readable, editable text data on which the translation tool can subsequently operate.
Step S103: translating the text into a preset language.
Because there are many languages and the operating area over the image data is limited, the translation only needs to target the preset language; results in every language are not required. On the one hand, obtaining translations of the object into all languages is largely unnecessary for the user; on the other hand, such results would be hard to present over the image data and inconvenient to browse.
Step S104: displaying the translation result in a floating layer above the image data.
In general, the screen of a mobile terminal displays a single task, so the translation result is floated above the image data for the user to check conveniently.
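The four steps S101–S104 can be sketched as a small pipeline. This is a minimal illustration, not the patent's implementation: the OCR engine and translator are stand-ins (the patent names neither), and all function names are hypothetical.

```python
# Sketch of the S101-S104 pipeline; OCR and translation are stubbed.

def read_displayed_image(source):
    # S101: read the displayed image data (screen, camera, or file).
    return source

def ocr_recognize(image):
    # S102: recognize text via OCR. A real engine would analyze
    # dark/bright pixel patterns here; stubbed for illustration.
    return image.get("text", "")

def translate(text, target_lang):
    # S103: translate into the single preset target language.
    lookup = {("hello", "zh"): "你好"}  # toy dictionary for illustration
    return lookup.get((text.lower(), target_lang), text)

def float_display(image, result):
    # S104: return what a floating overlay above the image would show.
    return f"[overlay] {result}"

def word_capture_translate(source, target_lang="zh"):
    image = read_displayed_image(source)
    text = ocr_recognize(image)
    return float_display(image, translate(text, target_lang))
```

Only the preset target language is produced, matching the S103 rationale that translating into all languages is unnecessary.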
In conclusion the embodiment of the present invention identifies text, by character translation by reading image data from image data
After preset languages, display translation of floating above the image data as a result, it is possible to image data to various sources
It carries out that word is taken to translate, is done directly the manual switching being omitted between various tools from the process for obtaining picture to translation and calls,
To simplify translating operation.
Referring to Fig. 2, which is a flowchart of a second embodiment of the word-capture translation method provided by an embodiment of the present invention. The main difference from the first embodiment is that the detailed process of reading the image data and the detailed process of online translation are further described.
As shown in Fig. 2, the word-capture translation method comprises:
Step S201: opening the camera and caching the image data captured by the camera in memory.
In this embodiment, once the camera is opened it captures image data, which is cached in memory. That is, unlike taking a photo, no image file is generated and saved in local storage; the image data is only cached temporarily in memory.
Step S202: reading and displaying the image data in the memory.
The image data cached in the previous step is presented. Intuitively, steps S201 and S202 happen almost simultaneously: once the camera is opened, the captured image data is displayed directly. From the machine's data-processing perspective, however, the operation really is split into the two steps S201 and S202. The whole process can be regarded as pointing the camera at text, much as a barcode-scanning tool is pointed at a QR code or barcode — for example at a web page with text on an electronic product's screen, or at newspapers and periodicals. The difference from scanning a QR code or barcode lies in the subsequent processing, which in the barcode case is decoding.
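The memory-only caching of S201–S202 can be sketched as follows. This is an illustrative model under assumptions: the camera is simulated by byte strings, and the class name is hypothetical. The point is that the latest frame lives only in an in-memory buffer and no file is ever written.

```python
# Sketch of S201-S202: frames cached only in memory, never saved to disk.
import io

class InMemoryFrameCache:
    def __init__(self):
        self._buf = io.BytesIO()

    def cache_frame(self, frame_bytes):
        # S201: overwrite the cache with the latest captured frame.
        self._buf.seek(0)
        self._buf.truncate()
        self._buf.write(frame_bytes)

    def read_for_display(self):
        # S202: read the cached frame back for on-screen display.
        self._buf.seek(0)
        return self._buf.read()

cache = InMemoryFrameCache()
cache.cache_frame(b"frame-1")
cache.cache_frame(b"frame-2")  # the older frame is discarded
```

Overwriting rather than accumulating frames matches the patent's contrast with a photo-taking flow, where each capture would produce a stored file.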
Step S203: floating, on the image data, a rectangle frame whose size is adjusted by touch; the part inside the rectangle frame after adjustment is set as the range to be recognized in the image data.
In general, word-capture translation is highly targeted, mainly at a word or phrase, so the range of image data to be recognized needs to be adjusted. In this scheme a touch-resizable rectangle frame is floated over the image data; by adjusting its size, the content to be recognized is placed inside the frame, reducing redundant recognition and translation. Selecting a range with a rectangle frame has many existing implementations, for example the rectangle selection used for screenshots in various picture-processing tools.
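The rectangle selection of S203 amounts to clipping the frame to the image bounds and keeping only the content inside it. A minimal sketch, with the "image" modeled as a grid of characters (an assumption for illustration; a real implementation crops pixel data):

```python
# Sketch of S203: keep only the content inside the touch-adjusted rectangle.

def crop_region(grid, left, top, right, bottom):
    # Clamp the rectangle to the image bounds, then keep rows/cols inside it.
    rows, cols = len(grid), len(grid[0])
    left, top = max(0, left), max(0, top)
    right, bottom = min(cols, right), min(rows, bottom)
    return [row[left:right] for row in grid[top:bottom]]

grid = ["abcdef",
        "ghijkl",
        "mnopqr"]
```

Everything outside the frame is dropped before OCR, which is what reduces the redundant recognition and translation the patent mentions.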
Step S204: recognizing text within the range to be recognized in the image data by optical character recognition.
Optical character recognition is used in numerous applications. Each application has its own underlying processing method, and many implementations keep improving those methods to raise recognition accuracy. The specific recognition technique is not the focus of this scheme and is not described further here.
Step S205: detecting the preset language and identifying the language of the text.
Languages are rich and varied, but for a single text unit translation usually goes from one language to another, at most from one language to a few others; translating from one language into all other languages essentially never occurs. The language of the text can be confirmed directly once the text is recognized; the target language must be preset, and the preset languages, generally no more than three, are detected directly.
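Identifying the language of the recognized text (S205) can be done cheaply when the candidate set is small, as the patent notes it is. A minimal sketch using Unicode script ranges as the heuristic — an assumption, since the patent does not specify a detection method, and real systems use richer statistical detection:

```python
# Sketch of S205: identify the source language against a small preset set.

def detect_language(text, presets=("en", "zh")):
    # Heuristic by Unicode range; illustrative only.
    if any("\u4e00" <= ch <= "\u9fff" for ch in text) and "zh" in presets:
        return "zh"   # contains CJK unified ideographs
    if any(ch.isascii() and ch.isalpha() for ch in text) and "en" in presets:
        return "en"   # contains Latin letters
    return None       # not one of the preset languages
```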
Step S206: entering an online-translation entry point in the background and submitting the text and the preset language.
Step S206 does not need to display the online-translation process; the background enters the online-translation entry point directly and submits the recognized text and the preset language. Online-translation entry points already provide powerful translation functions, with rich language coverage and translation methods, and can return the translation of the text quickly.
Step S207: reading, from the remote server of the online translation, the translation result of the text in the preset language.
The translation result is read directly from the remote server; there is no need to display the remote server's access interface or result page on the local terminal. The translation result is simply read and the target content obtained.
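The background round trip of S206–S207 can be sketched as: build a request carrying the text and preset language, send it, and read only the translation field from the response. The request/response shape and the server are assumptions — the patent names no concrete service — so the transport is stubbed here:

```python
# Sketch of S206-S207: background submit, read back only the result.
import json

def build_request(text, target_lang):
    # Payload field names ("q", "target") are illustrative assumptions.
    return json.dumps({"q": text, "target": target_lang})

def fake_remote_server(payload):
    # Stand-in for the remote translation server's response.
    req = json.loads(payload)
    return json.dumps({"translation": f"<{req['target']}>{req['q']}"})

def read_translation(text, target_lang, transport=fake_remote_server):
    response = transport(build_request(text, target_lang))
    return json.loads(response)["translation"]
```

Nothing of the server's interface is shown to the user; only the parsed result reaches the floating display, matching S207.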
Step S208: displaying the translation result in a floating layer above the image data.
Any region of the image data outside the range to be recognized can be used to display the translation result; displaying it just below that range is usually best, so it can be compared clearly with the image data.
In conclusion the solution of the present invention further illustrates a kind of detailed process and translation on line for reading image data
Detailed process, in general, by read image data, text is identified from image data, by character translation at preset
After languages above the image data float display translation as a result, it is possible to the image data in various sources carry out that word is taken to turn over
It translates, is done directly the manual switching being omitted between various tools from the process for obtaining picture to translation and calls, and simplify and turn over
Translate operation.Meanwhile the solution of the present invention can make the specific aim of the text to identification and translation by adjusting the range of identification
Stronger, the translation result obtained by translation on line also can guarantee that the result of translation is more reasonable.
Referring to Fig. 3, which is a flowchart of a third embodiment of the word-capture translation method provided by an embodiment of the present invention. The main differences from the second embodiment are that another detailed process of reading the image data, the process of adjusting the recognition range, and the process of translating with a local translation tool are further described.
As shown in Fig. 3, the word-capture translation method comprises:
Step S301: reading the locally stored image data for display.
This embodiment applies word-capture translation to image data already in storage. When browsing previously recorded image data (such as image data obtained by scanning, captured by taking photos, or saved from the network), the locally stored image data can be read, realizing word-capture translation on local image data.
Step S302: floating, on the image data, a rectangle frame whose size is adjusted by touch; the part inside the rectangle frame after adjustment is set as the range to be recognized in the image data.
In general, word-capture translation targets mainly a word or phrase, so the range of displayed image data is adjusted with the rectangle frame to place the content to be recognized within it, reducing redundant recognition and translation.
Step S303: recognizing text within the range to be recognized in the image data by optical character recognition.
Step S304: detecting the preset language and identifying the language of the text.
Step S305: confirming that a local translation tool from the language of the text to the preset language exists.
In general, one local translation tool can only translate between a single pair of languages; translation among several languages requires several local tools, so local translation tools generally cover only mainstream languages, such as the six working languages of the United Nations. Local translation tools work offline and are therefore more broadly usable.
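Since each local tool handles exactly one language pair, the availability check of S305 is naturally a lookup keyed by (source, target). A minimal sketch, with placeholder translators standing in for real offline tools (the registry contents are assumptions for illustration):

```python
# Sketch of S305: confirm a local (offline) tool exists for the pair.

LOCAL_TOOLS = {
    ("en", "zh"): lambda t: f"zh({t})",  # placeholder offline translators
    ("zh", "en"): lambda t: f"en({t})",
}

def has_local_tool(src, dst):
    # One tool per (source, target) pair, so availability is membership.
    return (src, dst) in LOCAL_TOOLS
```

When the lookup fails, the fourth method embodiment (S406) falls back to the online-translation entry point instead.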
Step S306: scheduling the local translation tool in the background and obtaining its translation result.
The local translation tool is scheduled directly in the background and the translation result obtained.
Step S307: displaying the translation result in a floating layer above the image data.
The translation result includes at least the paraphrase, and may further include phonetic symbols, pronunciation, usage examples, and so on.
In conclusion further illustrating another detailed process for reading image data, adjustment identification in the present embodiment
The process of range and the process translated using local translation tool.In general, by reading picture number in the present embodiment
According to, text is identified from image data, by character translation at after preset languages above the image data float display turn over
It is translating as a result, it is possible to the image data in various sources carry out that word is taken to translate, directly image data complete from obtain picture to
The process of translation, the manual switching being omitted between various tools is called, and simplifies translating operation.Meanwhile reality of the invention
It example is applied by reading image data from local, is translated using local translation tool, it can further expansion the technical program
Application range.
Referring to Fig. 4, which is a flowchart of a fourth embodiment of the word-capture translation method provided by an embodiment of the present invention. The main difference from the third embodiment is that the different modes of translation are further described. As shown in Fig. 4, the method comprises:
Step S401: reading displayed image data.
The image data is BMP, JPEG, GIF, TIFF, or PNG image data.
For an intelligent terminal, general image data can all be read easily. For the subsequent text-recognition step, optical character recognition determines glyph shapes by detecting dark and bright patterns and then translates those shapes into computer text by character-recognition methods, so a black-and-white picture is advantageous for text recognition.
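The dark/bright detection underlying this step is, at its simplest, thresholding: each pixel is classified as ink or background, which is why high-contrast black-and-white images recognize well. A minimal sketch with a fixed threshold (an assumption; real OCR engines use adaptive binarization):

```python
# Sketch of the dark/bright detection mentioned for S401.

def binarize(pixels, threshold=128):
    # 0 = dark (ink), 1 = bright (background), per 8-bit grayscale value.
    return [[0 if p < threshold else 1 for p in row] for row in pixels]

pixels = [[12, 200],
          [250, 30]]
```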
Step S402: floating, on the image data, a rectangle frame whose size is adjusted by touch; the part inside the rectangle frame after adjustment is set as the range to be recognized in the image data.
Step S403: recognizing text within the range to be recognized in the image data by optical character recognition.
Step S404: detecting the preset language and identifying the language of the text.
Step S405: confirming that a local translation tool from the language of the text to the preset language exists; scheduling the local translation tool in the background and obtaining its translation result.
Step S406: confirming that no local translation tool from the language of the text to the preset language exists; entering an online-translation entry point in the background and submitting the text and the preset language; reading, from the remote server of the online translation, the translation result of the text in the preset language.
In steps S405 and S406, the translation comprises:
performing grammar detection on the text;
when the grammar detection of the text finds no grammatical error, generating a whole-sentence translation result of the text;
when the grammar detection of the text finds a grammatical error, generating a word-sense translation result for each text unit of the text.
Performing grammar detection on the text does not mean checking all the recognized text as a single whole; it is checked sentence by sentence. Because the adjusted recognition range is essentially a rectangle, in the corresponding picture the first and last sentences within the rectangle are each likely to be only part of a sentence. The full stop therefore serves as the criterion for a sentence: content between two full stops, or content before the first full stop or after the last full stop that conforms to grammar, is treated as a whole sentence.
When grammar detection passes, a whole-sentence translation of the sentence's meaning is generated. Especially for online translation, to keep whole-sentence translation accurate, sentence matching is now widely used rather than merely translating each word and splicing the translations together in order. Sentence matching here means that the online-translation tool finds a sentence close or identical to the given sentence and generates the result from the most popular way of translating it, producing a result closer to the speech habits of the preset language. If grammar detection finds an error, a word-sense translation of each text unit is generated instead. The specific translation process has many possible implementations and is not described further here.
Step S407: displaying the translation result in a floating layer above the image data.
Compared with the previous two embodiments, translation in this embodiment does not consider only the local translation tool or only online translation, but combines the two: online translation is used only when no local tool exists for the language pair concerned.
In fact, since the operating flow has a certain similarity to scanning a QR code or barcode, this scheme can be integrated as a whole into QR-code or barcode-scanning software, sharing the scanning software's scanning interface.
In conclusion further illustrating the different modes of translation in the present embodiment.In general, pass through in the present embodiment
Read image data, text is identified from image data, by character translation at after preset languages above the image data
Float display translation as a result, it is possible to the image data in various sources carry out that word is taken to translate, be done directly from obtain picture to
The process of translation, the manual switching being omitted between various tools is called, and simplifies translating operation.Different turn over is selected simultaneously
Translate mode, available as comprehensive as possible and adaptable interpretative system.
The following are embodiments of the word-capture translation apparatus provided by embodiments of the present invention. The apparatus embodiments belong to the same concept as the method embodiments above; for details not elaborated in the apparatus embodiments, refer to the method embodiments.
Referring to Fig. 5, which is a structural block diagram of a first embodiment of the word-capture translation apparatus provided by an embodiment of the present invention. The apparatus comprises:
an image-data reading unit 510 for reading displayed image data;
a text recognition unit 520 for recognizing text in the image data by optical character recognition;
a text translation unit 530 for translating the text into a preset language;
a translation display unit 540 for floating the translation result above the image data.
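The four cooperating units of Fig. 5 can be sketched as one class whose methods mirror the patent's reference numerals. The internals are stubs, not the apparatus's actual implementation:

```python
# Sketch of the Fig. 5 apparatus: units 510-540 as cooperating methods.

class WordCaptureTranslateDevice:
    def read_image_data(self, source):      # unit 510
        return source

    def recognize_text(self, image):        # unit 520 (OCR stubbed)
        return image.get("text", "")

    def translate_text(self, text, lang):   # unit 530 (translation stubbed)
        return f"{lang}:{text}"

    def display_floating(self, result):     # unit 540
        return f"[overlay] {result}"

    def run(self, source, lang="zh"):
        image = self.read_image_data(source)
        text = self.recognize_text(image)
        return self.display_floating(self.translate_text(text, lang))
```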
In conclusion the collaborative work of above-mentioned each unit identifies text by reading image data from image data,
By character translation at display translation of floating above the image data after preset languages as a result, it is possible to various sources
Image data carries out that word is taken to translate, and is done directly from picture is obtained to the process of translation, is omitted manual between various tools
Switching is called, and simplifies translating operation.
Referring to Fig. 6, which is a structural block diagram of a second embodiment of the word-capture translation apparatus provided by an embodiment of the present invention. The main difference from the first apparatus embodiment is that the working modules of the image-data reading unit 510 and of the text translation unit 530 are further described. The apparatus comprises:
an image-data reading unit 510 for reading displayed image data;
a text recognition unit 520 for recognizing text in the image data by optical character recognition;
a text translation unit 530 for translating the text into a preset language;
a translation display unit 540 for floating the translation result above the image data.
The image-data reading unit 510 comprises:
an image-data cache module 511 for opening the camera and caching the image data captured by the camera in memory;
a memory reading module 512 for reading and displaying the image data in the memory.
The text translation unit 530 comprises:
a language confirmation module 531 for detecting the preset language and identifying the language of the text;
an online translation module 532 for entering an online-translation entry point in the background and submitting the text and the preset language;
a result reading module 533 for reading, from the remote server of the online translation, the translation result of the text in the preset language.
In conclusion this programme further illustrates the specific works module of image data reading unit 510 and text turns over
The specific works module of unit 530 is translated, in general, by reading image data, text is identified from image data, by text
Translate into display translation of being floated after preset languages above the image data as a result, it is possible to picture number to various sources
According to carrying out that word is taken to translate, it is done directly from picture is obtained to the process of translation, the manual switching tune between various tools is omitted
With, and simplify translating operation.Meanwhile the range for adjusting identification makes the specific aim of the text of identification and translation stronger, by
Line interpretative system also can guarantee that translation result is more reasonable.
Referring to Fig. 7, which is a structural block diagram of a third embodiment of the word-capture translation apparatus provided by an embodiment of the present invention. This embodiment further describes the working content of the image-data reading unit 510 and the working modules of the text translation unit 530. The apparatus comprises:
an image-data reading unit 510 for reading displayed image data;
a text recognition unit 520 for recognizing text in the image data by optical character recognition;
a text translation unit 530 for translating the text into a preset language;
a translation display unit 540 for floating the translation result above the image data.
The image-data reading unit 510 is specifically configured to read the locally stored image data for display.
The apparatus further comprises:
a range adjustment unit 550 for floating, on the image data, a rectangle frame whose size is adjusted by touch, and setting the part inside the rectangle frame after adjustment as the range to be recognized in the image data.
The text recognition unit 520 recognizes text within the range to be recognized in the image data by optical character recognition.
The text translation unit 530 comprises:
a language confirmation module 531 for detecting the preset language and identifying the language of the text;
a local translation confirmation module 534 for confirming that a local translation tool from the language of the text to the preset language exists;
a result obtaining module 535 for scheduling the local translation tool in the background and obtaining its translation result.
In conclusion further illustrating the specific works content and text of image data reading unit 510 in the present embodiment
The specific works module of word translation unit 530.In general, by reading image data in the present embodiment, from image data
Text is identified, by character translation at display translation of floating above the image data after preset languages as a result, it is possible to right
The image data in various sources carries out that word is taken to translate, and is done directly from picture is obtained to the process of translation, various tools are omitted
Between manual switching call, and simplify translating operation.Meanwhile image data is read from local, utilize local translation tool
It is translated, the application range of energy further expansion the technical program.
Referring to Fig. 8, which is a structural block diagram of a fourth embodiment of the word-capture translation apparatus provided by an embodiment of the present invention. This embodiment further describes the more comprehensive working modules of the text translation unit 530. As shown in Fig. 8, the apparatus comprises:
an image-data reading unit 510 for reading displayed image data, the image data being BMP, JPEG, GIF, TIFF, or PNG image data;
a text recognition unit 520 for recognizing text in the image data by optical character recognition;
a text translation unit 530 for translating the text into a preset language;
a translation display unit 540 for floating the translation result above the image data.
The device further comprises:
A range-adjustment unit, for displaying, floating on the image data, a rectangle frame whose size can be adjusted by touch; the part inside the adjusted rectangle frame is set as the range of the image data to be recognized;
The text-recognition unit 520 then recognizes, by optical character recognition, the text within the range of the image data to be recognized.
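Restricting recognition to the adjusted rectangle amounts to cropping the selected region before handing pixels to the OCR step. The sketch below models the image as a row-major 2D array; the helper name and coordinates are illustrative assumptions, not part of the patent.

```python
# Crop the user-adjusted rectangle out of a row-major pixel array so
# that OCR only sees the range to be recognized.

def crop_region(pixels, left, top, width, height):
    # Keep only the rows and columns inside the adjusted rectangle.
    return [row[left:left + width] for row in pixels[top:top + height]]

# Toy 8x6 "image" whose pixel at (x, y) is the tuple (x, y).
image = [[(x, y) for x in range(8)] for y in range(6)]

# Select x in [2, 5), y in [1, 3) — a 3x2 region.
region = crop_region(image, left=2, top=1, width=3, height=2)
```

Only `region` would then be passed to the recognition step, which is why claim 4 below speaks of recognizing text "within the range of the image data to be recognized".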
The character-translation unit 530 comprises:
Language confirmation module 531, for detecting the preset language and identifying the language of the text;
First translation module 536, for confirming that a local translation tool from the language of the text to the preset language is available, scheduling the local translation tool in the background, and obtaining the translation result of the local translation tool;
Second translation module 537, for, when no local translation tool from the language of the text to the preset language is available, entering an online-translation portal in the background, submitting the text and the preset language, and reading from the remote server of the online translation the translation result of the text in the preset language.
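The first and second translation modules together implement a local-first strategy with an online fallback. A minimal sketch of that dispatch logic follows; both backends are stubs and the set of locally supported language pairs is an assumption made purely for illustration.

```python
# Local-first translation with online fallback, as split between the
# first translation module (local tool) and second translation module
# (online-translation portal). Backends are placeholders.

LOCAL_TOOLS = {("en", "zh")}  # language pairs the local tool covers (assumed)

def local_translate(text, src, dst):
    # Stand-in for scheduling the local translation tool in the background.
    return f"local:{text}"

def online_translate(text, src, dst):
    # Stand-in for submitting the text to the remote translation server
    # and reading back the result.
    return f"online:{text}"

def translate(text, src, dst):
    if (src, dst) in LOCAL_TOOLS:
        return local_translate(text, src, dst)
    return online_translate(text, src, dst)
```

The local path avoids a network round trip when a suitable tool is installed; the online path keeps unsupported language pairs working, which is the "comprehensive and adaptable" behavior the summary below claims.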
The translation comprises:
performing grammar detection on the text;
when the grammar detection finds no grammatical error in the text, confirming and generating a whole-sentence translation result of the text;
when the grammar detection finds a grammatical error in the text, confirming and generating a translation result of the word meaning of each text unit of the text.
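The grammar-detection step thus selects between two granularities of output: a whole-sentence translation when the recognized text parses cleanly, and per-word meanings when it does not (as often happens with fragmentary OCR output). The sketch below stubs both the grammar checker and the dictionary; the rule "grammatical iff every word is known" is an illustrative assumption only.

```python
# Branch between whole-sentence and word-by-word translation based on
# a (stubbed) grammar check, mirroring the two cases described above.

WORD_MEANINGS = {"good": "好", "morning": "早上"}  # toy dictionary (assumed)

def passes_grammar_check(text):
    # Stand-in grammar detection: treat the text as grammatical only
    # if every word is in the dictionary.
    return all(w in WORD_MEANINGS for w in text.split())

def translate_text(text):
    if passes_grammar_check(text):
        # No grammatical error found: produce one whole-sentence result
        # (stubbed here as the joined word meanings).
        return "".join(WORD_MEANINGS[w] for w in text.split())
    # Grammatical error found: fall back to the word meaning of each
    # text unit, keeping unknown units as-is.
    return [WORD_MEANINGS.get(w, w) for w in text.split()]
```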
In conclusion further illustrating the more fully specific works module of character translation unit 530 in the present embodiment.
In general, text is identified from image data, by character translation at preset language by reading image data in the present embodiment
Kind after above the image data float display translation as a result, it is possible to the image data in various sources carry out that word is taken to turn over
It translates, is done directly the manual switching being omitted between various tools from the process for obtaining picture to translation and calls, and simplify and turn over
Translate operation.Different interpretative systems, available as comprehensive as possible and adaptable interpretative system are selected simultaneously.
Referring to FIG. 9, which is a structural block diagram of a device involved in an embodiment of the present invention. The device can be used to implement the word-capture translation method proposed in the above embodiments and to carry the word-capture translation apparatus. Specifically:
The device may include a processor 110, a display screen 120, a storage device 130 and a camera 140. Those skilled in the art will appreciate that the device shown in FIG. 9 does not constitute an absolute limitation; for example, a power supply, being a component required as a matter of common sense, is not illustrated. The device may include more or fewer components than shown, combine certain components, or use a different component arrangement. Wherein:
Processor 110, for caching the image data and for processing the image data to obtain the text-recognition and translation results.
Display screen 120, for presenting the image data and receiving operations on the image data.
For example, when the range is set, the display screen 120 may be an ordinary display screen, with the range set through a peripheral device; it may also be a touch display screen 120, with the range set through touch operations. Of course, in terms of concrete function, touch and display are realized by two different structures; no overly fine division is made here, and the display screen is simply regarded as a single body combining the touch and display functions.
Storage device 130, for providing pre-stored image data and for storing the local translation tool.
Camera 140, for providing image data acquired in real time.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be completed by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may include a memory, a magnetic disk, an optical disc, or the like.
The above is only a preferred embodiment of the present invention. For those of ordinary skill in the art, changes may be made in the specific implementation and scope of application according to the idea of the present invention, and the content of this specification should not be construed as limiting the present invention.
Claims (15)
1. A word-capture translation method, characterized by comprising:
displaying, in a scan interface, a mode of acquiring image data in real time and a mode of reading locally stored image data;
when the locally stored image data is read, displaying the read image data in the scan interface;
when image data is acquired in real time, displaying the image data acquired in real time in the scan interface;
recognizing text in the image data;
translating the text into a preset language;
displaying a translation result on the image data.
2. The word-capture translation method according to claim 1, characterized in that acquiring image data in real time comprises:
opening a camera and caching the image data acquired by the camera in a memory;
reading and displaying the image data in the memory.
3. The word-capture translation method according to claim 1, characterized in that the method further comprises:
displaying, floating on the image data, an adjustable rectangle frame;
setting the part inside the adjusted rectangle frame as the range of the image data to be recognized.
4. The word-capture translation method according to claim 3, characterized in that recognizing the text in the image data comprises:
recognizing, by optical character recognition, the text within the range of the image data to be recognized.
5. The word-capture translation method according to claim 1, characterized in that displaying the translation result in the display area of the image data comprises:
displaying the translation result floating on the image data.
6. The word-capture translation method according to claim 1, characterized in that displaying the translation result in the display area of the image data comprises:
displaying the translation result outside the range of the text to be recognized in the image data.
7. The word-capture translation method according to claim 1, characterized in that translating the text into a preset language comprises:
identifying the language of the text;
entering an online-translation portal in the background and submitting the text and the preset language;
reading, from the remote server of the online translation, the translation result of the text in the preset language.
8. The word-capture translation method according to claim 1, characterized in that translating the text into a preset language comprises:
identifying the language of the text;
obtaining a local translation tool from the language of the text to the preset language;
calling the local translation tool to translate the text and obtaining the translation result of the local translation tool.
9. The word-capture translation method according to claim 1, characterized in that translating the text into a preset language comprises:
identifying the language of the text;
when a local translation tool from the language of the text to the preset language is detected, calling the local translation tool to translate the text and obtaining the translation result of the local translation tool;
when no local translation tool from the language of the text to the preset language is detected, entering an online-translation portal in the background, submitting the text and the preset language, and reading from the remote server of the online translation the translation result of the text in the preset language.
10. The word-capture translation method according to claim 1, characterized in that translating the text into a preset language comprises:
performing grammar detection on the text;
when the grammar detection finds no grammatical error in the text, generating a whole-sentence translation result of the text;
when the grammar detection finds a grammatical error in the text, generating a translation result of the word meaning of each text unit of the text.
11. The word-capture translation method according to claim 1, characterized in that displaying the translation result floating above the image data comprises:
displaying, floating above the image data, at least one of a paraphrase, a phonetic symbol, a pronunciation-playback interface and a usage example corresponding to the text.
12. The word-capture translation method according to claim 1, characterized in that translating the text into a preset language comprises:
translating the text into one or more preset languages.
13. A word-capture translation device, characterized by comprising:
an image-data reading unit, for displaying, in a scan interface, a mode of acquiring image data in real time and a mode of reading locally stored image data;
when the locally stored image data is read, displaying the read image data in the scan interface;
when image data is acquired in real time, displaying the image data acquired in real time in the scan interface;
a text-recognition unit, for recognizing text in the image data;
a character-translation unit, for translating the text into a preset language;
a translation-display unit, for displaying a translation result on the image data.
14. A terminal device, characterized by comprising:
a processor;
a memory; and
a program stored in the memory and configured to be executed by the processor, the program comprising instructions for performing the method of any one of claims 1-12.
15. A storage medium storing a program, characterized in that the program comprises instructions which, when executed by a processor, cause the processor to execute the method of any one of claims 1-12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910450604.9A CN110188365B (en) | 2014-06-24 | 2014-06-24 | Word-taking translation method and device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910450604.9A CN110188365B (en) | 2014-06-24 | 2014-06-24 | Word-taking translation method and device |
CN201410289042.1A CN105279152B (en) | 2014-06-24 | 2014-06-24 | A kind of method and apparatus for taking word to translate |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410289042.1A Division CN105279152B (en) | 2014-06-24 | 2014-06-24 | A kind of method and apparatus for taking word to translate |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110188365A true CN110188365A (en) | 2019-08-30 |
CN110188365B CN110188365B (en) | 2022-12-02 |
Family
ID=55148181
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910450604.9A Active CN110188365B (en) | 2014-06-24 | 2014-06-24 | Word-taking translation method and device |
CN201410289042.1A Active CN105279152B (en) | 2014-06-24 | 2014-06-24 | A kind of method and apparatus for taking word to translate |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410289042.1A Active CN105279152B (en) | 2014-06-24 | 2014-06-24 | A kind of method and apparatus for taking word to translate |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN110188365B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113625919A (en) * | 2021-08-11 | 2021-11-09 | 掌阅科技股份有限公司 | Method for translating book contents, computing device and computer storage medium |
CN115455981A (en) * | 2022-11-11 | 2022-12-09 | 合肥智能语音创新发展有限公司 | Semantic understanding method, device, equipment and storage medium for multi-language sentences |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106227398A (en) * | 2016-06-29 | 2016-12-14 | 宇龙计算机通信科技(深圳)有限公司 | A kind of camera image character displaying method and device |
CN106156794B (en) * | 2016-07-01 | 2020-12-25 | 北京旷视科技有限公司 | Character recognition method and device based on character style recognition |
CN106250374B (en) * | 2016-08-05 | 2021-05-18 | Tcl科技集团股份有限公司 | Word-taking translation method and system |
CN108062301B (en) * | 2016-11-08 | 2021-11-05 | 希思特兰国际 | Character translation method and device |
CN106599888A (en) * | 2016-12-13 | 2017-04-26 | 广东小天才科技有限公司 | Translation method and device, and mobile terminal |
CN107832311A (en) * | 2017-11-30 | 2018-03-23 | 珠海市魅族科技有限公司 | A kind of interpretation method, device, terminal and readable storage device |
CN108319592B (en) * | 2018-02-08 | 2022-04-19 | 广东小天才科技有限公司 | Translation method and device and intelligent terminal |
CN111368562B (en) | 2020-02-28 | 2024-02-27 | 北京字节跳动网络技术有限公司 | Method and device for translating characters in picture, electronic equipment and storage medium |
CN112183122A (en) * | 2020-10-22 | 2021-01-05 | 腾讯科技(深圳)有限公司 | Character recognition method and device, storage medium and electronic equipment |
CN112507736A (en) * | 2020-12-21 | 2021-03-16 | 蜂后网络科技(深圳)有限公司 | Real-time online social translation application system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1761343A (en) * | 2005-09-28 | 2006-04-19 | 王永民 | Unit for recognizing, reading, learning characters added for handste to extend functions of recognizing scanned characters, paraphrasing display, and sounding |
CN101059839A (en) * | 2006-04-17 | 2007-10-24 | 宋柏君 | Optical character identification and translation method based on shooting mobile phones |
CN101127036A (en) * | 2006-08-14 | 2008-02-20 | 英华达股份有限公司 | Image identification translating device and method |
US8699819B1 (en) * | 2012-05-10 | 2014-04-15 | Google Inc. | Mosaicing documents for translation using video streams |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101408874A (en) * | 2007-10-09 | 2009-04-15 | 深圳富泰宏精密工业有限公司 | Apparatus and method for translating image and character |
CN101714139A (en) * | 2008-10-07 | 2010-05-26 | 英业达股份有限公司 | System for storing real-time translation data according to cursor positions and method thereof |
CN103309854A (en) * | 2013-06-08 | 2013-09-18 | 开平市中铝实业有限公司 | Translator system for taxis |
CN103488630B (en) * | 2013-09-29 | 2016-06-08 | 小米科技有限责任公司 | The processing method of a kind of image, device and terminal |
2014
- 2014-06-24 CN CN201910450604.9A patent/CN110188365B/en active Active
- 2014-06-24 CN CN201410289042.1A patent/CN105279152B/en active Active
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113625919A (en) * | 2021-08-11 | 2021-11-09 | 掌阅科技股份有限公司 | Method for translating book contents, computing device and computer storage medium |
CN115455981A (en) * | 2022-11-11 | 2022-12-09 | 合肥智能语音创新发展有限公司 | Semantic understanding method, device, equipment and storage medium for multi-language sentences |
CN115455981B (en) * | 2022-11-11 | 2024-03-19 | 合肥智能语音创新发展有限公司 | Semantic understanding method, device and equipment for multilingual sentences and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN105279152B (en) | 2019-04-19 |
CN105279152A (en) | 2016-01-27 |
CN110188365B (en) | 2022-12-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105279152B (en) | A kind of method and apparatus for taking word to translate | |
US10078376B2 (en) | Multimodel text input by a keyboard/camera text input module replacing a conventional keyboard text input module on a mobile device | |
US10176409B2 (en) | Method and apparatus for image character recognition model generation, and vertically-oriented character image recognition | |
JP6317772B2 (en) | System and method for real-time display of foreign language character sets and their translations on resource-constrained mobile devices | |
US10248878B2 (en) | Character input method and system as well as electronic device and keyboard thereof | |
KR101667463B1 (en) | Optical character recognition on a mobile device using context information | |
US20160344860A1 (en) | Document and image processing | |
CN105843800B (en) | A kind of language message methods of exhibiting and device based on DOI | |
CN107656922A (en) | A kind of interpretation method, device, terminal and storage medium | |
CN105786930A (en) | Touch interaction based search method and apparatus | |
CN101339617A (en) | Mobile phones photographing and translation device | |
Ponsard et al. | An ocr-enabled digital comic books viewer | |
Ramiah et al. | Detecting text based image with optical character recognition for English translation and speech using Android | |
CN201035576Y (en) | Device of implementing instant translation using digital camera technique | |
CN103186587A (en) | Method for quickly translating English word of book through mobile phone | |
KR100615058B1 (en) | Mobile handset and the method of selecting an objective area of the chatacter recognition on a mobile handset | |
US10552535B1 (en) | System for detecting and correcting broken words | |
CN109542569A (en) | Method, apparatus, terminal and the storage medium of display language are set | |
US20140249798A1 (en) | Translation system and translation method thereof | |
CN201251767Y (en) | Intelligent electronic dictionary | |
CN103186581A (en) | Method for quickly acquiring pronunciation of uncommon word in book through mobile phone | |
KR20160133335A (en) | System for making dynamic digital image by voice recognition | |
JP6746947B2 (en) | Translation program and information processing device | |
JP7098897B2 (en) | Image processing equipment, programs and image data | |
KR20120133149A (en) | Data tagging apparatus and method thereof, and data search method using the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |