CN103488630B - Image processing method, device and terminal - Google Patents


Info

Publication number
CN103488630B
CN103488630B CN201310456269.6A
Authority
CN
China
Prior art keywords
described
image
translation
language
current text
Prior art date
Application number
CN201310456269.6A
Other languages
Chinese (zh)
Other versions
CN103488630A (en
Inventor
石新明
底浩
张雷
Original Assignee
小米科技有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 小米科技有限责任公司 filed Critical 小米科技有限责任公司
Priority to CN201310456269.6A priority Critical patent/CN103488630B/en
Publication of CN103488630A publication Critical patent/CN103488630A/en
Application granted granted Critical
Publication of CN103488630B publication Critical patent/CN103488630B/en

Abstract

The present disclosure relates to an image processing method, device and terminal capable of automatically translating text that a user cannot recognize into text the user can recognize, improving efficiency and convenience. The processing method includes: acquiring an image and recognizing the current text contained in the image; translating the current text from the current language into a target language to obtain the translated target text; and replacing the current text with the target text, displayed at the location of the current text in the image.

Description

Image processing method, device and terminal

Technical field

The present disclosure relates to the field of computer image processing technology, and in particular to an image processing method, device and terminal.

Background technology

In the related art, with the rapid development of the computer, communication and consumer electronics industries, more and more people use mobile terminals as aids in daily life. Common mobile terminals include personal digital assistants (PDAs), mobile phones and smartphones. These terminals are light and portable, have become an indispensable part of people's daily work and life, and the range of functions required of them keeps growing.

For people who often travel or go on business trips to different countries, language differences can make it hard to read signs and notices related to clothing, food, lodging and transportation, which causes considerable inconvenience. A user can install translation software on the mobile terminal and manually type the unrecognizable words into it for translation. However, the user first has to work out how to enter words he or she cannot even recognize, which is inconvenient, and typing the words takes time. How to conveniently and flexibly translate text a user cannot recognize into text the user can recognize has therefore become an urgent technical problem.

Summary of the invention

To overcome the problems in the related art, the present disclosure provides an image processing method, device and terminal capable of automatically translating text that a user cannot recognize into text the user can recognize, improving efficiency and convenience.

In one aspect, the present disclosure provides an image processing method, including:

acquiring an image, and recognizing the current text contained in the image;

translating the current text from the current language into a target language to obtain the translated target text;

replacing the current text with the target text and displaying it at the location of the current text in the image.

In the present disclosure, text the user cannot recognize is photographed or scanned into an image; the current text contained in the image is recognized and translated into target text the user can recognize, and the target text replaces the current text and is displayed directly at its location in the image. This achieves real-time translation: the user neither has to type the current text into translation software manually nor memorize or write down the translation result afterwards, which improves efficiency and convenience. Moreover, the user can understand the relevant information directly from the target text in the image.

Before the current text is translated from the current language into the target language, the processing method further includes: determining the target language. The disclosure may determine the target language in various ways.

Determining the target language includes: taking the language of stored messages as the target language; or taking a language selected by the user as the target language. Letting the user select the target language gives the user autonomy and flexibility.

The method may further include: enabling the global positioning system to obtain geographic location information. Translating the current text from the current language into the target language then includes: translating the current text from the current language into the target language using a preset translation library matched to the geographic location information. Performing real-time translation with a translation library matched to the location improves both the efficiency and the accuracy of translation.

The image may contain geographic location information. Translating the current text from the current language into the target language then includes: reading the geographic location information from the image, and translating the current text from the current language into the target language using a preset translation library matched to that geographic location information, which likewise improves the efficiency and accuracy of translation.

The image may contain scene information. Translating the current text from the current language into the target language then includes: recognizing the scene information of the image; obtaining the geographic location information of the image from the scene information; and translating the current text from the current language into the target language using a preset translation library matched to that geographic location information, which likewise improves the efficiency and accuracy of translation.

The translation library may comprise at least two region levels ordered from largest to smallest. Translating the current text using the preset translation library matched to the geographic location information then includes: matching the current text against the lowest-level translation library first and working upward through successively higher-level libraries until a matching translation result is found. This solves the problem that the same word may have different meanings in different regions.

In another aspect, the present disclosure provides an image processing device, including:

a recognition module configured to acquire an image and recognize the current text contained in the image;

a translation module configured to translate the current text from the current language into a target language and obtain the translated target text;

a display module configured to replace the current text with the target text and display it at the location of the current text in the image.

The processing device may further include:

a determination module configured to determine the target language before the translation module translates the current text from the current language into the target language.

The determination module includes:

a first determination unit configured to take the language of stored messages as the target language; or

a second determination unit configured to take a language selected by the user as the target language.

The processing device may further include: an acquisition module configured to enable the global positioning system and obtain geographic location information.

The translation module includes:

a translation unit configured to translate the current text from the current language into the target language using a preset translation library matched to the geographic location information.

The translation module may also include:

a reading unit configured to read geographic location information from the image when the image contains it;

a translation unit configured to translate the current text from the current language into the target language using a preset translation library matched to the geographic location information.

The translation module may alternatively include:

a recognition unit configured to recognize the scene information of the image when the image contains scene information;

an acquiring unit configured to obtain the geographic location information of the image from the scene information;

a translation unit configured to translate the current text from the current language into the target language using a preset translation library matched to the geographic location information.

The translation unit may include:

a translation subunit configured, when the translation library comprises at least two region levels ordered from largest to smallest, to match the current text against the lowest-level translation library first and work upward through successively higher-level libraries until a matching translation result is found.

In another aspect, the present disclosure provides a terminal device including an image acquisition device, a display screen, an image editing processor, a translation processor and a memory;

wherein the image acquisition device is configured to acquire an image;

the image editing processor is configured to recognize the current text contained in the image;

the translation processor is configured to translate the current text from the current language into a target language and obtain the translated target text;

and the display screen is configured to display the composite image in which the target text has replaced the current text.

It should be understood that the general description above and the detailed description below are merely exemplary and do not limit the present disclosure.

Brief description of the drawings

The accompanying drawings described here provide a further understanding of the disclosure and constitute part of the application; they do not limit the disclosure. In the drawings:

Fig. 1 is a flowchart of an exemplary image processing method;

Fig. 2 is a flowchart of an exemplary method of obtaining geographic location information using the global positioning system;

Fig. 3 is a flowchart of an exemplary method of reading geographic location information from an image;

Fig. 4 is a flowchart of an exemplary method of obtaining the geographic location information of an image from its scene information;

Fig. 5 is a flowchart of an exemplary application of the image processing method;

Fig. 6 is a block diagram of the main structure of an image processing device;

Fig. 7 is a detailed block diagram of an image processing device;

Fig. 8 is a block diagram of the determination module;

Fig. 9 is a detailed block diagram of another image processing device;

Figure 10 is one exemplary block diagram of the translation module;

Figure 11 is another exemplary block diagram of the translation module;

Figure 12 is a schematic structural diagram of a terminal device.

The above drawings show specific embodiments of the disclosure, which are described in more detail below. The drawings and descriptions are not intended to limit the scope of the disclosed concepts in any way, but to illustrate them to those skilled in the art by reference to specific embodiments.

Detailed description of the invention

To make the purpose, technical solutions and advantages of the disclosure clearer, the disclosure is described in further detail below with reference to the embodiments and drawings. The exemplary embodiments and their descriptions are used to explain the disclosure, not to limit it.

Embodiments of the disclosure provide an image processing method, device and terminal, described in detail below with reference to the drawings.

In the embodiments of the disclosure, text the user cannot recognize is photographed or scanned into an image; the current text in the image is recognized and translated into target text the user can recognize, and the target text replaces the current text at its location in the image. This achieves real-time translation without manual input, memorization or note-taking, improving efficiency and convenience, and the user can understand the relevant information directly from the target text in the image.

In one embodiment, the method may be implemented by a terminal. As shown in Fig. 1, the image processing method includes the following steps:

Step 101: acquire an image, and recognize the current text contained in the image.

For example, OCR (Optical Character Recognition) technology may be used to recognize the current text contained in the image. OCR is the process of scanning text material, then analyzing and processing the image file to obtain the text and layout information. Other image recognition technologies may also be used to recognize the current text in the image; they are not enumerated here.

Step 102: translate the current text from the current language into the target language to obtain the translated target text.

Before step 102, the method may further include step A: determining the target language.

In step A, the target language may be determined by taking the language of stored messages as the target language. For example, if the user uses Chinese as the language of daily notes or conversations, the language of the stored notes or conversations is Chinese, and Chinese can be taken as the target language. Alternatively, a language selected by the user may be taken as the target language: multiple languages can be presented for the user to choose from, and the user selects a language he or she can read, which gives the user autonomy and flexibility.
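The two strategies of step A can be sketched as follows. This is only a minimal illustration, not the patented implementation; the per-message `lang` field and the majority-vote fallback are assumptions made for the sketch (a real system would run language detection on the stored notes or conversations).

```python
def determine_target_language(stored_messages, user_choice=None):
    """Step A sketch: prefer an explicit user selection; otherwise fall
    back to the dominant language of the user's stored notes/messages."""
    if user_choice is not None:
        return user_choice
    # Majority language among stored messages (hypothetical 'lang' field).
    counts = {}
    for msg in stored_messages:
        counts[msg["lang"]] = counts.get(msg["lang"], 0) + 1
    return max(counts, key=counts.get)
```

For a user whose notes are mostly in Chinese, `determine_target_language(notes)` yields `"zh"`; passing `user_choice="en"` overrides that, matching the autonomy the text describes.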

In step 102, translating the current text from the current language into the target language can be implemented in several ways, for example:

Mode A1: enable the global positioning system to obtain geographic location information; then translate the current text from the current language into the target language using a preset translation library matched to that geographic location information.

For example, while travelling, Zhang San comes across a notice board whose words he cannot recognize. He triggers the terminal to enable the global positioning system, the terminal obtains geographic location information via GPS, and the camera photographs the words on the notice board to obtain an image. The current text in the image is recognized and then translated, via the preset translation library matched to the geographic location information, from the current language into a target language he can read. In this scheme, the global positioning system may also be enabled within a preset time after the camera shoots the notice board, because beyond that preset time Zhang San may already have left the place, and the geographic location information obtained would then be inaccurate. For example, the terminal may enable the global positioning system within 10 minutes of shooting the image and then obtain the geographic location information.
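The preset-time check described above can be sketched as follows; the 10-minute window is taken from the example, while the function name and return convention are illustrative assumptions.

```python
from datetime import datetime, timedelta

PRESET_WINDOW = timedelta(minutes=10)  # preset time from the example

def location_for_image(shot_time, fix_time, gps_fix):
    """Accept the GPS fix only if it was obtained within the preset
    window after the photo was taken; a later fix may be stale because
    the user may already have left the place."""
    if timedelta(0) <= fix_time - shot_time <= PRESET_WINDOW:
        return gps_fix
    return None  # fix too old: fall back to another location source
```

A fix obtained 5 minutes after shooting is used; one obtained 20 minutes later is rejected, which is exactly the staleness concern the paragraph raises.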

Mode A2: when the image contains geographic location information, read it from the image; then translate the current text from the current language into the target language using a preset translation library matched to that geographic location information. An image consists of a file header and the image content, and the file header stores attributes of the image such as its name, creation time, path, size and geographic location information, so the geographic location information can be read from the image.
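In practice the location stored in a file header (e.g. EXIF GPS tags) would be read with an image metadata library; the hedged sketch below shows only the conversion from the degree/minute/second form such headers typically store to a signed decimal coordinate.

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert degree/minute/second GPS values, as typically stored in
    an image file header, to a signed decimal coordinate.
    'ref' is the hemisphere reference: 'N'/'S' or 'E'/'W'."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

# Seoul is roughly 37 deg 34 min N, 126 deg 59 min E
lat = dms_to_decimal(37, 34, 0, "N")
lon = dms_to_decimal(126, 59, 0, "E")
```

The resulting decimal pair is what a matching step would compare against the regions covered by the preset translation libraries.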

Mode A3: when the image contains scene information, recognize the scene information of the image; obtain the geographic location information of the image from the scene information; then translate the current text from the current language into the target language using a preset translation library matched to that geographic location information. For example, if the scene information contained in the image is "the Great Wall", the geographic location information obtained from it is "Beijing, China", and the current text is translated from the current language into the target language via the preset translation library matched to "Beijing, China".

In modes A1 to A3, real-time translation via a preset translation library matched to the geographic location information improves both the efficiency and the accuracy of translation. For example, when OCR software recognizes the current text in an image, the recognition result may be ambiguous: the language of the current text might be Japanese or might be Korean, so errors can occur during translation and the result may be inaccurate. Once the geographic location information of the image is determined to be South Korea, real-time translation can be performed with the preset translation library matched to South Korea, improving accuracy. The preset translation libraries may be built for important tourist regions of different countries, covering content such as notices at airports, hotels, scenic spots and restaurants, map content, or dish names.

In addition, since the same word may have different meanings in different regions, the translation library in modes A1 to A3 may comprise at least two region levels ordered from largest to smallest. Translating the current text from the current language into the target language via the preset translation library may then include step B: matching the current text against the lowest-level translation library first and working upward through successively higher-level libraries until a matching translation result is found. For example, when the geographic location information is Chicago, Illinois, USA, the region levels from largest to smallest are: USA, Illinois, Chicago. When step B is performed, the current text is first matched against the lowest-level Chicago library; if the match fails, it is matched against the higher-level Illinois library; if that fails, against the highest-level USA library, until a matching translation result is found. Besides classifying by region size, other embodiments may classify translation libraries by tags, for instance labels such as clothing, food, lodging and travel.
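Step B's lowest-level-first matching can be sketched as follows; the library contents are hypothetical toy entries, not part of the patent.

```python
def translate_with_levels(text, libraries):
    """Step B sketch: 'libraries' is ordered from the SMALLEST region
    (lowest level) up to the largest; return the first match found."""
    for lib in libraries:
        if text in lib:
            return lib[text]
    return None  # no library matched; caller may fall back further

# Hypothetical Chicago / Illinois / USA hierarchy from the example
chicago = {"the L": "芝加哥高架铁路"}
illinois = {"tollway": "伊利诺伊收费公路"}
usa = {"restroom": "洗手间"}

result = translate_with_levels("tollway", [chicago, illinois, usa])
```

A term found in the Chicago library never reaches the Illinois or USA libraries, which is how a region-specific meaning takes precedence over a country-wide one.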

Step 103: replace the current text with the target text, and display it at the location of the current text in the image.

In step 103, the target text obtained by translation is edited and processed, then replaces the current text and is displayed directly at its location in the image, so the user does not need to memorize or write it down and can consult it conveniently.
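A real implementation would rasterize the replacement with an imaging library; the sketch below only models the data flow of step 103 — pairing each recognized text region with its translation to produce draw operations over the original location — and all names are illustrative.

```python
def compose_replacements(regions, translations):
    """Step 103 sketch: pair each recognized text region (bounding box
    plus original text) with its translation, producing draw operations
    a renderer would paint over the original text's location."""
    ops = []
    for region, target in zip(regions, translations):
        ops.append({
            "box": region["box"],     # (x, y, w, h) of the current text
            "erase": region["text"],  # current text being covered
            "draw": target,           # target text shown in its place
        })
    return ops

regions = [{"box": (10, 20, 200, 40), "text": "주차금지"}]
ops = compose_replacements(regions, ["No parking"])
```

Keeping the original bounding box is what makes the target text appear "at the location of the current text in the image", as the step requires.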

The embodiment shown in Fig. 1 has several implementations for each stage, as described above; several embodiments are used below to detail the realization process.

In one embodiment, implemented by a terminal as shown in Fig. 2, the method of obtaining geographic location information via the global positioning system includes the following steps:

Step 201: enable the global positioning system and obtain geographic location information.

Step 202: obtain the image shot by the camera, and recognize the current text contained in the image.

Step 203: output a prompt to the user to select a language from among multiple languages.

Step 204: receive the language selected by the user.

Step 205: take the language selected by the user as the target language.

Step 206: translate the current text from the current language into the target language using the preset translation library matched to the geographic location information, obtaining the translated target text.

In step 206, the translation library may comprise at least two region levels ordered from largest to smallest; the current text may then be matched against the lowest-level library first and against successively higher-level libraries until a matching translation result is found.

Step 207: replace the current text with the translated target text, and display it at the location of the current text in the image.

In this embodiment, the global positioning system is used to obtain geographic location information, and the current text is translated from the current language into the target language via the preset translation library matched to that location, improving translation accuracy; the translated target text replaces the current text and is displayed directly at its location in the image, sparing the user from memorizing or recording it and making it convenient to consult.

In one embodiment, implemented by a terminal as shown in Fig. 3, the method of reading geographic location information from the image includes the following steps:

Step 301: acquire an image, and recognize the current text contained in the image.

Step 302: take the language of stored messages as the target language.

In step 302, it is also possible to output a prompt to the user to select a language from among multiple languages, receive the language selected by the user, and take it as the target language, giving the user autonomy and flexibility.

Step 303: read the geographic location information from the image.

Step 304: translate the current text from the current language into the target language using the preset translation library matched to the geographic location information, obtaining the translated target text.

Step 305: replace the current text with the translated target text, and display it at the location of the current text in the image.

In this embodiment, geographic location information is read from the image, and the current text is translated from the current language into the target language via the preset translation library matched to that location, improving translation accuracy; the translated target text replaces the current text and is displayed directly at its location in the image, sparing the user from memorizing or recording it and making it convenient to consult.

In one embodiment, implemented by a terminal as shown in Fig. 4, when the image contains scene information, the method of obtaining the geographic location information of the image from the scene information includes the following steps:

Step 401: acquire an image, and recognize the current text contained in the image.

Step 402: take the language of stored messages as the target language.

Step 403: recognize the scene information in the image.

Step 404: obtain the geographic location information of the image from the scene information.

Step 405: translate the current text from the current language into the target language using the preset translation library matched to the geographic location information, obtaining the translated target text.

Step 406: replace the current text with the translated target text, and display it at the location of the current text in the image.

In this embodiment, when the image contains scene information, the scene information is recognized, the geographic location information of the image is obtained from it, and the current text is translated from the current language into the target language via the preset translation library matched to that location, improving translation accuracy; the translated target text replaces the current text and is displayed directly at its location in the image, sparing the user from memorizing or recording it and making it convenient to consult.

In one embodiment, implemented by a mobile phone, the method can be applied to the following scenario: Zhang San decides to travel to South Korea, but he is fluent only in Chinese and knows little Korean. After arriving in South Korea, he enables the phone's global positioning system and photographs Korean text he cannot recognize; the phone automatically translates the Korean he cannot read into the Chinese he can, bringing convenience to his trip. As shown in Fig. 5, the method of this embodiment includes the following steps:

Step 501: the phone enables the global positioning system and obtains geographic location information, which is "Seoul, South Korea".

Step 502: the phone obtains the image shot by the camera and recognizes the current text contained in it; the language of the current text is Korean.

Step 503: the phone outputs a prompt for Zhang San to select one language from "Chinese, English, Korean, Japanese, German".

Step 504: the phone receives the "Chinese" language selected by Zhang San.

Step 505: the phone translates the current text from Korean into Chinese using the preset translation library matched to "Seoul, South Korea", obtaining the translated target text.

The region levels of this translation library, from largest to smallest, are South Korea and Seoul. The current text is preferentially matched against the Seoul library; if no match is found, it is matched against the South Korea library, until a matching translation result is found.

Step 506: the phone replaces the current text with the translated target text, and displays it at the location of the current text in the image.

In this embodiment, the phone automatically translates the Korean that user Zhang San cannot recognize into the Chinese he can recognize, which is convenient for the user; the preset translation library matched to the geographic location information improves translation accuracy; and the translated target text replaces the current text directly at its location in the image, making it convenient to consult.

The above description explains how the image processing method is implemented. The process can be realized by a device, whose internal structure and functions are introduced below.

In one embodiment, as shown in Fig. 6, the image processing device includes: a recognition module 601, a translation module 602 and a display module 603.

The recognition module 601 is configured to acquire an image and recognize the current text contained in the image;

the translation module 602 is configured to translate the current text from the current language into the target language and obtain the translated target text;

the display module 603 is configured to replace the current text with the target text and display it at the location of the current text in the image.

In one embodiment, as shown in Fig. 7, the device shown in Fig. 6 may further include a determination module 701.

The determination module 701 is configured to determine the target language before the translation module 602 translates the current text from the current language into the target language.

In one embodiment, as shown in Fig. 8, the determination module 701 may include a first determination unit 801 or a second determination unit 802.

The first determination unit 801 is configured to take the language of stored messages as the target language.

The second determination unit 802 is configured to take a language selected by the user as the target language.

In one embodiment, as shown in Fig. 9, the device shown in Fig. 6 may further include an acquisition module 901.

The acquisition module 901 is configured to enable the global positioning system and obtain geographic location information.

The translation module 602 may include a translation unit 902.

The translation unit 902 is configured to translate the current text from the current language into the target language using a preset translation library matched to the geographic location information.

In one embodiment, as shown in Figure 10, the translation module 602 above may include a reading unit 1001 and a translation unit 902.

The reading unit 1001 is configured to read geographic location information from the image when the image contains such information;

The translation unit 902 is configured to translate the current character from the current language into the target language through a preset translation library matched to the geographic location information.
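The reading unit and location-matched lookup can be sketched as follows. This is a hypothetical model: the image's file-header attributes (name, creation time, path, size, geographic location) are represented as a plain dict, whereas a real image would carry them in EXIF-style metadata; all keys and library names are illustrative.

```python
# Reading unit: pull the geographic location out of the image's attributes.
def read_location(image):
    # Return the stored geographic location, or None when absent.
    return image.get("header", {}).get("location")

# Translation unit: pick the preset translation library that matches the
# location, then look the current text up in it.
def translate_by_location(text, image, libraries):
    location = read_location(image)
    library = libraries.get(location, {})  # preset library matched to the location
    return library.get(text, text)         # fall back to the original text
```

For example, with a library keyed by `"Paris"`, a photo whose header records that location would have `"sortie"` translated to `"exit"`.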

In one embodiment, as shown in Figure 11, the translation module 602 above may include a recognition unit 1101, an acquiring unit 1102 and a translation unit 902.

The recognition unit 1101 is configured to identify the scene information of the image when the image contains scene information;

The acquiring unit 1102 is configured to obtain the geographic location information of the image according to the scene information;

The translation unit 902 is configured to translate the current character from the current language into the target language through a preset translation library matched to the geographic location information.
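The scene-based path can be sketched too, under stated assumptions: the recognition unit is stubbed out as a lookup from a recognized scene label (e.g. a landmark in the photo) to a geographic location, whereas a real system would run an image classifier. All labels and locations below are made up for illustration.

```python
# Acquiring unit, modeled as a table: map a recognized scene label to the
# geographic location it implies. Entries are hypothetical examples.
SCENE_TO_LOCATION = {
    "eiffel_tower": "Paris",
    "big_ben": "London",
}

def location_from_scene(scene_label):
    # Derive the image's location from its scene information; None when the
    # scene is not recognized.
    return SCENE_TO_LOCATION.get(scene_label)
```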

In one embodiment, the translation unit 902 above may include:

a translation subunit, configured so that, when the translation library contains at least two region levels ordered from large to small, the current character is matched starting from the lowest-level translation library among the at least two region levels and proceeding in order to higher-level translation libraries, until a matching translation result is found.
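The hierarchical lookup above can be sketched as follows. This is an interpretation of the described matching order, not the patent's code: levels are stored from the largest region (e.g. country) to the smallest (e.g. district), and matching starts at the smallest level and climbs upward.

```python
# Translation subunit: try the smallest region level first, then climb to
# larger levels, stopping at the first level that yields a translation.

def hierarchical_translate(text, levels):
    # `levels` is ordered from the largest region to the smallest, so the
    # small-to-large matching order walks the list in reverse.
    for library in reversed(levels):
        if text in library:
            return library[text]
    return None  # no region level produced a matching result
```

A district-level entry thus shadows a country-level entry for the same text, which is the point of matching from the smallest region upward.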

Figure 12 is a structural schematic diagram of a terminal device in an embodiment of the disclosure. Referring to Figure 12, this terminal may be used to implement the method provided in the above embodiments. Specifically:

The terminal device 800 may include a communication unit 110, a memory 120 including one or more computer-readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a WiFi (Wireless Fidelity) module 170, a processor 180 including one or more processing cores, a power supply 190, and other components. Those skilled in the art will understand that the terminal device structure shown in Figure 12 does not constitute a limitation on the terminal device, which may include more or fewer components than illustrated, combine certain components, or adopt a different arrangement of components. Wherein:

The communication unit 110 may be used for receiving and sending messages, or for receiving and sending signals during a call; the communication unit 110 may be an RF (Radio Frequency) circuit, a router, a modem or other network communication equipment. In particular, when the communication unit 110 is an RF circuit, it receives downlink information from a base station and hands it to one or more processors 180 for processing, and sends uplink data to the base station. Generally, the RF circuit serving as the communication unit includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer and so on. In addition, the communication unit 110 may also communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System of Mobile communication), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Messaging Service) and so on. The memory 120 may be used to store software programs and modules; the processor 180 performs various functional applications and data processing by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area may store data created according to the use of the terminal device 800 (such as audio data, a phone book, etc.).

In addition, the memory 120 may include a high-speed random access memory and may also include a non-volatile memory, for instance at least one magnetic disk storage device, a flash memory device or another volatile solid-state storage component. Correspondingly, the memory 120 may also include a memory controller to provide the processor 180 and the input unit 130 with access to the memory 120.

The input unit 130 may be used to receive input numeric or character information, and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. Specifically, the input unit 130 may include a touch-sensitive surface 131 and other input devices 132. The touch-sensitive surface 131, also called a touch display screen or a touchpad, can collect touch operations by the user on or near it (such as operations by the user on or near the touch-sensitive surface 131 using a finger, a stylus or any other suitable object or accessory), and drive the corresponding connected device according to a preset program. Optionally, the touch-sensitive surface 131 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 180, and can receive and execute commands sent by the processor 180. Furthermore, the touch-sensitive surface 131 may be implemented in multiple types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch-sensitive surface 131, the input unit 130 may also include other input devices 132. Specifically, the other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick and the like.

The display unit 140 may be used to display information input by the user or information provided to the user, and the various graphical user interfaces of the terminal device 800; these graphical user interfaces may be composed of graphics, text, icons, video and any combination thereof. The display unit 140 may include a display panel 141; optionally, the display panel 141 may be configured in forms such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode). Further, the touch-sensitive surface 131 may cover the display panel 141; when the touch-sensitive surface 131 detects a touch operation on or near it, it transmits the operation to the processor 180 to determine the type of the touch event, and the processor 180 then provides the corresponding visual output on the display panel 141 according to the type of the touch event. Although in Figure 12 the touch-sensitive surface 131 and the display panel 141 realize the input and output functions as two independent components, in some embodiments the touch-sensitive surface 131 and the display panel 141 may be integrated to realize the input and output functions.

The terminal device 800 may also include at least one sensor 150, such as an optical sensor, a motion sensor and other sensors. The optical sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the display panel 141 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 141 and/or the backlight when the terminal device 800 is moved close to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when at rest, and can be used in applications that recognize the attitude of the mobile phone (such as landscape/portrait switching, related games, magnetometer attitude calibration), vibration-recognition related functions (such as a pedometer, tapping), etc. As for the gyroscope, barometer, hygrometer, thermometer, infrared sensor and other sensors that may also be configured in the terminal device 800, no details are repeated here.

The audio circuit 160, a speaker 161 and a microphone 162 may provide an audio interface between the user and the terminal device 800. The audio circuit 160 can transmit the electrical signal converted from the received audio data to the speaker 161, which converts it into an acoustic signal for output; on the other hand, the microphone 162 converts the collected acoustic signal into an electrical signal, which is received by the audio circuit 160 and converted into audio data; the audio data is then output to the processor 180 for processing and sent through the RF circuit 110 to, for instance, another terminal device, or the audio data is output to the memory 120 for further processing. The audio circuit 160 may also include an earphone jack to provide communication between a peripheral earphone and the terminal device 800.

In order to realize wireless communication, the terminal device may be configured with a wireless communication unit 170, which may be a WiFi module. WiFi belongs to short-range wireless transmission technology; through the wireless communication unit 170, the terminal device 800 can help the user send and receive e-mail, browse web pages, access streaming media and so on, providing the user with wireless broadband Internet access. Although Figure 12 shows the wireless communication unit 170, it can be understood that it is not an essential part of the terminal device 800 and may be omitted as needed without changing the essential scope of the invention.

The processor 180 is the control center of the terminal device 800. It connects the various parts of the whole mobile phone using various interfaces and lines, and performs the various functions of the terminal device 800 and processes data by running or executing the software programs and/or modules stored in the memory 120 and calling the data stored in the memory 120, thereby monitoring the mobile phone as a whole. Optionally, the processor 180 may include one or more processing cores; preferably, the processor 180 may integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, user interface, application programs and so on, and the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 180.

The terminal device 800 also includes a power supply 190 (such as a battery) that supplies power to the various components. Preferably, the power supply may be logically connected to the processor 180 through a power management system, so that functions such as charging, discharging and power consumption management are realized through the power management system. The power supply 190 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other such component.

Although not shown, the terminal device 800 may also include a camera, a Bluetooth module and so on, which are not detailed here. In this embodiment, the terminal device includes an image acquisition device, a display screen, an image editing processor, a translation processor and a memory;

wherein the image acquisition device is used to obtain an image;

the image editing processor is used to identify the current character contained in the image;

the translation processor is used to translate the current character from the current language into the target language, obtaining the translated target text;

the display screen is used to display the composite image in which the current character is replaced with the target text.

The memory may also include instructions for performing the following operations:

before the current character is translated from the current language into the target language, the processing method further includes determining the target language.

The memory may also include instructions for performing the following operations:

determining the target language includes:

using the language of the stored information as the target language; or

using the language selected by the user as the target language.

The memory may also include instructions for performing the following operations:

the method further includes: turning on the global positioning system and obtaining geographic location information;

translating the current character from the current language into the target language includes:

translating the current character from the current language into the target language through a preset translation library matched to the geographic location information.

The memory may also include instructions for performing the following operations:

the image contains geographic location information;

translating the current character from the current language into the target language includes:

reading the geographic location information from the image;

translating the current character from the current language into the target language through a preset translation library matched to the geographic location information.

The memory may also include instructions for performing the following operations:

the image contains scene information;

translating the current character from the current language into the target language includes:

identifying the scene information of the image;

obtaining the geographic location information of the image according to the scene information;

translating the current character from the current language into the target language through a preset translation library matched to the geographic location information.

The memory may also include instructions for performing the following operations:

the translation library contains at least two region levels, ordered from large to small;

translating the current character from the current language into the target language through the preset translation library matched to the geographic location information includes:

matching the current character starting from the lowest-level translation library among the at least two region levels and proceeding in order to higher-level translation libraries, until a matching translation result is found.

In the embodiments of the disclosure, the user photographs or scans a current character that he or she cannot recognize into an image; by identifying the current character contained in the image, the current character is translated into a target text that the user can recognize, and the target text replaces the current character and is displayed directly at the location of the current character in the image. Real-time translation is thus achieved: the user neither needs to manually enter the current character into translation software, nor needs to memorize or record the translation result afterward, which improves efficiency and is convenient for the user. Moreover, the user can directly understand the relevant information from the target text in the image, which is convenient.

In addition, typically, the mobile terminal described in the disclosure may be any of various handheld terminal devices, for instance a mobile phone or a PDA (Personal Digital Assistant); therefore the protection scope of the disclosure should not be limited to a certain specific type of mobile terminal.

In addition, the method according to the disclosure may also be implemented as a computer program executed by a CPU (central processing unit). When the computer program is executed by the CPU, it performs the above functions defined in the method of the disclosure.

In addition, the above method steps and system units may also be realized using a controller and a computer-readable storage device storing a computer program that enables the controller to realize the above steps or unit functions.

In addition, it should be appreciated that the computer-readable storage device described herein (for example, a memory) can be volatile memory or non-volatile memory, or can include both volatile memory and non-volatile memory. As a non-limiting example, the non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. The volatile memory can include random access memory (RAM), which can serve as an external cache. As a non-limiting example, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM) and direct Rambus RAM (DRRAM). The storage devices of the disclosed aspects are intended to include, but are not limited to, these and other suitable types of memory.

Those skilled in the art will also understand that the various exemplary logical blocks, modules, circuits and algorithm steps described in conjunction with the disclosure herein may be implemented as electronic hardware, computer software or a combination of both. To clearly illustrate this interchangeability of hardware and software, the functions of the various exemplary components, blocks, modules, circuits and steps have been described above in general terms. Whether such functions are implemented as software or as hardware depends on the specific application and the design constraints imposed on the whole system. Those skilled in the art may realize the described functions in various ways for each specific application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.

The various exemplary logical blocks, modules and circuits described in conjunction with the disclosure herein may be realized or performed using the following components designed to perform the functions described here: a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components. The general-purpose processor may be a microprocessor, but alternatively the processor may be any conventional processor, controller, microcontroller or state machine. The processor may also be implemented as a combination of computing devices, for instance a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

The steps of the method or algorithm described in conjunction with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.

In one or more exemplary designs, the described functions may be realized in hardware, software, firmware or any combination thereof. If realized in software, the functions may be stored on, or transmitted by, a computer-readable medium as one or more instructions or code. Computer-readable media include computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. As a non-limiting example, such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage devices, magnetic disk storage devices or other magnetic storage devices, or any other medium that can be used to carry or store required program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer or a general-purpose or special-purpose processor. In addition, any connection may properly be termed a computer-readable medium. For example, if the software is sent from a website, server or other remote source using a coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL) or wireless technologies such as infrared, radio and microwave, then the coaxial cable, fiber-optic cable, twisted pair, DSL or wireless technologies such as infrared, radio and microwave are included in the definition of medium. As used herein, disk and disc include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

Although the content disclosed above shows exemplary embodiments of the disclosure, it should be noted that various changes and modifications may be made without departing from the scope of the disclosure defined by the claims. The functions, steps and/or actions of the method claims according to the disclosed embodiments described herein need not be performed in any particular order. In addition, although elements of the disclosure may be described or claimed in the singular, the plural is also contemplated unless limitation to the singular is explicitly stated.

The specific embodiments described above further explain the purpose, technical solutions and beneficial effects of the disclosure in detail. It should be understood that the above are merely specific embodiments of the disclosure and are not intended to limit the protection scope of the disclosure; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the disclosure shall be included within the protection scope of the disclosure.

Claims (9)

1. A processing method of an image, characterized by including:
obtaining an image, and identifying the current character contained in the image;
translating the current character from the current language into the target language, and obtaining the translated target text;
replacing the current character with the target text, and displaying it at the location of the current character in the image;
wherein the image contains geographic location information, the image includes a file header, attributes of the image are stored in the file header, and the attributes of the image include: the name, creation time, path, size and geographic location information of the image;
translating the current character from the current language into the target language includes:
reading the geographic location information from the attributes of the image;
translating the current character from the current language into the target language through a preset translation library matched to the geographic location information; or
the image contains scene information;
translating the current character from the current language into the target language includes:
identifying the scene information of the image;
obtaining the geographic location information of the image according to the scene information;
translating the current character from the current language into the target language through a preset translation library matched to the geographic location information.
2. The processing method according to claim 1, characterized in that, before the current character is translated from the current language into the target language, the processing method further includes:
determining the target language.
3. The processing method according to claim 2, characterized in that determining the target language includes:
using the language of the stored information as the target language; or
using the language selected by the user as the target language.
4. The processing method according to claim 1, characterized in that
the translation library contains at least two region levels, ordered from large to small;
translating the current character from the current language into the target language through the preset translation library matched to the geographic location information includes:
matching the current character starting from the lowest-level translation library among the at least two region levels and proceeding in order to higher-level translation libraries, until a matching translation result is found.
5. A processing device of an image, characterized by including:
an identification module, used to obtain an image and identify the current character contained in the image;
a translation module, used to translate the current character from the current language into the target language and obtain the translated target text;
a display module, used to replace the current character with the target text and display it at the location of the current character in the image;
wherein the translation module includes:
a reading unit, used to read geographic location information from the attributes of the image when the attributes of the image contain geographic location information, wherein the image includes a file header, attributes of the image are stored in the file header, and the attributes of the image include: the name, creation time, path, size and geographic location information of the image;
a translation unit, used to translate the current character from the current language into the target language through a preset translation library matched to the geographic location information; or
the translation module includes:
a recognition unit, used to identify the scene information of the image when the image contains scene information;
an acquiring unit, used to obtain the geographic location information of the image according to the scene information;
a translation unit, used to translate the current character from the current language into the target language through a preset translation library matched to the geographic location information.
6. The processing device according to claim 5, characterized in that the processing device further includes:
a determining module, used to determine the target language before the translation module translates the current character from the current language into the target language.
7. The processing device according to claim 6, characterized in that the determining module includes:
a first determining unit, used to use the language of the stored information as the target language; or
a second determining unit, used to use the language selected by the user as the target language.
8. The processing device according to claim 5, characterized in that the translation unit includes:
a translation subunit, used to match the current character, when the translation library contains at least two region levels ordered from large to small, starting from the lowest-level translation library among the at least two region levels and proceeding in order to higher-level translation libraries, until a matching translation result is found.
9. A terminal device, characterized in that the terminal device includes an image acquisition device, a display screen, an image editing processor, a translation processor and a memory;
wherein the image acquisition device is used to obtain an image;
the image editing processor is used to identify the current character contained in the image;
the translation processor is used to translate the current character from the current language into the target language and obtain the translated target text;
the display screen is used to display the composite image in which the current character is replaced with the target text;
wherein the image contains geographic location information, the image includes a file header, attributes of the image are stored in the file header, and the attributes of the image include: the name, creation time, path, size and geographic location information of the image;
the translation processor is further used to:
read the geographic location information from the attributes of the image;
translate the current character from the current language into the target language through a preset translation library matched to the geographic location information;
or
the image contains scene information;
the translation processor is further used to:
identify the scene information of the image;
obtain the geographic location information of the image according to the scene information;
translate the current character from the current language into the target language through a preset translation library matched to the geographic location information.
CN201310456269.6A 2013-09-29 2013-09-29 The processing method of a kind of image, device and terminal CN103488630B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310456269.6A CN103488630B (en) 2013-09-29 2013-09-29 The processing method of a kind of image, device and terminal


Publications (2)

Publication Number Publication Date
CN103488630A CN103488630A (en) 2014-01-01
CN103488630B true CN103488630B (en) 2016-06-08

Family

ID=49828872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310456269.6A CN103488630B (en) 2013-09-29 2013-09-29 The processing method of a kind of image, device and terminal

Country Status (1)

Country Link
CN (1) CN103488630B (en)

US8144990B2 (en) * 2007-03-22 2012-03-27 Sony Ericsson Mobile Communications Ab Translation and display of text in picture
US8842909B2 (en) * 2011-06-30 2014-09-23 Qualcomm Incorporated Efficient blending methods for AR applications

Similar Documents

Publication Publication Date Title
US20160364390A1 (en) Contact Grouping Method and Apparatus
CN103701926B (en) A kind of methods, devices and systems for obtaining fault reason information
CN103501333B (en) Method, device and terminal equipment for downloading files
CN103327102B (en) A kind of method and apparatus recommending application program
JP6130926B2 (en) Gesture conversation processing method, apparatus, terminal device, program, and recording medium
CN103514581B (en) Screen picture capturing method, device and terminal equipment
CN103327189B (en) Method and device for uploading, browsing and deleting pictures
CN104219617B (en) Service acquisition method and device
CN103632165B (en) A kind of method of image procossing, device and terminal device
CN104113782A (en) Video-based sign-in method, terminal, server and system
US9507990B2 (en) Two-dimensional code recognition method and apparatus
CN103559518B (en) A kind of NFC data transmission, device and terminal device
US10095666B2 (en) Method and terminal for adding quick link
CN104133832B (en) The recognition methods of pirate application and device
US9241242B2 (en) Information recommendation method and apparatus
CN104516893B (en) Information storage means, device and communicating terminal
CN103905885A (en) Video live broadcast method and device
US10462764B2 (en) Method and apparatus for identifying pseudo base-station, and terminal
CN103871051A (en) Image processing method, device and electronic equipment
CN105005457A (en) Geographical location display method and apparatus
CN103546740B (en) A kind of method, device and terminal equipment detecting camera
CN104518953A (en) Message deleting method, instant messaging terminal and system
CN103874018A (en) Access point information sharing method and device
WO2014169715A1 (en) Information recommendation method and apparatus
CN103455582A (en) Display method of navigation page of browser and mobile terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant