CN104463158A - Translation method and device - Google Patents

Translation method and device

Info

Publication number
CN104463158A
Authority
CN
China
Prior art keywords
image
camera
word information
word
acquisition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410758187.1A
Other languages
Chinese (zh)
Other versions
CN104463158B (en)
Inventor
吴磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201410758187.1A
Publication of CN104463158A
Application granted
Publication of CN104463158B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/40 - Processing or translation of natural language

Abstract

The invention is applied to the field of information technology and provides a translation method and device. The method includes: acquiring an image through a camera and recognizing the acquired image; when word information is recognized in the acquired image, controlling the camera to rotate and acquire the next image, until no word information is recognized in the acquired next image or the camera has rotated to a maximum angle value; and processing the recognized word information according to the order of image acquisition, and translating the processed word information. By controlling the camera to rotate, the word information of a short sentence, a long sentence, or a paragraph is extracted, word-by-word capture and translation is avoided, and translation efficiency is improved; because the extracted word information is translated only after all the images have been acquired, the accuracy of the translation result is also improved.

Description

Translation method and device
Technical field
The present invention belongs to the field of information technology and provides a translation method and device.
Background art
When an existing mobile terminal performs word-capture translation through its camera, it can only recognize one word at a time. When a user wants to translate a long sentence, each word has to be captured and translated by the camera one by one, and the user must then manually assemble the per-word translation results into a translation of the sentence or paragraph. This is inefficient and time-consuming, and the translation result for a long sentence is often inaccurate.
Summary of the invention
In view of this, embodiments of the present invention provide a translation method and device, to solve the problems that word-capture translation through a camera is inefficient and time-consuming and that the translation result for a long sentence is inaccurate.
In a first aspect, an embodiment of the present invention provides a translation method, comprising:
acquiring an image through a camera, and recognizing the acquired image;
when word information is recognized in the acquired image, controlling the camera to rotate and acquire a next image, until no word information is recognized in the acquired next image or the camera has rotated to a maximum angle value;
processing the recognized word information according to the order in which the images were acquired, and translating the processed word information.
In a second aspect, an embodiment of the present invention provides a translation device, comprising:
an image acquisition unit, configured to acquire an image through a camera and to recognize the acquired image;
a camera rotation unit, configured to control the camera to rotate and acquire a next image when word information is recognized in the acquired image, until no word information is recognized in the acquired next image or the camera has rotated to the maximum angle value;
a translation unit, configured to process the recognized word information according to the order in which the images were acquired, and to translate the processed word information.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects: an image is acquired through a camera; when word information is recognized in the acquired image, the camera is continuously controlled to rotate and acquire the next image, until no word information is recognized in the acquired next image or the camera has rotated to the maximum angle value; the recognized word information is then processed according to the order in which the images were acquired, and the processed word information is translated. By controlling the camera to rotate, the word information of a short sentence, a long sentence, or a paragraph is extracted, word-by-word capture and translation is avoided, and translation efficiency is improved; because the extracted word information is translated only after all the images have been acquired, the accuracy of the translation result is also improved.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of the implementation of the translation method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of the specific implementation of acquiring an image through the camera in step S101 of the translation method provided by an embodiment of the present invention;
Fig. 3 is a flowchart of the specific implementation of processing the recognized word information according to the order of image acquisition in step S103 of the translation method provided by an embodiment of the present invention;
Fig. 4 is a structural block diagram of the translation device provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.
Fig. 1 shows the implementation flow of the translation method provided by an embodiment of the present invention, which is detailed as follows.
In step S101, an image is acquired through a camera, and the acquired image is recognized.
It should be noted that the embodiment of the present invention is performed by a mobile terminal with a rotatable camera, for example a mobile phone or a tablet computer with a rotatable camera; it may also be performed by any other device with a rotatable camera, which is not limited here.
Optionally, the preview image obtained by the camera is displayed on the screen, so that the user can aim the camera at the word region to be translated according to the preview image. After the image is acquired through the camera, the acquired image is displayed on the screen.
In step S102, when word information is recognized in the acquired image, the camera is controlled to rotate and acquire a next image, until no word information is recognized in the acquired next image or the camera has rotated to the maximum angle value.
In the embodiment of the present invention, when word information is recognized in the acquired image, the camera is continuously controlled to rotate and acquire the next image, so that the camera keeps acquiring the images corresponding to the word information of a short sentence, a long sentence, or a paragraph. While the camera is being rotated, if the next image acquired after a rotation contains no new word information, or the camera has rotated to the maximum angle value, the camera stops rotating.
Preferably, controlling the camera to rotate and acquire a next image in step S102 is specifically:
controlling the camera to rotate by a preset angle toward a preset direction and acquire the next image.
As one embodiment of the present invention, the preset direction is determined according to the user's reading habit, to improve translation efficiency; for example, the preset direction is to the right. While the camera is rotating, one image is acquired for every preset angle of rotation, which, on the premise that no words are missed, reduces the number of acquired images and therefore the amount of data in the subsequent image synthesis.
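The capture loop of steps S101 and S102 can be sketched as follows. This is only an illustrative sketch, not the patent's implementation: `camera.capture()`, `camera.rotate()` and `recognize_text()` are hypothetical stand-ins for the rotating-camera interface and the OCR routine, and the angle values are made-up defaults.

```python
def capture_text_frames(camera, recognize_text, step_deg=15.0, max_angle_deg=90.0):
    """Rotate the camera by a preset step toward the preset direction and keep
    capturing frames while they contain word information.

    Stops when a newly captured frame yields no word information or the
    camera has reached its maximum rotation angle (step S102).
    """
    frames = []
    angle = 0.0
    frame = camera.capture()            # step S101: acquire the first image
    while recognize_text(frame):        # word information recognized in this frame
        frames.append(frame)
        if angle >= max_angle_deg:      # camera already at the maximum angle value
            break
        camera.rotate(step_deg)         # rotate by the preset angle toward the preset direction
        angle += step_deg
        frame = camera.capture()        # acquire the next image
    return frames
```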
In step S103, the recognized word information is processed according to the order in which the images were acquired, and the processed word information is translated.
In the embodiment of the present invention, after image acquisition is complete, the word information in all the images is extracted and connected into a short sentence, a long sentence, or a paragraph, and the connected sentence or paragraph is translated through a local database or a cloud server. Each word is therefore translated according to its context, word-by-word translation is avoided, and the accuracy of word-capture translation through the camera is improved.
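A minimal sketch of the final stage of step S103, assuming the per-frame word information has already been extracted in acquisition order and that the fragments come from a space-delimited language; `translate` stands in for whichever local-database or cloud-server backend is used, which the patent does not specify.

```python
def translate_in_context(word_fragments, translate):
    """Join the recognized word information in the order the images were
    acquired, then translate the whole sentence or paragraph at once so
    each word is translated in context rather than one by one."""
    text = " ".join(fragment.strip() for fragment in word_fragments)
    return translate(text)
```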
Fig. 2 shows the specific implementation flow of acquiring an image through the camera in step S101 of the translation method provided by an embodiment of the present invention. With reference to Fig. 2:
In step S201, a word-capture region is generated according to a first preset value, and the generated word-capture region is displayed on the screen;
In step S202, an image is acquired through the camera within the range limited by the word-capture region.
Here, the first preset value may be a system default or may be set by the user, which is not limited here. The first preset value sets the length and width of the word-capture region, so that the translation device extracts word information only from the part of the image within the word-capture region.
Preferably, after the camera is controlled to rotate and acquire the next image in step S102, the method further comprises:
when word information is recognized in the acquired next image, expanding the word-capture region according to a second preset value.
When word information is recognized in the acquired next image, it is very likely that what the current user wants to translate is not a single word but a short sentence, a long sentence, or a paragraph. In that case, the word-capture region is expanded according to the second preset value, so that more word information is obtained from each acquired image.
Optionally, the maximum size of the word-capture region is a third preset value. By bounding the word-capture region with the third preset value, the area of the word-capture region, and therefore the amount of word information that a single image may contain, is kept within a range acceptable to the user, which improves the user experience.
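One possible reading of the word-capture region handling in steps S201 and S202 and the expansion step is sketched below. The concrete pixel values used for the first, second, and third preset values are invented for illustration only; the patent leaves them to a system default or a user setting.

```python
from dataclasses import dataclass

# Illustrative values only; the patent leaves these to system defaults or user settings.
FIRST_PRESET = (400, 200)    # initial word-capture region (width, height) in pixels
SECOND_PRESET = (120, 60)    # growth applied when a further frame also contains text
THIRD_PRESET = (1280, 720)   # upper bound on the word-capture region

@dataclass
class WordRegion:
    width: int
    height: int

def initial_word_region() -> WordRegion:
    """Step S201: generate the word-capture region from the first preset value."""
    return WordRegion(*FIRST_PRESET)

def expand_word_region(region: WordRegion) -> WordRegion:
    """Grow the region by the second preset value, capped at the third preset value."""
    return WordRegion(
        width=min(region.width + SECOND_PRESET[0], THIRD_PRESET[0]),
        height=min(region.height + SECOND_PRESET[1], THIRD_PRESET[1]),
    )
```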
Fig. 3 shows the specific implementation flow of processing the recognized word information according to the order of image acquisition in step S103 of the translation method provided by an embodiment of the present invention. With reference to Fig. 3:
In step S301, the acquired images are synthesized according to the order in which they were acquired, to obtain a composite image;
In step S302, word information is extracted from the composite image.
As one embodiment of the present invention, after image acquisition is complete, the acquired images are synthesized: the repeated parts of the images are overlapped so that one covers the other, and the non-repeated parts are spliced together, yielding the composite image. Word information is then extracted from the composite image, so that the short sentence, long sentence, or paragraph is extracted directly, which improves both translation efficiency and translation accuracy.
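A deliberately simplified sketch of the synthesis in steps S301 and S302, assuming the frames are NumPy arrays of equal height, that the camera sweeps left to right, and that the pixel overlap between neighbouring frames is already known; a real implementation would have to estimate that overlap (for example by feature matching) and handle vertical misalignment. `extract_text` is again a hypothetical OCR stand-in.

```python
import numpy as np

def compose_frames(frames, overlap_px):
    """Step S301: stitch the frames in acquisition order. The repeated
    (overlapping) strip of each new frame is covered by what is already in
    the composite; only its non-repeated part is spliced on."""
    composite = frames[0]
    for frame in frames[1:]:
        new_part = frame[:, overlap_px:]            # drop the strip already present
        composite = np.hstack([composite, new_part])
    return composite

def extract_sentence(frames, overlap_px, extract_text):
    """Step S302: run OCR over the composite so the short sentence, long
    sentence, or paragraph is extracted in one pass."""
    composite = compose_frames(frames, overlap_px)
    return extract_text(composite)
```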
It should be understood that, in the embodiments of the present invention, the magnitude of the sequence numbers of the above processes does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
In the embodiment of the present invention, an image is acquired through a camera; when word information is recognized in the acquired image, the camera is continuously controlled to rotate and acquire the next image, until no word information is recognized in the acquired next image or the camera has rotated to the maximum angle value; the recognized word information is then processed according to the order in which the images were acquired, and the processed word information is translated. By controlling the camera to rotate, the word information of a short sentence, a long sentence, or a paragraph is extracted, word-by-word capture and translation is avoided, and translation efficiency is improved; because the extracted word information is translated only after all the images have been acquired, the accuracy of the translation result is also improved.
Fig. 4 shows a structural block diagram of the translation device provided by an embodiment of the present invention. The device may be used to carry out the translation method described with reference to Figs. 1 to 3. For ease of description, only the parts related to this embodiment are shown.
With reference to Fig. 4, the device comprises:
an image acquisition unit 41, configured to acquire an image through a camera and to recognize the acquired image;
a camera rotation unit 42, configured to control the camera to rotate and acquire a next image when word information is recognized in the acquired image, until no word information is recognized in the acquired next image or the camera has rotated to the maximum angle value;
a translation unit 43, configured to process the recognized word information according to the order in which the images were acquired, and to translate the processed word information.
Preferably, the image acquisition unit 41 comprises:
a word-capture region generation subunit 411, configured to generate a word-capture region according to a first preset value and to display the generated word-capture region on the screen;
an image acquisition subunit 412, configured to acquire an image through the camera within the range limited by the word-capture region.
Preferably, the device further comprises:
a word-capture region expansion unit 44, configured to expand the word-capture region according to a second preset value when word information is recognized in the acquired next image.
Preferably, the camera rotation unit 42 is specifically configured to:
control the camera, when word information is recognized in the acquired image, to rotate by a preset angle toward a preset direction and acquire the next image, until no word information is recognized in the acquired next image or the camera has rotated to the maximum angle value.
Preferably, the translation unit 43 comprises:
an image synthesis subunit 431, configured to synthesize the acquired images according to the order in which they were acquired, to obtain a composite image;
a word information extraction subunit 432, configured to extract word information from the composite image.
In the embodiment of the present invention, an image is acquired through a camera; when word information is recognized in the acquired image, the camera is continuously controlled to rotate and acquire the next image, until no word information is recognized in the acquired next image or the camera has rotated to the maximum angle value; the recognized word information is then processed according to the order in which the images were acquired, and the processed word information is translated. By controlling the camera to rotate, the word information of a short sentence, a long sentence, or a paragraph is extracted, word-by-word capture and translation is avoided, and translation efficiency is improved; because the extracted word information is translated only after all the images have been acquired, the accuracy of the translation result is also improved.
A person of ordinary skill in the art will recognize that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or in software depends on the particular application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementations should not be considered to go beyond the scope of the present invention.
A person skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes of the units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiment described above is only illustrative: the division into units is merely a division by logical function, and other divisions are possible in an actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that a person familiar with the art can readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A translation method, characterized by comprising:
acquiring an image through a camera, and recognizing the acquired image;
when word information is recognized in the acquired image, controlling the camera to rotate and acquire a next image, until no word information is recognized in the acquired next image or the camera has rotated to a maximum angle value;
processing the recognized word information according to the order in which the images were acquired, and translating the processed word information.
2. The method of claim 1, characterized in that acquiring an image through a camera comprises:
generating a word-capture region according to a first preset value, and displaying the generated word-capture region on a screen;
acquiring an image through the camera within the range limited by the word-capture region.
3. The method of claim 2, characterized in that, after controlling the camera to rotate and acquire a next image, the method further comprises:
when word information is recognized in the acquired next image, expanding the word-capture region according to a second preset value.
4. The method of claim 1, characterized in that controlling the camera to rotate and acquire a next image is specifically:
controlling the camera to rotate by a preset angle toward a preset direction and acquire the next image.
5. The method of claim 1, characterized in that processing the recognized word information according to the order of image acquisition comprises:
synthesizing the acquired images according to the order in which they were acquired, to obtain a composite image;
extracting word information from the composite image.
6. A translation device, characterized by comprising:
an image acquisition unit, configured to acquire an image through a camera and to recognize the acquired image;
a camera rotation unit, configured to control the camera to rotate and acquire a next image when word information is recognized in the acquired image, until no word information is recognized in the acquired next image or the camera has rotated to a maximum angle value;
a translation unit, configured to process the recognized word information according to the order in which the images were acquired, and to translate the processed word information.
7. The device of claim 6, characterized in that the image acquisition unit comprises:
a word-capture region generation subunit, configured to generate a word-capture region according to a first preset value and to display the generated word-capture region on a screen;
an image acquisition subunit, configured to acquire an image through the camera within the range limited by the word-capture region.
8. The device of claim 7, characterized in that the device further comprises:
a word-capture region expansion unit, configured to expand the word-capture region according to a second preset value when word information is recognized in the acquired next image.
9. The device of claim 6, characterized in that the camera rotation unit is specifically configured to:
control the camera, when word information is recognized in the acquired image, to rotate by a preset angle toward a preset direction and acquire the next image, until no word information is recognized in the acquired next image or the camera has rotated to the maximum angle value.
10. The device of claim 6, characterized in that the translation unit comprises:
an image synthesis subunit, configured to synthesize the acquired images according to the order in which they were acquired, to obtain a composite image;
a word information extraction subunit, configured to extract word information from the composite image.
CN201410758187.1A 2014-12-10 2014-12-10 Translation method and device Active CN104463158B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410758187.1A CN104463158B (en) 2014-12-10 2014-12-10 Translation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410758187.1A CN104463158B (en) 2014-12-10 2014-12-10 Translation method and device

Publications (2)

Publication Number Publication Date
CN104463158A (en) 2015-03-25
CN104463158B CN104463158B (en) 2018-02-16

Family

ID=52909174

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410758187.1A Active CN104463158B (en) 2014-12-10 2014-12-10 Translation method and device

Country Status (1)

Country Link
CN (1) CN104463158B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101551860A (en) * 2008-03-31 2009-10-07 联想(北京)有限公司 Portable device and character recognizing and translating method thereof
US20090299732A1 (en) * 2008-05-29 2009-12-03 Nokia Corporation Contextual dictionary interpretation for translation
CN103176964A (en) * 2011-12-21 2013-06-26 上海博路信息技术有限公司 Translation auxiliary system based on OCR
CN103699527A (en) * 2013-12-20 2014-04-02 上海合合信息科技发展有限公司 Image translation system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BUI THANH HUNG ET AL.: "Divide and Translate Legal Text Sentence by Using Its Logical Structure", 2012 Seventh International Conference on Knowledge, Information and Creativity Support Systems *
王雷 et al.: "An Automatic Translation Method for Chinese Constructions Corresponding to English Attributive Clauses" (汉语对应英语定语从句结构的一种自动翻译方法), Fifth National Youth Conference on Computational Linguistics (第五届全国青年计算语言学研讨会) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105760367A (en) * 2016-01-29 2016-07-13 广东小天才科技有限公司 Real-time word translating method and device
CN105760367B (en) * 2016-01-29 2018-09-21 广东小天才科技有限公司 Real-time word translation method and device
CN111144136A (en) * 2019-11-25 2020-05-12 三盟科技股份有限公司 Data conversion method, system, computer device and readable storage medium
CN111144136B (en) * 2019-11-25 2024-02-23 三盟科技股份有限公司 Data conversion method, system, computer device and readable storage medium

Also Published As

Publication number Publication date
CN104463158B (en) 2018-02-16

Similar Documents

Publication Publication Date Title
KR102347398B1 (en) Actionable content displayed on a touch screen
CN105976818B (en) Instruction recognition processing method and device
US10672155B2 (en) Non-linear, multi-resolution visualization of a graph
CN108108342B (en) Structured text generation method, search method and device
CN105739981B (en) Code completion implementation method and device and computing equipment
US9001059B2 (en) Method and apparatus for choosing an intended target element from an imprecise touch on a touch screen display
JP6771259B2 (en) Computer-implemented methods for processing images and related text, computer program products, and computer systems
CN104866308A (en) Scenario image generation method and apparatus
KR20160147950A (en) Techniques for distributed optical character recognition and distributed machine language translation
KR20160147969A (en) Techniques for distributed optical character recognition and distributed machine language translation
US8160865B1 (en) Systems and methods for managing coordinate geometry for a user interface template
US9235326B2 (en) Manipulation of user interface controls
CN103399865A (en) Method and device for multi-media file generation
CN104503956A (en) Method, device and mobile terminal for pasting data
CN104636165A (en) Mobile device starting method and device
US20160335081A1 (en) Adding on-the-fly comments to code
CN103617209A (en) File management method and file management device for mobile terminal
CN105790999A (en) Equipment configuration method and device
CN104199917A (en) Method and device for translating webpage content and client
CN104463158A (en) Translation method and device
CN105283882A (en) Production method for portable data carriers
CN103927355A (en) Advertisement intercepting method, advertisement intercepting device and advertisement intercepting system
CN104239043B (en) The execution method and apparatus of instruction
CN105242941A (en) Burning method and apparatus
CN107515720B (en) Message processing method, medium, device and computing equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address

Address after: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860

Patentee after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd.

Address before: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523841

Patentee before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd.

CP03 Change of name, title or address