CN101477520A - Recognition inter-translation method and system, and electronic product having the same - Google Patents
Recognition inter-translation method and system, and electronic product having the same
- Publication number: CN101477520A
- Application number: CN200910000594A (CNA2009100005945A)
- Authority
- CN
- China
- Prior art keywords
- languages
- image
- spoken
- unit
- photoinduction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Machine Translation (AREA)
Abstract
The invention discloses a recognition inter-translation method, a recognition inter-translation system, and an electronic product provided with the system. The system comprises a photoinduction unit, an image processing unit, an image comparison database, a judgment unit, and a language database. The photoinduction unit collects light from the surface of a specified object to form a two-dimensional image corresponding to the object; the image processing unit converts the two-dimensional image obtained from the photoinduction unit into a corresponding virtual image; the image comparison database pre-stores real images of a plurality of objects, each real image carrying a text description; the judgment unit matches the virtual image of the object against the real images in the image comparison database and sends the text description of a successfully matched real image to the language database; the language database pre-stores the words and characters of a plurality of languages and translates the received text description into the characters of a specified language on request. An object identified through visible light can thus be displayed and read aloud in different languages, and the operation is simple and practical.
Description
Technical field
The present invention relates to the field of electronic technology, and in particular to a recognition inter-translation method, a recognition inter-translation system, and an electronic product having the system.
Background art
In modern society, people with different mother tongues increasingly need to communicate with one another, yet language barriers often hinder these exchanges in work and daily life.
Existing translation tools such as Kingsoft PowerWord or Wenquxing must be taken out to look up a word whenever the need arises; this is neither convenient nor time-saving, and for older users the operation is rather complicated.
Summary of the invention
The present invention aims to provide a recognition inter-translation method, a recognition inter-translation system, and an electronic product having the system, so as to solve the complicated inter-translation operation existing in the prior art.
The purpose of the present invention is mainly achieved through the following technical solutions:
The invention provides a recognition inter-translation system comprising: a photoinduction unit, an image processing unit, an image comparison database, a judgment unit, and a language database, wherein:
the photoinduction unit collects, at the user's request, light from the surface of a specified object and forms an image corresponding to the object;
the image processing unit converts the image obtained from the photoinduction unit into a corresponding virtual image;
the image comparison database pre-stores real images of a plurality of objects, each real image having a text description;
the judgment unit matches the virtual image of the object against the real images in the image comparison database and sends the text description of a successfully matched real image to the language database;
the language database pre-stores the words and characters of a plurality of languages and translates the received text description into the characters of a specified language on request.
Further, the system also comprises a pronunciation unit and/or a display unit, wherein:
the display unit displays the characters of the language familiar to the user together with those of the specified language, or displays only the characters of the specified language;
the pronunciation unit reads aloud the characters of the language familiar to the user together with those of the specified language, or reads aloud only the characters of the specified language.
Further, the system also comprises:
a camera unit, which captures an image of an object at the user's request and stores the captured image in the image comparison database.
The photoinduction unit specifically comprises: a visible light source module, a photoinduction module, and a sending module, wherein:
the visible light source module designates the object to be sensed by emitting visible light, and triggers the photoinduction module;
the photoinduction module projects a sine fringe onto the surface of the object to be sensed, forming a two-dimensional image of the object surface carrying deformed fringes;
the sending module sends the formed two-dimensional image with deformed fringes to the image processing unit.
The image processing unit specifically comprises: a receiving module and a digital filtering module, wherein:
the receiving module receives the two-dimensional image with deformed fringes sent by the photoinduction module and forwards it to the digital filtering module;
the digital filtering module filters the fundamental-frequency phase component of the two-dimensional image with deformed fringes into a pure-phase image, and derives the virtual image of the object from the magnitude of the output correlation peak.
The present invention also provides a recognition inter-translation method comprising:
Step A: collecting, at the user's request, light from the surface of a specified object and forming an image corresponding to the object;
Step B: converting the image into a virtual image of the object;
Step C: matching the virtual image of the object against the real images pre-stored in the image comparison database, and translating the text description of a successfully matched real image into the characters of a language specified by the user.
Further, the method also comprises:
Step D: displaying and/or reading aloud the bilingual text after inter-translation.
Step A specifically comprises:
designating the object to be sensed with visible light;
projecting a sine fringe onto the surface of the object to be sensed, forming a two-dimensional image of the object surface carrying deformed fringes.
Step B specifically comprises:
filtering the fundamental-frequency phase component of the two-dimensional image with deformed fringes into a pure-phase image, and deriving the virtual image of the object from the magnitude of the output correlation peak.
The present invention also provides an electronic product provided with at least a recognition inter-translation system, the recognition inter-translation system comprising: a photoinduction unit, an image processing unit, an image comparison database, a judgment unit, and a language database, wherein:
the photoinduction unit collects, at the user's request, light from the surface of a specified object and forms an image corresponding to the object;
the image processing unit converts the image obtained from the photoinduction unit into a corresponding virtual image;
the image comparison database pre-stores real images of a plurality of objects, each real image having a text description;
the judgment unit matches the virtual image of the object against the real images in the image comparison database and sends the text description of a successfully matched real image to the language database;
the language database pre-stores the words and characters of a plurality of languages and translates the received text description into the characters of a specified language on request.
The beneficial effects of the present invention are as follows:
the system of the present invention identifies an object by directing a beam of visible light onto it and holding the beam there; the identified object can then be displayed and pronounced in different languages, which brings convenience to people, and the operation is simple and practical.
Description of drawings
Fig. 1 is a structural schematic diagram of the system of the present invention;
Fig. 2 is a schematic flow chart of the method of the present invention.
Embodiments
The preferred embodiments of the present invention are described below with reference to the accompanying drawings, which form part of the application and, together with the embodiments, serve to explain the principle of the invention.
As shown in Fig. 1, which is a structural schematic diagram of the system of the embodiment of the invention, the system may specifically comprise: a photoinduction unit, an image processing unit, an image comparison database, a judgment unit, a language database, and a camera unit.
Each unit is described in detail below.
The photoinduction unit is mainly responsible for collecting, at the user's request, light from the surface of a specified object and forming a two-dimensional image corresponding to the object (a two-dimensional image is taken as the example in the embodiment of the invention; a three-dimensional image or the like may be adopted in practice as required). The photoinduction unit may specifically comprise a visible light source module and a photoinduction module, wherein:
the visible light source module can emit a beam of visible light at the user's request; this beam designates the object to be sensed, and once the object is determined the photoinduction module is triggered.
The photoinduction module projects a sine fringe onto the surface of the object to be sensed, forms a two-dimensional image of the object surface carrying deformed fringes, and sends the formed two-dimensional image to the image processing unit. The photoinduction module may be implemented with an existing photoinduction lens, an optical inductor, or a device of similar function.
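To make the fringe-projection step concrete, the following is a minimal, hypothetical sketch in Python of how a projected sine fringe is deformed by surface height. The function name `fringe_image` and the modulation constant `k` are illustrative assumptions, not part of the patent:

```python
import math

def fringe_image(height, f0=0.125, a=1.0, b=0.8, k=2.0):
    """Simulate projecting a sine fringe onto a surface: the local
    surface height shifts the phase of the fringe, producing the
    'deformed fringes' the patent describes."""
    return [[a + b * math.cos(2 * math.pi * f0 * x + k * h)
             for x, h in enumerate(row)] for row in height]

flat = fringe_image([[0.0] * 8 for _ in range(2)])
bump = fringe_image([[0.0, 0.0, 0.5, 1.0, 1.0, 0.5, 0.0, 0.0]] * 2)
# where the surface is raised, the fringe is phase-shifted relative to flat
print(bump[0][3] != flat[0][3])  # True
```

In a real device the deformed fringe pattern would be captured by the photoinduction lens rather than simulated; the sketch only illustrates that the fringe phase encodes the surface shape.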
The image processing unit processes the two-dimensional image with deformed fringes obtained from the photoinduction unit to obtain the virtual image of the object. It may specifically comprise a receiving module and a digital filtering module, wherein:
the receiving module receives the two-dimensional image with deformed fringes sent by the photoinduction module and forwards it to the digital filtering module;
the digital filtering module filters the fundamental-frequency phase component of the two-dimensional image with deformed fringes into a pure-phase image, and derives the virtual image of the object from the magnitude of the output correlation peak. The digital filtering module may be implemented with an existing digital filter or a device of similar function.
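The "fundamental-frequency phase filtering" described above resembles Fourier-transform fringe analysis: transform the fringe signal, isolate the spectral peak at the fringe carrier frequency, and keep only its phase. A minimal one-dimensional sketch in pure Python follows; the reduction to one dimension and the names `dft` and `fringe_phase` are simplifying assumptions, since the patent operates on two-dimensional images:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(N^2)), sufficient for a sketch."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fringe_phase(intensity, f0):
    """Keep only the phase of the fundamental fringe frequency f0 --
    a 1-D stand-in for the patent's 'pure phase' image."""
    return cmath.phase(dft(intensity)[f0])

N, f0, phi = 64, 8, 0.5
fringe = [1.0 + 0.8 * math.cos(2 * math.pi * f0 * n / N + phi) for n in range(N)]
print(round(fringe_phase(fringe, f0), 6))  # recovers the carrier phase 0.5
```

The DC term and the conjugate sideband fall at other frequency bins, so reading the phase at bin `f0` recovers the phase modulation carried by the fringe.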
The image comparison database stores in advance the real images (two-dimensional or three-dimensional) of a large number of objects, together with the text descriptions attached to these images.
The judgment unit matches the virtual image of the object produced by the image processing unit against the real images stored in the image comparison database. If the match succeeds (i.e., a predetermined matching degree, such as 90%, is reached), the text description corresponding to the matched real image is sent to the language database; otherwise the match fails and the user is notified that there is no matching image.
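The matching-degree test (for example 90%) can be sketched with a normalized correlation score. Everything below — `match_score`, `lookup`, and the toy database — is an illustrative assumption, not the patent's actual matcher:

```python
import math

def match_score(a, b):
    """Normalized correlation of two equal-size flattened images (roughly 0..1)."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def lookup(virtual, database, threshold=0.9):
    """Return the text description of the best match above threshold, else None."""
    best = max(database, key=lambda entry: match_score(virtual, entry["image"]))
    return best["text"] if match_score(virtual, best["image"]) >= threshold else None

db = [{"image": [0, 1, 1, 0, 2, 3], "text": "apple"},
      {"image": [5, 0, 0, 5, 0, 5], "text": "cup"}]
print(lookup([0, 1, 1, 0, 2, 3], db))  # apple (identical image, score 1.0)
print(lookup([9, 9, 9, 9, 9, 9], db))  # None (no image reaches the threshold)
```

A production matcher would compare correlation peaks of the filtered phase images rather than raw pixel lists, but the threshold logic is the same.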
The language database stores in advance the languages of the plurality of countries between which inter-translation is required; upon receiving a text description from the judgment unit, it translates the description into the characters of the language specified by the user.
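As a sketch, the language database can be reduced to a table keyed by the matched text description. The entries below are hypothetical examples, not data from the patent:

```python
# hypothetical multilingual entries; a real language database would be far larger
LANG_DB = {
    "apple": {"en": "apple", "fr": "pomme", "ru": "яблоко"},
    "cup":   {"en": "cup",   "fr": "tasse", "ru": "чашка"},
}

def translate(description, target):
    """Translate a matched text description into the specified language,
    or return None when the description or language is not stored."""
    entry = LANG_DB.get(description)
    return entry.get(target) if entry else None

print(translate("apple", "fr"))  # pomme
print(translate("pear", "fr"))   # None (not in the database)
```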
The display unit displays the characters of the language familiar to the user and those of the language specified by the user, or displays only the characters of the specified language.
The pronunciation unit reads aloud the characters of the language familiar to the user and those of the language specified by the user, or reads aloud only the characters of the specified language.
The familiar language and the specified language can be set by the user before using the system; alternatively, the user may set no familiar language and specify only the language he or she wants to know.
The camera unit, such as a camera lens, shoots objects at close range and stores the images in the image comparison database, thereby expanding the recognition capacity of the database.
The recognition inter-translation system of the embodiment of the invention can be applied to electronic products such as mobile phones, cameras, and MP3 players, extending their range of use. A traveller who carries a portable terminal equipped with the system need not worry about the trouble caused by language barriers, and can manually shoot, edit, and store images through the camera, either personally or with another person's help, to expand and update the image comparison database.
Still with reference to Fig. 1, a preferred embodiment of the system of the present invention is described in detail below.
As shown in Fig. 1, the system recognizes an object through the photoinduction unit, the image processing unit, and the image comparison database, and realizes the inter-translation function through the language database and the image comparison database. The processed picture is compared against the image comparison database by the judgment unit; if a matching image is judged to exist, the text description of the matched image is sent over an interface line to the language database, which translates the description into the characters of the specified language as requested; finally the pronunciation unit reads the inter-translated characters aloud while the display unit shows them. Menu settings are of course provided.
As an application of the recognition inter-translation system of the present invention in an electronic product, a mobile phone is taken here as the preferred embodiment. When the system is applied to a mobile phone, it works on the following principle:
First, a selection menu of inter-translation language pairs is set in the mobile phone, for example Chinese-English, Chinese-Russian, Chinese-French, English-French, Russian-English, and so on. A photoinduction lens is installed at a place on the outside of the phone, with a visible light source inside it that can be switched on and off; the photoinduction unit and the image processing unit are designed on the phone mainboard; the image comparison database and the language database reside in the phone memory; the display screen of the phone serves as the display unit; the earphone or loudspeaker of the phone serves as the pronunciation unit; and the camera of the phone shoots images to expand and update the image comparison database.
After entering the menu and selecting the specified language, the recognition inter-translation system begins to work. When the user encounters a nearby object, he or she merely aims the photoinduction lens of the phone at the object, presses the button that turns on the visible light, and holds the beam on the object for a few seconds; the photoinduction lens projects a sine fringe onto the object surface, a two-dimensional image with deformed fringes is formed, and the image is collected by the photoinduction unit.
Then, the image processing unit processes the two-dimensional image received from the photoinduction unit through the digital filter: the fundamental-frequency phase component of the two-dimensional image with deformed fringes is filtered into a pure-phase image, and a virtual object image is derived from the magnitude of the output correlation peak.
Finally, the judgment unit searches the image comparison database and judges whether an image similar to a certain degree, or identical, exists. If not, the phone tells the user through the display screen and the earphone or loudspeaker that there is no matching image; if a sufficiently similar image is judged to exist, the text description attached to the image is sent to the language database, which translates the description into the characters of the language selected by the user and presents the result through the display screen and the earphone or loudspeaker of the phone.
As shown in Fig. 2, which is a schematic flow chart of the method of the invention, the method may specifically comprise the following steps:
Step 200: set the language specified by the user according to the user's request;
Step 201: at the user's request, the visible light source emits a beam of visible light to designate the object to be sensed;
Step 202: project a sine fringe onto the surface of the object to be sensed, forming a two-dimensional image of the object surface carrying deformed fringes;
Step 203: convert the two-dimensional image into a virtual image of the object; that is, filter the fundamental-frequency phase component of the two-dimensional image with deformed fringes into a pure-phase image, and derive the virtual image of the object from the magnitude of the output correlation peak;
Step 204: match the virtual image of the object against the real images pre-stored in the image comparison database; if the match succeeds, execute Step 205, otherwise go to Step 207;
Step 205: translate the text description of the successfully matched real image into the characters of the language specified by the user, and execute Step 206;
Step 206: display and/or read aloud the characters of the language specified by the user;
Step 207: the match fails; notify the user that there is no matching image.
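The flow of steps 200-207 can be summarized as a pipeline of stages. The callables below are stand-ins for the units described above, so the whole listing is an illustrative sketch under assumed interfaces, not the patent's implementation:

```python
def recognize_and_translate(capture, process, match, translate, target):
    """Steps 201-207 of Fig. 2 as a pipeline of pluggable stages."""
    raw = capture()                 # steps 201-202: deformed-fringe image
    virtual = process(raw)          # step 203: pure-phase virtual image
    text = match(virtual)           # step 204: database comparison
    if text is None:                # step 207: match failed
        return "no matching image"
    return translate(text, target)  # steps 205-206: translate and present

result = recognize_and_translate(
    capture=lambda: [0, 1, 1, 0],   # stand-in fringe image
    process=lambda img: img,        # identity stand-in for the digital filter
    match=lambda v: "apple" if v == [0, 1, 1, 0] else None,
    translate=lambda t, lang: "pomme" if (t, lang) == ("apple", "fr") else t,
    target="fr",
)
print(result)  # pomme
```

Keeping each stage pluggable mirrors the patent's unit structure: any stage (lens, filter, database, language table) can be replaced without touching the others.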
The specific implementation of the method has been elaborated in the description of the system above and is therefore not repeated here.
In summary, the present invention provides a recognition inter-translation method, a recognition inter-translation system, and an electronic product having the system. The system is also provided with a function for manually expanding the image comparison database: the user, or another person when the user finds it inconvenient, can shoot an object that is frequently encountered, edit its name, and save it into the image comparison database. In this way the database is enriched with pictures, the stored objects better match actual circumstances, and the consistency of an object is easier to judge.
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement that can readily be conceived by a person skilled in the art within the technical scope disclosed by the present invention shall be encompassed within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be determined by the protection scope of the appended claims.
Claims (10)
1. A recognition inter-translation system, characterized in that the system comprises: a photoinduction unit, an image processing unit, an image comparison database, a judgment unit, and a language database, wherein:
the photoinduction unit collects, at the user's request, light from the surface of a specified object and forms an image corresponding to the object;
the image processing unit converts the image obtained from the photoinduction unit into a corresponding virtual image;
the image comparison database pre-stores real images of a plurality of objects, each real image having a text description;
the judgment unit matches the virtual image of the object against the real images in the image comparison database and sends the text description of a successfully matched real image to the language database;
the language database pre-stores the words and characters of a plurality of languages and translates the received text description into the characters of a specified language on request.
2. The system according to claim 1, characterized in that the system further comprises a pronunciation unit and/or a display unit, wherein:
the display unit displays the characters of the language familiar to the user together with those of the specified language, or displays only the characters of the specified language;
the pronunciation unit reads aloud the characters of the language familiar to the user together with those of the specified language, or reads aloud only the characters of the specified language.
3. The system according to claim 1 or 2, characterized in that the system further comprises:
a camera unit, which captures an image of an object at the user's request and stores the captured image in the image comparison database.
4. The system according to claim 1 or 2, characterized in that the photoinduction unit specifically comprises: a visible light source module, a photoinduction module, and a sending module, wherein:
the visible light source module designates the object to be sensed by emitting visible light, and triggers the photoinduction module;
the photoinduction module projects a sine fringe onto the surface of the object to be sensed, forming a two-dimensional image of the object surface carrying deformed fringes;
the sending module sends the formed two-dimensional image with deformed fringes to the image processing unit.
5. The system according to claim 4, characterized in that the image processing unit specifically comprises: a receiving module and a digital filtering module, wherein:
the receiving module receives the two-dimensional image with deformed fringes sent by the photoinduction module and forwards it to the digital filtering module;
the digital filtering module filters the fundamental-frequency phase component of the two-dimensional image with deformed fringes into a pure-phase image, and derives the virtual image of the object from the magnitude of the output correlation peak.
6. A recognition inter-translation method, characterized in that the method comprises:
Step A: collecting, at the user's request, light from the surface of a specified object and forming an image corresponding to the object;
Step B: converting the image into a virtual image of the object;
Step C: matching the virtual image of the object against the real images pre-stored in the image comparison database, and translating the text description of a successfully matched real image into the characters of a language specified by the user.
7. The method according to claim 6, characterized in that the method further comprises:
Step D: displaying and/or reading aloud the characters of the language familiar to the user and the characters of the specified language, or displaying and/or reading aloud only the characters of the specified language.
8. The method according to claim 6 or 7, characterized in that Step A specifically comprises:
designating the object to be sensed with visible light;
projecting a sine fringe onto the surface of the object to be sensed, forming a two-dimensional image of the object surface carrying deformed fringes.
9. The method according to claim 8, characterized in that Step B specifically comprises:
filtering the fundamental-frequency phase component of the two-dimensional image with deformed fringes into a pure-phase image, and deriving the virtual image of the object from the magnitude of the output correlation peak.
10. An electronic product, characterized in that it is provided with at least a recognition inter-translation system, the recognition inter-translation system comprising: a photoinduction unit, an image processing unit, an image comparison database, a judgment unit, a language database, and a pronunciation unit and/or a display unit, wherein:
the photoinduction unit collects, at the user's request, light from the surface of a specified object and forms an image corresponding to the object;
the image processing unit converts the image obtained from the photoinduction unit into a corresponding virtual image;
the image comparison database pre-stores real images of a plurality of objects, each real image having a text description;
the judgment unit matches the virtual image of the object against the real images in the image comparison database and sends the text description of a successfully matched real image to the language database;
the language database pre-stores the words and characters of a plurality of languages and translates the received text description into the characters of a specified language on request;
the display unit displays the characters of the language familiar to the user and those of the specified language, or displays only the characters of the specified language;
the pronunciation unit reads aloud the characters of the language familiar to the user and those of the specified language, or reads aloud only the characters of the specified language.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNA2009100005945A CN101477520A (en) | 2009-01-16 | 2009-01-16 | Recognition inter-translation method and system, and electronic product having the same |
Publications (1)
Publication Number | Publication Date |
---|---|
CN101477520A (en) | 2009-07-08
Family
ID=40838238
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNA2009100005945A Pending CN101477520A (en) | 2009-01-16 | 2009-01-16 | Recognition inter-translation method and system, and electronic product having the same |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101477520A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN102375824A (en) * | 2010-08-12 | 2012-03-14 | Device and method for acquiring multilingual texts with mutually corresponding contents
CN102375824B (en) * | 2010-08-12 | 2014-10-22 | Device and method for acquiring multilingual texts with mutually corresponding contents
CN105654952A (en) * | 2014-11-28 | 2016-06-08 | Electronic device, server, and method for outputting voice
CN105654952B (en) * | 2014-11-28 | 2021-03-30 | Electronic device, server and method for outputting voice
WO2018133275A1 (en) * | 2017-01-19 | 2018-07-26 | Object recognition and projection interactive installation
CN107154173A (en) * | 2017-04-06 | 2017-09-12 | Interactive learning method and system
CN112148179A (en) * | 2020-10-19 | 2020-12-29 | Display device menu language detection method and device and computer device
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101477520A (en) | Recognition inter-translation method and system, and electronic product having the same | |
KR101685980B1 (en) | Mobile terminal and method for controlling the same | |
EP2275953A2 (en) | Mobile terminal | |
JP5554311B2 (en) | Device operation support system and device operation support method | |
US20140188477A1 (en) | Method for correcting a speech response and natural language dialogue system | |
CN102467343A (en) | Mobile terminal and method for controlling the same | |
CN102236986A (en) | Sign language translation system, device and method | |
EP2498168A2 (en) | Mobile terminal and method of controlling the same | |
CN102956132A (en) | System, device and method for translating sign languages | |
KR20110138542A (en) | Mobile terminal and method for generating group thereof | |
CN102447780A (en) | Mobile terminal and controlling method thereof | |
US8180370B2 (en) | Mobile terminal and method of display position on map thereof | |
CN110188365A (en) | A kind of method and apparatus for taking word to translate | |
US20180293440A1 (en) | Automatic narrative creation for captured content | |
KR101695812B1 (en) | Mobile terminal and method for controlling the same | |
US20140058999A1 (en) | Mobile terminal and control method thereof | |
CN104735205A (en) | Mobile phone holder and utilization method thereof | |
TWM457241U (en) | Picture character recognition system by combining augmented reality | |
US20200150794A1 (en) | Portable device and screen control method of portable device | |
CN101668071A (en) | Mobile communication terminal with scanning function and implement method thereof | |
US11867904B2 (en) | Method and electronic device for providing augmented reality environment | |
CN102479177A (en) | Real-time translating method for mobile device | |
KR20110022217A (en) | Mobile and method for controlling the same | |
CN101472104A (en) | Video signal display device with translation function | |
EP3171579A2 (en) | Mobile device and method of controlling therefor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Open date: 20090708 |