CN110472482A - A kind of method and device of object identification and real time translation - Google Patents
A kind of method and device of object identification and real time translation
- Publication number
- CN110472482A CN110472482A CN201910585408.2A CN201910585408A CN110472482A CN 110472482 A CN110472482 A CN 110472482A CN 201910585408 A CN201910585408 A CN 201910585408A CN 110472482 A CN110472482 A CN 110472482A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
- G09B5/065—Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
Abstract
The present invention relates to the technical fields of image-processing-based recognition and automatic translation, and proposes a method of object recognition and real-time translation comprising the following steps: capturing an image of the object in front in real time through a camera; inputting the image into a convolutional neural network model to extract the deep feature information of the image; inputting the extracted deep feature information into an image recognition model to identify the category of the object and outputting the identified object category; and translating the object category into the target language through a translation algorithm and outputting the result. The present invention also proposes a device using the above method, comprising a central processor, an image acquisition unit, a display screen, a camera, and a device housing, wherein the camera is arranged on one side of the device housing, the display screen is arranged on the other side, and the central processor and the image acquisition unit are integrated inside the device housing. The present invention can recognize and translate the object in front of the device, improving the user's learning experience.
Description
Technical field
The present invention relates to the technical fields of image-processing-based recognition and automatic translation, and more particularly to a method of object recognition and real-time translation and to a device for object recognition and real-time translation.
Background art
Infancy is the period in which a child's nervous system develops fastest and language development is most critical; it is the ideal window for language education. Early-education products and courses are now common on the market, but most current courses lean toward exam-oriented rather than interest-driven teaching, and dull, rigid exam-style courses cannot genuinely stimulate a learner's interest in studying English.
Existing English-learning software and devices for beginners are limited to recognizing fixed, flat cards, so they suffer from serious homogeneity and limited content. Moreover, because they are mainly based on traditional word cards and storybooks, they can only offer a simple cognitive experience and cannot flexibly combine learning with real objects.
Summary of the invention
To overcome the limited recognition content of the prior art described above and its inability to combine learning flexibly with real objects, the present invention provides a method of object recognition and real-time translation, and a device for object recognition and real-time translation that uses the method.
To solve the above technical problems, the technical scheme of the present invention is as follows:
A method of object recognition and real-time translation, comprising the following steps:
S1: capturing an image of the object in front in real time through a camera;
S2: inputting the image into a convolutional neural network model and extracting the deep feature information of the image;
S3: inputting the extracted deep feature information into an image recognition model, identifying the category of the object, and outputting the identified object category;
S4: translating the object category into the target language through a translation algorithm and outputting the result.
In this technical scheme, the image of the object in front is captured by the camera and processed. The object to be recognized in the captured image includes the geographic environment around the object and the position of the object in the image. After the captured image is input into the convolutional neural network model for deep-feature extraction, the extracted deep feature information is input into a trained image recognition model to identify the object. Finally, the recognized object category is translated according to the grammar rules of the target language; during translation, the translation objects are words or simple phrases, which are then reordered according to the grammar rules.
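The S1 to S4 flow described above can be sketched in Python. Everything below is an illustrative assumption rather than the disclosed implementation: the toy feature extractor, the toy recognizer, and the translation dictionary merely stand in for the trained CNN, the trained image recognition model, and the translation algorithm.

```python
# Hypothetical sketch of the S1-S4 pipeline: feature extraction (S2),
# category recognition (S3), translation of the category label (S4).

def extract_deep_features(image, cnn_model):
    """S2: run the captured image through a CNN and return its features."""
    return cnn_model(image)

def recognize_category(features, recognition_model):
    """S3: map the deep features to an object-category label."""
    return recognition_model(features)

def translate_category(category, dictionary):
    """S4: translate the category label into the target language."""
    return dictionary.get(category, category)

def recognize_and_translate(image, cnn_model, recognition_model, dictionary):
    features = extract_deep_features(image, cnn_model)
    category = recognize_category(features, recognition_model)
    return translate_category(category, dictionary)

# Toy stand-ins for the trained models:
toy_cnn = lambda img: sum(img)                      # "features" = pixel sum
toy_recognizer = lambda f: "apple" if f > 10 else "ball"
en_zh = {"apple": "苹果", "ball": "皮球"}

print(recognize_and_translate([5, 4, 3], toy_cnn, toy_recognizer, en_zh))
# prints 苹果
```

Each stage is an independent function, mirroring the patent's separation of the CNN feature extractor, the recognition model, and the translation step.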
Preferably, the convolutional neural network model in step S2 includes convolutional layers and pooling layers.
Preferably, the specific steps of step S3 are as follows:
S3.1: inputting the deep feature information corresponding to the feature points of the object into the image recognition model, performing a part-wise convolution operation, and obtaining an appearance representation of each part of the object;
S3.2: performing a structuring operation on the appearance representation of each part to determine the optimal position of each part of the object;
S3.3: according to the optimal positions of the parts of the object, performing inference on a random-field structural model using a mean-field algorithm to obtain the inferred object category.
Preferably, the image recognition model is an image recognition model trained with a symbolic mathematics framework based on dataflow programming, where that framework is the TensorFlow framework.
Preferably, step S3 further comprises the following step:
S3.4: according to the deep feature information, selecting from a database the three object categories with the highest similarity using a convolutional neural network (CNN) algorithm, comparing their similarities with the similarity of the inferred object category, and outputting the category with the highest similarity as the finally identified object category.
Preferably, the database is obtained by retrieving category pictures over the network and processing them with manual labeling and manual screening, and from the historical data of past acquisitions and recognitions.
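Step S3.4's fuzzy prediction, in which the three most similar database categories compete with the category obtained by inference, can be sketched as follows. The cosine-similarity measure, the feature vectors, and the scores are assumptions made for illustration; the patent does not specify how similarity is computed.

```python
import math

def cosine_similarity(a, b):
    """One possible similarity measure between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def final_category(features, database, inferred):
    """Pick the 3 most similar database categories, then let them compete
    with the (similarity, name) pair obtained by inference in S3.3."""
    scored = [(cosine_similarity(features, vec), name)
              for name, vec in database.items()]
    top3 = sorted(scored, reverse=True)[:3]
    return max(top3 + [inferred])[1]

# Hypothetical database of category feature vectors:
db = {"cup": [1.0, 0.1, 0.0], "ball": [0.9, 0.4, 0.1],
      "apple": [0.2, 0.9, 0.3], "book": [0.0, 0.1, 1.0]}
feats = [1.0, 0.2, 0.0]
print(final_category(feats, db, (0.80, "cup")))  # prints cup
```

Whichever candidate scores highest, fuzzy prediction or inference, becomes the final recognition result, which is the comparison the claim describes.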
The present invention also proposes an object recognition and real-time translation device that uses the above method of object recognition and real-time translation.
An object recognition and real-time translation device comprises a central processor, an image acquisition unit, a display screen, a camera, and a device housing, wherein the camera is arranged on one side of the device housing, the display screen is arranged on the other side, and the central processor and image acquisition unit are integrated inside the housing. The output of the camera is electrically connected to the input of the image acquisition unit, and the output of the image acquisition unit is electrically connected to the input of the central processor; the first output of the central processor is electrically connected to the input of the camera, and the second output of the central processor is electrically connected to the input of the display screen. The central processor executes the above method when running.
In this technical scheme, the device captures an image of the current object through the camera, processes the captured image in the image acquisition unit, and then inputs it into the central processor for object-category recognition and real-time translation. Specifically, deep feature information is extracted from the captured image by a preset convolutional neural network model, the deep features are then recognized by a preset image recognition model to obtain the identified object category, and the recognition result is translated into the target language by a preset translation algorithm and output to the display screen. In addition, the image captured by the camera can be transmitted through the image acquisition unit and the central processor to the display screen for real-time display; once object recognition and real-time translation of the captured image are complete, the central processor transmits the translation result to the display screen, where it is shown alongside the captured image.
Preferably, the central processor is provided with an augmented reality (AR) algorithm program developed on the unity 3D engine, which performs deep-feature recognition and translation on the captured images.
Preferably, the device further comprises a key unit, a sensing unit, and an audio unit. The key unit is arranged at the side of the display screen and is electrically connected to the central processor; the sensing unit is arranged at the side of the display screen and is electrically connected to the central processor; and the audio unit comprises a microphone and a loudspeaker, is arranged on the device housing, and is electrically connected to the central processor. The key unit controls shooting, object recognition, and real-time translation; the sensing unit adjusts the display according to how the user is using the device; and the audio unit plays the real-time translation result.
Preferably, the sensing unit comprises a distance sensor and a light sensor. The distance sensor senses the distance between the user and the display screen; when the distance falls below a preset safety threshold, the central processor shows a warning window on the display. The light sensor senses the ambient light level and passes it to the central processor for evaluation, and the central processor then sends an electrical signal to the display screen to control its brightness, automatically adjusting the display brightness to the ambient light.
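The two sensor behaviours described above, a distance warning below a safety threshold and brightness tracking the ambient light, amount to simple threshold-and-map logic. The threshold value, brightness range, and linear mapping below are illustrative assumptions; the patent does not disclose concrete numbers.

```python
SAFE_DISTANCE_CM = 30            # hypothetical preset safety threshold

def needs_warning(distance_cm):
    """True when the user sits closer than the safety threshold."""
    return distance_cm < SAFE_DISTANCE_CM

def screen_brightness(ambient_lux, min_pct=10, max_pct=100, max_lux=1000):
    """Map the sensed ambient light linearly onto a brightness percentage."""
    lux = max(0, min(ambient_lux, max_lux))
    return min_pct + (max_pct - min_pct) * lux / max_lux

print(needs_warning(20))        # prints True: show the warning window
print(screen_brightness(500))   # prints 55.0: mid ambient light
```

In the device this logic would run in the central processor, with the results driving the warning window and the electrical signal that sets the screen brightness.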
Compared with the prior art, the technical scheme of the present invention has the following beneficial effects:
(1) Deep feature information is extracted from the image by a convolutional neural network model, the object category is identified by an image recognition model, and the category is then translated by a translation algorithm, so that object images captured in real time are recognized and translated in real time, improving the user's learning experience.
(2) Fuzzy prediction, in which the three object categories with the highest similarity are selected from the database by the CNN algorithm, effectively improves recognition accuracy.
Brief description of the drawings
Fig. 1 is a flowchart of the method of object recognition and real-time translation of Embodiment 1.
Fig. 2 is a structural schematic diagram of the object recognition and real-time translation device of Embodiment 2.
Fig. 3 is a schematic front view of the object recognition and real-time translation device of Embodiment 2.
Fig. 4 is a schematic rear view of the object recognition and real-time translation device of Embodiment 2.
Detailed description of the embodiments
The drawings are for illustrative purposes only and should not be understood as limiting this patent.
To better illustrate the embodiments, certain components in the drawings are omitted, enlarged, or reduced, and do not represent the size of the actual product.
Those skilled in the art will understand that certain well-known structures and their descriptions may be omitted from the drawings.
The technical solution of the present invention is further described below with reference to the drawings and embodiments.
Embodiment 1
Fig. 1 shows the flowchart of the method of object recognition and real-time translation of this embodiment.
This embodiment proposes a method of object recognition and real-time translation, comprising the following steps:
S1: capturing an image of the object in front in real time through a camera.
S2: inputting the image into a convolutional neural network model and extracting the deep feature information of the image.
In this step, the convolutional neural network model includes convolutional layers and pooling layers and extracts the deep feature information of the input image.
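As a minimal illustration of the convolution-plus-pooling structure named in this step, the following plain-Python sketch applies a one-dimensional convolution followed by non-overlapping max pooling. The input signal and kernel values are arbitrary examples, not parameters from the patent, and a real model would use trained 2-D kernels over image tensors.

```python
def conv1d(signal, kernel):
    """Valid 1-D convolution (cross-correlation, as in CNN conv layers)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def max_pool(signal, width=2):
    """Non-overlapping max pooling: downsample, keeping strong responses."""
    return [max(signal[i:i + width])
            for i in range(0, len(signal) - width + 1, width)]

signal = [1, 3, 2, 5, 4, 1]
kernel = [1, 0, -1]                  # a simple edge-detecting filter
features = conv1d(signal, kernel)    # -> [-1, -2, -2, 4]
pooled = max_pool(features)          # -> [-1, 4]
print(pooled)
```

Stacking such convolution and pooling stages is what produces the "deep feature information" the later recognition steps consume.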
S3: inputting the extracted deep feature information into the image recognition model, identifying the category of the object, and outputting the identified object category. The specific steps are as follows:
S3.1: inputting the deep feature information corresponding to the feature points of the object into the image recognition model, performing a part-wise convolution operation, and obtaining the appearance representation of each part of the object, where the image recognition model is an image recognition model trained with the TensorFlow framework;
S3.2: performing a structuring operation on the appearance representation of each part to determine the optimal position of each part of the object;
S3.3: according to the optimal positions of the parts of the object, performing inference on a random-field structural model using a mean-field algorithm to obtain the inferred object category;
S3.4: according to the deep feature information, selecting from the database the three object categories with the highest similarity using the CNN algorithm, comparing their similarities with the similarity of the inferred object category, and outputting the category with the highest similarity as the finally identified object category.
The database in this step is obtained by retrieving category pictures over the network and processing them with manual labeling and manual screening, and from the historical data of past acquisitions and recognitions.
S4: translating the object category into the target language through a translation algorithm and outputting the result.
In a specific implementation, the image of the object in front is captured by the camera and processed. The object to be recognized in the captured image includes the geographic environment around the object and the position of the object in the image. After deep features are extracted from the captured image by the convolutional neural network model, they are input into the trained image recognition model, which makes a preliminary identification of the object; at the same time, the CNN algorithm selects the three most similar object categories from the database to realize fuzzy prediction. The similarities of the fuzzy-prediction results and of the preliminary recognition result are then compared, and the category with the highest similarity is output as the recognition result. Finally, the object category is translated according to the grammar rules of the target language; the translation objects are words or simple phrases, which are reordered according to the grammar rules.
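The reordering by target-language grammar rules mentioned here can be sketched as a tiny rule-based step. The tokens, roles, dictionary, and the adjective-before-noun rule below are invented examples; the patent does not disclose its actual translation algorithm.

```python
def translate_and_reorder(tokens, dictionary, order):
    """Translate each (word, role) token, then sort by target word order."""
    translated = [(dictionary.get(word, word), role) for word, role in tokens]
    return " ".join(word for word, role in
                    sorted(translated, key=lambda pair: order.index(pair[1])))

# Hypothetical example: a source-language token stream tagged with
# grammatical roles, rearranged into an English "adjective noun" pattern.
zh_en = {"红": "red", "苹果": "apple"}
tokens = [("苹果", "noun"), ("红", "adj")]
print(translate_and_reorder(tokens, zh_en, ["adj", "noun"]))  # prints red apple
```

Because the translation objects are only words or simple phrases, a dictionary lookup plus a fixed role ordering is enough to model the described behaviour.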
In this embodiment, deep feature information is extracted from the image by the convolutional neural network model, the object category is identified by the image recognition model, and the category is then translated by the translation algorithm, so that object images captured in real time are recognized and translated in real time. This substantially improves the user's learning experience, and the accuracy of object recognition and translation is high.
Embodiment 2
This embodiment proposes an object recognition and real-time translation device that uses the object recognition and real-time translation method of the above embodiment. Figs. 2 to 4 are schematic diagrams of the object recognition and real-time translation device of this embodiment.
The object recognition and real-time translation device of this embodiment comprises a central processor 1, an image acquisition unit 2, a display screen 3, a camera 4, a device housing 5, a key unit 6, a distance sensor 7, a light sensor 8, and a loudspeaker 9. The camera 4 is arranged on one side of the housing 5, the display screen 3 on the other side, the central processor 1 and the image acquisition unit 2 are integrated inside the housing 5, and the key unit 6, distance sensor 7, light sensor 8, and loudspeaker 9 are each arranged on the housing 5. Specifically, the output of the camera 4 is electrically connected to the input of the image acquisition unit 2, and the output of the image acquisition unit 2 is electrically connected to the input of the central processor 1; the first output of the central processor 1 is electrically connected to the input of the camera 4, and its second output to the input of the display screen 3; the key unit 6, distance sensor 7, light sensor 8, loudspeaker 9, and microphone 10 are each electrically connected to the central processor 1.
In this embodiment, the central processor 1 executes the object recognition and real-time translation method of the above embodiment when running; the image acquisition unit 2 pre-processes the image frames captured by the camera 4; the display screen 3 displays in real time the image captured by the camera 4 and the results of object recognition and real-time translation; and the key unit 6 controls shooting, object recognition, and real-time translation. The distance sensor 7 senses the distance between the user and the display screen 3, and when the distance falls below a preset safety threshold the central processor 1 shows a warning window on the display. The light sensor 8 senses the ambient light level and passes it to the central processor 1 for evaluation, and the central processor then sends an electrical signal to the display screen 3 to control its brightness, automatically adjusting the display brightness to the ambient light.
In this embodiment, the central processor 1 is provided with an AR algorithm program developed on the unity 3D engine, which performs deep-feature recognition and translation on the captured images.
In a specific implementation, the device captures an image of the current object with the camera 4, processes it in the image acquisition unit 2, and inputs it into the central processor 1 for object-category recognition and real-time translation. Specifically, deep features are extracted from the captured image by the preset convolutional neural network model in the central processor 1, the deep features are then recognized by the preset image recognition model to obtain the identified object category, and the recognition result is translated into the target language by the preset translation algorithm and output to the display screen 3. At the same time, the target-language audio corresponding to the recognition result is retrieved from the database stored in the central processor 1 and played through the loudspeaker 9.
The image captured by the camera 4 can be transmitted through the image acquisition unit 2 and the central processor 1 to the display screen 3 for real-time display; once object recognition and real-time translation of the captured image are complete, the central processor 1 transmits the translation result to the display screen 3, where it is shown alongside the captured image.
In use, the central processor 1 obtains the distance between the user and the display screen 3 through the distance sensor 7 and evaluates it; when the distance falls below the preset safety threshold, the central processor 1 sends an electrical signal to the display screen 3, which pops up a warning window, preventing the user from harming their eyesight by sitting too close to the screen. The central processor 1 collects the ambient brightness through the light sensor 8 and then, according to the collected brightness value, sends an electrical signal to the display screen 3 to automatically adjust its brightness to the ambient light.
The object recognition and real-time translation device of this embodiment can also be used with a correction module. When the user finds that a recognition result does not match the actual name of the object, the correction module photographs the object with the camera 4 and uploads the image to a server while sending the correct Chinese name as feedback; staff can then update the database with the feedback collected by the server, realizing error correction.
The object recognition and real-time translation device of this embodiment can also be used with the microphone of the audio unit, which is arranged at the side of the display screen 3 and collects sound from the external environment for processing in the central processor 1; the central processor 1 analyzes the sound with a preset acoustic-processing model and executes the corresponding instruction.
The same or similar reference labels correspond to the same or similar components. The positional relationships described with reference to the drawings are for illustration only and should not be understood as limiting this patent.
Obviously, the above embodiments are merely examples given to clearly illustrate the present invention and are not a limitation of its embodiments. Those of ordinary skill in the art may make other variations or changes on the basis of the above description; it is neither necessary nor possible to exhaust all embodiments here. Any modifications, equivalent replacements, and improvements made within the spirit and principle of the present invention shall be included within the protection scope of the claims of the present invention.
Claims (10)
1. A method of object recognition and real-time translation, characterized by comprising the following steps:
S1: capturing an image of the object in front in real time through a camera;
S2: inputting the image into a convolutional neural network model and extracting the deep feature information of the image;
S3: inputting the extracted deep feature information into an image recognition model, identifying the category of the object, and outputting the identified object category;
S4: translating the object category into the target language through a translation algorithm and outputting the result.
2. The method of object recognition and real-time translation according to claim 1, characterized in that the convolutional neural network model in step S2 includes convolutional layers and pooling layers.
3. The method of object recognition and real-time translation according to claim 1, characterized in that the specific steps of step S3 are as follows:
S3.1: inputting the deep feature information corresponding to the feature points of the object into the image recognition model, performing a part-wise convolution operation, and obtaining the appearance representation of each part of the object;
S3.2: performing a structuring operation on the appearance representation of each part to determine the optimal position of each part of the object;
S3.3: according to the optimal positions of the parts of the object, performing inference on a random-field structural model using a mean-field algorithm to obtain the inferred object category.
4. The method of object recognition and real-time translation according to claim 3, characterized in that the image recognition model is an image recognition model trained with a symbolic mathematics framework based on dataflow programming.
5. The method of object recognition and real-time translation according to claim 3, characterized in that step S3 further comprises the following step:
S3.4: according to the deep feature information, selecting from a database the three object categories with the highest similarity using a convolutional neural network algorithm, comparing their similarities with the similarity of the inferred object category, and outputting the category with the highest similarity as the finally identified object category.
6. The method of object recognition and real-time translation according to claim 5, characterized in that the database is obtained by retrieving category pictures over the network and processing them with manual labeling and manual screening, and from the historical data of past acquisitions and recognitions.
7. An object recognition and real-time translation device, characterized by comprising a central processor, an image acquisition unit, a display screen, a camera, and a device housing, wherein the camera is arranged on one side of the device housing, the display screen is arranged on the other side of the device housing, and the central processor and image acquisition unit are integrated inside the device housing;
the output of the camera is electrically connected to the input of the image acquisition unit, and the output of the image acquisition unit is electrically connected to the input of the central processor; the first output of the central processor is electrically connected to the input of the camera, and the second output of the central processor is electrically connected to the input of the display screen;
the central processor executes the method of any one of claims 1 to 6 when running.
8. The object recognition and real-time translation device according to claim 7, characterized in that the central processor is provided with an augmented reality algorithm program developed on the unity3D engine.
9. The object recognition and real-time translation device according to claim 7, characterized in that the device further comprises a key unit, a sensing unit, and an audio unit, wherein the key unit is arranged at the side of the display screen and is electrically connected to the central processor;
the sensing unit is arranged at the side of the display screen and is electrically connected to the central processor;
and the audio unit comprises a microphone and a loudspeaker, is arranged on the device housing, and is electrically connected to the central processor.
10. The object recognition and real-time translation device according to claim 9, characterized in that the sensing unit comprises a distance sensor and a light sensor.
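The signal path claimed above (camera output → image acquisition unit → central processing unit → display screen) can be sketched as a simple data-flow pipeline. This is an illustrative sketch only: the patent specifies electrical connections and refers to the method of claims 1 to 6, not any software interface, so every class and method name below is hypothetical.

```python
# Hypothetical sketch of the claimed signal path. The recognition and
# translation steps (claims 1-6) are stubbed with fixed placeholder values.

class Camera:
    def capture(self):
        # Stand-in for a real frame grab from the camera hardware.
        return "raw_frame"

class ImageAcquisitionUnit:
    def digitize(self, frame):
        # The acquisition unit sits between the camera output
        # and the central processing unit input.
        return f"digitized({frame})"

class CentralProcessingUnit:
    """Runs the recognition/translation method of claims 1-6 (stubbed)."""
    def process(self, image):
        label = self.recognize(image)   # CNN-based object recognition
        return self.translate(label)    # real-time translation of the label

    def recognize(self, image):
        return "apple"                  # placeholder recognition result

    def translate(self, label):
        # Placeholder bilingual lookup standing in for the translator.
        return {"apple": "苹果"}.get(label, label)

class DisplayScreen:
    def show(self, text):
        return f"display:{text}"

def pipeline():
    cam, acq = Camera(), ImageAcquisitionUnit()
    cpu, screen = CentralProcessingUnit(), DisplayScreen()
    frame = cam.capture()       # camera output -> acquisition unit input
    image = acq.digitize(frame) # acquisition unit output -> CPU input
    result = cpu.process(image) # CPU second output -> display screen input
    return screen.show(result)

print(pipeline())  # → display:苹果
```

The one-directional chain mirrors the claim's wiring: each unit's output feeds exactly one downstream input, with the CPU's first output (camera control) omitted from the stub.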
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910585408.2A CN110472482A (en) | 2019-07-01 | 2019-07-01 | A kind of method and device of object identification and real time translation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110472482A true CN110472482A (en) | 2019-11-19 |
Family
ID=68507446
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910585408.2A Pending CN110472482A (en) | 2019-07-01 | 2019-07-01 | A kind of method and device of object identification and real time translation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110472482A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104090871A (en) * | 2014-07-18 | 2014-10-08 | 百度在线网络技术(北京)有限公司 | Picture translation method and system |
JP2015049372A (en) * | 2013-09-02 | 2015-03-16 | 有限会社Bruce Interface | Foreign language learning support device and foreign language learning support program |
CN106570522A (en) * | 2016-10-24 | 2017-04-19 | 中国科学院自动化研究所 | Object recognition model establishment method and object recognition method |
CN107203753A (en) * | 2017-05-25 | 2017-09-26 | 西安工业大学 | A kind of action identification method based on fuzzy neural network and graph model reasoning |
CN109460557A (en) * | 2018-08-30 | 2019-03-12 | 山东讯飞淘云贸易有限公司 | A kind of control method of translator |
CN109614947A (en) * | 2018-12-19 | 2019-04-12 | 深圳供电局有限公司 | Electric power component recognition model training method and device and computer equipment |
History
- 2019-07-01: Application filed in China as CN201910585408.2A (status: Pending)
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111562815A (en) * | 2020-05-04 | 2020-08-21 | 北京花兰德科技咨询服务有限公司 | Wireless head-mounted device and language translation system |
CN111562815B (en) * | 2020-05-04 | 2021-07-13 | 北京花兰德科技咨询服务有限公司 | Wireless head-mounted device and language translation system |
CN115797815A (en) * | 2021-09-08 | 2023-03-14 | 荣耀终端有限公司 | AR translation processing method and electronic device |
CN115797815B (en) * | 2021-09-08 | 2023-12-15 | 荣耀终端有限公司 | AR translation processing method and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20191119 |