CN108922274A - A kind of English assistant learning system of multifunctional application - Google Patents
- Publication number
- CN108922274A CN108922274A CN201810836910.1A CN201810836910A CN108922274A CN 108922274 A CN108922274 A CN 108922274A CN 201810836910 A CN201810836910 A CN 201810836910A CN 108922274 A CN108922274 A CN 108922274A
- Authority
- CN
- China
- Prior art keywords
- data
- image
- english
- module
- english word
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
- G09B5/065—Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
- G09B5/067—Combinations of audio and projected visual presentation, e.g. film, slides
Abstract
The present invention relates to a multifunctional English assistant learning system, comprising: a timer, AR glasses (1), an image acquisition device (2), a micro projector (3), an acoustical generator, a microphone, a time-schedule entry mechanism (5), a display (6) and a central controller. The invention applies AR technology to the system: an image of a real-world object is acquired and compared with the object images stored in the system; once a match succeeds, the information of the current object is recognized, the object's Chinese name and the English word corresponding to that object information are extracted from the system memory and converted into an image, and the resulting virtual display object is displayed directly on the AR glasses together with the real scene. An association between picture and English word is thereby established, and by listening to the pronunciation of the English word and speaking its Chinese translation aloud, the depth of memorization of the word is further deepened, while learning efficiency and the effective utilization of time are improved.
Description
Technical field
The present invention relates to the field of English teaching apparatus, and in particular to a multifunctional English assistant learning system.
Background art
At present, English study is receiving more and more attention, from early-childhood education through to continuing education at work; whether in school or in one's working years, English study occupies a certain proportion of one's time. Study in childhood is comparatively systematic, since students have dedicated learning time and space; for office workers who wish to continue their education, however, the conditions for English study are fairly limited. Work and daily life occupy most of one's time, so dedicated study time cannot be set aside, and in today's fast-paced life this is difficult for adults to sustain. Moreover, without systematic learning planning, spare time is taken up by non-essential matters, and time that could be used to study English is wasted. People accumulate considerable free time on both workdays and holidays; left unused, it is simply wasted, whereas making full use of it can not only relieve the anxiety of waiting but also allow genuine English learning in fragmented time, avoiding the inconvenience and tedium of block-time study. Existing English learning systems, however, usually run on a computer, television or mobile phone and cannot be adapted to an in-vehicle environment; in particular, they give no consideration to how the fragmented time of leisure can be used for learning, nor to how English study can be integrated with work and daily life.
In addition, when learning English, whether for speaking or for reading and writing, the accumulation of vocabulary is the first problem to overcome. The traditional method of memorizing vocabulary is first to understand the concrete meaning and usage of a word, and then to consolidate the learner's memory through phonetic notation and spelling. Memorizing words in this way is inefficient, requires a great deal of time for repeated memorization, and its tedium leaves learners weary, which further reduces learning efficiency. For this reason, some educational institutions offer various associative mnemonics, such as root-and-affix association, sound association, meaning comparison and picture association, to speed up the memorization of English words. Among these associative methods, linking pictures with English words is the most effective; it is currently the mnemonic that best matches the working mode of the human brain, improving efficiency by a factor of 3 to 10 compared with other memorization styles.
At present, however, picture-association mnemonics operate with static graphics printed in books, or with preset images played on teaching equipment, as the teaching reference, and they require a specific time and environment to complete the learning; the application of this method in the learning process is therefore limited.
Summary of the invention
An object of the present invention is to address the absence in the prior art of an English teaching system applying picture-association mnemonics, by providing a multifunctional English assistant learning system. By using augmented reality, the system of the invention associates real objects in a real scene with English words, so that teaching can be completed at any time in any actual scene, overcoming the limitations of existing learning methods.
To achieve the above object, the present invention provides a multifunctional English assistant learning system, comprising: a timer, AR glasses, an image acquisition device, a micro projector, an acoustical generator, a microphone, a time-schedule entry mechanism, a display and a central controller;
The timer is used for timekeeping and triggers the start of the time-schedule entry mechanism when a preset timing point is reached;
At least two image acquisition devices are symmetrically arranged on the two side frames of the AR glasses; each image acquisition device is used to acquire images within its visual range and to send the generated image data to the central controller;
The time-schedule entry mechanism is connected to the display; on startup it sends prompt information to the user through the display and records the time-schedule data entered by the user in response to the prompt information. The prompt information is a message requesting the user to enter time-schedule data, and the time-schedule data record the affairs handled by the user in each time period and the importance of those affairs;
The signal input of the central controller is connected to the time-schedule entry mechanism and receives the time-schedule data; by analysis it obtains the fragments of time available for English study ("chip time") and drives the image acquisition device to acquire images during each chip-time period. It then extracts object features from the acquired image data, recognizes the object information by analyzing the feature data, converts the object's Chinese name and English word corresponding to that object information into an image to obtain a virtual display object, renders the virtual display object in combination with the real scene, and projects the rendered virtual display object through the micro projector, while sending the pronunciation data corresponding to the English word to the acoustical generator. It also performs semantic analysis on received sound-wave electrical signals, extracts the English word corresponding to the parsed text, and converts it into an image. The central controller is further used to identify human gesture data from the received image data, generate corresponding action data by analyzing the gesture data, and execute human-computer interaction operations using that action data;
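The recognize-then-overlay flow the central controller performs above can be sketched in a few lines. This is a hypothetical Python illustration only; the patent prescribes no concrete data structures, and the feature signatures, sample objects and function names here are invented for the sketch.

```python
# Hypothetical sketch of the central controller's flow: match the features of
# a captured object against stored entries and, on success, return the Chinese
# name, English word and pronunciation record that will be converted into a
# virtual display object. All names and sample data are illustrative.

KNOWLEDGE_BASE = {
    # feature signature -> (Chinese name, English word, pronunciation file)
    ("red", "smooth", "round"): ("苹果", "apple", "apple.wav"),
    ("yellow", "smooth", "curved"): ("香蕉", "banana", "banana.wav"),
}

def recognize_object(features):
    """Return the stored entry matching the extracted features, or None."""
    return KNOWLEDGE_BASE.get(tuple(features))

def build_virtual_display(entry):
    """Convert a matched entry into the text overlay shown on the AR glasses."""
    chinese, english, _pronunciation = entry
    return f"{chinese} / {english}"

entry = recognize_object(["red", "smooth", "round"])
overlay = build_virtual_display(entry) if entry else None
```

In a real system the feature signature would come from the image-processing chain described below rather than from hand-written labels.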
The signal input of the micro projector is connected to the central controller, and the projection end of the micro projector faces the half-transmitting mirror arranged on the AR glasses, so as to project the image output by the central controller onto the half-transmitting mirror;
The signal input of the acoustical generator is connected to the central controller, and converts the pronunciation data output by the central controller into a voice signal for playback;
The microphone is connected to the signal input of the central controller, receives the voice uttered by the user, converts it into a sound-wave electrical signal and sends it to the central controller.
As a further improvement of the above technical scheme, the image acquisition device uses a CCD camera to acquire images.
As a further improvement of the above technical scheme, the central controller comprises: a time analysis module, an acquisition drive module, a signal processing module, a feature extraction module, an object identification module, a knowledge base, an image rendering module, a pronunciation data delivery module, a semantic analysis module and a gesture identification module;
The time analysis module receives the time-schedule data, compares the importance of the affairs in each time period with a set importance rating, and extracts the time periods below the importance rating as chip time;
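The comparison rule just described, under which periods whose importance falls below a set rating become chip time, can be sketched as follows. The schedule format and the threshold value are assumptions made for illustration; the patent does not specify them.

```python
# Illustrative sketch of the time-analysis step: each recorded period carries
# an importance rating, and periods below a set threshold are extracted as
# "chip time" available for English study. Data format is an assumption.

def extract_chip_time(schedule, importance_threshold):
    """Return the (start, end) periods whose importance is below the threshold."""
    return [(start, end) for start, end, importance in schedule
            if importance < importance_threshold]

schedule = [
    ("08:00", "12:00", 5),  # work: important, not usable
    ("12:00", "13:00", 2),  # lunch break: low importance
    ("18:00", "19:00", 1),  # commute/waiting: low importance
]
chip_time = extract_chip_time(schedule, importance_threshold=3)
```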
The acquisition drive module receives the chip-time data and drives the image acquisition device to acquire images when each chip-time period arrives;
The signal processing module receives the image signal output by the image acquisition device and processes it to generate image data for recognition by the feature extraction module;
The feature extraction module identifies the feature data of objects in the image data and sends the acquired feature data to the object identification module; the feature data include the color, texture, contour and three-dimensional shape data of the object;
The object identification module matches the feature data against the feature composite sequences stored in the knowledge base and extracts, for the matched feature composite sequence, the object's Chinese name, its English word and the pronunciation data corresponding to that English word;
The semantic analysis module receives the sound-wave electrical signal, performs semantic analysis on it, matches the parsed text against the English words stored in the knowledge base, and extracts the matched English word and sends it to the image rendering module;
The knowledge base stores the various feature composite sequences; each feature composite sequence comprises a feature combination composed of the object's color, texture, contour and three-dimensional shape data, together with the corresponding object Chinese name, English word and the pronunciation data for that English word;
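One plausible record layout for such a knowledge base is sketched below, together with the two lookup paths the modules above describe: matching by feature sequence (object identification) and matching by parsed text (semantic analysis). The field names, sample entries and helper functions are invented for illustration and are not taken from the patent.

```python
# Assumed knowledge-base layout: a feature composite sequence (color, texture,
# contour, 3D shape) paired with the object's Chinese name, English word and
# pronunciation data. All names and sample data are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class FeatureSequence:
    color: str
    texture: str
    contour: str
    shape_3d: str

@dataclass
class KnowledgeEntry:
    features: FeatureSequence
    chinese_name: str
    english_word: str
    pronunciation: str  # e.g. a path to an audio clip

KNOWLEDGE = [
    KnowledgeEntry(FeatureSequence("red", "smooth", "circular", "sphere"),
                   "苹果", "apple", "audio/apple.wav"),
    KnowledgeEntry(FeatureSequence("yellow", "smooth", "curved", "cylinder"),
                   "香蕉", "banana", "audio/banana.wav"),
]

def find_by_features(features):
    """Object-identification path: exact feature-sequence match."""
    return next((e for e in KNOWLEDGE if e.features == features), None)

def find_by_word(word):
    """Semantic-analysis path: match parsed text against stored English words."""
    return next((e for e in KNOWLEDGE if e.english_word == word), None)
```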
The image rendering module converts the matched object Chinese name and English word into an image to obtain a virtual display object, selects the display position of the virtual display object according to the position of the object in the real scene, and performs illumination rendering on the virtual display object according to the brightness of the real-scene image frame;
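The illumination-rendering idea above, adjusting the virtual object's brightness to the real-scene frame so the overlay does not look pasted on, can be sketched numerically. Grayscale 0-255 pixels and the simple gain rule are assumptions for illustration; the patent does not specify a rendering algorithm.

```python
# Hedged sketch of illumination rendering: scale the virtual display object's
# pixel brightness by the mean brightness of the real-scene frame, relative
# to an assumed mid-gray reference level.

def scene_brightness(frame):
    """Mean brightness of a frame given as a flat list of 0-255 pixel values."""
    return sum(frame) / len(frame)

def relight(virtual_pixels, frame, reference=128.0):
    """Scale the virtual object's pixels toward the scene's brightness level."""
    gain = scene_brightness(frame) / reference
    return [min(255, round(p * gain)) for p in virtual_pixels]

dim_scene = [32, 32, 64, 64]            # dark real-world frame
relit = relight([200, 100], dim_scene)  # overlay darkened to match the scene
```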
The pronunciation data delivery module sends the pronunciation data corresponding to the English word to the acoustical generator;
The gesture identification module identifies human gesture data in the image data, generates corresponding action data by analyzing the gesture data, and controls the operation of the central controller with the action data.
As a further improvement of the above technical scheme, the signal processing module comprises an A/D converter, a signal amplifier and a filter, which successively perform analog-to-digital conversion, amplification and filtering on the signal output by the image acquisition device.
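The convert-amplify-filter chain just described operates on electrical signals in hardware; as a purely numerical stand-in, it can be illustrated like this. The quantization levels, gain and moving-average window are invented parameters, and the moving average merely stands in for whatever filter the module actually uses.

```python
# Numeric illustration of the signal chain: analog-to-digital conversion
# (quantization), amplification, then a simple moving-average filter as a
# stand-in for the noise filter. All parameter values are assumptions.

def adc(samples, levels=256, vmax=1.0):
    """Quantize 0..vmax analog samples to integer codes 0..levels-1."""
    return [min(levels - 1, int(s / vmax * levels)) for s in samples]

def amplify(codes, gain=2):
    return [c * gain for c in codes]

def moving_average(codes, window=3):
    """Causal moving average over at most `window` preceding samples."""
    out = []
    for i in range(len(codes)):
        lo = max(0, i - window + 1)
        out.append(sum(codes[lo:i + 1]) / (i + 1 - lo))
    return out

digital = adc([0.0, 0.5, 1.0])
smoothed = moving_average(amplify(digital))
```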
As a further improvement of the above technical scheme, the central controller further comprises a learning mode selection module in which several learning modes are provided. The module compares the object Chinese name and English word matched by the object identification module with the learning mode selected by the user: if the matched object Chinese name and English word exist in that learning mode, they are output to the image rendering module; otherwise they are not output.
The learning modes divide all the feature composite sequences stored in the knowledge base according to different levels of knowledge, combining them into corresponding sets of object Chinese names and English words.
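The gate this module applies, forwarding a recognized word only when it belongs to the selected mode's vocabulary set, can be sketched as follows. The mode names and word lists are invented examples, not the patent's actual word banks.

```python
# Sketch of the learning-mode gate: a recognized word is passed on to the
# rendering stage only if it is inside the vocabulary set of the mode the
# user selected. Modes and words below are illustrative stand-ins.

LEARNING_MODES = {
    "primary": {"apple", "dog", "book"},
    "cet4": {"apple", "dog", "book", "equipment", "analysis"},
}

def gate(word, mode, modes=LEARNING_MODES):
    """Return the word if it lies within the selected mode's set, else None."""
    return word if word in modes.get(mode, set()) else None
```

Out-of-range vocabulary is thus silently suppressed, which is how the personalized teaching function described below behaves.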
As a further improvement of the above technical scheme, the knowledge base also stores phonetic symbol, example sentence and mind-map data associated with each English word;
The object identification module is further used to extract the phonetic symbol, example sentence and mind-map data associated with the English word and send them to the image rendering module;
The image rendering module is further used to convert the received phonetic symbol, example sentence and mind-map data into an image to obtain a virtual display object, to select the display position of the virtual display object according to the position of the object in the real scene, and to perform illumination rendering on the virtual display object according to the brightness of the real-scene image frame;
The mind-map data take the matched English word as the core and form an association diagram constructed from all English words related to its attributes.
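The mind-map data described above can be modelled as a small star-shaped association graph: the matched English word is the core node, and attribute-related words hang off it. The sample associations below are invented for illustration; the patent does not define the graph structure.

```python
# Sketch of the mind-map data as an adjacency mapping: the core word links
# to each related word and each related word links back, forming a star.

def build_mind_map(core, related):
    """Return an adjacency mapping for a star graph centered on `core`."""
    graph = {core: sorted(related)}
    for word in related:
        graph[word] = [core]
    return graph

apple_map = build_mind_map("apple", {"fruit", "red", "tree"})
```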
The multifunctional English assistant learning system provided by the invention has the following advantages:
The present invention applies AR technology to an English learning system. An image of a real-world object is acquired and compared with the object images stored in the system; once a match succeeds, the information of the current object is recognized, and the object's Chinese name, the English word corresponding to the object information and the pronunciation data for that English word can be extracted from the system memory. These English words are exactly what the learner needs to memorize; the object Chinese name and English word are therefore converted into an image to obtain a virtual display object, which is displayed directly on the AR glasses together with the real scene. An association between picture and English word is thereby established, and listening to the pronunciation of the English word further deepens the memorization of the word, so that teaching can be completed at any time in any actual scene, overcoming the limitations of existing learning methods and improving learning efficiency. The system also analyzes a person's daily schedule and extracts the free time in daily life that can be used for English study, enabling English learners to arrange their spare time more rationally, minimize unnecessary activities and obtain more effective time for English study, thus fully improving the effective utilization of time. To improve the relevance of what is learned, the invention further provides a microphone through which the learner's voice is entered into the system; the central controller performs semantic recognition and helps the learner obtain the English vocabulary corresponding to a spoken Chinese word, which helps the learner expand his or her memory, further improves learning efficiency, and meets the learner's multifunctional demands.
Description of the drawings
Fig. 1 is a structural schematic diagram of a multifunctional English assistant learning system provided by the invention;
Fig. 2 is a schematic diagram of time planning performed using the English study equipment of the invention;
Fig. 3 is a structural schematic diagram of the central controller provided in one embodiment of the invention;
Fig. 4 is a structural schematic diagram of the central controller provided in another embodiment of the invention;
Fig. 5 is a structural schematic diagram of the signal processing module provided in an embodiment of the invention;
Fig. 6a is a schematic diagram of the AR glasses as worn, provided in an embodiment of the invention;
Fig. 6b is a schematic diagram of the external structure of the AR glasses provided in an embodiment of the invention;
Fig. 7 is a schematic diagram of the appearance of the integrated time-schedule entry mechanism provided in an embodiment of the invention;
Fig. 8 is a side view of the time-schedule entry mechanism shown in Fig. 7.
Reference numerals
1. AR glasses 2. image acquisition device
3. micro projector 4. half-transmitting mirror
5. time-schedule entry mechanism 6. display
7. hand strap 8. earphone interface
9. USB interface
Specific embodiment
A multifunctional English assistant learning system of the present invention is described in detail below with reference to the accompanying drawings and embodiments.
As shown in Fig. 1, a multifunctional English assistant learning system provided by the invention specifically includes: a timer, AR glasses, an image acquisition device, a micro projector, an acoustical generator, a microphone, a time-schedule entry mechanism, a display and a central controller;
The timer is used for timekeeping and triggers the time-schedule entry mechanism 5 to start when a preset timing point is reached;
As shown in Fig. 6b, at least two image acquisition devices 2 are symmetrically arranged on the two side frames of the AR glasses 1; each image acquisition device 2 is used to acquire images within its visual range and to send the generated image data to the central controller;
As shown in Fig. 7, the time-schedule entry mechanism 5 is connected to the display 6; on startup it sends prompt information to the user through the display 6 and records the time-schedule data entered by the user in response to the prompt information. The prompt information is a message requesting the user to enter time-schedule data, and the time-schedule data record the affairs handled by the user in each time period and the importance of those affairs;
The signal input of the central controller is connected to the time-schedule entry mechanism 5 and receives the time-schedule data; by analysis it obtains the chip time available for English study and drives the image acquisition device 2 to acquire images during each chip-time period. It then extracts object features from the acquired image data, recognizes the object information by analyzing the feature data, converts the object's Chinese name and English word corresponding to the object information into an image to obtain a virtual display object, renders the virtual display object in combination with the real scene, and projects the rendered virtual display object through the micro projector, while sending the pronunciation data corresponding to the English word to the acoustical generator. It also performs semantic analysis on the received sound-wave electrical signal, extracts the English word corresponding to the parsed text, and converts it into an image. The central controller is further used to identify human gesture data from the received image data, generate corresponding action data by analyzing the gesture data, and execute human-computer interaction operations using that action data;
The signal input of the acoustical generator is connected to the central controller, and converts the pronunciation data output by the central controller into a voice signal for playback;
The microphone is connected to the signal input of the central controller, receives the voice uttered by the user, converts it into a sound-wave electrical signal and sends it to the central controller.
The signal input of the micro projector is connected to the central controller. As shown in Fig. 6b, the projection end of the micro projector 3 faces the half-transmitting mirror 4 arranged on the AR glasses 1, so as to project the image output by the central controller onto the half-transmitting mirror 4, which reflects the virtual display object into the human eye;
The half-transmitting mirror 4 reflects the light projected by the micro projector 3 while transmitting the light incident from the external environment, so that the real scene and the virtual object are fused into a single image at the human eye. The state of the AR glasses as worn is shown in Fig. 6a.
AR technology, as the frontier of current virtual reality technology, has wide application across many industries. It is a technique in which image processing is combined with multi-view tools, namely the often-mentioned augmented reality. The goal of this technology is to overlay the virtual world on the real world on a screen and allow interaction; achieving this effect requires fusing cameras, sensors and other multimedia with the scene. Augmented reality thus presents the information of the real world and displays virtual information at the same time, the two kinds of information complementing and superimposing each other.
The present invention applies the above AR technology to an English teaching system. An image of a real-world object is acquired and compared with the object images stored in the system; once a match succeeds, the information of the current object is recognized, and the object's Chinese name and the English word corresponding to the object information can be extracted from the system memory. These English words are exactly what the learner needs to memorize; the object Chinese name and English word are therefore converted into an image to obtain a virtual display object, which is displayed directly on the AR glasses together with the real scene. An association between picture and English word is thereby established, and listening to the pronunciation of the English word further deepens the memorization of the word, so that teaching can be completed at any time in any actual scene, overcoming the limitations of existing learning methods and improving learning efficiency.
Compared with many existing teaching devices, a user studying English with the system of the invention does not need to concentrate on study at a fixed time or place; it is only necessary to wear the portable AR glasses, and while walking through everyday environments the user can observe, at any time, image-converted English knowledge associated with real objects. At the same time, combined with the perception of human gestures, human-computer interaction can be realized, making it easy for the user to control the operation of the system by hand.
To improve the relevance of what is learned, the invention further provides a microphone through which the learner's voice is entered into the system; the central controller performs semantic recognition and helps the learner obtain the English vocabulary corresponding to a spoken Chinese word, which helps the learner expand his or her memory, further improves learning efficiency, and meets the learner's multifunctional demands.
In addition, the English assistant learning system of the invention analyzes the learner's daily use of time to determine the activity arrangement of his or her spare time within the non-sleep period. For office workers this mainly comprises two kinds of periods: workdays and weekends/holidays.
In the non-working part of a workday, the timer of the invention can set prompt periods: every hour the learner is asked, by voice prompt or displayed information, what he or she did in the previous hour, and is left to make his or her own judgement. The learner answers by voice or by input into the time-schedule entry mechanism, marking what affairs were done in the current period and whether those affairs were necessary, i.e. whether the period could instead have been used to study English. Once a clear judgement has been made, the equipment records it for that period and marks whether the period can be planned as English learning time.
In weekend or holiday periods, the timer of the invention can likewise set prompt periods: at the specified interval the learner is asked, by voice prompt or displayed information, what he or she did in the preceding period, and marks the importance of those affairs, i.e. whether the period could have been used to study English. Once a clear judgement has been made, the equipment records it for that period and marks whether the period can be planned as English learning time.
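The prompt-and-label routine of the two preceding paragraphs can be sketched as follows. The answers are canned stand-ins for real user input collected at the timer prompts, and the tuple format is an assumption for illustration.

```python
# Hedged sketch of the prompting-and-labelling routine: at each timer prompt
# the learner reports what the last period was used for and whether it was
# necessary; periods marked unnecessary are recorded as plannable English
# study time.

def plan_study_periods(answers):
    """answers: (period, activity, is_necessary) tuples from the prompts.
    Returns the periods that can be planned as English learning time."""
    return [period for period, _activity, necessary in answers if not necessary]

answers = [
    ("09:00-10:00", "meeting", True),
    ("10:00-11:00", "browsing news", False),
    ("11:00-12:00", "report writing", True),
]
study_periods = plan_study_periods(answers)
```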
Fig. 2 is a schematic diagram of time planning carried out according to the above rules using the equipment of the invention; the shaded parts in the figure indicate the chip time planned for English study.
With the above time-planning function provided by the English study equipment of the invention, English learners can arrange their spare time more rationally, minimize unnecessary activities, and obtain more effective time for English study.
To realize the intelligent control function of the above central controller, as shown in Fig. 3, the central controller in this embodiment may include: a time analysis module, an acquisition drive module, a signal processing module, a feature extraction module, an object identification module, a knowledge base, an image rendering module, a pronunciation data delivery module, a semantic analysis module and a gesture identification module;
The time analysis module receives the time-schedule data, compares the importance of the affairs in each time period with a set importance rating, and extracts the time periods below the importance rating as chip time;
The acquisition drive module receives the chip-time data and drives the image acquisition device to acquire images when each chip-time period arrives;
The signal processing module receives the image signal output by the image acquisition device and processes it to generate image data for recognition by the feature extraction module;
The feature extraction module identifies the feature data of objects in the image data and sends the acquired feature data to the object identification module; the feature data include the color, texture, contour and three-dimensional shape data of the object;
The object identification module matches the feature data against the feature composite sequences stored in the knowledge base and extracts, for the matched feature composite sequence, the object's Chinese name, its English word and the pronunciation data corresponding to that English word;
The semantic analysis module receives the sound-wave electrical signal, performs semantic analysis on it, matches the parsed text against the English words stored in the knowledge base, and extracts the matched English word and sends it to the image rendering module;
The knowledge base stores the various feature composite sequences; each feature composite sequence comprises a feature combination composed of the object's color, texture, contour and three-dimensional shape data, together with the corresponding object Chinese name, English word and pronunciation data;
The image rendering module converts the matched object Chinese name and English word into an image to obtain a virtual display object, selects the display position of the virtual display object according to the position of the object in the real scene, and performs illumination rendering on the virtual display object according to the brightness of the real-scene image frame;
The pronunciation data delivery module sends the pronunciation data corresponding to the English word to the acoustical generator;
The gesture identification module identifies human gesture data in the image data, generates corresponding action data by analyzing the gesture data, and controls the operation of the central controller with the action data.
As shown in Fig. 4, in another embodiment of the invention, the central controller may also include a learning mode selection module in which several learning modes are provided. The module compares the object Chinese name and English word matched by the object identification module with the learning mode selected by the user: if the matched object Chinese name and English word exist in that learning mode, they are output to the image rendering module; otherwise they are not output.
The learning modes divide all the feature composite sequences stored in the knowledge base according to different levels of knowledge, combining them into corresponding sets of object Chinese names and English words.
Different groups of people have different learning demands. For this purpose, the system of the present invention divides English words into multiple sets by knowledge level. For example, for students preparing for examinations, the words can be divided into vocabularies for primary school English, junior high school English, senior high school English, CET-4, CET-6, TEM-4, TEM-8, TOEFL, IELTS and the like; for office workers, they can be divided into vocabularies for daily communication, business exchange, technical terminology and the like. A learner can select the corresponding learning mode according to his or her own demands and knowledge level. After a mode is selected, the system outputs only the vocabulary present in that mode and does not display out-of-range vocabulary obtained by matching, thereby realizing a personalized teaching function.
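The personalized filtering behavior described above can be sketched as follows; the mode names and vocabulary sets are hypothetical placeholders, not content from the patent.

```python
# Hypothetical sketch of the learning-mode filter: a recognized word is
# passed through to the renderer only if it belongs to the vocabulary
# set of the user's selected mode. Sets and names are illustrative.

LEARNING_MODES = {
    "CET-4": {"apple", "architecture", "banana"},
    "business": {"invoice", "quotation", "negotiation"},
}

def filter_by_mode(english_word, chinese_name, selected_mode):
    """Return the (chinese_name, english_word) pair when the word falls
    inside the selected mode's vocabulary set; otherwise None."""
    vocab = LEARNING_MODES.get(selected_mode, set())
    if english_word in vocab:
        return (chinese_name, english_word)
    return None  # out-of-range word: suppressed for personalized teaching
```

Words outside the selected set are simply not forwarded to the image rendering module, matching the over-range suppression described above.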
In addition, as shown in Figure 4, the image collector in the system of the present invention may use a CCD camera to acquire images. CCD is the abbreviation of Charge Coupled Device, a semiconductor imaging device characterized by high sensitivity, strong resistance to glare, low distortion, small size, long service life and vibration resistance. By realizing image acquisition with a CCD camera, the present invention can obtain clear images of higher resolution, which provides assurance for the subsequent data analysis of the central controller.
As shown in Figure 5, the signal processing module may convert the analog signal it receives, through an A/D converter, into a digital signal that the feature extraction module and the gesture recognition module can recognize. Meanwhile, in order to reduce noise interference and improve detection accuracy, the signal processing module may also include a signal amplifier and a filter for amplifying and filtering the signal output by the image collector.
In addition, in order to further improve the memorization of English knowledge, the knowledge base also stores phonetic symbols, example sentences and mind map data associated with each English word.
For this purpose, the object recognition module is also used to extract the phonetic symbol, example sentences and mind map data associated with the English word and send them to the image rendering module; the image rendering module is also used to convert the received phonetic symbol, example sentences and mind map data into images to obtain virtual display objects, select the display positions of the virtual display objects according to the position of the object in the real scene, and perform illumination rendering on the virtual display objects according to the brightness in the real-scene image frame. The system of the present invention thus associates multi-dimensional information with each English word, which can further deepen the learner's understanding of English knowledge while improving the learner's ability to use the vocabulary.
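One way to picture a knowledge-base entry that bundles the matching features with the multi-dimensional word data described above is the sketch below; all field names and sample values are hypothetical.

```python
# Hypothetical sketch of a single knowledge-base entry: the feature
# combination used for object matching, plus the associated Chinese
# name, English word, pronunciation, phonetic symbol, example
# sentences and mind-map relations. Field names are illustrative.

from dataclasses import dataclass, field

@dataclass
class KnowledgeEntry:
    color: str
    texture: str
    contour: str
    shape_3d: str
    chinese_name: str
    english_word: str
    pronunciation_file: str
    phonetic: str
    example_sentences: list = field(default_factory=list)
    mind_map_words: list = field(default_factory=list)

# Sample entry (values are placeholders, not data from the patent):
apple = KnowledgeEntry(
    color="red", texture="smooth", contour="round", shape_3d="sphere",
    chinese_name="苹果", english_word="apple",
    pronunciation_file="apple.wav", phonetic="/ˈæpl/",
    example_sentences=["She ate an apple."],
    mind_map_words=["fruit", "pear", "orange"],
)
```

Storing all of these dimensions on one record is what lets a single feature match fan out into name, sound, phonetics, usage and associations for display.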
The mind map mentioned above, also known as a brain map, is a graphical tool for expressing divergent thinking; it is simple yet highly effective. A mind map combines pictures and text: it presents the relationships among topics at different levels in hierarchical diagrams of subordination and relevance, and establishes memory links between subject keywords and images, colors and the like. Mind mapping makes full use of the functions of the left and right brain and the rules of memory, reading and thinking, helping people develop in a balanced way between science and art, logic and imagination, thereby opening up the unlimited potential of the human brain; a mind map therefore embodies the radiant power of human thinking.
Moreover, a mind map is a method for visualizing thinking. Radiant thinking is the natural mode of the human brain: every piece of data entering the brain, whether a sensation, a memory or an idea (including text, numbers, symbols, scents, food, lines, colors, images, rhythms, notes and so on), can become a thinking center from which thousands of nodes radiate outward. Each node represents a connection with the central theme, and each connection can in turn become another central theme radiating thousands of nodes of its own, presenting a radiant three-dimensional structure.
The essence of memorizing with a mind map is to balance the use of the left and right hemispheres of the brain, stimulate the brain's potential, and turn the information stored in the brain into visualized pictures and text for memorization, because the human brain's capacity for receiving and memorizing graphic information is nearly a hundred times greater than that for text alone; and nothing demands mass memorization more than English words.
Based on the above principle of mind maps, the mind map data provided by the present invention for learning is an association diagram constructed, with the matched English word as its core, from all English words related to that word's attributes. By providing mind maps, the system can help learners extend the knowledge they are studying while improving their memorization rate.
In addition, as shown in Figure 7, the time arrangement entry mechanism 5 may be a keyboard-type structure. In order to improve the portability of the device, the time arrangement entry mechanism 5, the timer (not shown) and the display 6 are integrated into one structure, which is worn on the wrist like a wristwatch by means of the provided wrist strap 7. As shown in Figure 8, the device designed in this example is also provided with an earphone jack 8 on its side for convenient connection to earphones, as well as a USB interface 9 for transferring prepared teaching data to the device's built-in memory and for charging, making the device more portable and easier to use. The time arrangement entry mechanism 5 and the central controller (not shown) may be connected by wireless transmission to avoid the inconvenience of a data cable.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to the embodiments, those skilled in the art should understand that modifications or equivalent substitutions of the technical solution of the present invention, made without departing from its spirit and scope, shall all be covered by the scope of the claims of the present invention.
Claims (6)
1. A multifunctional English-assisted learning system, characterized by comprising: a timer, AR glasses (1), image collectors (2), a micro projector (3), a sounder, a microphone, a time arrangement entry mechanism (5), a display (6) and a central controller;
The timer is used for timekeeping, and triggers the time arrangement entry mechanism (5) to start when a preset timing point is reached;
At least two image collectors (2) are symmetrically arranged on the side frames of the AR glasses (1), and the image collectors (2) are used to acquire images within their visual range and send the generated image data to the central controller;
The time arrangement entry mechanism (5) is connected to the display (6); on startup it sends prompt information to the user through the display (6) and records the time arrangement data entered by the user in response to the prompt information; the prompt information is a message requesting the user to enter time arrangement data, and the time arrangement data record the affairs handled by the user in each time period and the importance information of those affairs;
The signal input end of the central controller is connected to the time arrangement entry mechanism (5); the central controller is used to receive the time arrangement data, analyze them to obtain fragmented times available for English learning, drive the image collectors (2) to acquire images during each fragmented time, perform feature extraction on objects in the acquired image data, analyze the feature data to identify the object information, convert the object Chinese name and English word corresponding to the object information into images to obtain virtual display objects, render the virtual display objects in combination with the real scene, project the rendered virtual display objects through the micro projector (3), and send the pronunciation data corresponding to the English word to the sounder; the central controller is also used to perform semantic analysis on the received sound wave electric signal, extract the English word corresponding to the parsed text, and convert it into an image; the central controller is further used to identify human gesture data in the received image data, analyze the gesture data to generate corresponding action data, and execute human-computer interaction operations with the action data;
The signal input end of the micro projector (3) is connected to the central controller, and the projection end of the micro projector (3) faces the semi-transparent mirror (4) arranged on the AR glasses (1), so as to project the images output by the central controller onto the semi-transparent mirror (4);
The signal input end of the sounder is connected to the central controller, and the sounder is used to convert the pronunciation data output by the central controller into a voice signal for playback;
The signal output end of the microphone is connected to the central controller, and the microphone is used to receive the voice uttered by the user, convert it into a sound wave electric signal, and send it to the central controller.
2. The multifunctional English-assisted learning system according to claim 1, characterized in that the image collectors acquire images using CCD cameras.
3. The multifunctional English-assisted learning system according to claim 1, characterized in that the central controller comprises: a time analysis module, an acquisition drive module, a signal processing module, a feature extraction module, an object recognition module, a knowledge base, an image rendering module, a pronunciation data delivery module, a semantic analysis module and a gesture recognition module;
The time analysis module is used to receive the time arrangement data, compare the importance of the affairs in each time period with a set importance level, and extract the time periods whose importance falls below the set level as fragmented times;
The acquisition drive module is used to receive the fragmented time data and drive the image collectors to acquire images when each fragmented time arrives;
The signal processing module is used to receive the image signals output by the image collectors and process them to generate image data recognizable by the feature extraction module;
The feature extraction module is used to identify the feature data of objects in the image data and send the obtained feature data to the object recognition module, the feature data comprising the color, texture, contour and three-dimensional shape data of the objects;
The object recognition module matches the feature data against the feature combination sequences stored in the knowledge base and extracts the object Chinese name, the English word and the corresponding pronunciation data associated with the matched feature combination sequence;
The semantic analysis module is used to receive the sound wave electric signal, perform semantic analysis on it, match the parsed text against the English words stored in the knowledge base, extract the matched English word and send it to the image rendering module;
The knowledge base is used to store various feature combination sequences, each comprising a feature combination composed of the color, texture, contour and three-dimensional shape data of an object together with the corresponding object Chinese name, English word and pronunciation data of the English word;
The image rendering module is used to convert the matched object Chinese name and English word into images to obtain virtual display objects, select the display positions of the virtual display objects according to the position of the object in the real scene, and perform illumination rendering on the virtual display objects according to the brightness in the real-scene image frame;
The pronunciation data delivery module is used to send the pronunciation data corresponding to the English word to the sounder;
The gesture recognition module is used to identify human gesture data in the image data, analyze the gesture data to generate corresponding action data, and control the operation of the central controller with the action data.
4. The multifunctional English-assisted learning system according to claim 3, characterized in that the signal processing module comprises an A/D converter, a signal amplifier and a filter, which successively perform analog-to-digital conversion, amplification and filtering on the signals output by the image collectors.
5. The multifunctional English-assisted learning system according to claim 3, characterized in that the central controller further comprises a learning mode selection module in which several learning modes are provided; the learning mode selection module is used to compare the object Chinese name and English word obtained by the object recognition module's matching with the learning mode selected by the user; if the matched object Chinese name and English word exist in the learning mode, they are output to the image rendering module; otherwise they are not output;
The learning modes are formed by dividing all of the feature combination sequences stored in the knowledge base according to different levels of knowledge and combining them into corresponding sets of object Chinese names and English words.
6. The multifunctional English-assisted learning system according to claim 3, characterized in that the knowledge base also stores phonetic symbols, example sentences and mind map data associated with the English words;
The object recognition module is also used to extract the phonetic symbol, example sentences and mind map data associated with the English word and send them to the image rendering module;
The image rendering module is also used to convert the received phonetic symbol, example sentences and mind map data into images to obtain virtual display objects, select the display positions of the virtual display objects according to the position of the object in the real scene, and perform illumination rendering on the virtual display objects according to the brightness in the real-scene image frame;
The mind map data are an association diagram constructed from all English words related to the attributes of the matched English word, with the matched English word as its core.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810836910.1A CN108922274A (en) | 2018-07-26 | 2018-07-26 | A kind of English assistant learning system of multifunctional application |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108922274A true CN108922274A (en) | 2018-11-30 |
Family
ID=64417636
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810836910.1A Withdrawn CN108922274A (en) | 2018-07-26 | 2018-07-26 | A kind of English assistant learning system of multifunctional application |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108922274A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110111617A (en) * | 2019-06-19 | 2019-08-09 | 江苏海事职业技术学院 | A kind of English language study auxiliary system |
CN111047924A (en) * | 2019-10-24 | 2020-04-21 | 太原理工大学 | Visualization method and system for memorizing English words |
CN111599232A (en) * | 2020-06-28 | 2020-08-28 | 江苏科技大学 | Smart phone based on virtual reality and augmented reality technology and English learning assisting method |
CN112489222A (en) * | 2020-11-13 | 2021-03-12 | 贵州电网有限责任公司 | AR-based construction method of information fusion system of information machine room operation |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102968915A (en) * | 2012-10-23 | 2013-03-13 | 中国石油化工股份有限公司 | Chemical device training management device, device knowledge base and training management system |
CN107037589A (en) * | 2017-03-30 | 2017-08-11 | 河南工学院 | A kind of scene-type foreign languages translation and learning system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108922274A (en) | A kind of English assistant learning system of multifunctional application | |
CN110531860B (en) | Animation image driving method and device based on artificial intelligence | |
CN108877344A (en) | A kind of Multifunctional English learning system based on augmented reality | |
CN108665744A (en) | A kind of intelligentized English assistant learning system | |
Schiel et al. | The SmartKom Multimodal Corpus at BAS. | |
KR20180073836A (en) | System for Psychological Diagnosis using Virtual Reality Environment Configuration | |
CN110991381A (en) | Real-time classroom student state analysis and indication reminding system and method based on behavior and voice intelligent recognition | |
CN107393357A (en) | Learning device in finger ring | |
CN108877340A (en) | A kind of intelligent English assistant learning system based on augmented reality | |
CN106200886A (en) | A kind of intelligent movable toy manipulated alternately based on language and toy using method | |
CN204650422U (en) | A kind of intelligent movable toy manipulated alternately based on language | |
CN109637207A (en) | A kind of preschool education interactive teaching device and teaching method | |
CN110110169A (en) | Man-machine interaction method and human-computer interaction device | |
CN110517689A (en) | A kind of voice data processing method, device and storage medium | |
CN109885595A (en) | Course recommended method, device, equipment and storage medium based on artificial intelligence | |
CN108777083A (en) | A kind of wear-type English study equipment based on augmented reality | |
CN109064799A (en) | A kind of Language Training system and method based on virtual reality | |
CN113835522A (en) | Sign language video generation, translation and customer service method, device and readable medium | |
CN103745423B (en) | A kind of shape of the mouth as one speaks teaching system and teaching method | |
Gsöllpointner et al. | Digital synesthesia: a model for the aesthetics of digital art | |
CN110427099A (en) | Information recording method, device, system, electronic equipment and information acquisition method | |
CN201986001U (en) | Mouth shape identification input mobile terminal | |
CN116524791A (en) | Lip language learning auxiliary training system based on meta universe and application thereof | |
CN103973953A (en) | Imaging device, displaying device, reproducing device, imaging method and displaying method | |
CN108877311A (en) | A kind of English learning system based on augmented reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 20181130 |