CN101436354A - Language learning system and method with synchronization display words and sound - Google Patents

Language learning system and method with synchronization display words and sound

Info

Publication number
CN101436354A
Authority
CN
China
Prior art keywords
file
word
character
speech
selected word
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2007101876111A
Other languages
Chinese (zh)
Inventor
邱全成
陈丽俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inventec Corp
Original Assignee
Inventec Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inventec Corp
Priority to CNA2007101876111A
Publication of CN101436354A
Legal status: Pending


Landscapes

  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention relates to a language learning system, and a method thereof, for synchronously displaying text and sound. The method classifies a vocabulary range according to personal needs and generates a text file, an audio file, and a synchronized display output file, so that language learning can be carried out at any time and in any place. The method solves the problems that the vocabulary range cannot be classified for study according to personal needs and that text and sound cannot be studied synchronously, thereby achieving the technical effect of enhancing language learning.

Description

Language learning system and method with synchronized display of text and speech
Technical field
The present invention relates to a language learning system and method, and more particularly to a language learning system and method that synchronously display text and speech.
Background technology
Proficiency in more than one foreign language is something almost everyone needs today. Whether in the job market or in everyday conversation, people who command several foreign languages naturally receive more attention. How to improve foreign language ability is therefore a topic that is frequently raised, and many hardware devices and software approaches have been put on the market to help learners build that ability.
Existing English learning software typically lists vocabulary ranges for different levels, for example primary school, middle school, and so on; after the user selects a range, word memorization simply starts from the letter "a". Because these word lists are designed in advance, the vocabulary range cannot be organized according to the user's actual needs.
The word memorization functions of English learning software include word playback, word tests, and the like. These memorization methods all rely on a single sense, vision: words are memorized through the eyes. People familiar with language learning know, however, that memorizing words through vision alone is of limited efficiency; if hearing is engaged as well, so that the words are read aloud and the learner's brain is continually stimulated by sound, memorization of the words is reinforced and the efficiency of language learning is further improved.
There are now many pre-recorded audio files that the user can load into a player to memorize words by hearing, but the accompanying text is available only in printed form, so the user must stay at a fixed location to study and cannot carry out language learning anytime and anywhere. This is another problem.
In view of this, it can be seen that using two senses together gives better learning results, that letting users organize the vocabulary range according to their own needs further improves those results, and that, finally, being able to study anytime and anywhere is the goal currently being pursued.
In summary, the prior art has long suffered from the problems that the vocabulary range cannot be organized for study according to personal needs and that text and speech cannot be studied synchronously; it is therefore necessary to propose improved technical means to solve these problems.
Summary of the invention
Since the prior art cannot organize the vocabulary range for study according to personal needs and cannot provide synchronized study of text and speech, the present invention discloses a language learning system, and a method thereof, with synchronized display of text and speech, as follows:
The language learning system with synchronized display of text and speech disclosed by the present invention comprises: an input module, a search module, a digital dictionary, a text generation module, a speech generation module, and a synchronized display output module.
The input module receives word search setting information.
According to the word search setting information, the search module finds in the digital dictionary at least one selected word that matches the word search setting information and at least one selected word explanation corresponding to the selected word.
The text generation module generates a text file from the selected word and the selected word explanation.
The speech generation module generates an audio file from the selected word and the selected word explanation.
According to the text file and the audio file, the synchronized display output module merges the selected word and the selected word explanation and outputs them as a synchronized display output file, which is used to display the text file and the audio file synchronously.
The language learning method with synchronized display of text and speech disclosed by the present invention comprises the following steps:
First, word search setting information is received.
Next, according to the word search setting information, at least one selected word that matches the word search setting information and at least one selected word explanation corresponding to the selected word are found in the digital dictionary.
Next, a text file is generated from the selected word and the selected word explanation.
Next, an audio file is generated from the selected word and the selected word explanation.
Next, according to the text file and the audio file, the selected word and the selected word explanation are merged and output as a synchronized display output file.
Finally, the synchronized display output file is used to display the text file and the audio file synchronously.
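Purely as an illustrative sketch (not part of the disclosed implementation), the six steps above could be arranged as follows in Python, assuming a tiny in-memory digital dictionary, tab-separated text output, a placeholder in place of a real text-to-speech engine, and a JSON layout for the synchronized display output file; every name, format, and path here is an assumption.

```python
# Illustrative sketch of the six steps (all names, formats, and paths are assumptions).
import json

# Step 1: receive the word search setting information (theme and word classification).
search_setting = {"theme": "part of speech", "classification": "verb"}

# A toy "digital dictionary": word -> part of speech and explanation.
digital_dictionary = {
    "advance": {"part of speech": "verb", "explanation": "to move forward; to pay in advance"},
    "book": {"part of speech": "noun", "explanation": "a set of printed pages bound together"},
}

# Step 2: find the selected words and their explanations that match the setting.
selected = {
    word: entry["explanation"]
    for word, entry in digital_dictionary.items()
    if entry["part of speech"] == search_setting["classification"]
}

# Step 3: generate the text file from the selected words and explanations.
with open("words.txt", "w", encoding="utf-8") as text_file:
    for word, explanation in selected.items():
        text_file.write(f"{word}\t{explanation}\n")

# Step 4: generate the audio file (a real system would call a text-to-speech engine here).
def synthesize_speech(script: str, path: str) -> None:
    with open(path, "wb") as audio_file:
        audio_file.write(b"")  # placeholder for synthesized audio data

synthesize_speech(" ".join(f"{w}. {e}." for w, e in selected.items()), "words.mp3")

# Steps 5 and 6: merge the text and a reference to the audio into a synchronized display
# output file that a player with a screen can use to show each word while it is spoken.
sync_output = [{"text": f"{w}: {e}", "audio": "words.mp3"} for w, e in selected.items()]
with open("words.sync.json", "w", encoding="utf-8") as sync_file:
    json.dump(sync_output, sync_file, ensure_ascii=False, indent=2)
```

In a real implementation the placeholder synthesis routine would be replaced by an actual speech engine, and the synchronized output would use whatever format the target playback device understands.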
The difference between the system and method disclosed above and the prior art is that the present invention can, according to the word search setting information entered by the user, organize the vocabulary range according to personal needs, so that the user can study related words together and obtain better learning results; it can also generate a text file, an audio file, and a synchronized display output file which, used with an electronic product having playback and display functions, allow language learning to be carried out anytime and anywhere.
Through the above technical means, the present invention achieves the technical effect of enhancing language learning.
Description of drawings
Fig. 1 is a block diagram of the language learning system with synchronized display of text and speech according to the present invention.
Fig. 2 is a flowchart of the language learning method with synchronized display of text and speech according to the present invention.
Fig. 3 is a schematic diagram of the word search setting information interface of the present invention.
Fig. 4 is a schematic diagram of a text file of the present invention.
Fig. 5 is a schematic diagram of the speech generation setting interface of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below with reference to the drawings and examples, so that how the present invention applies technical means to solve the technical problems and achieve the technical effects can be fully understood and practiced accordingly.
The language learning system with synchronized display of text and speech of the present invention is described first with reference to Fig. 1, a block diagram of the system. The system disclosed by the present invention comprises: an input module 10, a search module 20, a digital dictionary 21, a text generation module 30, a speech generation module 40, and a synchronized display output module 50.
The input module 10 receives word search setting information.
The word search setting information can be set to search various word themes such as part of speech, word root, prefix, dining, sports, and exams, or to search user-defined word classifications, and each word theme can be further divided into word classifications according to the word type. Specifically, when "part of speech" is the word type, the words under the "part of speech" theme can be classified into "noun", "verb", "adjective", and other word classifications; when "sports" is the word type, the words under the "sports" theme can be classified into "ball games", "gymnastics", "track and field", and other word classifications.
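As an illustrative sketch only, the two-level organization described above (word theme, then word classification) can be represented as a nested mapping; the theme and classification names below are taken from the examples in this paragraph, and the sample words are placeholders.

```python
# Illustrative two-level organization: word theme -> word classification -> sample words.
word_themes = {
    "part of speech": {
        "noun": ["book", "advance"],
        "verb": ["advance", "run"],
        "adjective": ["quick"],
    },
    "sports": {
        "ball games": ["basketball", "tennis"],
        "gymnastics": ["vault"],
        "track and field": ["sprint"],
    },
}

# Choosing the "part of speech" theme and the "verb" classification yields the candidates.
candidates = word_themes["part of speech"]["verb"]
print(candidates)  # ['advance', 'run']
```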
According to the word search setting information output by the input module 10, the search module 20 finds in the digital dictionary 21 at least one selected word that matches the word search setting information and at least one selected word explanation corresponding to the selected word.
The text generation module 30 generates a text file from the selected word and the selected word explanation found by the search module 20.
Besides generating a text file, the text generation module 30 can also generate an image file from the selected word and the selected word explanation; whether a text file or an image file is generated, the user can carry out visual language learning.
Furthermore, in addition to being opened on a computer platform so that the user can carry out visual language learning, the text file or image file generated by the text generation module 30 can also be transferred through a transmission interface to another digital device (a digital device here means an electronic product having playback and display functions) and opened there, likewise allowing visual language learning.
However, the display resolution of current digital devices is limited. The text generation module 30 therefore further includes a "generate text item" setting; through this setting, the amount of explanation content can be reduced so that only the most important explanations are shown, which is more convenient for use on a digital device.
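One plausible reading of the "generate text item" setting is a cap on how many explanations are kept per word so that the entry fits a small screen; the sketch below assumes explanations are stored as a list, and the function name and data layout are illustrative, not the patent's.

```python
def generate_text_entry(word: str, explanations: list[str], max_items: int = 3) -> str:
    """Keep only the leading explanations so the entry fits a small device screen."""
    kept = explanations[:max_items]
    numbered = " ".join(f"{i + 1}. {text}" for i, text in enumerate(kept))
    return f"{word}\t{numbered}"

print(generate_text_entry(
    "advance",
    ["to make move forward", "to bring forward", "to pay in advance", "progress (noun)"],
    max_items=3,
))
# advance	1. to make move forward 2. to bring forward 3. to pay in advance
```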
The speech generation module 40 generates an audio file from the selected word and the selected word explanation found by the search module 20.
The audio file includes the pronunciation of the selected word, the individual pronunciation of each letter of the selected word, and the selected word explanation; the speech generation module 40 further includes settings such as the speech generation count, the speech generation loop mode, the speech generation format, and the speech generation directory.
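The speech generation settings named here (generation count, loop mode, format, and directory) could be grouped into one configuration object, as in the following sketch; the field names, defaults, and the placeholder synthesis routine are assumptions for illustration, not the disclosed implementation.

```python
from dataclasses import dataclass

@dataclass
class SpeechSettings:
    repetitions: int = 3        # speech generation count
    loop_mode: str = "single"   # speech generation loop mode, e.g. "single" or "all"
    audio_format: str = "mp3"   # speech generation format, e.g. "mp3" or "wmv"
    output_dir: str = "./"      # speech generation directory

def generate_speech(script: str, settings: SpeechSettings) -> str:
    """Placeholder: repeat the script per the settings and write a stand-in audio file."""
    repeated = " ".join([script] * settings.repetitions)
    path = f"{settings.output_dir}words.{settings.audio_format}"
    with open(path, "wb") as audio_file:  # a real system would invoke a TTS engine here
        audio_file.write(repeated.encode("utf-8"))
    return path

print(generate_speech("advance. to move forward.", SpeechSettings()))  # ./words.mp3
```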
Show output module 50 synchronously, according to the literal archives of literal generation module 30 generations and the voice archives of speech production module 40 generations, to select individual character/word and selected individual character/word explanation merging and be output as synchronous demonstration output archives, carry out the synchronous demonstration of literal archives and voice archives with synchronous demonstration output archives.
In addition, the present invention can transfer the text file, the audio file, and the synchronized display output file to a digital device through a transmission interface, so that the synchronized display output file is used on the digital device to display the text file and the audio file synchronously, allowing language learning to take place at any time.
The operation and flow of the present invention are now explained with a specific embodiment, with reference to Figs. 2 to 5. Fig. 2 is a flowchart of the language learning method with synchronized display of text and speech according to the present invention; Fig. 3 is a schematic diagram of the word search setting information interface; Fig. 4 is a schematic diagram of a text file; Fig. 5 is a schematic diagram of the speech generation setting interface.
Referring to Fig. 3, the input module 10 first receives the word search setting information 61 entered by the user (step 100).
The word search setting information 61 is entered by selecting from pull-down menus, although the present invention is not limited to this. In this embodiment, when the user takes "part of speech" as the word type, the words of the "part of speech" type can be classified into "noun", "verb", "adjective", and other word classifications; here the word classification "verb" is selected.
The user can also choose whether to perform "external output" 62. When the user finishes entering the word search setting information 61 and presses "OK" (as shown in Fig. 3), the input is complete and the input module 10 receives the word search setting information 61 entered by the user.
Next, according to the word search setting information 61 received by the input module 10, the search module 20 finds in the digital dictionary 21 at least one selected word that matches the word search setting information 61 and at least one selected word explanation corresponding to the selected word (step 200).
Referring to Fig. 4, the text generation module 30 generates a text file from the selected word and the selected word explanation (step 300).
The text file includes the selected word 63, "advance", and the selected word explanation 64 corresponding to the selected word: "1. to make move forward; to advance, to promote 2. to bring ... forward 3. to pay in advance".
Next, the speech generation module 40 generates an audio file from the selected word 63 and the selected word explanation 64 found by the search module 20 (step 400).
When the user selects "external output" 62, the speech generation module 40 asks the user to enter speech generation settings, so that different audio formats (for example MP3, WMV, and so on) can be generated according to the different settings. The audio file includes the pronunciation of the selected word 63, the individual pronunciation of each letter of the selected word 63, and the selected word explanation 64.
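Since the audio file covers the whole-word pronunciation, the pronunciation of each individual letter, and the explanation, the spoken script for one entry could be assembled as below before being handed to a text-to-speech engine; the helper name and ordering are illustrative assumptions.

```python
def build_spoken_script(word: str, explanation: str) -> str:
    """Whole word, then letter-by-letter spelling, then the word again, then its explanation."""
    spelled = ", ".join(word)  # "a, d, v, a, n, c, e"
    return f"{word}. {spelled}. {word}. {explanation}"

script = build_spoken_script("advance", "to make move forward; to bring forward; to pay in advance")
print(script)
# advance. a, d, v, a, n, c, e. advance. to make move forward; to bring forward; to pay in advance
# A real system would hand this script to a text-to-speech engine to produce the MP3/WMV output.
```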
As shown in Fig. 5, the speech generation settings of the speech generation module 40 include the speech generation count 65, the speech generation loop mode 66, the speech generation format 67, and the speech generation directory 68.
In this embodiment, the speech generation count 65 is "3 times", the speech generation loop mode 66 is "single loop", the speech generation format 67 is "8-bit, 42 kHz, dual channel", and the speech generation directory 68 is "C:".
When "OK" in Fig. 5 is pressed, the speech generation module 40 finishes generating the audio file from the selected word 63 and the selected word explanation 64.
Then, according to the text file generated by the text generation module 30 and the audio file generated by the speech generation module 40, the synchronized display output module 50 merges the selected word and the selected word explanation and outputs them as a synchronized display output file (step 500).
Finally, the synchronized display output file is used to display the text file and the audio file synchronously (step 600).
In addition, the present invention can transfer the text file, the audio file, and the synchronized display output file to a digital device through a transmission interface, so that the synchronized display output file is used on the digital device to display the text file and the audio file synchronously, allowing language learning to take place at any time.
In summary, the difference between the present invention and the prior art is that the present invention can, according to the word search setting information entered by the user, organize the vocabulary range according to personal needs, so that the user can study related words together and obtain better learning results; it can also generate a text file, an audio file, and a synchronized display output file which, used with an electronic product having playback and display functions, constitute the technical means for carrying out language learning anytime and anywhere. These technical means solve the problems of the prior art and thereby achieve the technical effect of enhancing language learning.
Although embodiments of the present invention are disclosed above, the described content is not intended to directly limit the scope of patent protection of the present invention. Any person skilled in the art may make minor changes in the form and details of implementation without departing from the spirit and scope disclosed by the present invention. The scope of patent protection of the present invention shall still be defined by the appended claims.

Claims (10)

1. A language learning system with synchronized display of text and speech, the system comprising:
an input module for receiving word search setting information;
a search module for finding, in a digital dictionary and according to the word search setting information, at least one selected word that matches the word search setting information and at least one selected word explanation corresponding to the selected word;
a text generation module for generating a text file from the selected word and the selected word explanation;
a speech generation module for generating an audio file from the selected word and the selected word explanation; and
a synchronized display output module for merging, according to the text file and the audio file, the selected word and the selected word explanation and outputting them as a synchronized display output file, the synchronized display output file being used to display the text file and the audio file synchronously.
2. The language learning system with synchronized display of text and speech of claim 1, wherein the word search setting information is set to search various word themes such as part of speech, word root, prefix, dining, sports, and exams.
3. The language learning system with synchronized display of text and speech of claim 1, further comprising transferring the text file, the audio file, and the synchronized display output file through a transmission interface to a digital device, so that the synchronized display output file is used to display the text file and the audio file synchronously.
4. The language learning system with synchronized display of text and speech of claim 3, wherein the digital device is an electronic product having playback and display functions.
5. The language learning system with synchronized display of text and speech of claim 1, wherein the text generation module further generates an image file from the selected word and the selected word explanation corresponding to the selected word.
6. The language learning system with synchronized display of text and speech of claim 1, wherein the speech generation module further includes settings for a speech generation count, a speech generation loop mode, a speech generation format, and a speech generation directory.
7. A language learning method with synchronized display of text and speech, the method comprising the following steps:
receiving word search setting information;
finding, in a digital dictionary and according to the word search setting information, at least one selected word that matches the word search setting information and at least one selected word explanation corresponding to the selected word;
generating a text file from the selected word and the selected word explanation;
generating an audio file from the selected word and the selected word explanation;
merging, according to the text file and the audio file, the selected word and the selected word explanation and outputting them as a synchronized display output file; and
using the synchronized display output file to display the text file and the audio file synchronously.
8. The language learning method with synchronized display of text and speech of claim 7, wherein the word search setting information is set to search a user-defined word classification.
9. The language learning method with synchronized display of text and speech of claim 7, wherein the step of generating the text file further comprises a step of setting a generate text item.
10. The language learning method with synchronized display of text and speech of claim 7, wherein the step of generating the audio file further comprises a step of setting a speech generation count, a speech generation loop mode, a speech generation format, and a speech generation directory.
CNA2007101876111A 2007-11-15 2007-11-15 Language learning system and method with synchronization display words and sound Pending CN101436354A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2007101876111A CN101436354A (en) 2007-11-15 2007-11-15 Language learning system and method with synchronization display words and sound


Publications (1)

Publication Number Publication Date
CN101436354A 2009-05-20

Family

ID=40710784

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2007101876111A Pending CN101436354A (en) 2007-11-15 2007-11-15 Language learning system and method with synchronization display words and sound

Country Status (1)

Country Link
CN (1) CN101436354A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942990A (en) * 2013-01-23 2014-07-23 郭毓斌 Language learning device
CN108763372A (en) * 2018-05-17 2018-11-06 上海尬词教育科技有限公司 Interactive learning methods, system, program product and mobile terminal
CN109035994A (en) * 2018-09-28 2018-12-18 郭派 A kind of method and system constructing English word monoid



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20090520