WO2017082447A1 - Apparatus and method for reading and displaying foreign language pronunciation, exercise learning apparatus and exercise learning method based on a foreign language rhythm motion detection sensor using the same, and electronic medium and learning textbook recording the same - Google Patents
Apparatus and method for reading and displaying foreign language pronunciation, exercise learning apparatus and exercise learning method based on a foreign language rhythm motion detection sensor using the same, and electronic medium and learning textbook recording the same
- Publication number
- WO2017082447A1, PCT Application No. PCT/KR2015/012741 (KR2015012741W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- foreign language
- language
- foreign
- phoneme
- pronunciation
- Prior art date
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 123
- 238000000034 method Methods 0.000 title claims abstract description 121
- 239000000463 material Substances 0.000 title claims abstract description 12
- 230000001020 rhythmical effect Effects 0.000 title abstract 2
- 238000006243 chemical reaction Methods 0.000 claims abstract description 77
- 230000033001 locomotion Effects 0.000 claims description 280
- 230000033764 rhythmic process Effects 0.000 claims description 185
- 238000004458 analytical method Methods 0.000 claims description 83
- 230000009471 action Effects 0.000 claims description 21
- 238000004891 communication Methods 0.000 claims description 20
- 230000014509 gene expression Effects 0.000 claims description 19
- 230000008569 process Effects 0.000 claims description 18
- 238000000926 separation method Methods 0.000 claims description 14
- 210000001638 cerebellum Anatomy 0.000 claims description 13
- 230000006870 function Effects 0.000 claims description 13
- 230000015654 memory Effects 0.000 claims description 12
- 239000002858 neurotransmitter agent Substances 0.000 claims description 8
- 230000004936 stimulating effect Effects 0.000 claims description 8
- 230000005540 biological transmission Effects 0.000 claims description 5
- 238000012545 processing Methods 0.000 claims description 3
- 238000013519 translation Methods 0.000 claims description 3
- 210000000214 mouth Anatomy 0.000 description 49
- 238000010586 diagram Methods 0.000 description 21
- 238000007726 management method Methods 0.000 description 6
- 238000010295 mobile communication Methods 0.000 description 5
- 230000008859 change Effects 0.000 description 4
- 230000000694 effects Effects 0.000 description 4
- 230000007787 long-term memory Effects 0.000 description 4
- 230000008450 motivation Effects 0.000 description 4
- 210000000056 organ Anatomy 0.000 description 4
- 230000002354 daily effect Effects 0.000 description 3
- 230000001939 inductive effect Effects 0.000 description 3
- 210000003127 knee Anatomy 0.000 description 3
- 230000004048 modification Effects 0.000 description 3
- 238000012986 modification Methods 0.000 description 3
- 230000003203 everyday effect Effects 0.000 description 2
- 239000000284 extract Substances 0.000 description 2
- 230000001965 increasing effect Effects 0.000 description 2
- 239000003086 colorant Substances 0.000 description 1
- 230000002860 competitive effect Effects 0.000 description 1
- 238000013500 data storage Methods 0.000 description 1
- 230000003111 delayed effect Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000002650 habitual effect Effects 0.000 description 1
- 230000007774 longterm Effects 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000000691 measurement method Methods 0.000 description 1
- 239000003607 modifier Substances 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000000638 stimulation Effects 0.000 description 1
- 230000001360 synchronised effect Effects 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
- 230000003313 weakening effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/02—Feature extraction for speech recognition; Selection of recognition unit
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B19/00—Teaching not covered by other main groups of this subclass
- G09B19/06—Foreign languages
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/005—Language recognition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/04—Segmentation; Word boundary detection
- G10L15/05—Word boundary detection
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/02—Feature extraction for speech recognition; Selection of recognition unit
- G10L2015/025—Phonemes, fenemes or fenones being the recognition units
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/02—Feature extraction for speech recognition; Selection of recognition unit
- G10L2015/027—Syllables being the recognition units
Definitions
- the present invention relates to a foreign language reading and display device and a method thereof, and to a foreign language rhythm motion detection sensor-based exercise learning apparatus and exercise learning method, and to an electronic medium and a learning textbook recording the same.
- the present invention provides a device and a method for converting foreign language pronunciation into native language (Korean) notation and displaying it, using procedural memory formed through motor learning rather than declarative memory formed through rote memorization.
- It also provides an exercise learning device and an exercise learning method based on a foreign language rhythm motion detection sensor, in which the learner's foreign language phoneme, syllable and rhythm movements are measured, judged and managed through a motion sensor, a sound sensor and a vibration sensor, as well as electronic media and learning materials recording the same.
- this method also has low reliability because pronunciation accuracy varies widely with the individual level of the native speaker or instructor, and because the learner acquires foreign language sounds through the meaning of letters rather than through phonetic symbols, confusion with existing learning methods delays accurate acquisition of the foreign language.
- Korean Patent Laid-Open Publication No. 10-2003-0013993 provides a service in which, for a person who knows only Korean, the pronunciation of frequently used everyday foreign language expressions such as daily English or daily Japanese is converted into Korean notation according to a selected field or situation and displayed.
- the cerebellum's motor capacity operates at an upper limit of about 10 Hz (10 movements per second).
- English listening and speaking, like riding a bicycle, correspond to exercise (motor) learning and must be stored in procedural memory through repeated practice so that the learner can respond immediately when listening and speaking.
- Exercise learning has the characteristic that, once acquired, it is not forgotten for a long time, but it also takes a long time to acquire, so most learners who do not use English as their first language fail for lack of sufficient time.
- an object of the present invention is to separate an input foreign language sentence into words for accurate acquisition of the foreign language, to separate the pronunciation of each separated word into phonemes using MPA (Mglish Phonetic Alphabet) symbols and mark the accents, to convert the pronunciation of each separated foreign language word into national language phonemes according to predetermined pronunciation rules, and to combine the national language phonemes according to predetermined combination rules to generate and display national language syllables, words and sentences.
- MPA: Mglish Phonetic Alphabet
- Another object of the present invention is to provide a foreign language and native language notation method and apparatus using English phonetic symbols so that the accent of the separated foreign language pronunciation is reflected in the syllables and words of the native language.
- Another object of the present invention is to provide a foreign language and native language notation method and apparatus using English phonetic symbols so that only the syllables of the foreign language pronunciation separated by words are displayed as syllables and words of the native language.
- Another object of the present invention is to provide an exercise learning apparatus based on a foreign language rhythm motion detection sensor, and an exercise learning method using the same, in which the foreign language acquisition principle of learning and reacting with the body easily and enjoyably through exercise is measured and judged through a motion sensor, a sound sensor or a vibration sensor.
- Another object of the present invention is to provide a motion learning device based on a foreign language rhythm motion detection sensor, and a motor learning method using the same, in which the cerebellum, a motor organ, is stimulated so that the neurotransmitters responsible for memory are continuously secreted and what is learned is stored for a long time.
- Another object of the present invention is to provide a motion learning device based on a foreign language rhythm motion detection sensor, and an exercise learning method using the same, which combine the learner's intrinsic desire to win, which drives motivation and competition, with foreign language learning.
- a conversion server 300 in which each separated foreign language word is divided into phonemes using predetermined phonetic symbols, the part of the separated foreign phonemes corresponding to a syllable of the foreign language word is converted into national language phonemes according to predetermined foreign language pronunciation rules, the generated national language phonemes are combined according to the foreign language combination rules to generate and display national language syllables, words and sentences, and the part of the phonemes that does not correspond to a syllable of the foreign language word is displayed as the foreign language phoneme itself according to the predetermined foreign language pronunciation rules; and
- a display unit 500 which displays at least one of the national language sentence from the conversion server 300 and the input foreign language sentence on a screen.
- the conversion server 300 comprises:
- a foreign language word separator 310 for dividing the input foreign language sentence into word units
- a foreign-language phoneme separator 320 for dividing each foreign-language word separated from the foreign-language word divider 310 into a phoneme unit using a predetermined phonetic symbol and expressing accents;
- a native language converter 330 which matches the pronunciation of each phoneme corresponding to a syllable of the foreign language word, among the foreign phonemes separated by the foreign language phoneme separator 320, to the pronunciation of one native language phoneme based on the pronunciation rules between the foreign language and the native language, generates at least one of native language syllables, words and sentences from the matched native phonemes based on the foreign language combination rules and transmits them to the display unit, and at the same time transmits the pronunciation of the phonemes that do not correspond to a syllable of the foreign language word to the display unit as the foreign phonemes themselves according to the pronunciation rules.
- the national language conversion unit 330 comprises:
- a pronunciation rule analysis module 331 which outputs, among the separated foreign phonemes, the pronunciation of the phonemes that do not correspond to a syllable of the foreign language word as the foreign phonemes themselves;
- a combination rule analysis module 332 which outputs national language syllables combining the consonants and vowels of the native language matching the pronunciation of each foreign language syllable according to a predetermined combination rule; and
- a native language output module 334 which outputs to the display unit 500 the foreign phonemes output from the pronunciation rule analysis module 331 and the national language phonemes and syllables matching the foreign syllables output from the combination rule analysis module 332.
- the pronunciation rule analysis module 331 is configured to output the pronunciation of any phoneme that does not correspond to a syllable of the foreign language as the foreign phoneme itself according to the predetermined foreign language pronunciation rules.
- the national language conversion unit 330 may further include a rhythm rule analysis module 333 which reflects the stress included in the pronunciation of the syllables of the input foreign language in the national language syllables of the combination rule analysis module 332 according to a predetermined rhythm rule.
- the rhythm rule analysis module 333 reflects the stress of the syllables of the input foreign language in the national language syllables, words and sentences generated according to the predetermined rhythm rule and delivers them to the native language output module 334.
- the conversion server 300 may further include a foreign language conversion unit 340 which transmits the syllable parts of each foreign language word separated by the foreign language word separation unit 310 to the display unit 500 so that the foreign syllable parts are visually distinguished.
- the foreign language conversion unit 340 comprises a syllable analysis module 341 which derives and outputs the syllables of each foreign language word separated by the foreign language word divider 310, and a foreign language output module 342 which transmits the derived foreign language syllables to the display unit so that they are visually distinguished.
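For illustration only, the flow of the modules listed above (word separator 310, phoneme separator 320, native language converter 330) can be sketched as a small Python pipeline. The MPA-style lexicon and the pronunciation table below are hypothetical stand-ins, not the patent's actual rule data; the vowel symbols are ordinary IPA characters used in place of symbols garbled in the source.

```python
# Minimal sketch of the conversion-server flow described above, with assumed
# data: word separation (310), phoneme separation with stress digits (320),
# and native-language conversion (330).

# Hypothetical MPA-style lexicon: word -> phonemes, each vowel tagged 0/1/2.
MPA_LEXICON = {
    "I": ["aI1"],
    "am": ["æ0", "m"],
    "a": ["ə0"],
    "student": ["s", "t", "u:1", "d", "ə2", "n", "t"],
}

# Hypothetical pronunciation rules: foreign phoneme -> one native (Korean) phoneme.
PRONUNCIATION_RULES = {"aI": "아이", "æ": "애", "ə": "어", "m": "ㅁ",
                       "s": "ㅅ", "t": "ㅌ", "u:": "우", "d": "ㄷ", "n": "ㄴ"}

def separate_words(sentence):        # foreign language word separator (310)
    return sentence.split()

def separate_phonemes(word):         # foreign language phoneme separator (320)
    return MPA_LEXICON.get(word, [])

def convert_to_native(phonemes):     # native language converter (330)
    converted = []
    for p in phonemes:
        base = p.rstrip("012")       # strip the stress digit, if any
        # syllable phonemes get a native phoneme; others pass through as-is
        converted.append(PRONUNCIATION_RULES.get(base, p))
    return converted

if __name__ == "__main__":
    for word in separate_words("I am a student"):
        print(word, "->", convert_to_native(separate_phonemes(word)))
```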
- when the display unit 500 displays the national language and the foreign language together, each individual word, which is the minimum unit of meaning, is displayed by applying the rules passed through the conversion server as they are, and when individual words are combined to express meaning or form a sentence, part-of-speech analysis is performed and function words and key words are displayed separately.
- a key word is any one of a noun, a main verb, an adjective, an adverb and an interjection, and the conversion rules are applied to it as they are.
- a function word is any one of pronouns, prepositions, modal verbs, be verbs, qualifiers and conjunctions.
- a function word is displayed on the screen after changing at least one of its shape, font size and color to a predetermined setting value so that it is distinguished from the key words.
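As a rough illustration of the key-word/function-word display rule just described, the sketch below assigns a style per part of speech. The tag names and the concrete style values are assumptions for the example, not values fixed by the patent.

```python
# Sketch of the key-word / function-word display rule with assumed POS tags and
# style values: key words keep the converted form, function words are shown
# smaller and in a lighter colour.
KEY_POS = {"noun", "main_verb", "adjective", "adverb", "interjection"}
FUNCTION_POS = {"pronoun", "preposition", "modal_verb", "be_verb",
                "qualifier", "conjunction"}

def style_for(pos):
    if pos in KEY_POS:
        return {"font_size": 28, "color": "black"}   # conversion rule applied as-is
    if pos in FUNCTION_POS:
        return {"font_size": 14, "color": "gray"}    # reduced, de-emphasised
    return {"font_size": 20, "color": "black"}

# Example: tagged words of "I am a student" (tags are illustrative only).
tagged = [("I", "pronoun"), ("am", "be_verb"), ("a", "qualifier"),
          ("student", "noun")]
for word, pos in tagged:
    print(word, style_for(pos))
```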
- in the foreign language reading and display method, each separated foreign language word is divided into phonemes using predetermined phonetic symbols, the portion of the separated foreign phonemes corresponding to a syllable of the foreign language word is converted into national language phonemes, each one of the national consonants and vowels, according to the predetermined foreign language pronunciation rules, and the generated national language phonemes are combined according to the foreign language combination rules to generate and display national language syllables, words and sentences;
- the portion that does not correspond to a syllable of the foreign language word is displayed as the foreign language phoneme itself according to the predetermined foreign language pronunciation rules;
- in the native language conversion step, the pronunciation of each phoneme corresponding to a syllable of the foreign language word among the phonemes separated in the foreign phoneme separation step is matched to the pronunciation of one native language phoneme based on the pronunciation rules between the foreign language and the native language, at least one of native language syllables, words and sentences is generated from the matched native phonemes based on the predetermined foreign language combination rules and transmitted to the display unit, and at the same time the pronunciation of the phonemes that do not correspond to a syllable of the foreign language word is transmitted to the display unit as the foreign phonemes themselves.
- the stress included in the pronunciation of the foreign language syllables is reflected in the generated national language syllables according to the predetermined rhythm rule.
- the method may further include a foreign language conversion step of visually distinguishing and displaying the syllable portions of each foreign language word separated in the foreign language word separation step.
- the present invention also provides a computer readable medium including instructions for performing each step of the foreign language reading and display method according to the present invention.
- the present invention provides an electronic medium in which each step of the foreign language reading and display method according to the present invention is recorded.
- the present invention provides a textbook that visually records each step of the foreign language reading and display method according to the present invention.
- a controller 630 which controls the sensor unit 610 to detect voice and motion expressing the foreign language rhythm, analyzes and scores the voice and motion detected by the sensor unit 610, and includes a voice recognition module 631, a gesture recognition module 632 and a scoring utilization module 633; and
- a storage unit 640 that stores setting information for analyzing phonemes, syllables, sentences, and motion detection, and information on accents, strong and weak sounds.
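Purely as a hedged sketch of how the components listed above could be composed in software (the class layout and the sensor reading format are assumptions; only the reference numerals follow the text):

```python
# Rough composition sketch of the exercise-learning device: sensor unit (610),
# controller (630) with recognition/scoring modules (631-633), storage (640).
from dataclasses import dataclass, field

@dataclass
class SensorUnit:                    # 610: motion / sound / vibration sensors
    def read(self):
        # placeholder sample: sound level in dB plus acceleration on X/Y/Z
        return {"db": 62.0, "accel": (0.1, 0.2, 1.4)}

@dataclass
class StorageUnit:                   # 640: settings for phonemes, syllables,
    settings: dict = field(default_factory=dict)   # sentences, accents, strong/weak

@dataclass
class Controller:                    # 630
    sensor: SensorUnit
    storage: StorageUnit
    def step(self):
        sample = self.sensor.read()
        # the voice recognition (631), gesture recognition (632) and scoring
        # (633) modules would each consume this sample; see the later sketches.
        return sample

device = Controller(SensorUnit(), StorageUnit())
print(device.step())
```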
- the speech recognition module 631 may include:
- Phoneme recognition means 631a for recognizing and analyzing consonants and vowel actions corresponding to phoneme movements of learners from the sensor unit 610;
- Syllable recognition means 631b for performing analysis by recognizing the syllable movement of the learner by using the result of phoneme analysis according to the shape of the mouth and tongue by the phoneme recognition means 631a;
- sentence recognition means 631c which groups the words analyzed by the syllable recognition means 631b into sentences, the combined units of communication, analyzes each sentence to extract the nouns, main verbs, adjectives and adverbs that are its core elements and sets them as strong sounds, and sets the functional elements as weak sounds.
- in addition, in the exercise learning apparatus based on a foreign language rhythm motion detection sensor according to the present invention, in the phoneme recognition means 631a:
- the eight M1 to M8 corresponding to the mouth shape type are:
- M1 (Mouth 1) corresponding to the pronunciation 'a'; M2 (Mouth 2) corresponding to 'i' and 'e'; M3 (Mouth 3) corresponding to 'I' and ' ⁇ '; M4 (Mouth 4) corresponding to 'u'; M5 (Mouth 5) corresponding to 'o'; M6 (Mouth 6) corresponding to the pronunciations 'b', 'p' and 'm'; M7 (Mouth 7) corresponding to 'f' and 'v'; and M8 (Mouth 8) corresponding to 's' and 'z'.
- T1 to T8 corresponding to the tongue position type are:
- T1 (Tongue 1), the basic position, for the pronunciations 'a', 'o', 'u', ' ⁇ ' and 'I'; T2 (Tongue 2), behind the lower front teeth, for 's' and 'z'; T3 (Tongue 3), at the tip near the upper molars, for 'r'; T4 (Tongue 4), at the middle of the upper front teeth, for 'i', 'e' and ' ⁇ '; T5 (Tongue 5), at the front of the upper front teeth, for ' ⁇ ' and ' ⁇ '; T6 (Tongue 6), behind the upper front teeth, for 'l'; T7 (Tongue 7), at the front of the upper hard palate, for 'd', 't' and 'n'; and T8 (Tongue 8), behind the upper soft palate, for 'k', 'g' and ' ⁇ '.
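A minimal lookup for the mouth-shape and tongue-position types above might look like the sketch below. The tables are reconstructed only from the enumeration in the text; phonemes whose symbols were garbled in the source are simply omitted.

```python
# Sketch of the M1-M8 mouth shapes and T1-T8 tongue positions as lookup tables,
# reconstructed from the lists above (entries with lost symbols omitted).
MOUTH_SHAPE = {
    "a": "M1", "i": "M2", "e": "M2", "I": "M3", "u": "M4", "o": "M5",
    "b": "M6", "p": "M6", "m": "M6", "f": "M7", "v": "M7", "s": "M8", "z": "M8",
}
TONGUE_POSITION = {
    "a": "T1", "o": "T1", "u": "T1", "I": "T1",
    "s": "T2", "z": "T2", "r": "T3", "i": "T4", "e": "T4",
    "l": "T6", "d": "T7", "t": "T7", "n": "T7", "k": "T8", "g": "T8",
}

def articulation(phoneme):
    """Return the (mouth shape, tongue position) pair used for recognition."""
    return MOUTH_SHAPE.get(phoneme), TONGUE_POSITION.get(phoneme)

print(articulation("s"))   # ('M8', 'T2')
```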
- the syllable recognition means 631b includes:
- each syllable is classified as 'first accent', pronounced strongly above a preset frequency, 'second accent', pronounced more weakly than the first accent, 'unstressed syllable', or 'silence' with no sound.
- a first accent motion is assigned to the 'first accent' and a second accent motion to the 'second accent', so that the movements for the first and second accents are recognized together with the mouth shape and tongue position.
- the motion recognition module 632 may include:
- a strong sound above a preset decibel (dB) level, or a strong action such as a strong hand beat or a foot stamp measured on the X, Y and Z axes, is set to be recognized by the sensor unit 610;
- and a weak sound below the predetermined decibel (dB) level, or a weak action such as a light hand beat or step, is set to be recognized by the sensor unit 610.
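The strong/weak recognition described in the two items above amounts to thresholding the sound level and the motion amplitude. A sketch under assumed threshold values (the text does not fix concrete numbers) could be:

```python
# Sketch of strong/weak beat detection from the sensor unit (610): a beat is
# "strong" when the sound level or the X/Y/Z motion amplitude exceeds preset
# thresholds, otherwise "weak".  The threshold values here are assumptions.
import math

DB_THRESHOLD = 60.0        # preset decibel level
ACCEL_THRESHOLD = 1.5      # preset motion amplitude (arbitrary units)

def classify_beat(db_level, accel_xyz):
    magnitude = math.sqrt(sum(a * a for a in accel_xyz))
    if db_level >= DB_THRESHOLD or magnitude >= ACCEL_THRESHOLD:
        return "strong"
    return "weak"

print(classify_beat(65.0, (0.2, 0.1, 0.3)))   # strong (loud clap or stamp)
print(classify_beat(40.0, (0.1, 0.1, 0.2)))   # weak  (soft tap)
```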
- the motion recognition module 632 also performs the function of causing the neurotransmitters responsible for memory to be continuously secreted, by stimulating the cerebellum, a motor organ, through the strong and weak motions stored in the storage unit 640, so that what is learned is stored for a long time.
- in addition, in the exercise learning apparatus based on a foreign language rhythm motion detection sensor according to the present invention, the scoring utilization module 633 uses the sensor unit 610 to detect, as the foreign language rhythm motion, the first accent motion and the second accent motion set by the syllable recognition means 631b and the strong and weak motions set by the sentence recognition means 631c.
- in addition, in the exercise learning device based on the foreign language rhythm motion detection sensor according to the present invention, the sensor unit 610 includes a motion detection sensor, a sound detection sensor, a vibration detection sensor, and any other means capable of detecting a motion expressing the foreign language rhythm.
- a controller configured to control the sensor unit to detect voice and motion expressing the foreign language rhythm and to analyze the voice and motion detected by the sensor unit, the controller including a voice recognition module and a motion recognition module.
- in addition, in the exercise learning apparatus based on a foreign language rhythm motion detection sensor according to the present invention, the speech recognition module includes:
- Phoneme recognition means for recognizing and analyzing consonants and vowel actions corresponding to phoneme movements of a learner from the sensor unit;
- Syllable recognition means for performing the analysis by recognizing the syllable movement of the learner by using the results of phoneme analysis according to the mouth shape and tongue shape by the phoneme recognition means;
- sentence recognition means which groups the words analyzed by the syllable recognition means into sentences, each sentence being the combined unit of communication formed from at least one word, analyzes them to extract the nouns, main verbs, adjectives and adverbs that are the core elements of the sentence and sets these as strong sounds, and sets the functional elements as weak sounds.
- the motion recognition module is configured so that, for the strong sounds set by the sentence recognition means, a strong action such as a sound above a preset decibel (dB) level, a reference value on the X, Y and Z axes, or a strong hand or foot beat is recognized by the sensor unit, and, for the weak sounds, a weak action such as a sound below the set decibel (dB) level or a light hand or foot beat is recognized by the sensor unit.
- dB: decibel
- Phoneme recognition means 631a for recognizing and analyzing consonants and vowel actions corresponding to phoneme movements of the learner from the sensor unit 610;
- Syllable recognition means 631b for performing analysis by recognizing the syllable movement of the learner by using the result of phoneme analysis according to the shape of the mouth and tongue by the phoneme recognition means 631a;
- and sentence recognition means 631c which groups the words analyzed by the syllable recognition means 631b into sentences, the combined units of communication, analyzes them to extract the nouns, main verbs, adjectives and adverbs, sets these core elements as emphasized strong sounds, and sets the functional elements as weak sounds; the speech recognition module 631 is configured to include these means.
- the apparatus further comprises a motion recognition module 632 which, for the strong sounds set by the sentence recognition means 631c, sets a strong gesture such as a strong hand or foot beat to be recognized by the sensor unit 610, and, for the weak sounds, sets a weak gesture such as a light hand beat or step below the set decibel (dB) level to be recognized by the sensor unit 610.
- the apparatus further comprises a scoring utilization module 633.
- in a first step, the exercise learning apparatus 600 based on the foreign language rhythm motion detection sensor recognizes and analyzes, from the sensor unit 610, the consonant and vowel mouth motions corresponding to the learner's phoneme movements, using the eight mouth shape types M1 to M8;
- in a second step, the exercise learning apparatus 600 uses the results of the phoneme analysis based on mouth shape and tongue position to recognize the learner's syllable movements and analyze the syllables of the recognized words;
- in a third step, each syllable is classified as 'first accent', pronounced strongly above the predetermined frequency, 'second accent', pronounced more weakly than the first accent, 'unstressed syllable', or 'silence' with no sound;
- in a fourth step, the exercise learning apparatus 600 detects a first accent motion for the 'first accent' and a second accent motion for the 'second accent' in the words input from the sensor unit 610, analyzing the first and second accents in at least one syllable of each word;
- in a fifth step, the exercise learning device 600 groups the analyzed words into sentences, the combined units of communication, analyzes each sentence composed of at least one word, extracts the nouns, main verbs, adjectives and adverbs that are its core elements and recognizes them as emphasized strong sounds, and recognizes the functional elements as weak sounds;
- in a sixth step, the exercise learning apparatus 600 determines whether a strong motion is recognized by the sensor unit 610 for each recognized strong sound, and whether a weak motion is recognized by the sensor unit 610 for each recognized weak sound.
- the exercise learning device 600 recognizes sentences and motions while applying phoneme rules, syllable rules and rhythm rules that differ for each level, situation and country.
- at least one sentence is recognized, whether in the language of a country that uses English as a native language or in the language of a country that uses English as a second foreign language.
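The six steps above reduce, in essence, to comparing an expected strong/weak pattern derived from the sentence with the beats the sensor actually detects. A toy scoring function under that reading (the scoring formula itself is an assumption, not taken from the text) might be:

```python
# Toy scoring sketch for the sensor-based rhythm check in the steps above: the
# expected pattern marks content words strong and function words weak, and the
# score is the fraction of beats the learner matched.
def rhythm_score(expected_pattern, detected_beats):
    """expected_pattern / detected_beats: lists of 'strong' or 'weak'."""
    hits = sum(1 for want, got in zip(expected_pattern, detected_beats)
               if want == got)
    return 100.0 * hits / max(len(expected_pattern), 1)

# "I am a student": only the noun 'student' is a core (strong) element.
expected = ["weak", "weak", "weak", "strong"]
detected = ["weak", "strong", "weak", "strong"]   # learner stressed 'am' by mistake
print(rhythm_score(expected, detected))           # 75.0
```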
- the present invention also provides a computer readable medium including instructions for performing each step of the foreign language rhythm motion detection sensor-based exercise learning method.
- the present invention also provides an electronic medium in which each step of a motion learning method based on a foreign language rhythm motion detection sensor according to the present invention is recorded.
- the present invention provides a textbook for visually recording each step of the motion learning method based on a foreign language rhythm motion sensor.
- a pronunciation rule unit which divides the foreign language pronunciation into pronunciations classified by mouth shape and pronunciations classified by tongue position; and a sentence expression unit which displays a rhythm image in the foreign language and the native language according to the pronunciation rules of the pronunciation rule unit and a preset rhythm rule.
- the foreign language or the native language is presented as a sentence representation that displays the rhythm image according to the stress.
- the sentence expression unit may include a foreign language expression unit in which each word forming a foreign language sentence is arranged; A national language expression unit which is arranged so that a national language or a national language and a foreign language are mixed and matched with the foreign language sentence; And a motion image representation unit arranged to match the rhythm image according to the stress of the sentence disposed in the foreign language representation unit or the native language representation unit.
- the rhythm image may be represented by one or more selected from a difference in font size, a difference in color, a difference in letter thickness, and a specific shape.
- the motion image is characterized by being represented by any one or more of a hand motion, a foot motion, a body motion.
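One plausible data layout for a row of the teaching material just described, pairing the foreign language expression line, the national language line and the motion images, is sketched below; the field names and the sample transliteration are assumptions for illustration.

```python
# Sketch of one row of the learning material: foreign expression line, national
# language (or mixed) line, and motion images aligned to the stressed words.
from dataclasses import dataclass

@dataclass
class MaterialRow:
    foreign_words: list      # foreign language expression unit
    native_line: list        # national language expression unit (may mix scripts)
    motion_images: list      # e.g. 'hand', 'foot', or None per word

row = MaterialRow(
    foreign_words=["I", "am", "a", "student"],
    native_line=["아이", "앰", "어", "스튜던트"],     # illustrative transliteration
    motion_images=[None, None, None, "hand"],        # image only on the stressed word
)
for f, n, m in zip(row.foreign_words, row.native_line, row.motion_images):
    print(f, n, m or "-")
```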
- since the native language is displayed exactly as the foreign language is pronounced when read aloud, the foreign language can be acquired accurately.
- the apparatus for reading and displaying a foreign language may reflect the accent of the pronunciation of a foreign language in syllables and words of the native language, and display only the syllables of the foreign language pronunciation separated by words as syllables and words of the native language.
- the visual enhancement effect makes it easier and more enjoyable to learn a foreign language with proper pronunciation.
- in the exercise learning apparatus based on the foreign language rhythm motion detection sensor and the exercise learning method using the same, movement is detected at the same time as the foreign language is pronounced, so that the cerebellum, a motor organ, is stimulated, the neurotransmitters responsible for memory are secreted, and long-term memory is provided.
- the device and method also provide the effect that the foreign language acquisition principle of learning and reacting with the body easily and enjoyably through exercise can be measured and judged through a motion detection sensor, a sound sensor or a vibration sensor.
- they further have the effect of combining the learner's intrinsic desire to win, which drives motivation, with foreign language learning, thereby inducing a competitive spirit.
- the foreign language learning teaching material according to the present invention provides a long-term memory effect through the foreign language acquisition principle in which the body learns and responds easily and enjoyably through exercise performed together with the pronunciation of the foreign language.
- FIG. 1 is a view schematically showing the configuration of a foreign language reading and display device according to an embodiment of the present invention
- FIG. 2 is a diagram schematically illustrating a configuration of a native language conversion unit of FIG. 1;
- FIG. 3 is a view briefly illustrating a configuration of a foreign language conversion unit of FIG. 1;
- FIG. 4 is a view briefly showing the operation sequence of the conversion server of FIG. 1;
- FIG. 5 is a diagram illustrating the native language conversion process for Nos. 1, 2 and 3 of FIG. 4;
- FIG. 6 is a diagram illustrating an example of the native language conversion for No. 4 of FIG. 4;
- FIG. 7 is a diagram illustrating an image conversion process of FIG. 4 as an example
- FIG. 8 is a view showing a national language and a foreign language representation state generated through the input foreign language sentences and the conversion server according to an embodiment of the present invention.
- FIG. 9 is a schematic diagram showing a principle of motion learning conversion in an exercise learning apparatus based on a foreign language rhythm motion detection sensor according to an embodiment of the present invention.
- FIG. 10 is a block diagram illustrating a configuration of an exercise learning apparatus based on a foreign language rhythm motion detection sensor according to an embodiment of the present invention
- FIG. 11 is a diagram illustrating a system using information scored by foreign language rhythm motion detection in the exercise learning apparatus based on the foreign language rhythm motion detection sensor of FIG. 10;
- FIGS. 12 to 15 are diagrams showing setting information for analyzing and detecting phonemes, syllables and sentences in the exercise learning apparatus based on a foreign language rhythm motion detection sensor;
- FIG. 16 is a flowchart illustrating an exercise learning method using the exercise learning apparatus based on a foreign language rhythm motion detection sensor according to an embodiment of the present invention;
- FIG. 17 is a schematic diagram briefly showing the configuration of the foreign language learning teaching material according to an embodiment of the present invention.
- FIG. 1 is a view schematically showing the configuration of a foreign language reading and display device according to an embodiment of the present invention, FIG. 2 is a simplified view showing the configuration of the native language conversion unit of FIG. 1, FIG. 3 is a simplified view showing the configuration of the foreign language conversion unit of FIG. 1, and FIG. 4 is a view schematically showing the operation sequence of the conversion server of FIG. 1.
- FIG. 5 is a view showing an example of the native language conversion process for steps 1, 2 and 3 of FIG. 4, FIG. 6 is a view illustrating an example of the native language conversion process for step 4 of FIG. 4, FIG. 7 is a view illustrating an example of the image conversion process of FIG. 4, and FIG. 8 shows the national language and foreign language representation generated through an input foreign language sentence and the conversion server according to an embodiment of the present invention.
- FIG. 9 is a flowchart illustrating a process of displaying a foreign language reading according to an embodiment of the present invention, FIG. 10 is a schematic diagram illustrating the motion learning conversion principle in an exercise learning apparatus based on a foreign language rhythm motion detection sensor according to an embodiment of the present invention, and FIG. 11 is a block diagram illustrating the configuration of a motion learning apparatus based on a foreign language rhythm motion detection sensor according to an embodiment of the present invention.
- FIG. 12 is a diagram illustrating a system using the information scored by foreign language rhythm motion detection in the motion learning device of FIG. 11, FIGS. 13 to 16 are diagrams illustrating setting information for analyzing and detecting phonemes, syllables and sentences in the exercise learning apparatus, FIG. 17 is a flowchart illustrating an exercise learning method using the exercise learning apparatus based on a foreign language rhythm motion detection sensor according to an embodiment of the present disclosure, and FIG. 18 is a schematic diagram briefly illustrating the configuration of a foreign language learning textbook according to an embodiment of the present invention.
- the foreign language reading and display apparatus separates an input foreign language sentence into words for accurate acquisition of the foreign language, separates the pronunciation of each word, converts the separated foreign language pronunciation into national phonemes and foreign phonemes according to the predetermined pronunciation rules, and combines the national phonemes into national language syllables, words and sentences according to the predetermined combination rules.
- the foreign language input unit 100, the conversion server 300, and the display unit 500 may be included.
- the foreign language input unit 100 is provided to load and display a file (word-processor document, text, subtitle, etc.) that already contains foreign language sentences, or to input a foreign language sentence directly; it is provided as one of a device with a voice recognition function, a PDA, a mobile terminal and a keypad, and the input foreign language is delivered to the conversion server 300.
- the input foreign language may be accompanied by a native speaker's pronunciation or image.
- the present invention uses only Korean as an example of the native language, but is not limited thereto, and may be applied to any language, such as Japanese or Chinese, whose speakers do not use English as a first language.
- the conversion server 300 divides the foreign language sentence input by the foreign language input unit 100 into word units, separates each separated foreign language word into phonemes using predetermined phonetic symbols, generates, for the portion of the separated foreign phonemes corresponding to a syllable of the foreign language word, national language phonemes of the national consonants and vowels according to the prescribed foreign language pronunciation rules, and combines the generated national language phonemes according to the foreign language combination rules to generate and display national language syllables, words and sentences. In this case, the portion of the separated foreign phonemes that does not correspond to a syllable of the foreign language word may be displayed as the foreign phoneme itself according to a predetermined foreign language pronunciation rule.
- such a conversion server 300 includes a foreign language word separator 310 for separating the input foreign language sentence into word units; a foreign language phoneme separator 320 for dividing each separated foreign language word into phoneme units according to the foreign phoneme rules for the predetermined foreign language pronunciation; and a native language conversion unit 330 which matches the pronunciation of each phoneme corresponding to a syllable of the foreign language word, among the separated foreign phonemes, to the pronunciation of one native language phoneme based on the pronunciation rules between the foreign language and the native language, generates at least one of native language syllables, words and sentences from the matched native phonemes based on the foreign language combination rules, and delivers them to the display unit.
- the foreign language phoneme separator 320 may separate each foreign language word received from the foreign language word separator into phoneme units using predetermined phonetic symbols, and may also mark the stress.
- the native language conversion unit 330 transmits the pronunciation of any phoneme that does not correspond to a syllable of the foreign language word to the display unit as the foreign phoneme itself, without converting it to the native language, according to the predetermined foreign language pronunciation rules, so that native and foreign phonemes can be mixed and displayed at the same time.
- the conversion server 300 may further include a foreign language conversion unit 340 which transmits the syllable portion of each separated foreign language word to the display unit so that it is visually distinguished.
- the foreign language word separating unit 310 divides the input foreign language sentence into word units. For example, in the case of I am a student, the foreign language word separating unit 310 separates the input foreign language sentences into I, am, a, and student.
- the foreign phoneme separator 320 separates each foreign word into phonemes by using a predetermined phoneme rule or a phonetic symbol (for example, an MPA symbol).
- for example, mother is separated into m, o, t, h, e, r, or, as shown in FIG. 4, I is separated into aI1, am into ⁇ 0 and m, a into ⁇ 0, and student into s, t, u:1, d, ⁇ 2, n, t.
- the '1' of u:1 indicates that the phoneme carries the first (primary) accent;
- the '0' of ⁇ 0 indicates that the phoneme carries no accent;
- the '2' of ⁇ 2 indicates that the phoneme carries the second (secondary) accent.
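The digit appended to each phonetic symbol in the example above (1 = primary accent, 2 = secondary accent, 0 = no accent) can be peeled off mechanically, as in the small sketch below; the vowel symbols used are ordinary IPA stand-ins for characters garbled in the source text.

```python
# Sketch: split MPA-style phoneme strings such as 'u:1' or 'ə2' into the
# phoneme body and its stress level (1 primary, 2 secondary, 0 unstressed).
def split_stress(phoneme):
    if phoneme and phoneme[-1] in "012":
        return phoneme[:-1], int(phoneme[-1])
    return phoneme, None          # consonants carry no stress digit

for p in ["aI1", "æ0", "s", "t", "u:1", "d", "ə2", "n", "t"]:
    print(p, "->", split_stress(p))
```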
- the native language converter 330 includes a pronunciation rule analysis module 331 which, for each of the separated foreign phonemes that corresponds to a syllable of the foreign language word, outputs a native phoneme, one of the consonants and vowels of the native language, matching the pronunciation of that foreign phoneme according to the predetermined pronunciation rules, and at the same time outputs the pronunciation of any phoneme that does not correspond to a syllable of the foreign language word as the foreign phoneme itself; a combination rule analysis module 332 which outputs national language syllables combining the native consonants and vowels matching the pronunciation of each foreign syllable according to the predetermined combination rules; and a native language output module 334 which outputs to the display unit the foreign phonemes, and the native phonemes and syllables matching the foreign syllables, output from the pronunciation rule analysis module 331 and the combination rule analysis module 332.
- the national language conversion unit 330 may further include a rhythm rule analysis module 333 which reflects the stress included in the pronunciation of the syllables of the input foreign language in the national language syllables of the combination rule analysis module 332 according to a predetermined rhythm rule.
- the pronunciation rule analysis module 331 generates, for each foreign phoneme, a national phoneme, one of the consonants and vowels of the native language, matching its pronunciation according to the predetermined phonetic symbols; only the pronunciations of the phonemes belonging to the syllables of the foreign language are output as native phonemes, while the pronunciations of the phonemes that do not correspond to a syllable of the foreign word are output as the foreign phonemes themselves.
- the foreign phonemes are converted into phonemes of the native language and become native phonemes.
- the native language counterparts of some foreign phoneme pronunciations are generated according to predetermined combination rules, which include the known silent rule, modification rule and softphone (linking) rule.
- the silent-letter rules include, for example: a final d or t at the end of a word, as in an(d), may be dropped; 8) in an(d) (h)is, the d and h are silent; 9) the initial th of words such as (th)eir, (th)e, (th)at, (th)ey're, (th)em and the adverb (th)ere is absorbed into the end of the preceding word; 10) the final g of a word ending in -ing followed by another word, as in tryin(g) to, is mute; 11) the h of wh in words beginning with wh, as in w(h)en, is silent; 12) t or k after a stressed syllable, as in direc(t)ly, mos(t)ly, fac(t)s and as(k)ed, is silent; 13) phonemes are generated and output accordingly; and 14) an unstressed vowel (o, e, a) preceding an accented syllable, as in p(o)lice, p(e)r(h)aps and b(a)lloon, is pronounced as a weakened vowel (Schwi, Schwa, Schwu).
- a native phoneme is generated from a foreign phoneme.
- the modification rules are as follows, and the national language syllables are generated from the phonemes separated based on these modification rules.
- the consonant rules are as follows, and national language vowels are generated from the foreign phonemes according to the consonant rules.
- the national phonemes may be combined into national language syllables, words and sentences determined according to the predetermined combination rules covering silent, modified and consonant pronunciation.
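Purely for illustration, a couple of the silent-letter patterns listed above can be expressed as regular-expression rewrites; the sketch below uses a tiny sample of hand-written patterns, not the patent's full rule table.

```python
# A tiny, illustrative sample of silent-letter patterns: -ing before "to" loses
# its g, word-initial "wh" loses its h, and "t" before "-ly" is dropped.
import re

SILENT_RULES = [
    (re.compile(r"ing\b(?=\s+to\b)", re.IGNORECASE), "in"),
    (re.compile(r"\bwh", re.IGNORECASE), "w"),
    (re.compile(r"t(?=ly\b)", re.IGNORECASE), ""),
]

def apply_silent_rules(text):
    for pattern, repl in SILENT_RULES:
        text = pattern.sub(repl, text)
    return text

print(apply_silent_rules("When trying to speak directly"))
# -> "wen tryin to speak direcly"
```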
- the foreign phonemes output from the pronunciation rule analysis module 331, and the national phonemes and syllables matching the foreign syllables output from the combination rule analysis module 332, are mixed so that the native language and the remaining foreign language are displayed together on the screen as they are.
- the rhythm rule analysis module 333 is provided to reflect the stress included in the pronunciation of the syllable of the foreign language input to the national language syllables, words and sentences generated according to the predetermined rhythm rule.
- rhythm rules for foreign phonemes are as follows.
- the national phonemes are generated by reflecting the rhythm rules for the foreign phonemes in the native phonemes.
- the foreign phonemes converted according to the rhythm rules, and the corresponding native phonemes generated according to the rhythm rules, may be displayed with various changes, such as font size, color, specific shapes and font arrangement, to distinguish them from the remaining phonemes.
- the phoneme representation may use font size (strong: 32 pt, one beat: 28 pt, 1/2 beat: 18 pt, 1/4 beat: 10.5 pt, or any proportional font sizes that make the change identifiable) and various other ways of visually showing the change, such as color adjustment, specific shapes and font arrangement.
- for example, in 'student', the consonant s phoneme is shown at 10.5 pt; the 'tu' formed from the consonant t phoneme and the vowel u phoneme carrying the first accent, together with its matching native syllable, is shown at 32 pt; the 'den' formed from the consonant d phoneme, the vowel e phoneme carrying the secondary accent and the consonant n phoneme, together with its matching native syllable, is shown at 18 pt; and the final consonant t phoneme is shown at 10.5 pt.
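The beat-to-font-size mapping in the example above can be written down directly; this sketch just encodes the sizes quoted in the text (strong 32, one beat 28, half beat 18, quarter beat 10.5) and applies them to the 'student' example.

```python
# Sketch of the beat-to-font-size mapping described above.
FONT_SIZE_BY_BEAT = {
    "strong": 32,       # first-accent syllable
    "one": 28,          # full beat
    "half": 18,         # 1/2 beat (e.g. the secondary-accent syllable)
    "quarter": 10.5,    # 1/4 beat (e.g. lone consonants)
}

# "student" rendered as (text, beat) pairs following the example in the text.
syllables = [("s", "quarter"), ("tu", "strong"), ("den", "half"), ("t", "quarter")]
for text, beat in syllables:
    print(f"{text}: {FONT_SIZE_BY_BEAT[beat]} pt")
```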
- rhythm rules for foreign language syllables are determined in the same way as the rhythm rules for foreign phonemes.
- the rhythm rules for national language sentences reflecting the strength of foreign language sentences are as follows.
- the rhythm analysis notation of a sentence may be visually displayed through various changes such as font size, color, specific shapes and the arrangement of the foreign language and native language text.
- the native language sentence reflecting the predetermined rhythm rule based on the foreign language pronunciation is transmitted to the native language output module 334 in a state in which the foreign language and the native language are mixed, and the native language output module displays the sentence in that mixed state.
- the foreign language conversion unit 340 includes a syllable analysis module 341 which derives and outputs the syllables of each foreign language word separated by the foreign language word separator, and a foreign language output module 342 which transmits the derived foreign language syllables to the display unit so that they are visually distinguished.
- the syllables derived from the foreign language converter 340 and the syllables converted into the native language by the native language converter 330 are matched with each other.
- the display unit 500 may display, matched to each other, the foreign language words or sentences with visually distinguished syllables transmitted from the foreign language output module 342 of the foreign language converter 340, and the native language words or sentences, mixed with foreign phonemes, transmitted from the national language output module 334 of the national language converter 330.
- each word, which is the minimum unit of meaning, is displayed by applying the rules passed through the conversion server as they are; when words are combined to express meaning or form a sentence, key words such as nouns, main verbs, adjectives, adverbs and interjections have the conversion rules applied as they are, while function words such as pronouns, prepositions, modal verbs, be verbs, qualifiers and conjunctions are distinguished from the key words by changing at least one of a predetermined shape, font size and color to a predetermined setting value before being displayed on the screen.
- for each word, which is the minimum unit of meaning, the first accent (1), the second accent (2) and the unaccented part (0) are represented on the basis of the syllable parts generated through the national language conversion unit 330 and the foreign language conversion unit 340 of the conversion server, and images of one or both hands, one or both feet, other parts of the body, movements or tools can be shown at the corresponding syllables.
- the present invention is to move from the pronunciation close to the teeth in the oral cavity to the pronunciation of the laryngeal language and to sequentially express the articulation points of the pronunciation by the native language, and to write the actual sound and the native language of the foreign language so that the actual sound and the foreign language are the same. By matching, effective language acquisition can be achieved.
- the sentence of the foreign language is divided into word units, syllable units, and phoneme units, and the separated foreign phonemes are divided into one of the consonants and vowels of the native language according to the prescribed foreign pronunciation rules.
- Generates national phonemes and generates and displays national language syllables, national language words, and national language sentences according to a predetermined combination rule, displaying only syllables among the pronunciations of foreign languages and displaying the foreign phonemes corresponding to the rest of the pronunciation without conversion explain the sequence of steps.
- The foreign language and native language display service using the phonetic symbols can be provided through a wired or wireless communication network. In this case, the service can be accessed by downloading an application program after visiting a website, and membership registration for the service can be provided. Since this series of processes is well-known technology, a detailed description thereof is omitted.
- the foreign language sentences input through the foreign language input unit 100 are separated into word units, and each of the separated foreign language words is separated into phonemes marked with accents according to predetermined phonetic symbols (for example, MPA symbols).
- the separated phonemes are transmitted to the pronunciation rule analysis module 331, and the phonemes that correspond to the syllables of the foreign language words are matched, according to the pronunciation rules of the pronunciation rule analysis module 331, with the consonants and vowels of the native language corresponding to the pronunciation of the foreign language phonemes.
- each such phoneme is converted into one of the native language phonemes, while a phoneme that does not correspond to a syllable of the foreign language word among the separated foreign language phonemes is output as the foreign language phoneme itself.
- the combination rule analysis module 332 combines the native language consonants and vowels according to a predetermined combination rule and then outputs the combined native language syllables, words, and sentences.
- this process is repeated for all syllables included in the input foreign language sentence, for the foreign language words containing those syllables, and for the foreign language sentence containing those words; since converting foreign language words and sentences into native language words and sentences is the same series of processes as converting a foreign language syllable, a detailed description thereof is omitted.
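- The combination step can be illustrated, under the assumption that the native language is Korean, with the standard Unicode Hangul composition formula; the actual combination rules of the combination rule analysis module 332 are not limited to this sketch.

```python
# Illustrative combination step, assuming the native language is Korean: a converted
# onset consonant and vowel are combined into one precomposed Hangul syllable block
# using the standard Unicode composition formula (no final consonant for simplicity).

CHOSEONG = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"
JUNGSEONG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"

def combine_syllable(onset, vowel):
    """Compose an onset consonant and a vowel into one Hangul syllable block."""
    l = CHOSEONG.index(onset)
    v = JUNGSEONG.index(vowel)
    return chr(0xAC00 + (l * 21 + v) * 28)

print(combine_syllable("ㅍ", "ㅜ"))  # -> '푸'
```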
- the rhythm rule analysis module 333 applies a predetermined rhythm rule to the native language phonemes and syllables, mixed with foreign language phonemes, provided from the pronunciation rule analysis module 331 and the combination rule analysis module 332.
- the native language phonemes, syllables, and words are generated so as to reflect the stresses included in the pronunciation of the foreign language syllables.
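- A hedged sketch of this stress carry-over is shown below; the stress codes (1 = first accent, 2 = second accent, 0 = unstressed) follow the convention used elsewhere in this description, while the display styles are assumed values.

```python
# Sketch of the rhythm-rule step: stress marks taken from the foreign pronunciation
# are carried over onto the generated native-language syllables. The style values
# attached to each stress level are assumptions for illustration.

STRESS_STYLE = {1: {"font_size": "large", "bold": True},
                2: {"font_size": "medium"},
                0: {"font_size": "small"}}

def apply_rhythm_rule(native_syllables, stress_marks):
    """Attach the foreign syllable's stress and a display style to each converted syllable."""
    return [(syl, stress, STRESS_STYLE[stress])
            for syl, stress in zip(native_syllables, stress_marks)]

print(apply_rhythm_rule(["푸", "t"], [1, 0]))
```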
- the native language sentence generated by the rhythm rule analysis module 333, in a state in which the foreign language and the native language are mixed, is provided to the native language output module 334, and the native language output module processes this mixed sentence so that it is displayed visually in various ways, such as a preset font, font size, color, specific shape, and character arrangement.
- At least one of the foreign language phonemes and the native language phonemes, syllables, words, and sentences output by the native language output module 334 is displayed on the screen through the display unit 500.
- the foreign language syllable parts derived from the foreign language words by the foreign language output module 342 are visually distinguished from the remaining phoneme parts of the foreign language words and displayed on the screen through the display unit 500.
- the foreign language words or sentences, whose syllables are visually distinguished as transmitted from the foreign language output module 342 of the foreign language conversion unit 340, and the native language words or sentences, in which the native language and the foreign language are mixed, as transmitted from the native language output module 334 of the native language conversion unit 330, are displayed so as to match each other.
- FIG. 9 is a schematic diagram illustrating a movement learning conversion principle in the exercise learning apparatus 600 based on a foreign language rhythm motion detection sensor according to an embodiment of the present invention.
- the motion learning conversion principle used in the exercise learning apparatus 600 based on a foreign language rhythm motion detection sensor will be described.
- the minimum unit of sound that composes speech is called a phoneme, and phonemes are gathered into syllables.
- a syllable sounds as a combination of consonants and vowels or as a vowel itself.
- the syllables are gathered into words and the words are gathered into sentences.
- A foreign language has its own stresses, breaks, and intonations that differ from those of the native language, and a learner should be able to distinguish these tone units well in order to listen rhythmically and speak naturally.
- Vowels, which are components of the phoneme (p1), are sounds produced by changes in the shape of the mouth, and consonants are sounds produced by changing the articulation point of the tongue. Therefore, consonants and vowels can be expressed with the mouth and the tongue by combining the mouth movements applied to the vowels with the three sounds produced by the movement of the lips among the consonants.
- One or more syllables of phonemes are gathered together to form words, the minimum unit of meaning (p3).
- A word is divided into a first accent, whose syllable is pronounced strongly; a second accent, which is pronounced more weakly; unstressed syllables; and silence with no sound (p4). Therefore, by stimulating the cerebellum with the mouth shapes, tongue positions, and movements such as strong hand claps and foot steps on the first accent and weak hand claps and foot steps on the second accent, the neurotransmitters responsible for memory are continuously secreted and the content can be remembered for a long time.
- Sentences, the unit of communication in which words are combined, have emphasized strong sounds, weak sounds, contractions, and intonation (p5). Basically, the nouns, main verbs, adjectives, and adverbs, which are the core elements of the sentence, are strongly emphasized, and the functional elements are pronounced weakly.
- In this way, the neurotransmitters responsible for memory are continuously secreted, and the content can be stored in long-term memory (p9).
- FIG. 10 is a block diagram illustrating a configuration of the exercise learning apparatus 600 based on a foreign language rhythm motion detection sensor according to an exemplary embodiment of the present invention.
- FIG. 11 is a diagram illustrating a system using information scored by foreign language rhythm motion detection in the exercise learning apparatus 600 based on the foreign language rhythm motion detection sensor of FIG. 10.
- FIGS. 12 to 15 are diagrams illustrating setting information for analyzing and detecting phonemes, syllables, and sentences in the exercise learning apparatus 600 based on a foreign language rhythm motion detection sensor.
- the exercise learning apparatus 600 based on a foreign language rhythm motion detection sensor performs data transmission and reception with a management server 700, a game server 800, a cloud server 900, and the like through a network N.
- the network N is a high-speed backbone communication network capable of large-capacity, long-distance voice and data services, and may be a next-generation wired or wireless network for providing Internet or high-speed multimedia services.
- the network N may be a synchronous mobile communication network or an asynchronous mobile communication network.
- An example of an asynchronous mobile communication network is a communication network of the wideband code division multiple access (WCDMA) scheme.
- In this case, the mobile communication network may include a Radio Network Controller (RNC).
- Although the WCDMA network is taken as an example here, the network may also be an IP network based on a next-generation communication network such as a 3G LTE network, a 4G network, or another IP-based network.
- The network N performs the function of mutually communicating signals and data between the exercise learning device 600 based on the foreign language rhythm motion detection sensor, the management server 700, the game server 800, the cloud server 900, and other systems.
- the exercise learning apparatus 600 based on a foreign language rhythm motion detection sensor includes a sensor unit 610, a transceiver 620, a controller 630, and a storage unit 640.
- the sensor unit 610 is for detecting any voice or motion expressing a foreign language rhythm, and includes a 'motion detection sensor', a 'sound detection sensor', a 'vibration detection sensor', and any other means capable of such detection.
- the transceiver 620 performs data transmission and reception with the cloud server, other wired/wireless PCs, and other systems by linking the sensing information with the score information on the motion recognition detected by the sensor unit 610 and analyzed by the controller.
- the storage unit 640 stores setting information for analyzing phonemes, syllables, and sentences and detecting motion, as well as information on stress and on strong and weak sounds for each phoneme, syllable, and sentence.
- the controller 630 controls the sensor unit 610 to detect any voice and motion expressing a foreign language rhythm, analyzes and scores any voice and motion detected by the sensor unit 610, and includes a voice recognition module 631, a motion recognition module 632, and a scoring utilization module 633.
- the module may mean a functional and structural combination of hardware for performing the technical idea of the present invention and software for driving the hardware.
- a module may mean a logical unit of predetermined code and the hardware resources for executing that code, and does not necessarily mean physically connected code or one kind of hardware, as can easily be inferred by an average expert in the technical field.
- the speech recognition module 631 includes phoneme recognition means 631a, syllable recognition means 631b, and sentence recognition means 631c.
- the phoneme recognition means 631a recognizes and analyzes the consonant and vowel mouth actions corresponding to the phoneme movements of the learner from the sensor unit 610. More specifically, the phoneme recognition means 631a analyzes the eight mouth shape types M1 to M8 and the eight tongue position types T1 to T8. FIG. 4 is a diagram showing the eight mouth shape types M1 to M8 analyzed by the phoneme recognition means 631a, and FIG. 5 is a table showing the eight tongue position types T1 to T8 analyzed by the phoneme recognition means 631a.
- the vowels, among the components of the phoneme, which is the minimum unit of sound, are the sounds produced by changes in the mouth movement.
- a combination of five mouth movements applied to vowels and three sounds made by movement of the lips among consonants may be expressed as eight mouth shapes (M1 to M8) as shown in FIG. 4.
- the phoneme recognition means 631a, based on the eight mouth shapes appearing according to the mouth motion, analyzes M1 (Mouth 1) corresponding to the pronunciation 'a'; M2 (Mouth 2) corresponding to the pronunciations 'i' and 'e'; M3 (Mouth 3) corresponding to the pronunciations 'I' and '∂'; M4 (Mouth 4) corresponding to the pronunciation 'u'; and M5 (Mouth 5) corresponding to the pronunciation 'o'.
- the phoneme recognition means 631a also analyzes, based on the movement of the lips among the consonants, M6 (Mouth 6) corresponding to the pronunciations 'b', 'p', and 'm'; M7 (Mouth 7) corresponding to the pronunciations 'f' and 'v'; and M8 (Mouth 8) corresponding to the pronunciations 's' and 'z'.
- Regarding the eight tongue position types T1 to T8 for the consonant and vowel tongue positions in phoneme movements: the consonants, among the components of the phoneme, which is the minimum unit of sound, are sounds produced while changing the articulation points, that is, the positions of the tongue.
- Vowels and consonants can therefore be represented by eight tongue positions obtained by changing the position of the tongue.
- the phoneme recognition means 631a analyzes, based on the eight positions of the tongue, the following tongue position types:

Type | Tongue position | Pronunciations
---|---|---
T1 (Tongue 1) | Below; the basic position | 'a', 'o', 'u', '∂', 'I'
T2 (Tongue 2) | Below; behind the lower teeth | 's', 'z'
T3 (Tongue 3) | Upper; at the tip of the upper molars | 'r'
T4 (Tongue 4) | Upper; at the middle of the upper teeth | 'i', 'e', 'æ'
T5 (Tongue 5) | Front; at the front of the upper teeth | 'θ', 'ð'
T6 (Tongue 6) | Upper; at the back of the upper teeth | 'l'
T7 (Tongue 7) | Upper; in front of the hard palate | 'd', 't', 'n'
T8 (Tongue 8) | Upper; behind the soft palate | 'k', 'g', 'ŋ'
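- The mouth-shape and tongue-position assignments listed above can be collected into simple lookup tables, as in the following sketch; the mapping is transcribed from this description for illustration, and the apparatus may use a different internal representation.

```python
# Lookup tables transcribed from the mouth-shape (M1-M8) and tongue-position (T1-T8)
# descriptions above, plus a helper that pairs a phoneme with both codes.

MOUTH_SHAPE = {
    "a": "M1", "i": "M2", "e": "M2", "I": "M3", "∂": "M3",
    "u": "M4", "o": "M5",
    "b": "M6", "p": "M6", "m": "M6",
    "f": "M7", "v": "M7",
    "s": "M8", "z": "M8",
}

TONGUE_POSITION = {
    "a": "T1", "o": "T1", "u": "T1", "∂": "T1", "I": "T1",
    "s": "T2", "z": "T2",
    "r": "T3",
    "i": "T4", "e": "T4", "æ": "T4",
    "θ": "T5", "ð": "T5",
    "l": "T6",
    "d": "T7", "t": "T7", "n": "T7",
    "k": "T8", "g": "T8", "ŋ": "T8",
}

def classify_phoneme(phoneme):
    """Return the (mouth shape, tongue position) codes expected for a phoneme, if known."""
    return MOUTH_SHAPE.get(phoneme), TONGUE_POSITION.get(phoneme)

print(classify_phoneme("s"))  # ('M8', 'T2')
```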
- the syllable recognition means 631b uses the results of the phoneme analysis according to the shape of the mouth and tongue by the phoneme recognition means 631a to perform analysis by recognizing the syllable movement of the learner. More specifically, one or more syllables combining phonemes form words, the minimum unit of meaning. A word is divided into the 'first accent', in which the syllable is pronounced strongly with a stress above a preset frequency; the 'second accent', which is pronounced more weakly than the first accent; 'unstressed syllables'; and 'silence' with no sound.
- the syllable recognition means 631b analyzes the first accent and the second accent in at least one syllable included in the words input from the sensor unit 610.
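- A minimal sketch of this first / second accent split is given below, assuming that each syllable is reduced to a single measured intensity value and that the thresholds are preset configuration values; both assumptions are for illustration only.

```python
# Sketch of the syllable classification described above. The threshold numbers are
# assumed configuration values, not those of the actual apparatus.

FIRST_ACCENT_THRESHOLD = 0.8   # assumed preset level for the first accent
SECOND_ACCENT_THRESHOLD = 0.5  # assumed preset level for the second accent

def classify_syllable(intensity):
    """Classify a syllable as first accent, second accent, unstressed, or silence."""
    if intensity >= FIRST_ACCENT_THRESHOLD:
        return "first_accent"
    if intensity >= SECOND_ACCENT_THRESHOLD:
        return "second_accent"
    if intensity > 0.0:
        return "unstressed"
    return "silence"

print([classify_syllable(x) for x in (0.9, 0.6, 0.2, 0.0)])
```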
- the syllable recognition means 631b sets a first accent motion, such as a strong hand clap or a step of the foot, for the first accent, and a second accent motion, such as a weak or quick hand clap or a light step, for the second accent.
- the motion detection settings for recognizing these motions of the learner through the sensor unit 610 are stored in the storage unit 640.
- In this way, by stimulating the cerebellum through these motions, the neurotransmitters responsible for memory are constantly secreted, which functions to ensure long-term memory.
- the sentence recognizing means 631c performs rhythm motion setting and recognition on the sentence. More specifically, the sentence, which is a unit of communication in which the words analyzed by the syllable recognition means 631b are combined, has emphasized strong sounds, weak sounds, contractions, and intonation.
- the sentence recognition means 631c analyzes a sentence composed of at least one or more words, extracts the nouns, main verbs, adjectives, and adverbs that are the core elements of the sentence, sets them as strong sounds, sets the functional elements as weak sounds, and stores these settings in the storage unit 640.
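- The following sketch illustrates this rule, assuming the part-of-speech tags are supplied by an external tagger; the tag names and the sample sentence are illustrative assumptions.

```python
# Hedged sketch of the sentence-recognition rule: content words (nouns, main verbs,
# adjectives, adverbs) are marked as strong sounds, all other words as weak sounds.

CORE_POS = {"noun", "main_verb", "adjective", "adverb"}

def mark_sentence(tagged_words):
    """Return (word, 'strong'|'weak') pairs for a POS-tagged sentence."""
    return [(w, "strong" if pos in CORE_POS else "weak") for w, pos in tagged_words]

print(mark_sentence([("I", "pronoun"), ("put", "main_verb"),
                     ("my", "pronoun"), ("hands", "noun"),
                     ("on", "preposition"), ("knees", "noun")]))
```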
- the motion recognition module 632 sets, for the strong sounds set by the sentence recognition means 631c, a strong-sound motion such as a strong hand clap or stepping with both feet, at or above a preset decibel (dB) level or a standard value indicated on the X, Y, and Z axes, so that it is recognized by the sensor unit 610.
- for the weak sounds, a weak-sound motion such as a weak hand clap or stepping with one foot, below the preset decibel (dB) level or the standard value indicated on the X, Y, and Z axes, is recognized by the sensor unit 610.
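- This detection rule can be sketched as follows, assuming one combined event per clap or step; the decibel and acceleration thresholds are assumed values, not those of the actual apparatus.

```python
# Illustrative detection rule matching the description above: a motion counts as a
# strong-sound motion when either the sound level (dB) or the acceleration magnitude
# on the X, Y, Z axes reaches a preset standard value; otherwise it is a weak-sound
# motion. Threshold numbers are assumptions.

import math

SOUND_DB_THRESHOLD = 70.0   # assumed preset decibel level
ACCEL_THRESHOLD = 15.0      # assumed standard value for |(x, y, z)| in m/s^2

def classify_motion(sound_db, accel_xyz):
    """Return 'strong' or 'weak' for one detected clap/step event."""
    magnitude = math.sqrt(sum(a * a for a in accel_xyz))
    if sound_db >= SOUND_DB_THRESHOLD or magnitude >= ACCEL_THRESHOLD:
        return "strong"
    return "weak"

print(classify_motion(75.0, (3.0, 2.0, 18.0)))  # strong
print(classify_motion(55.0, (1.0, 2.0, 3.0)))   # weak
```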
- the motion recognition module 632 stimulates the cerebellum through the strong and weak motions stored in the storage unit 640 together with the mouth shape and tongue shape analyzed by the phoneme recognition means 631a, so that the neurotransmitters responsible for memory are constantly secreted, which functions to ensure long-term memory.
- the motion recognition module 632 sets strong-sound motions, such as a strong hand clap or a foot step, on key words such as put, hands, and knees, and sets weak-sound motions, such as a weak or quick hand clap or a light step, on function words such as I, my, and on, so that detection by the sensor unit 610 is performed.
- the scoring utilization module 633 uses the sensor unit 610 to detect, as foreign language rhythm motions, the first and second accent motions set by the syllable recognition means 631b and the strong- and weak-sound motions set by the sentence recognition means 631c.
- the scoring utilization module 633 analyzes and scores whether recognition through the sensor unit 610 is performed for the first and second accent motions matching the first accent and the second accent of words combining the phonemes set and recognized by the phoneme recognition means 631a, and for the strong and weak motions matching the key words and the function words in the sentence. In this scoring process, it can be analyzed and scored whether the learner performed the first or second accent motion matching the first or second accent of the word, and the strong or weak motion matching the key word or the function word of the sentence.
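- A minimal scoring sketch consistent with this paragraph is shown below; the percentage scale and the simple equality test between expected and detected labels are assumptions for illustration.

```python
# Sketch of the matching-and-scoring step: each syllable or word earns a point when
# the detected motion category matches the expected one. The 0-100 scale is assumed.

def score_rhythm(expected, detected):
    """expected/detected: equal-length lists of labels such as 'first_accent' or 'strong'."""
    hits = sum(1 for e, d in zip(expected, detected) if e == d)
    return round(100.0 * hits / len(expected), 1) if expected else 0.0

expected = ["first_accent", "unstressed", "strong", "weak"]
detected = ["first_accent", "unstressed", "weak", "weak"]
print(score_rhythm(expected, detected))  # 75.0
```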
- the scoring utilization module 633 may control the transceiver 620 so that the score obtained through the analysis is transmitted to the management server 700, the game server 800, the cloud server 900, and other application servers through the network N corresponding to the wired/wireless communication network.
- the scoring utilization module 633 may provide information for measuring and determining the score in the management server 700, the game server 800, the cloud server 900, and the like by connecting the score through the wired or wireless network. Based on the measurement and determination made with the provided information, the learner's individual score management, games, network competitions, and the like may be performed. This allows foreign language learning to be combined with the learner's natural desire to win, which helps sustain motivation and induce competition.
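- A hypothetical sketch of handing the score to a remote server over the network N is shown below; the endpoint URL and the payload fields are illustrative assumptions, not an actual API of the management, game, or cloud servers.

```python
# Hypothetical score upload over the network N. The URL and JSON fields are assumed.

import json
import urllib.request

def upload_score(learner_id, score, url="https://example.com/api/scores"):
    payload = json.dumps({"learner_id": learner_id, "score": score}).encode("utf-8")
    request = urllib.request.Request(url, data=payload,
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:  # sends the score to the server
        return response.status

# upload_score("learner-001", 75.0)  # would return the HTTP status code
```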
- The connection method and the measurement method through the network N include all means of connecting the same place or separate places to each other, including wired/wireless dedicated servers and cloud servers.
- FIG. 16 is a flowchart illustrating an exercise learning method using an exercise learning apparatus based on a foreign language rhythm motion detection sensor according to an exemplary embodiment of the present invention.
- the exercise learning apparatus 600 based on a foreign language rhythm motion detection sensor recognizes at least one or more sentences and actions from a learner by performing an operation of the sensor unit 610 (S110).
- the sentence recognized by the exercise learning apparatus may be at least one or more sentences, including both the language of a country that uses English as a native language and the language of a country that uses English as a second foreign language.
- After operation S110, the exercise learning apparatus 600 based on the foreign language rhythm motion detection sensor performs phoneme analysis (S120).
- That is, the exercise learning apparatus 600 based on the foreign language rhythm motion detection sensor recognizes and analyzes the consonant and vowel mouth motions corresponding to the phoneme movements of the learner from the sensor unit 610, and analyzes the eight mouth shape types M1 to M8 and the eight tongue position types T1 to T8.
- the exercise learning apparatus 600 based on the foreign language rhythm motion detection sensor performs word analysis (S130). More specifically, it uses the results of the phoneme analysis according to mouth shape and tongue shape in step S120 to analyze the learner's syllable motion: each recognized word is divided into syllables and then classified into the 'first accent', pronounced strongly with a stress above the preset frequency; the 'second accent', pronounced more weakly than the first accent; 'unstressed syllables'; and 'silence' with no sound.
- the exercise learning apparatus 600 based on the foreign language rhythm motion detection sensor performs syllable motion analysis (S140). That is, it analyzes whether a first accent motion, such as a strong hand clap or a step with both feet, and a second accent motion, such as a weak hand clap or a step with one foot, match the first accent and the second accent in at least one syllable included in the words input from the sensor unit 610.
- the exercise learning apparatus 600 based on the foreign language rhythm motion detection sensor performs sentence analysis (S150). That is, it groups the words analyzed in step S120 into sentences, the unit of communication in which the words are combined; it then analyzes each sentence, which consists of at least one word, extracts the nouns, main verbs, adjectives, and adverbs that are the core elements of the sentence and recognizes them as emphasized strong sounds, and recognizes the functional elements as weak sounds.
- the exercise learning apparatus 600 based on the foreign language rhythm motion detection sensor performs rhythm motion analysis (S160). That is, for the strong sounds recognized in step S150, it determines whether, at the same time as the strong sound is recognized, a strong-sound motion such as a strong hand clap or a step with both feet, at or above the preset decibel (dB) level or the standard value indicated on the X, Y, and Z axes, is recognized by the sensor unit 610; for the weak sounds recognized in step S150, it determines whether a weak-sound motion such as a weak hand clap or a step with one foot, below the preset decibel (dB) level or the standard value indicated on the X, Y, and Z axes, is recognized by the sensor unit 610.
- the exercise learning apparatus 600 based on the foreign language rhythm motion detection sensor performs scoring (S170). That is, it scores the detection of the first and second accent motions matching the first accent and the second accent in the syllables recognized by the syllable motion analysis of step S140, and scores the detection of the strong and weak motions corresponding to the strong and weak sounds in the sentences recognized by the rhythm motion analysis of step S160. In this case, as the elements to be scored by the sensor unit 610, scoring may be performed according to sound, vibration, and motion sensor detection.
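- The combination of these partial scores can be sketched as follows; the equal weighting of the accent and rhythm scores and the per-channel sensor weights are assumptions for illustration.

```python
# Hedged sketch of the S170 scoring stage: the syllable-level accent score (S140) and
# the sentence-level rhythm score (S160) are combined with a per-channel sensor score.
# The weights below are assumed values.

CHANNEL_WEIGHTS = {"sound": 0.4, "vibration": 0.2, "motion": 0.4}  # assumed weights

def combine_scores(accent_score, rhythm_score, channel_scores):
    """accent_score, rhythm_score: 0-100; channel_scores: per-sensor 0-100 values."""
    sensor_score = sum(CHANNEL_WEIGHTS[c] * s for c, s in channel_scores.items())
    return round((accent_score + rhythm_score + sensor_score) / 3.0, 1)

print(combine_scores(80.0, 70.0, {"sound": 90.0, "vibration": 60.0, "motion": 75.0}))  # 76.0
```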
- After operation S170, the exercise learning apparatus 600 based on the foreign language rhythm motion detection sensor performs a utilization process through a network (S180). That is, it provides the score information generated in operation S170 to the management server, the game server, the cloud server, and other application servers through the network N corresponding to the wired/wireless communication network, and the learner's individual score management, games, network competitions, and the like may be performed based on the provided information. In this way, foreign language learning can be combined with the learner's intrinsic desire for victory, which helps sustain motivation and induce competition.
- the invention can also be embodied as computer readable code on a computer readable recording medium.
- Computer-readable recording media include all kinds of recording devices that store data that can be read by a computer system.
- Examples of computer-readable recording media include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices, and they also include media implemented in the form of carrier waves (e.g., transmission over the Internet).
- the computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
- functional programs, codes and code segments for implementing the present invention can be easily inferred by programmers in the art to which the present invention belongs.
- the exercise learning device based on a foreign language rhythm motion detection sensor may be implemented as a wearable device having a motion recognition module configured to recognize motions such as hand or foot motions by means of the sensor unit, and it can be configured to be worn on the human body.
- the teaching material includes a pronunciation rule unit that divides foreign language pronunciation into pronunciations separated by mouth shape and pronunciations separated by tongue position, and a sentence expression unit that displays a rhythm image on the foreign language and the native language according to the pronunciation rules of the pronunciation rule unit and a preset rhythm rule.
- the sentence expression unit may include a sentence expression in which a rhythm image according to stress is displayed on the foreign language or the native language.
- the sentence expression unit may include a foreign language expression unit in which each word forming the foreign language sentence is arranged; a native language expression unit arranged so that the native language, or a mixture of the native language and the foreign language, matches the foreign language sentence; and a motion image expression unit arranged to match the rhythm image according to the stress of the sentence arranged in the foreign language expression unit or the native language expression unit.
- the sentence expression unit may further include an interpretation unit describing the meaning of the foreign language sentence in the native language.
- The rhythm image may be represented by any one or more selected from a difference in font size, a difference in color, a difference in letter thickness, and a specific shape.
- The motion image may be expressed as any one or more of a hand motion, a foot motion, and a body motion.
- The hand motion of the motion image is a hand clap, and the difference between a strong clap and a weak clap indicates the difference in stress.
- The foot motion of the motion image is a foot step, and stepping with both feet versus stepping with one foot indicates the difference in stress.
- the pronunciation of the input foreign language is divided into phonemes, the separated foreign language pronunciation is converted into native language phonemes according to the pre-stored pronunciation rules, and native language syllables, words, and sentences are generated from those phonemes and displayed.
- the foreign language is written in the native language moving from the pronunciations formed close to the teeth in the oral cavity toward the laryngeal pronunciations, so that the articulation points of the pronunciation are acquired naturally.
- By matching the actual sound with the native language notation, effective language acquisition can be achieved, and exercise learning devices and learning materials based on foreign language rhythm motion detection sensors can be used, so the efficiency, reliability, and accuracy of foreign language education can be advanced.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Theoretical Computer Science (AREA)
- Business, Economics & Management (AREA)
- Multimedia (AREA)
- Acoustics & Sound (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Entrepreneurship & Innovation (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Electrically Operated Instructional Devices (AREA)
Claims (53)
- 외국어 문장을 입력하는 외국어 입력부(100);상기 외국어 입력부(100)에서 입력된 외국어 문장을 단어 단위로 분리한 후, 분리된 외국어 단어를 기 정해진 발음기호를 이용하여 음소로 분리하며, 분리된 외국어 음소 중 외국어 단어의 음절에 해당되는 부분을 기 정해진 외국어 발음규칙에 따라 자국어 자음 및 모음 중 하나의 자국어 음소로 생성하고, 생성된 자국어 음소를 외국어 결합규칙에 따라 결합 및 조합하여 자국어 음절, 단어, 및 문장을 생성하여 표시하거나, 분리된 외국어 음소 중 외국어 단어의 음절에 해당되지 않는 부분은 기 정해진 외국어 발음규칙에 따라 외국어 음소로 표시하도록 하는 변환서버(300); 및상기 변환서버(300)의 자국어 문장 및 입력된 외국어 문장 중 적어도 하나를 화면에 표시하는 표시부(500);를 포함하여 구성되는 것을 특징으로 하는 외국어 독음 및 표시장치.
- 제 1 항에 있어서, 상기 변환서버(300)는,상기 입력된 외국어 문장을 단어 단위로 분리하는 외국어 단어 분리부(310);상기 외국어 단어 분리부(310)로부터 분리된 각 외국어 단어를 기 정해진 발음기호를 이용하여 음소 단위로 분리하되 강세도 표기하는 외국어 음소 분리부(320);상기 외국어 음소 분리부(320)로부터 분리된 외국어 음소 중 외국어 단어의 음절에 해당되는 음소의 발음을 기 정해진 외국어 및 자국어 간의 발음규칙을 근거로 하나의 자국어 음소 발음에 매칭시키고, 외국어 음소에 매칭된 자국어 음소를 기 정해진 외국어의 결합규칙을 근거로 자국어의 음절, 단어 및 문장 중 적어도 하나로 생성하여 표시부로 전달하며, 동시에 분리된 외국어 음소 중 외국어 단어의 음절에 해당되지 않는 음소의 발음은 기 정해진 외국어 발음규칙에 따라 외국어 음소 그 자체로 표시부로 전달하는 자국어 변환부(330);를 포함하여 구성되는 것을 특징으로 하는 외국어 독음 및 표시장치.
- 제 2 항에 있어서, 상기 자국어 변환부(330)는,상기 외국어 음소 분리부로부터 분리된 외국어 음소 중 외국어 단어의 음절에 해당되는 음소의 발음을 기 정해진 발음기호에 따라 외국어의 음소의 발음과 매칭되는 자국어의 자음과 모음 중 하나의 자국어 음소를 출력하고, 동시에 분리된 외국어 음소 중 외국어 단어의 음절에 해당되지 않는 음소의 발음은 외국어 음소 그 자체로 출력하는 발음 규칙 분석모듈(331)과,기 정해진 결합규칙에 따라 외국어 음절의 발음과 매칭되는 자국어의 자음과 모음이 결합된 자국어 음절을 출력하는 결합규칙 분석모듈(332)과,상기 발음 규칙 분석모듈(331)과 상기 결합규칙 분석모듈(332)에서 출력된 외국어 음소, 및 외국어 음절과 매칭되는 자국어의 음소 및 음절을 출력하여 상기 표시부(500)로 전달하는 자국어 출력모듈(334);을 포함하여 구성되는 것을 특징으로 하는 외국어 독음 및 표시장치.
- 제 3 항에 있어서, 상기 발음규칙 분석모듈(331)은,기 정해진 발음기호에 따라 외국어의 음소의 발음과 매칭되는 자국어의 자음과 모음 중 하나의 자국어 음소를 출력하되, 외국어 음소 중 외국어 단어의 음절에 해당되는 음소의 발음만을 자국어 음소로 출력하고, 외국어 음소 중 외국어 단어의 음절에 해당되지 않는 음소의 발음은 기 정해진 외국어 발음규칙에 따라 외국어 음소 그 자체로 출력하도록 구성되는 것을 특징으로 하는 외국어 독음 및 표시장치.
- 제 2 항에 있어서, 상기 자국어 변환부(330)는,기 정해진 리듬규칙에 따라 입력된 외국어의 음절의 발음에 포함된 강세를 상기 결합규칙 분석모듈(332)의 자국어 음절에 반영하는 리듬규칙 분석모듈(333)을 더 포함하여 구성되는 것을 특징으로 하는 외국어 독음 및 표시장치.
- 제 5 항에 있어서, 상기 리듬규칙 분석모듈(333)은입력된 외국어의 음절의 발음에 포함된 강세를 기 정해진 리듬규칙에 따라 생성된 자국어 음절, 단어 및 문장에 반영되도록 구비되어 상기 자국어 출력모듈로 전달되도록 구비되는 것을 특징으로 하는 외국어 독음 및 표시장치.
- 제 3 항에 있어서, 상기 자국어 출력모듈(334)은,입력된 외국어 문장 및 외국어 음절과 매칭되어 생성된 자국어의 음절 중 적어도 하나의 리듬규칙이 반영된 음절 및 단어를 소정형상 및 폰트 크기, 색상 중 적어도 하나를 기 정해진 설정값으로 변경하여 화면에 표시되도록 처리하는 것을 특징으로 하는 외국어 독음 및 표시장치.
- 제 2 항에 있어서, 상기 변환서버(300)는상기 외국어 단어 분리부(310)로부터 분리된 각 외국어 단어의 외국어 음절 부분을 시각적으로 구분되게 표시하도록 상기 표시부(500)로 전달하는 외국어 변환부(340)를 더 포함할 수 있는 것을 특징으로 하는 외국어 독음 및 표시장치.
- 제 2 항에 있어서, 상기 외국어 변환부(340)는,상기 외국어 단어 분리부(310)로부터 분리된 각 외국어 단어에서 음절을 도출하여 출력하는 음절분석모듈(341)과, 도출된 외국어 음절을 시각적으로 구분되게 표시하도록 표시부로 전달하는 외국어 출력모듈(342)을 포함하여 구성되는 것을 특징으로 하는 외국어 독음 및 표시장치.
- 제 1 항에 있어서, 상기 표시부(500)는,상기 자국어 및 외국어의 변환을 표시할 때, 의미의 최소 단위인 개개의 단어는 상기 변환서버를 통과한 규칙을 그대로 적용하고, 개개의 단어가 결합하여 의미를 나타내거나 문장을 이룰때는 품사별 분석을 통해 기능어와 핵심어로 나누어 구분되게 표시하는 것을 특징으로 하는 외국어 독음 및 표시장치.
- 제 10 항에 있어서,상기 핵심어는 명사, 본동사, 형용사, 부사, 감탄사 중 어느 하나로, 변환규칙을 그대로 적용하며,상기 기능어는 대명사, 전치사, 조동사, Be동사, 한정사, 접속사 중 어느 하나로, 소정형상 및 폰트 크기, 색상 중 적어도 하나를 기 정해진 설정값으로 변경하여 화면에 표시되도록 처리하는 것을 특징으로 하는 외국어 독음 및 표시장치.
- 입력된 외국어 문장을 단어 단위로 분리하는 외국어 단어 분리부(310);상기 외국어 단어 분리부(310)로부터 분리된 각 외국어 단어를 기 정해진 발음기호를 이용하여 음소 단위로 분리하되 강세도 표기하는 외국어 음소 분리부(320); 및상기 외국어 음소 분리부(320)로부터 분리된 외국어 음소 중 외국어 단어의 음절에 해당되는 음소의 발음을 기 정해진 외국어 및 자국어 간의 발음규칙을 근거로 하나의 자국어 음소 발음에 매칭시키고, 외국어 음소에 매칭된 자국어 음소를 기 정해진 외국어의 결합규칙을 근거로 자국어의 음절, 단어 및 문장 중 적어도 하나로 생성하여 표시부로 전달하며, 동시에 분리된 외국어 음소 중 외국어 단어의 음절에 해당되지 않는 음소의 발음은 기 정해진 외국어 발음규칙에 따라 외국어 음소 그 자체로 표시부로 전달하는 자국어 변환부(330);를 포함하여 구성되는 것을 특징으로 하는 외국어 독음 및 표시장치의 변환서버.
- 제 12 항에 있어서, 상기 자국어 변환부(330)는,상기 외국어 음소 분리부로부터 분리된 외국어 음소 중 외국어 단어의 음절에 해당되는 음소의 발음을 기 정해진 발음기호에 따라 외국어의 음소의 발음과 매칭되는 자국어의 자음과 모음 중 하나의 자국어 음소를 출력하고, 동시에 분리된 외국어 음소 중 외국어 단어의 음절에 해당되지 않는 음소의 발음은 외국어 음소 그 자체로 출력하는 발음 규칙 분석모듈(331)과,기 정해진 결합규칙에 따라 외국어 음절의 발음과 매칭되는 자국어의 자음과 모음이 결합된 자국어 음절을 출력하는 결합규칙 분석모듈(332)과,상기 발음 규칙 분석모듈(331)과 상기 결합규칙 분석모듈(332)에서 출력된 외국어 음소, 및 외국어 음절과 매칭되는 자국어의 음소 및 음절을 출력하여 상기 표시부(500)로 전달하는 자국어 출력모듈(334);을 포함하여 구성되는 것을 특징으로 하는 외국어 독음 및 표시장치의 변환서버.
- 제 13 항에 있어서, 상기 발음규칙 분석모듈(331)은,기 정해진 발음기호에 따라 외국어의 음소의 발음과 매칭되는 자국어의 자음과 모음 중 하나의 자국어 음소를 출력하되, 외국어 음소 중 외국어 단어의 음절에 해당되는 음소의 발음만을 자국어 음소로 출력하고, 외국어 음소 중 외국어 단어의 음절에 해당되지 않는 음소의 발음은 기 정해진 외국어 발음규칙에 따라 외국어 음소 그 자체로 출력하도록 구성되는 것을 특징으로 하는 외국어 독음 및 표시장치의 변환서버.
- 외국어 문장을 입력하는 외국어 입력단계;상기 외국어 입력단계에서 입력된 외국어 문장을 단어 단위로 분리한 후, 분리된 외국어 단어를 기 정해진 발음기호를 이용하여 음소로 분리하며, 분리된 외국어 음소 중 외국어 단어의 음절에 해당되는 부분을 기 정해진 외국어 발음규칙에 따라 자국어 자음 및 모음 중 하나의 자국어 음소로 생성하고, 생성된 자국어 음소를 외국어 결합규칙에 따라 결합 및 조합하여 자국어 음절, 단어, 및 문장을 생성하여 표시하며, 분리된 외국어 음소 중 외국어 단어의 음절에 해당되지 않는 부분은 기 정해진 외국어 발음규칙에 따라 외국어 음소로 표시하도록 하는 변환단계; 및상기 변환단계에 의해 생성된 자국어 문장 및 입력된 외국어 문장 중 적어도 하나를 화면에 표시하는 표시단계;를 포함하여 구성되는 것을 특징으로 하는 외국어 독음 및 표시방법.
- 제 15 항에 있어서, 상기 변환단계는,상기 입력된 외국어 문장을 단어 단위로 분리하는 외국어 단어 분리단계;상기 외국어 단어 분리단계에서 분리된 각 외국어 단어를 기 정해진 발음기호를 이용하여 음소 단위로 분리하되 강세도 표기하는 외국어 음소 분리단계;상기 외국어 음소 분리단계로부터 분리된 외국어 음소 중 외국어 단어의 음절에 해당되는 음소의 발음을 기 정해진 외국어 및 자국어 간의 발음규칙을 근거로 하나의 자국어 음소 발음에 매칭시키고, 외국어 음소에 매칭된 자국어 음소를 기 정해진 외국어의 결합규칙을 근거로 자국어의 음절, 단어 및 문장 중 적어도 하나로 생성하여 표시부로 전달하며, 동시에 분리된 외국어 음소 중 외국어 단어의 음절에 해당되지 않는 음소의 발음은 기 정해진 외국어 발음규칙에 따라 외국어 음소 그 자체로 표시부로 전달하는 자국어 변환단계;를 포함하여 구성되는 것을 특징으로 하는 외국어 독음 및 표시방법.
- 제 16 항에 있어서, 상기 자국어 변환단계는,상기 외국어 음소 분리단계에서 분리된 외국어 음소 중 외국어 단어의 음절에 해당되는 음소의 발음을 기 정해진 발음기호에 따라 외국어의 음소의 발음과 매칭되는 자국어의 자음과 모음 중 하나의 자국어 음소를 출력하고, 동시에 분리된 외국어 음소 중 외국어 단어의 음절에 해당되지 않는 음소의 발음은 외국어 음소 그 자체로 출력하는 제1 단계와,기 정해진 결합규칙에 따라 외국어 음절의 발음과 매칭되는 자국어의 자음과 모음이 결합된 자국어 음절을 출력하는 제2 단계와,상기 제1 단계와 제2 단계에서 출력된 외국어 음소, 및 외국어 음절과 매칭되는 자국어의 음소 및 음절을 출력하여 화면에 전달하는 제3 단계;를 포함하여 구성되는 것을 특징으로 하는 외국어 독음 및 표시방법.
- 제 17 항에 있어서, 상기 제2 단계는,기 정해진 리듬규칙에 따라 입력된 외국어의 음절의 발음에 포함된 강세를 생성된 자국어 음절에 반영하도록 구성되는 것을 특징으로 하는 외국어 독음 및 표시방법.
- 제 17 항에 있어서, 상기 제2 단계는,입력된 외국어의 음절의 발음에 포함된 강세를 기 정해진 리듬규칙에 따라 생성된 자국어 음절, 단어 및 문장에 반영되도록 구비되는 것을 특징으로 하는 외국어 독음 및 표시방법.
- 제 17 항에 있어서, 상기 제2 단계는,입력된 외국어 문장 및 외국어 음절과 매칭되어 생성된 자국어의 음절 중 적어도 하나의 리듬규칙이 반영된 음절 및 단어를 소정형상 및 폰트 크기, 색상 중 적어도 하나를 기 정해진 설정값으로 변경하여 화면에 표시되도록 처리하는 것을 특징으로 하는 외국어 독음 및 표시방법.
- 제 16 항에 있어서, 상기 변환단계는상기 외국어 단어 분리단계에서 분리된 각 외국어 단어의 외국어 음절 부분을 시각적으로 구분되게 표시하도록 처리하는 외국어 변환단계를 더 포함할 수 있는 것을 특징으로 하는 외국어 독음 및 표시방법.
- 제 15 항 내지 제 21 항 중 어느 한 항에 따른 방법의 각 단계를 수행하기 위한 명령어를 포함하는 컴퓨터 판독매체.
- 제 15 항 내지 제 21 항 중 어느 한 항에 따른 방법의 각 단계가 수록된 전자매체.
- 제 15 항 내지 제 21 항 중 어느 한 항에 따른 방법의 각 단계가 시각적으로 수록된 학습교재.
- 외국어 리듬을 표현하는 일체의 음성 및 동작을 감지하기 위한 센서부(610);상기 센서부(610)로부터 감지되고 제어부에 의해 분석된 동작 인식에 대한 점수 정보를 센싱 정보 연동으로 클라우드 서버, 그 밖의 유무선 PC 및 시스템과의 데이터 송수신을 수행하는 송수신부(620);상기 센서부(610)로 하여금 외국어 리듬을 표현하는 일체의 음성 및 동작을 감지하도록 제어하고, 상기 센서부(610)로부터 감지된 일체의 음성 및 동작을 분석하고 점수화하도록 제어하며, 음성 인식 모듈(631), 동작 인식 모듈(632) 및 점수화 활용 모듈(633)를 포함하는 제어부(630); 및음소, 음절, 문장에 대한 분석 및 동작 감지를 위한 설정정보 및 강세, 강음과 약음에 대한 정보를 저장하는 저장부(640);를 포함하여 구성되는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치.
- 청구항 25에 있어서, 상기 음성 인식 모듈(631)은,상기 센서부(610)로부터 학습자의 음소 운동에 해당하는 자음, 모음 입 동작을 인식하여 분석하는 음소 인식 수단(631a);상기 음소 인식 수단(631a)에 의한 입 모양 및 혀 모양에 따른 음소 분석의 결과를 활용해, 학습자의 음절 운동에 대한 인식을 통해 분석을 수행하는 음절 인식 수단(631b); 및상기 음절 인식 수단(631b)에 의해 분석된 단어가 결합된 의사소통(communication)의 단위인 문장(sentences)을 이루는 적어도 하나 이상의 단어로 구분하여 분석하여 문장의 핵심요소인 명사, 본동사, 형용사, 부사를 추출하여 강조의 강음으로 설정하고, 기능적 요소에 대해서는 약음으로 설정하는 문장 인식 수단(631c); 을 포함하여 구성되는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치.
- 청구항 26에 있어서, 상기 음소 인식 수단(631a)은,입 모양 타입에 해당하는 8개의 M1 내지 M8와, 혀 위치 타입에 해당하는 8개의 T1 내지 T8을 분석하는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치.
- 청구항 27에 있어서, 상기 입 모양 타입에 해당하는 8개의 M1 내지 M8는,
- 청구항 27에 있어서, 상기 혀 위치 타입에 해당하는 8개의 T1 내지 T8은,발음 'a', 'o', 'u', '∂', 'I'에 대한 기본 위치인 하단(Below) T1(Tongue 1); 's', 'z'에 대한 하단(Below) 아랫니 뒷부분인 T2(Tongue 2); 'r'에 대해서 입 상단(Upper) 윗 어금니 끝인 T3(Tongue 3); 'i', 'e', 'æ'에 대한 상단(Upper) 윗니 중간인 T4(Tongue 4); 'θ', 'ð'에 대한 전방(Front) 윗니 앞부분인 T5(Tongue 5); 'l'에 대한 상단(Upper) 윗니 뒷부분인 T6(Tongue 6); 'd', 't', 'n'에 대한 상단(Upper) 경구개(Hard Palete) 앞인 T7(Tongue 7); ‘k’, ‘g’, ‘ŋ’에 대한 상단(Upper) 연구개(Soft Palete) 뒤인 T8(Tongue 8); 인 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치.
- 청구항 26에 있어서, 상기 음절 인식 수단(631b)은,단어 중 음절을 미리 설정된 주파수 이상의 강세로 강하게 발음하는 '제 1 강세', 제 1 강세보다 약하게 발음하는 '제 2 강세', '강약이 없는 음절', 소리가 없는 '묵음'으로 구분한 뒤, '제 1 강세'에 제 1 강세 동작과, '제 2 강세'에 제 2 강세 동작을 설정하여, 입 모양, 혀 위치 인식과 함께 제 1 및 제 2 강세 동작에 의한 운동이 수행되도록 하는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치.
- 청구항 26에 있어서, 상기 동작 인식 모듈(632)은,상기 문장 인식 수단(631c)에 의해 설정된 강음에 미리 설정된 데시벨(dB) 또는 X,Y,Z축상에 표시된 표준값 이상의 강한 손뼉, 두발 딛기와 같은 강음 동작이 센서부(610)에 의해 인식되도록 설정하며, 약음에 대해서는 미리 설정된 데시벨(dB) 미만의 약한 손뼉, 한발 딛기와 같은 약음 동작이 상기 센서부(610)에 의해 인식되도록 설정하는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치.
- 청구항 31에 있어서, 상기 동작 인식 모듈(632)은,상기 음소 인식 수단(631a)에 의해 분석된 입 모양과 혀 모양과 함께 저장부(640)에 저장된 강음 동작 및 약음 동작에 의해 소뇌를 자극하도록 하여 기억을 담당하는 신경전달 물질이 지속적으로 분비되어 장기 기억되도록 하는 기능을 수행하는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치.
- 청구항 26에 있어서, 상기 점수화 활용 모듈(633)은,상기 음절 인식 수단(631b)에 의해 설정된 제 1 강세 동작 및 제 2 강세 동작, 그리고 상기 문장 인식 수단(631c)에 의해 설정된 강음 동작 및 약음 동작을 상기 센서부(610)를 이용해 외국어 리듬 동작으로 감지하여 활용하는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치.
- 청구항 33에 있어서, 상기 점수화 활용 모듈(633)은,상기 음소 인식 수단(631a)에 의해 설정되고 인식되는 음소를 조합한 단어의 제 1 강세 및 제 2 강세와 각기 매칭되는 제 1 및 제 2 강세 동작, 단어를 조합한 문장에서 핵심어와 기능어와 각기 매칭되는 강음 동작 및 약음 동작에 대해 상기 센서부(610)를 통한 인식이 수행되는 지를 분석하여 점수화하는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치.
- 청구항 25에 있어서, 상기 센서부(610)는,외국어 리듬을 표현하는 일체의 동작을 감지하기 위한 '움직임 감지 센서', '소리 감지센서', '진동 감지센서', 기타 감지 가능한 일체의 수단을 포함하는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치.
- 외국어 리듬을 표현하는 일체의 음성 및 동작을 감지하기 위한 센서부; 및상기 센서부로 하여금 외국어 리듬을 표현하는 일체의 음성 및 동작을 감지하도록 제어하고, 상기 센서부로부터 감지된 일체의 음성 및 동작을 분석하고 제어하며, 음성 인식 모듈과 동작 인식 모듈을 포함하는 제어부;를 포함하여 구성되는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치.
- 청구항 36에 있어서, 상기 음성 인식 모듈은상기 센서부로부터 학습자의 음소 운동에 해당하는 자음, 모음 입 동작을 인식하여 분석하는 음소 인식 수단;상기 음소 인식 수단에 의한 입 모양 및 혀 모양에 따름 음소 분석의 결과를 활용해, 학습자의 음절 운동에 대한 인식을 통해 분석을 수행하는 음절 인식 수단; 및상기 음절 인식 수단에 의해 분석된 단어가 결합된 의사소통의 단위인 문장을 이루는 적어도 하나 이상의 단어로 구분하여 분석하여 문장의 핵심요소인 명사, 본동사, 형용사, 부사를 추출하여 강조의 강음으로 설정하고, 기능적 요소에 대해서는 약음으로 설정하는 문장 인식 수단;을 포함하는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치.
- 청구항 37에 있어서, 상기 동작 인식 모듈은상기 문장 인식 수단에 의해 설정된 강음에 미리 설정된 데시벨(dB) 또는 X,Y,Z축상에 표시된 표준값 이상의 강한 손뼉, 두발 딛기와 같은 강음 동작이 상기 센서부에 의해 인식되도록 설정하며, 약음에 대해서는 미리 설정된 데시벨(dB) 미만의 약한 손뼉, 한발 딛기와 같은 약음 동작이 상기 센서부에 의해 인식되도록 설정하는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치.
- 센서부(610)로부터 학습자의 음소 운동에 해당하는 자음, 모음 입 동작을 인식하여 분석하는 음소 인식 수단(631a);상기 음소 인식 수단(631a)에 의한 입 모양 및 혀 모양에 따른 음소 분석의 결과를 활용해, 학습자의 음절 운동에 대한 인식을 통해 분석을 수행하는 음절 인식 수단(631b); 및상기 음절 인식 수단(131b)에 의해 분석된 단어가 결합된 의사소통(communication)의 단위인 문장(sentences)을 이루는 적어도 하나 이상의 단어로 구분하여 분석하여 문장의 핵심요소인 명사, 본동사, 형용사, 부사를 추출하여 강조의 강음으로 설정하고, 기능적 요소에 대해서는 약음으로 설정하는 문장 인식 수단(631c); 을 포함하여 구성되는 음성 인식 모듈(631)을 포함하는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치.
- 청구항 39에 있어서,상기 문장 인식 수단(631c)에 의해 설정된 강음에 미리 설정된 데시벨(dB) 이상의 강한 손뼉, 두발 딛기와 같은 강음 동작이 센서부(610)에 의해 인식되도록 설정하며, 약음에 대해서는 미리 설정된 데시벨(dB) 미만의 약한 손뼉, 한발 딛기와 같은 약음 동작이 센서부(610)에 의해 인식되도록 설정하는 동작 인식 모듈(632); 을 더 포함하는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치.
- 청구항 39에 있어서,상기 음절 인식 수단(631b)에 의해 설정된 제 1 강세 동작 및 제 2 강세 동작, 그리고 상기 문장 인식 수단(631c)에 의해 설정된 강음 동작 및 약음 동작을 센서부(610)를 이용해 외국어 리듬 동작으로 감지하는 점수화 활용 모듈(633); 을 더 포함하는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치.
- 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치(600)가 센서부(110) 동작을 수행함으로써, 학습자로부터 적어도 하나 이상의 문장과 동작을 인식하는 제 1 단계; 및외국어 리듬 동작 감지 센서 기반의 운동 학습 장치(600)가 센서부(610)로부터 학습자의 음소 운동에 해당하는 자음, 모음 입 동작을 인식하여 분석하며, 입 모양 타입에 해당하는 8개의 M1 내지 M8와, 혀 위치 타입에 해당하는 8개의 T1 내지 T8을 분석하는 제 2 단계; 를 포함하여 구성되는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치를 이용한 운동 학습 방법.
- 청구항 42에 있어서, 상기 제 2 단계 이후,외국어 리듬 동작 감지 센서 기반의 운동 학습 장치(600)가 입 모양 및 혀 모양에 따른 음소 분석의 결과를 활용해, 학습자의 음절 운동에 대한 인식을 통해 분석을 수행하기 위해 인식된 단어에 대해서 각 음절로 분리한 뒤, 미리 설정된 주파수 이상의 강세로 강하게 발음하는 '제 1 강세', 제 1 강세보다 약하게 발음하는 '제 2 강세', '강약이 없는 음절', 소리가 없는 '묵음'으로 구분하는 제 3 단계; 및외국어 리듬 동작 감지 센서 기반의 운동 학습 장치(600)가 '제 1 강세'에 제 1 강세 동작과, '제 2 강세'에 제 2 강세 동작이 센서부(610)로부터 입력되는 단어(words)에 포함된 적어도 하나 이상의 음절에서의 제 1 강세 및 제 2 강세와 매칭되는지 여부를 분석하는 제 4 단계; 를 더 포함하는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치를 이용한 운동 학습 방법.
- 청구항 43에 있어서, 상기 제 4 단계 이후,외국어 리듬 동작 감지 센서 기반의 운동 학습 장치(600)가 분석된 단어가 결합된 의사소통(communication)의 단위인 문장(sentences)으로 구분한 뒤, 적어도 하나 이상의 단어로 이루어진 각 문장을 분석하여 문장의 핵심요소인 명사, 본동사, 형용사, 부사를 추출하여 강조의 강음으로 인식하고, 기능적 요소에 대해서는 약음으로 인식하는 제 5 단계; 및외국어 리듬 동작 감지 센서 기반의 운동 학습 장치(600)가 인식된 강음에 대해서는 강음에 대한 인식과 동시에 강음 동작이 센서부(610)에 의해 인식되는지 여부를 판단하고, 인식된 약음에 대해서는 약음 동작이 센서부(610)에 의해 인식되는지 여부를 판단하는 제 6 단계; 를 더 포함하는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치를 이용한 운동 학습 방법.
- 청구항 44에 있어서, 제 6 단계 이후,외국어 리듬 동작 감지 센서 기반의 운동 학습 장치(600)가 동작 인식을 점수화하여 센싱 정보 연동으로 클라우드 서버와, 그 밖의 유무선 PC 및 시스템과의 데이터 송수신을 수행하는 제 7 단계; 를 더 포함하는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치를 이용한 운동 학습 방법.
- 청구항 42에 있어서, 제 1 단계는,외국어 리듬 동작 감지 센서 기반의 운동 학습 장치(600)가 수준별, 상황별, 국가별로 적용되는 음소 규칙, 음절 규칙, 리듬 규칙이 다르도록 설정된 상태로 문장과 동작을 인식하는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치를 이용한 운동 학습 방법.
- 청구항 42에 있어서, 제 1 단계는,영어를 모국어로 사용하는 국가의 언어 또는 제2 외국어로 영어를 사용하는 국가의 언어를 모두 포함하는 적어도 하나 이상의 문장이 인식되는 것을 특징으로 하는 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치를 이용한 운동 학습 방법.
- 청구항 42 내지 청구항 46 중 어느 한 항에 따른 방법의 각 단계를 수행하기 위한 명령어를 포함하는 컴퓨터 판독매체.
- 청구항 42 내지 청구항 46 중 어느 한 항에 따른 방법의 각 단계가 수록된 전자매체.
- 청구항 42 내지 청구항 46 중 어느 한 항에 따른 방법의 각 단계가 시각적으로 수록된 학습교재.
- 외국어 발음을 입 모양 기준으로 분리된 발음과 혀 위치를 기준으로 분리된 발음으로 나누어 표시한 발음규칙부; 및 상기 발음규칙부의 발음규칙과 기 설정된 리듬규칙에 따라 외국어 및 자국어에 리듬 이미지를 표시한 문장표현부;를 포함하여 구성되거나 또는외국어 또는 자국어가 강세에 따른 리듬 이미지가 표시된 문장표현으로 구성되는 것을 특징으로 하는 학습교재.
- 청구항 51에 있어서,상기 문장표현부는, 외국어 문장을 이루는 각 단어가 배치되는 외국어 표현부; 자국어, 또는 자국어와 외국어가 혼용되어 상기 외국어 문장에 대응하여 매칭되도록 배치되는 자국어 표현부; 및 상기 외국어 표현부 또는 상기 자국어 표현부에 배치된 문장의 강세에 따른 리듬 이미지에 대응하여 매칭되도록 배치되는 동작이미지 표현부;를 포함하여 구성되는 것을 특징으로 하는 학습교재.
- 청구항 51에 있어서,상기 리듬 이미지는, 글자폰트 크기 차이, 색상 차이, 글자의 굵기 차이, 특정한 형태의 모양에서 선택된 어느 하나 이상으로 표현될 수 있으며,상기 동작이미지는, 손 동작, 발 동작, 몸 동작 중 어느 하나 이상으로 표현되는 것을 특징으로 하는 학습교재.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201580084408.4A CN108352126A (zh) | 2015-11-11 | 2015-11-25 | 外语读音及标记装置及其方法,包括利用其装置和方法的基于外语节奏动作传感器的运动学习装置、运动学习方法以及对其进行记录的电子媒体和学习教材 |
US15/774,086 US10978045B2 (en) | 2015-11-11 | 2015-11-25 | Foreign language reading and displaying device and a method thereof, motion learning device based on foreign language rhythm detection sensor and motion learning method, electronic recording medium, and learning material |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2015-0157953 | 2015-11-11 | ||
KR1020150157953A KR101990021B1 (ko) | 2015-11-11 | 2015-11-11 | 영어 발음기호를 이용한 외국어 및 자국어 표시장치 및 방법 |
KR1020150163887A KR101881774B1 (ko) | 2015-11-23 | 2015-11-23 | 외국어 리듬 동작 감지 센서 기반의 운동 학습 장치, 그리고 이를 이용한 운동 학습 방법 |
KR10-2015-0163887 | 2015-11-23 | ||
KR10-2015-0165310 | 2015-11-25 | ||
KR1020150165310A KR102006758B1 (ko) | 2015-11-25 | 2015-11-25 | 외국어 학습교재 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017082447A1 true WO2017082447A1 (ko) | 2017-05-18 |
Family
ID=58695592
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2015/012741 WO2017082447A1 (ko) | 2015-11-11 | 2015-11-25 | 외국어 독음 및 표시장치와 그 방법, 및 이를 이용한 외국어 리듬 동작 감지 센서 기반의 운동학습장치와 운동학습방법, 이를 기록한 전자매체 및 학습교재 |
Country Status (2)
Country | Link |
---|---|
US (1) | US10978045B2 (ko) |
WO (1) | WO2017082447A1 (ko) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020081201A1 (en) * | 2018-10-14 | 2020-04-23 | Microsoft Technology Licensing, Llc | Conversion of text-to-speech pronunciation outputs to hyperarticulated vowels |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10395649B2 (en) * | 2017-12-15 | 2019-08-27 | International Business Machines Corporation | Pronunciation analysis and correction feedback |
US11869494B2 (en) * | 2019-01-10 | 2024-01-09 | International Business Machines Corporation | Vowel based generation of phonetically distinguishable words |
CN110782875B (zh) * | 2019-10-16 | 2021-12-10 | 腾讯科技(深圳)有限公司 | 一种基于人工智能的语音韵律处理方法及装置 |
CN113506563A (zh) * | 2021-07-06 | 2021-10-15 | 北京一起教育科技有限责任公司 | 一种发音识别的方法、装置及电子设备 |
CN113674589B (zh) * | 2021-08-06 | 2023-05-05 | 唐山师范学院 | 一种日语学习用插块式音标练习板 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20020044690A (ko) * | 2000-12-06 | 2002-06-19 | 이성한 | 원격제어 운동체를 이용한 학습 시스템 |
US20040243416A1 (en) * | 2003-06-02 | 2004-12-02 | Gardos Thomas R. | Speech recognition |
KR20050032759A (ko) * | 2003-10-02 | 2005-04-08 | 한국전자통신연구원 | 음운변이 규칙을 이용한 외래어 음차표기 자동 확장 방법및 그 장치 |
KR20100029970A (ko) * | 2008-09-09 | 2010-03-18 | 편두리 | 어학학습장치 |
CN103218924A (zh) * | 2013-03-29 | 2013-07-24 | 上海众实科技发展有限公司 | 一种基于音视频双模态的口语学习监测方法 |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030040899A1 (en) * | 2001-08-13 | 2003-02-27 | Ogilvie John W.L. | Tools and techniques for reader-guided incremental immersion in a foreign language text |
JP2007206975A (ja) * | 2006-02-01 | 2007-08-16 | Toshiba Corp | 言語情報変換装置及びその方法 |
WO2008130663A1 (en) * | 2007-04-20 | 2008-10-30 | Master Key, Llc | System and method for foreign language processing |
JP2009048003A (ja) * | 2007-08-21 | 2009-03-05 | Toshiba Corp | 音声翻訳装置及び方法 |
US8190420B2 (en) * | 2009-08-04 | 2012-05-29 | Autonomy Corporation Ltd. | Automatic spoken language identification based on phoneme sequence patterns |
US20120288833A1 (en) * | 2011-05-13 | 2012-11-15 | Ridgeway Karl F | System and Method for Language Instruction Using Multiple Prompts |
US9679496B2 (en) * | 2011-12-01 | 2017-06-13 | Arkady Zilberman | Reverse language resonance systems and methods for foreign language acquisition |
US9390085B2 (en) * | 2012-03-23 | 2016-07-12 | Tata Consultancy Sevices Limited | Speech processing system and method for recognizing speech samples from a speaker with an oriyan accent when speaking english |
CN103593340B (zh) * | 2013-10-28 | 2017-08-29 | 余自立 | 自然表达信息处理方法、处理及回应方法、设备及系统 |
- 2015-11-25 US US15/774,086 patent/US10978045B2/en active Active
- 2015-11-25 WO PCT/KR2015/012741 patent/WO2017082447A1/ko active Application Filing
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020081201A1 (en) * | 2018-10-14 | 2020-04-23 | Microsoft Technology Licensing, Llc | Conversion of text-to-speech pronunciation outputs to hyperarticulated vowels |
US10923105B2 (en) | 2018-10-14 | 2021-02-16 | Microsoft Technology Licensing, Llc | Conversion of text-to-speech pronunciation outputs to hyperarticulated vowels |
Also Published As
Publication number | Publication date |
---|---|
US20180330715A1 (en) | 2018-11-15 |
US10978045B2 (en) | 2021-04-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017082447A1 (ko) | 외국어 독음 및 표시장치와 그 방법, 및 이를 이용한 외국어 리듬 동작 감지 센서 기반의 운동학습장치와 운동학습방법, 이를 기록한 전자매체 및 학습교재 | |
WO2020231181A1 (en) | Method and device for providing voice recognition service | |
WO2020145439A1 (ko) | 감정 정보 기반의 음성 합성 방법 및 장치 | |
WO2015170945A1 (ko) | 영어 어순 지도를 이용한 영어 학습방법 및 그 시스템 | |
WO2017160073A1 (en) | Method and device for accelerated playback, transmission and storage of media files | |
WO2020027394A1 (ko) | 음소 단위 발음 정확성 평가 장치 및 평가 방법 | |
WO2015099464A1 (ko) | 3차원 멀티미디어 활용 발음 학습 지원 시스템 및 그 시스템의 발음 학습 지원 방법 | |
US5340316A (en) | Synthesis-based speech training system | |
WO2017200258A2 (ko) | 음향 신호를 촉각 신호로 변환하기 방법 및 이를 이용하는 햅틱 장치 | |
WO2019078615A1 (en) | METHOD AND ELECTRONIC DEVICE FOR TRANSLATING A VOICE SIGNAL | |
US5536171A (en) | Synthesis-based speech training system and method | |
WO2020230926A1 (ko) | 인공 지능을 이용하여, 합성 음성의 품질을 평가하는 음성 합성 장치 및 그의 동작 방법 | |
WO2019139301A1 (ko) | 전자 장치 및 그 자막 표현 방법 | |
WO2017142127A1 (ko) | 단어/숙어 시험 문제 출제 방법, 서버 및 컴퓨터 프로그램 | |
WO2015163684A1 (ko) | 적어도 하나의 의미론적 유닛의 집합을 개선하기 위한 방법, 장치 및 컴퓨터 판독 가능한 기록 매체 | |
WO2022080774A1 (ko) | 말 장애 평가 장치, 방법 및 프로그램 | |
WO2021215804A1 (ko) | 대화형 청중 시뮬레이션을 제공하는 장치 및 방법 | |
WO2022260432A1 (ko) | 자연어로 표현된 스타일 태그를 이용한 합성 음성 생성 방법 및 시스템 | |
WO2020246641A1 (ko) | 복수의 화자 설정이 가능한 음성 합성 방법 및 음성 합성 장치 | |
WO2020153717A1 (en) | Electronic device and controlling method of electronic device | |
WO2020138662A1 (ko) | 전자 장치 및 그의 제어 방법 | |
WO2023085584A1 (en) | Speech synthesis device and speech synthesis method | |
EP4014228A1 (en) | Speech synthesis method and apparatus | |
WO2022169208A1 (ko) | 영어 학습을 위한 음성 시각화 시스템 및 그 방법 | |
WO2022108040A1 (ko) | 음성의 보이스 특징 변환 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15908361 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15774086 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23/08/2018) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15908361 Country of ref document: EP Kind code of ref document: A1 |