KR20140087956A - Apparatus and method for learning phonics by using native speaker's pronunciation data and word and sentence and image data - Google Patents

Apparatus and method for learning phonics by using native speaker's pronunciation data and word and sentence and image data

Info

Publication number
KR20140087956A
Authority
KR
South Korea
Prior art keywords
data
phonics
pronunciation
sentence
word
Prior art date
Application number
KR1020130000022A
Other languages
Korean (ko)
Inventor
주홍찬
Original Assignee
주홍찬
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주홍찬
Priority to KR1020130000022A priority Critical patent/KR20140087956A/en
Publication of KR20140087956A publication Critical patent/KR20140087956A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/06 Foreign languages
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065 Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems

Abstract

The present invention relates to a phonics learning apparatus and method using word, sentence, and image data and the pronunciation data of a native speaker. The apparatus comprises: a storage unit for storing phonics data, data expressed in the language of the country in which the learner studies the pronunciation of the phonics data, word and sentence data related to the phonics data, image data, native-speaker pronunciation images and voice waveform data, and syllable-classification sound source data; an input unit for selecting phonics data and inputting phonics data; a display unit which, when learning of the selected phonics data starts, displays the phonics data, the data expressed in the learner's language, and the words, sentences, and image data provided to help the learner recall the phonics data; an audio processing unit for receiving the learner's voice when the learner pronounces the words and sentences provided to recall the displayed phonics data; a voice analysis module for analyzing the voice input through the audio processing unit, converting it into a waveform, and comparing it with the native speaker's voice waveform to analyze their similarity; a camera unit for collecting images of the learner's facial muscle changes, lip shape, and tooth and tongue positions while the learner pronounces the words and sentences; an image processing unit for normalizing the focus and size of the images input from the camera unit; and a control unit which displays the native speaker's pronunciation images stored in the storage unit together with the learner's images normalized through the image processing unit, displays the learner's voice waveform processed by the voice analysis module and the voice waveform processor alongside the stored native speaker's voice waveform, compares their similarity, and controls the display unit to display "good" when the similarity is 50% or more and "bad" when it is less than 50%.

Figure pat00001

Description

BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a phonics learning apparatus and method using word, sentence, and image data and the pronunciation data of a native speaker.

More particularly, the present invention relates to a phonics learning apparatus and method that displays "alphabet, short-vowel, long-vowel, or double-vowel phonemes and the like" (hereinafter referred to as "phonics data") according to the phonics rule being learned, together with data expressed in the language of the country in which the learner studies the pronunciation of the displayed phonics data; provides words and sentences that help the learner recall the phonics data, together with an "image, animation, or video" (hereinafter referred to as "image data"); reproduces the native speaker's pronunciation images and voice waveforms (hereinafter referred to as "pronunciation data") for the provided words and sentences; while the native pronunciation image is reproduced, displays each syllable of the words and sentences in a different color and outputs a "marker sound source, such as an electronic sound or clapping sound, for distinguishing syllables" (hereinafter referred to as "syllable-classification sound source data"); receives the learner's pronunciation of the provided words and sentences as voice and image; reproduces the native speaker's face image showing facial muscle changes, lip shape, and tooth and tongue positions together with the corresponding image of the learner; and displays a voice waveform comparison screen, so that effective phonics learning can be performed.

In the global age, interest in English is increasing day by day. Accordingly, English teaching devices for children are being developed.

The most widely used approach to teaching English to children is phonics learning, a system centered on learning the pronunciation of alphabetic phonemes.

Traditional phonics learning has relied on memorization, simply repeating the alphabetical notation and its sound. Such simple repetitive learning causes boredom, which in turn causes learners to lose interest in learning English.

In addition, because learners rely only on text and audio without seeing how a native speaker actually produces the sounds, they often cannot achieve correct English pronunciation despite investing a great deal of time.

SUMMARY OF THE INVENTION Accordingly, the present invention has been made to solve the above-mentioned problems, and an object of the present invention is to provide a phonics learning apparatus and method using words, sentences, image data, and the pronunciation data of native speakers, which increase interest in and understanding of phonics learning through visual association learning by providing words, sentences, and image data that help the learner recall the phonics data.

It is another object of the present invention to provide a phonics learning apparatus and method that can separate words and sentences into individual phonemes and play them back one by one, so that the learner can correctly distinguish each phoneme with the help of words, sentences, and image data.

It is another object of the present invention to provide a phonics learning apparatus and method in which, when the pronunciation data of a native speaker is reproduced, a different color is applied to each syllable of the words and sentences and syllable-classification sound source data is output, so that the learner can clearly recognize syllable boundaries.

It is also an object of the present invention to provide a phonics learning apparatus and method using words, sentences, image data, and the pronunciation data of a native speaker, in which the learner's pronunciation data for the provided words and sentences is compared with the native speaker's pronunciation data so that the learner's pronunciation can be corrected to more closely resemble that of the native speaker.

It is also an object of the present invention to provide a phonics learning apparatus and method using words, sentences, image data, and the pronunciation data of native speakers, which enable the learner to precisely check the stress and intensity of pronunciation through comparison of voice waveforms.

It is another object of the present invention to provide a phonics learning apparatus and method in which the pronunciation data of native speakers is provided in various forms, such as female and male voices and American, British, and phonetic-symbol pronunciations, so that the learner can study with the pronunciation best suited to his or her needs.

The phonics learning apparatus using words, sentences, image data, and the pronunciation data of native speakers according to the present invention comprises: a storage unit for storing phonics data, data expressed in the language of the country in which the learner studies the pronunciation of the phonics data, word and sentence data related to the phonics data, image data, native-speaker pronunciation images and voice waveform data, and syllable-classification sound source data;

An input unit for selecting phonics data and inputting phonics data;

A display unit which, when learning of the selected phonics data starts, displays the phonics data, the data expressed in the language of the country in which the learner studies the pronunciation of the phonics data, and the words, sentences, and image data provided to help the learner recall the phonics data;

An audio processing unit for receiving the learner's voice when the learner pronounces the words and sentences provided to recall the phonics data displayed during phonics learning;

A voice analysis module for analyzing the voice input through the audio processing unit, converting it into a waveform, and comparing it with the native speaker's voice waveform to analyze their similarity;

A camera unit for collecting images of the learner's facial muscle changes, lip shape, and tooth and tongue positions while the learner pronounces the words and sentences provided to recall the phonics data;

An image processing unit for normalizing the focus and size of the images of facial muscle changes, lip shape, and tooth and tongue positions input from the camera unit; and

A control unit which, when the learner pronounces the provided words and sentences, displays the native speaker's pronunciation images of facial muscle changes, lip shape, and tooth and tongue positions stored in the storage unit together with the learner's images normalized through the image processing unit, displays the learner's voice waveform processed by the voice analysis module and the voice waveform processor alongside the stored native speaker's voice waveform, compares their similarity, and controls the display unit to display "good" when the similarity is 50% or more and "bad" when it is less than 50%.
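The patent specifies only the 50% threshold for the "good"/"bad" decision and does not define how the similarity between the two voice waveforms is computed. The following is a minimal illustrative sketch in Python, assuming, purely for illustration, that similarity is taken as the cosine similarity of the two recordings' amplitude envelopes; the function and parameter names are hypothetical.

```python
# Illustrative sketch only: the patent does not define the similarity metric.
# Here, similarity is assumed to be the cosine similarity of per-frame
# amplitude envelopes, mapped to a 0-100 scale and compared against the 50%
# threshold stated in the claims.
import numpy as np

def amplitude_envelope(samples: np.ndarray, frame_size: int = 1024) -> np.ndarray:
    """Reduce a mono waveform to per-frame peak amplitudes."""
    n_frames = len(samples) // frame_size
    frames = samples[: n_frames * frame_size].reshape(n_frames, frame_size)
    return np.abs(frames).max(axis=1)

def similarity_percent(native: np.ndarray, learner: np.ndarray) -> float:
    """Compare two envelopes after stretching the learner's to the native length."""
    x = amplitude_envelope(native)
    y = amplitude_envelope(learner)
    y = np.interp(np.linspace(0, 1, len(x)), np.linspace(0, 1, len(y)), y)
    denom = np.linalg.norm(x) * np.linalg.norm(y)
    return 0.0 if denom == 0 else float(100.0 * np.dot(x, y) / denom)

def grade(native: np.ndarray, learner: np.ndarray) -> str:
    # 50% threshold taken from the claims: "good" at or above, "bad" below.
    return "good" if similarity_percent(native, learner) >= 50.0 else "bad"
```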

In addition, the apparatus may further include a wired/wireless network unit that allows the control unit to display, separated into phonemes, any word or sentence in an Internet web browser that matches a word or sentence stored in the storage unit.

The apparatus may further include a DMB module that displays, separated into phonemes, any word or sentence in a DMB English broadcast that matches a word or sentence stored in the storage unit.

In addition, the screen of the display unit may be divided into Home, Dic, and Option tabs.

In addition, the Home tab sequentially displays the phonics data selected by the learner, and displays the data expressed in the language of the country in which the learner studies the pronunciation of the displayed phonics data, the word and sentence data related to the displayed phonics data, the image data, the native speaker's pronunciation data, the native speaker's voice waveform and the learner's voice waveform, the native speaker's lip shape and tooth and tongue positions when pronouncing the phonics data, the learner's face image when pronouncing, and the syllable-classification sound source data.

The Option tab is used to select at least one of: whether to display image data that helps recall the phonics data, whether alphabet pronunciation and phonetic-symbol pronunciation are played, whether syllable-by-syllable colors are displayed, whether the syllable-classification sound source data is played, whether accent marks are displayed, whether phonetic symbols are displayed, the gender of the voice output, the country of the pronunciation output, and whether the pronunciation correction test mode is enabled.

In addition, in phoneme, word memorization, and syllable stress tests, the control unit may control the display unit to present a question using any one of text, voice output, and image data, and to receive the answer in either multiple-choice or short-answer form.

A phonics learning method using words, sentences, image data, and the pronunciation data of a native speaker according to the present invention comprises: receiving, from a learner, a selection of the phonics data to be learned;

Displaying the selected phonics data, the data expressed in the language of the country in which the learner studies its pronunciation, and the words, sentences, and image data provided to help the learner recall the phonics data;

Displaying the native speaker's pronunciation image for the words and sentences provided to recall the selected phonics data;

Applying a different color to each syllable and outputting the syllable-classification sound source data during the native pronunciation of the words and sentences provided to recall the selected phonics data;

Receiving the learner's pronunciation of the provided words and sentences as voice and image;

Displaying and reproducing the native speaker's pronunciation data and the learner's pronunciation data for the words and sentences;

Displaying a comparison screen of the native speaker's voice waveform and the learner's voice waveform; and

Displaying "good" when the similarity between the native speaker's voice waveform and the learner's voice waveform is 50% or more, and "bad" when it is less than 50%.

According to the phonics learning method using words, sentences, image data, and the pronunciation data of native speakers of the present invention, interest in and understanding of phonics learning are increased by displaying the phonics data together with data expressed in the learner's language, image data related to the phonics data, and word and sentence data. When the native speaker's pronunciation of the provided words and sentences is reproduced, a different color is applied to each syllable and a syllable-classification sound is output, so that syllables are easily distinguished. The learner's pronunciation can be corrected to resemble that of the native speaker by comparing the learner's pronunciation data with the native speaker's pronunciation data, and the stress and intensity of pronunciation can be checked by comparing voice waveforms. Furthermore, the native speaker's pronunciation data can be provided in various forms, such as female and male voices and American and British pronunciations, which makes language learning broadly effective.
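The syllable coloring and syllable-classification sound described above can be modelled as timed cues attached to each syllable of the provided word. The sketch below is illustrative only: the syllable boundaries, color palette, and the show_text / play_marker_sound callbacks are assumptions, since the patent does not specify how the cues are produced or rendered.

```python
# Hedged sketch: pair each syllable with a colour and a playback offset so that,
# while the native recording plays, the syllable is recoloured and a short
# marker sound (electronic tone, clap, etc.) is emitted at its boundary.
from dataclasses import dataclass
from itertools import cycle

PALETTE = ["red", "blue", "green", "orange"]  # illustrative colours

@dataclass
class SyllableCue:
    text: str
    color: str
    start_ms: int  # offset into the native recording (assumed to be known)

def build_cues(syllables, timings_ms):
    colors = cycle(PALETTE)
    return [SyllableCue(s, c, t) for s, c, t in zip(syllables, colors, timings_ms)]

def render(cues, show_text, play_marker_sound):
    """At each syllable boundary, recolour the syllable and play the marker sound."""
    for cue in cues:
        show_text(cue.text, cue.color, at_ms=cue.start_ms)
        play_marker_sound(at_ms=cue.start_ms)

# Example: the word "banana" split into three syllables with assumed timings.
cues = build_cues(["ba", "na", "na"], [0, 220, 480])
```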

1 is a block diagram showing the configuration of a phonics learning apparatus using word, sentence, and image data and the pronunciation data of a native speaker according to an embodiment of the present invention.
2 is a view showing an example of a selection screen for the phonics data to be studied according to an embodiment of the present invention.
3 is a view showing an example of a level selection screen for the selected phonics data learning according to an embodiment of the present invention.
4 is a view showing an example of a learning screen for the selected phonics data according to an embodiment of the present invention.
5 is a view showing an example of displaying dictionary information of a word in the Dic tab according to an embodiment of the present invention.
6 is a view showing an example of a word learning option selection list in the Option tab according to an embodiment of the present invention.
7 is a view showing an example of a phonics learning test in multiple-choice and short-answer form according to an embodiment of the present invention.
8 is a view showing an example in which questions the learner answered incorrectly in the multiple-choice and short-answer phonics learning test are checked and displayed as a list according to an embodiment of the present invention.
9 is a flowchart of a phonics learning method using word, sentence, and image data and the pronunciation data of a native speaker according to an embodiment of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, a phonics learning apparatus and method using words, sentences, image data, and pronunciation data of native speakers according to the present invention will be described with reference to the accompanying drawings.

1 is a block diagram showing the configuration of a phonics learning apparatus using words, sentences, image data, and pronunciation data of native speakers according to an embodiment of the present invention.

As shown in FIG. 1, a phonics learning apparatus 100 using words, sentences, image data, and the pronunciation data of native speakers includes: a storage unit 104 for storing phonics data, data expressed in the language of the country in which the learner studies the pronunciation of the phonics data, word and sentence data, image data, native-speaker pronunciation images and voice waveform data, and syllable-classification sound source data; an input unit 102 for selecting phonics data and inputting phonics data; a display unit 114 which displays the phonics data selected by the learner together with the data expressed in the learner's language and the words, sentences, and image data provided to help recall the phonics data; an audio processing unit 106 for receiving the learner's voice when the learner pronounces the provided words and sentences; a voice analysis module 108 for analyzing the voice input through the audio processing unit 106, converting it into a waveform through a voice waveform processor 110, and comparing it with the native speaker's voice waveform to analyze their similarity; a camera unit 112 for collecting images of the learner's facial muscle changes, lip shape, and tooth and tongue positions while the learner pronounces the words and sentences; an image processing unit 116 for normalizing the focus and size of the images input from the camera unit 112; a control unit 122 which displays the stored native-speaker pronunciation images together with the learner's normalized images when the provided words and sentences are pronounced, displays the learner's voice waveform processed by the voice analysis module 108 and the voice waveform processor 110 alongside the stored native speaker's voice waveform, compares their similarity, and displays "good" when the similarity is 50% or more and "bad" when it is less than 50%; a wired/wireless network unit 118 for displaying on the screen of the display unit 114, separated into phonemes, words or sentences of an Internet web browser that match words or sentences stored in the storage unit 104; and a DMB module unit 120 for displaying, separated into phonemes, words or sentences in a DMB English broadcast that match words or sentences stored in the storage unit 104.

Although not shown, the screen of the display unit 114 may be divided into left and right halves, with the screen of an Internet web browser or a DMB broadcast image displayed on one side and the words or sentences that match those stored in the storage unit 104 displayed on the other. When the learner selects a word or sentence to study, it is preferable that the phonics data stored in the storage unit 104, the data expressed in the learner's language, the image data, and the native pronunciation image related to the selection are reproduced.

This enables the learner to check, through the screen of the Internet web browser or the DMB English broadcast, how well the words and sentences have been learned, and to review the words and sentences that the learner could not catch.
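The matching of web-browser or broadcast text against the stored word list is not detailed in the patent; a minimal sketch of one plausible approach (simple tokenisation and set membership, with hypothetical names) is shown below.

```python
# Illustrative sketch: find which words of a captured page or subtitle text
# also exist in the stored phonics word list, so they can be highlighted and
# offered for selection. The tokenisation rule is an assumption.
import re

def match_stored_words(page_text, stored_words):
    tokens = re.findall(r"[A-Za-z']+", page_text.lower())
    return [t for t in tokens if t in stored_words]

print(match_stored_words("The cat sat on the mat.", {"cat", "mat", "dog"}))
# -> ['cat', 'mat']
```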

Although not shown, a marker for implementing augmented reality may be printed in a foreign-language learning book or displayed by a foreign-language learning program output through a display such as a computer, smartphone, tablet PC, or PMP. When the marker is recognized through the camera unit 112, it is preferable that the data related to the marker, among the phonics data, the data expressed in the learner's language, the word and sentence data, the image data, the native pronunciation images and voice waveform data, and the syllable-classification sound source data stored in the storage unit 104, is reproduced on the screen.

This has the effect of immediately providing the learner with the data related to what is being studied, and also helps pronunciation learning.

The phonics learning apparatus 100 may be implemented as a computer terminal, a tablet PC, a smartphone, or the like used by the learner. The input unit 102 may be a keypad, a touch-screen module, or a touch pen of the phonics learning apparatus 100.

The audio processing unit 106 includes a microphone and a speaker to enable audio input, output, and media output.

FIGS. 2 and 3 are views showing examples of phonics data selection and learning level selection for learning according to an embodiment of the present invention, and FIG. 4 is a diagram illustrating an example of a phonics data learning screen according to an embodiment of the present invention.

As shown in FIGS. 2, 3 and 4, the learner first selects the phonics data to be learned.

As shown in FIGS. 2 and 3, the learner can select a desired part and a learning level; the present invention does not limit the selection method.

When the learner selects the phonics data to be learned and learning starts, as shown in FIG. 4, the Home tab 10 displays the words and sentences related to the phonics data and the image data (11, 12), the phonics data and the data expressed in the learner's language (13), the lip shape and tooth and tongue positions of the native speaker's pronunciation (14) and of the learner's pronunciation (15), the native speaker's voice waveform (16), and the learner's voice waveform (17), and Next, Repeat, and Slow buttons as well as recording and playback buttons are displayed at the bottom.

First, the phonics data, the data expressed in the learner's language, the image data related to the phonics data, and the word and sentence data are displayed together; at the same time, the native speaker's pronunciation of the word is reproduced.

Thereafter, a pop-up menu is displayed so that the learner's voice can be input, and the learner looks at the camera of the phonics learning apparatus 100 and pronounces the corresponding word or sentence.

When the voice recording and the video recording are completed, the native speaker's pronunciation video for the words and sentences related to the phonics data and the learner's recorded pronunciation video are reproduced with the mouth area enlarged.

Preferably, the image processing unit 116 includes a program that can process and normalize an image so that the mouth shape is displayed within the frame of the corresponding screen. If the recorded image cannot be used as data, it is desirable to notify the learner of this requirement in the form of a pop-up menu.

In addition, the learner's pronunciation video may be displayed in real time through the camera rather than by the recording method, so that the learner's facial muscle changes, lip shape, and tooth and tongue positions are input and displayed live. The learner can then check his or her lip shape and tooth and tongue positions directly on the screen and practice toward the native speaker's pronunciation.

At this time, the voice analysis module 108 analyzes the learner's voice and converts the learner's voice to a waveform through the voice waveform processor 110 based on the analyzed data, so that the voice waveform can be compared with the native voice waveform.

5 is a diagram showing an example of displaying dictionary information of a word in a Dic tab according to an embodiment of the present invention.

As shown in FIG. 5, while a word is being reproduced, the Dic tab 20 can be selected to check the dictionary meaning and example sentences of the word.

6 is a view showing an example of a phonics learning option selection list in the Option tab according to the embodiment of the present invention.

As shown in FIG. 6, the Option tab 30 includes selection of whether to display image data that helps recall the phonics data, whether alphabet pronunciation and phonetic-symbol pronunciation are played, whether syllable-by-syllable colors are displayed, whether the syllable-classification sound source data is played, whether accent marks are displayed, whether phonetic symbols are displayed, the gender of the voice output, the country of the pronunciation output, and whether the pronunciation correction test mode is enabled, so that learners can adjust the settings in the way that best suits them.
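The Option-tab choices listed above can be thought of as a simple settings record. The sketch below is illustrative; the field names and default values are assumptions, since the patent only enumerates the selectable items.

```python
# Hedged sketch of the Option-tab settings as a configuration object.
from dataclasses import dataclass

@dataclass
class PhonicsOptions:
    show_image_data: bool = True            # display recall images
    play_alphabet_and_phoneme: bool = True  # alphabet / phonetic-symbol playback
    color_syllables: bool = True            # syllable-by-syllable colours
    play_syllable_marker: bool = True       # syllable-classification sound
    show_accent_marks: bool = True
    show_phonetic_symbols: bool = True
    voice_gender: str = "female"            # "female" or "male"
    voice_country: str = "US"               # e.g. "US" or "UK"
    pronunciation_test_mode: bool = False

# Example: a learner who prefers a male UK voice.
options = PhonicsOptions(voice_gender="male", voice_country="UK")
```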

FIG. 7 is a view showing an example of a phonics learning test in multiple-choice and short-answer form according to an embodiment of the present invention.

As shown in FIG. 7, in phoneme, word memorization, and syllable stress tests, a question may be presented using any one of text, voice output, and image data, and the answer may be given in either multiple-choice or short-answer form.

For a multiple-choice question, the learner can select the answer number with a finger or touch pen, or enter the number with the keypad. For a short-answer question, the learner can write the word directly with a finger or touch pen.

FIG. 8 is a view showing an example in which questions the learner answered incorrectly in the multiple-choice and short-answer phonics learning test are checked and displayed as a list according to an embodiment of the present invention.

As shown in FIG. 8, questions that the learner answered incorrectly in the phonics learning test may be stored in the storage unit 104 and checked in a list so that the learner can study them again; the learner can then choose to relearn them or end the session.
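One possible way to realise the "learn again" list described above is to collect every missed question into a review list, as in the following hedged sketch; the Question structure and the grading rule are assumptions for illustration.

```python
# Illustrative sketch: questions answered incorrectly are kept so they can be
# shown again as a review list.
from dataclasses import dataclass, field

@dataclass
class Question:
    prompt: str
    answer: str

@dataclass
class ReviewList:
    missed: list = field(default_factory=list)

    def check(self, question: Question, learner_answer: str) -> bool:
        correct = learner_answer.strip().lower() == question.answer.lower()
        if not correct:
            self.missed.append(question)  # stored for the "learn again" screen
        return correct

review = ReviewList()
review.check(Question("c-a-t spells ...?", "cat"), "cot")
print([q.prompt for q in review.missed])  # the missed question reappears here
```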

9 is a flowchart of a phonics learning method using word and sentence, image data, and pronunciation data of a native speaker according to an embodiment of the present invention.

Referring to FIG. 9, the learner selects the phonics data to be learned through the input unit 102 (S200).

When the phonics data selection through the input unit 102 is completed, the selected phonics data, the data expressed in the learner's language, the words and sentences provided to help recall the phonics data, the image data, and the native pronunciation image are displayed and reproduced in the Home tab 10 of the display unit 114 (S202).

Thereafter, the learner pronounces the words and sentences provided to recall the phonics data; the learner's voice is input, and the learner's face image during pronunciation is acquired through the camera unit 112 (S204).

The learner's pronunciation image, with its focus and size normalized through the image processing unit 116, is then reproduced (S206).

When learners record their pronunciation, the distance from the camera to the learner and the proportion of the frame occupied by the learner's face generally vary from learner to learner.

Accordingly, the image processing unit 116 preferably recognizes the learner's face shape and enlarges or reduces the size of the learner's face in the frame so that the learner's pronunciation image can be displayed with a normalized focus and size.
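One conventional way to perform this normalization is to detect the face, crop it, and rescale it to a fixed frame size. The sketch below assumes OpenCV's bundled Haar cascade face detector and an arbitrary 240x240 target size; neither is prescribed by the patent.

```python
# Hedged sketch of the normalization step using OpenCV's Haar face detector.
import cv2

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def normalize_face(frame, target_size=(240, 240)):
    """Detect the largest face in the frame and rescale it to a fixed size."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # caller may prompt the learner to re-record (pop-up menu)
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest detection
    return cv2.resize(frame[y:y + h, x:x + w], target_size)
```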

At this time, either recorded playback or real-time playback may be applied to the reproduction of the learner's pronunciation image.

The learner's voice data input through the audio processing unit 106 is analyzed by the voice analysis module 108 and converted into a waveform through the voice waveform processor 110 so that it can be displayed (S208).

The learner's voice waveform is displayed side by side with the native speaker's voice waveform stored in the storage unit 104 so that the learner can confirm which syllable is stressed.

A voice waveform shows how the amplitude of a sound changes over time, and it is an important factor for confirming accent and rhythm when pronouncing a word.
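As a rough illustration of how the displayed waveform relates to stress, the sketch below reports which syllable segment has the highest peak amplitude, given assumed syllable boundaries expressed in sample indices; boundary detection itself is not described in the patent.

```python
# Illustrative sketch: pick the loudest syllable segment from a waveform.
import numpy as np

def stressed_syllable(samples: np.ndarray, boundaries: list) -> int:
    """Return the index of the syllable segment with the largest peak amplitude."""
    peaks = [np.abs(samples[s:e]).max() if e > s else 0.0
             for s, e in zip(boundaries[:-1], boundaries[1:])]
    return int(np.argmax(peaks))

# Synthetic two-syllable example where the second segment is louder.
sig = np.concatenate([0.3 * np.ones(100), 0.9 * np.ones(100)])
print(stressed_syllable(sig, [0, 100, 200]))  # -> 1
```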

As described above, when the learner selects the desired phonics data and learning starts, the selected phonics data, the data expressed in the learner's language, and the words, sentences, and image data provided to help recall the phonics data are displayed, and the native speaker's pronunciation video is reproduced. The learner's voice and image are then input while the learner pronounces the provided words and sentences, so the learner's pronunciation image can be compared with the native speaker's, and the learner's voice waveform with the native speaker's voice waveform, enabling accurate pronunciation correction and accurate accent to be acquired.

100: phonics learning apparatus 102: input unit
104: storage unit 106: audio processing unit
108: voice analysis module 110: voice waveform processor
112: camera unit 114: display unit
116: image processing unit 118: wired/wireless network unit
120: DMB module unit 122: control unit

Claims (8)

1. A phonics learning apparatus using word, sentence, and image data and the pronunciation data of a native speaker, comprising: a storage unit for storing phonics data, data expressed in the language of the country in which the learner studies the pronunciation of the phonics data, word and sentence data, image data, native-speaker pronunciation images and voice waveform data, and syllable-classification sound source data; an input unit for selecting phonics data and inputting phonics data; a display unit which, when learning of the selected phonics data starts, displays the phonics data, the data expressed in the learner's language, and the words, sentences, and image data provided to help the learner recall the phonics data; an audio processing unit for receiving the learner's voice when the learner pronounces the words and sentences provided to recall the displayed phonics data; a voice analysis module for analyzing the voice input through the audio processing unit, converting it into a waveform, and comparing it with the native speaker's voice waveform to analyze their similarity; a camera unit for collecting images of the learner's facial muscle changes, lip shape, and tooth and tongue positions while the learner pronounces the words and sentences; an image processing unit for normalizing the focus and size of the images input from the camera unit; and a control unit which displays the native speaker's pronunciation images stored in the storage unit together with the learner's images normalized through the image processing unit, displays the learner's voice waveform processed by the voice analysis module and the voice waveform processor alongside the stored native speaker's voice waveform, compares their similarity, and controls the display unit to display "good" when the similarity is 50% or more and "bad" when it is less than 50%.
2. The apparatus of claim 1, further comprising a wired/wireless network unit for allowing the control unit to display, separated into phonemes, any word or sentence in an Internet web browser that matches a word or sentence stored in the storage unit.
3. The apparatus of claim 1, further comprising a DMB module for displaying, separated into phonemes, any word or sentence in a DMB English broadcast that matches a word or sentence stored in the storage unit.
4. The apparatus of claim 1, wherein the screen of the display unit is divided into Home, Dic, and Option tabs.
5. The apparatus of claim 4, wherein the Home tab sequentially displays the phonics data selected by the learner, and displays the data expressed in the language of the country in which the learner studies the pronunciation of the displayed phonics data, the word and sentence data related to the displayed phonics data, the image data, the native speaker's pronunciation data, the native speaker's lip shape and tooth and tongue positions when pronouncing, the learner's face or lip shape and tooth and tongue positions when pronouncing, the native speaker's and the learner's voice waveforms, and the syllable-classification sound source data.
6. The apparatus of claim 4, wherein the Option tab comprises selection of whether to display image data that helps recall the phonics data, whether alphabet pronunciation and phonetic-symbol pronunciation are played, whether syllable-by-syllable colors are displayed, whether the syllable-classification sound source data is played, whether accent marks are displayed, whether phonetic symbols are displayed, the gender of the voice output, the country of the pronunciation output, and whether the pronunciation correction test mode is enabled.
7. The apparatus of claim 1, wherein, in phoneme, word memorization, and syllable stress tests, the control unit controls the display unit to present a question using any one of text, voice output, and image data and to receive the answer in either multiple-choice or short-answer form.
8. A phonics learning method using word, sentence, and image data and the pronunciation data of a native speaker, comprising: receiving, from a learner, a selection of the phonics data to be learned; displaying the selected phonics data, the data expressed in the language of the country in which the learner studies its pronunciation, and the words, sentences, and image data provided to help the learner recall the phonics data; displaying the native speaker's pronunciation image for the provided words and sentences; applying a different color to each syllable and outputting the syllable-classification sound source data during the native pronunciation of the provided words and sentences; receiving the learner's pronunciation of the provided words and sentences as voice and image; displaying and reproducing the native speaker's pronunciation data and the learner's pronunciation data for the words and sentences; displaying a comparison screen of the native speaker's voice waveform and the learner's voice waveform; and displaying "good" when the similarity between the native speaker's voice waveform and the learner's voice waveform is 50% or more, and "bad" when it is less than 50%.
KR1020130000022A 2013-01-01 2013-01-01 Apparatus and method for learning phonics by using native speaker's pronunciation data and word and sentence and image data KR20140087956A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020130000022A KR20140087956A (en) 2013-01-01 2013-01-01 Apparatus and method for learning phonics by using native speaker's pronunciation data and word and sentence and image data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020130000022A KR20140087956A (en) 2013-01-01 2013-01-01 Apparatus and method for learning phonics by using native speaker's pronunciation data and word and sentence and image data

Publications (1)

Publication Number Publication Date
KR20140087956A true KR20140087956A (en) 2014-07-09

Family

ID=51736783

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020130000022A KR20140087956A (en) 2013-01-01 2013-01-01 Apparatus and method for learning phonics by using native speaker's pronunciation data and word and sentence and image data

Country Status (1)

Country Link
KR (1) KR20140087956A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101511527B1 (en) * 2014-10-16 2015-04-13 서창원 Apparatus for studying korean language using cued speech and method thereof
KR101697237B1 (en) * 2016-04-22 2017-01-17 (재) 원암문화재단 Apparatus and method for learning the korean pronunciation of the chinese words
KR102019613B1 (en) * 2018-12-13 2019-09-06 김대호 Method for learning and practicing pronunciation based on tongue movement
KR102165317B1 (en) * 2019-08-20 2020-10-13 정연서 Method for providing phonics learning service enhancing comprehension of untaught word based on rule and combination

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination