KR100995847B1 - Language training method and system based sound analysis on internet - Google Patents


Info

Publication number
KR100995847B1
Authority
KR
South Korea
Prior art keywords
learner
learning
sound
language learning
page
Prior art date
Application number
KR20080027328A
Other languages
Korean (ko)
Other versions
KR20090102088A (en)
Inventor
이기원
Original Assignee
(주)잉큐영어교실
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)잉큐영어교실
Priority to KR20080027328A
Priority to PCT/KR2009/001394
Publication of KR20090102088A
Application granted
Publication of KR100995847B1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/06 Foreign languages

Abstract

The present invention relates to an Internet-based language learning method and system using sound analysis. In the method of the present invention, a learner computer and a language learning service system are connected over the Internet to provide language learning to a learner. When the learner accesses and logs in to the language learning service system using the learner computer, the system provides a learner page. When the learner selects learning on the learner page screen, the system provides a learning page according to the learner's previous progress. After the learner studies the learning page and inputs an answer, the system evaluates the answer against the correct answer and provides the next learning page only when the result is greater than or equal to a predetermined threshold. When the learner selects recording on the learning page, the system activates a time-limit mark for a predetermined time and records the learner's voice within that limit. The system then performs sound analysis of the recorded voice, stores the analysis result in a database, and provides the result to the learner.

Language Learning, Sound Analysis, Pronunciation, Rhythm, Sound Source, Time Limit, Recording

Description

{LANGUAGE TRAINING METHOD AND SYSTEM BASED SOUND ANALYSIS ON INTERNET}

The present invention relates to a computer-based online language learning method and system, and more particularly to a sound-analysis-based language learning method and system on the Internet that analyzes the learner's sound so that the learner can learn systematically and acquire the language correctly.

Recently, with the development of transportation and communication, business activities and individuals' daily lives have become globalized, and the importance of foreign-language education has been greatly emphasized. Conventional language education has been conducted through academies, but as the Internet has come into wide use, language learning over the Internet is in the spotlight.

However, conventional language learning over the Internet attempts to teach a language through poor content and a simple repetition function. It cannot systematically train the learner, who is the most important factor in language acquisition, and it offers no scientific analysis of the learning: it merely provides video of lessons previously given at a language school, or has the learner repeat after a native speaker. It therefore fails to satisfy the learner's needs.

In addition, for Koreans accustomed to the Korean language, not only the pronunciation and rhythm of English sounds but also the source of the most basic sounds differs significantly from Korean (Korean sounds originate from the throat, while English sounds originate from the abdomen). Without correcting this, no matter how much one studies English, it is impossible to become good at listening to and speaking it.

The present invention has been proposed to solve the above problems. An object of the present invention is to provide a sound-analysis-based language learning method and system on the Internet that trains the learner through scientific analysis of the learner's sound and of learning results using a variety of learning functions, produces English sounds close to those of a native speaker, and keeps the learner continuously motivated to learn.

Another object of the present invention is to provide a sound-analysis-based language learning method and system on the Internet that limits the recording time to improve the learner's speaking practice, and that accurately analyzes the learner's recorded data and evaluates it by specific analysis items.

To achieve the above object, the method of the present invention is a language learning method in which a learner computer and a language learning service system are connected over the Internet to provide language learning to a learner, comprising: when the learner accesses and logs in, the language learning service system providing a learner page; when the learner selects learning on the learner page screen, the system providing a learning page according to the learner's previous progress; when the learner studies the learning page and inputs an answer, the system evaluating the answer against the correct answer and providing the next learning page only when the learning result is greater than or equal to a predetermined reference value; when the learner selects recording on the learning page, the system activating a time-limit mark for a predetermined time and recording the learner's voice within the limited time; the system analyzing the sound of the recorded learner's voice; and the system storing the sound analysis result in a database and providing the analysis result to the learner.

Recording the learner's voice comprises: selecting a recording button on the learning page screen; displaying a time-limit mark for a predetermined time and counting the elapsed time when the recording button is selected; inputting and recording the learner's voice once the elapsed-time display of the time-limit mark starts; terminating the elapsed-time display and stopping the recording when the predetermined time elapses; and transmitting the recorded voice data to the sound analysis server when an upload button is selected. In recording the learner's voice, the language learning service system provides a voice recording system screen for time-limited recording, and when upload is selected, the recorded voice data is transmitted to the sound analysis server.

The sound analysis of the learner's voice may comprise: a pronunciation analysis step of analyzing pronunciation by comparing the learner's voice data with standard sound data; a rhythm analysis step of analyzing rhythm by comparing the learner's voice data with the rhythm of the standard sound data; and a sound source analysis step of determining whether the source of the sound is the abdomen or the vocal cords by comparing the learner's voice data with a predetermined reference value.

The rhythm analysis step comprises: analyzing the standard sound waveform, finding the first non-silent point and setting it as a start point if none of the next three points is silent, and determining as the end point of a peak a point where three or more silent points are consecutive and the start point and end point are at least 10 pixels apart; analyzing the learner's waveform in the same way to find its start points and the end points of its peaks; checking whether the start points of the standard sound waveform and the learner's waveform coincide, and if so, calculating the ratio of the waveform areas of the standard sound and the learner, and, for an unvoiced part, comparing the end points of the standard sound and the learner's waveform to check whether unvoiced-sound processing is performed; comparing the number of peaks to calculate a matching rate over a predetermined section, and obtaining indexes while inserting the peaks into a hash table to find accent sections; finding a comparison start point and comparison end point in the hash table and calculating the ratio of matching accents; and calculating a score by averaging the area ratio of the standard sound waveform to the learner's waveform, the matching rate of the predetermined section, and the accent ratio.

The pronunciation analysis step may comprise: converting the original sentence file into an XML file so that the speech recognition engine can recognize it; creating an option table by inputting specific words and phrases to recognize words that are pronounced similarly due to linking sounds; loading the learner's recorded sentences, separating the sentences recognized by the speech recognition engine into words, and keeping only words with two or more letters; extracting the number of recognized words; removing punctuation and duplicate words from the extracted words; obtaining indexes while inserting the original sentence into a hash table; obtaining indexes while inserting the recognized sentence into another hash table; comparing the indexes of the hash tables while comparing the original sentence with the recognized sentence in sequence; and calculating a pronunciation analysis score according to the degree of word matching resulting from the index comparison.

The sound source analysis step may comprise: reading the learner's voice data and removing noise; preprocessing the learner's voice data by applying an FFT for frequency analysis; and analyzing the frequency characteristics of the FFT-transformed data to determine the source of the sound from the ratio of a specific frequency band.
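The sound source step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: a naive DFT stands in for the FFT, noise removal is omitted, and the 300 Hz band edge, the 0.6 energy-ratio threshold, and all function names are assumptions chosen for the example.

```python
import math

def dft_magnitudes(samples):
    """Naive DFT magnitude spectrum (O(n^2)); adequate for short frames."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

def classify_source(samples, rate, low_cut=300.0, low_ratio_threshold=0.6):
    """Guess 'abdomen' vs 'vocal cords' from the share of spectral energy
    below low_cut Hz. The band edge and threshold are illustrative values,
    not taken from the patent."""
    mags = dft_magnitudes(samples)
    n = len(samples)
    total = sum(m * m for m in mags) or 1.0
    low = sum(m * m for k, m in enumerate(mags) if k * rate / n < low_cut)
    return "abdomen" if low / total >= low_ratio_threshold else "vocal cords"
```

For example, a 100 Hz tone at an 8 kHz sample rate is classified as abdominal, while a 1 kHz tone is attributed to the vocal cords.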

Also, to achieve the above object, the system of the present invention is a language learning service system in which a learner computer and the language learning service system are connected over the Internet to provide language learning to a learner. The learner computer downloads and executes a client module for language learning from the language learning service system, displays the learning page provided by the system, displays a time-limit mark on the screen when the learner selects recording, receives and stores the learner's voice data, transmits the recorded voice data to the system when the learner selects upload, and receives the sound analysis result from the system and displays it on the screen. The language learning service system comprises: a content database storing learning content; a member management database for managing learners registered as members; a sound analysis server that receives the learner's voice data, performs pronunciation analysis, rhythm analysis, and sound source analysis, and stores the results in the corresponding learner's area of the member management database; and a web server that provides the learning pages stored in the content database to the learner computer connected through the Internet, provides the corresponding learning function according to the learner's operation transmitted from the learner computer, passes the learner's voice data to the sound analysis server with a request for sound analysis when such data is received, and receives the analysis result.

The present invention enables the learning manager not only to check what the learner has studied regardless of distance, but also to grasp the learner's learning in detail, thereby enabling guidance suited to the learner. In addition, if a learner enters an incorrect answer while studying a learning page, the learner is made to repeat the learning a certain number of times before the correct answer is provided, improving the learner's ability to find the correct answer on his or her own, so that thorough learning is achieved.

Further, according to the present invention, in addition to training that involves viewing and listening to learning pages, the microphone (recording) function and voice recognition allow the learner to practice actually speaking sentences under a time limit, continuing the training until the given content can be recorded within the time limit, and the learning effect is improved by scientifically analyzing the learner's voice data according to sound analysis algorithms.

The technical problems solved by the present invention and its practice will be understood more clearly from the preferred embodiments described below. The following embodiments are merely illustrative of the present invention and are not intended to limit its scope.

FIG. 1 is a schematic diagram showing the overall configuration of a language learning service system according to the present invention.

As shown in FIG. 1, the overall system according to the present invention comprises the language learning service system 130, the learner's computer 110 that connects to the language learning service system 130 through the Internet 102 to perform language learning, and the computer 120 of the learning manager registered with the language learning service system 130 to guide and manage the learning process of the learners in his or her charge.

Referring to FIG. 1, the language learning service system 130 comprises a content database 134 storing learning content, a member management database 136 for managing learners subscribed as members, a sound analysis server 138 that receives the learner's voice data, performs pronunciation analysis, rhythm analysis, and sound source analysis, and stores the results in the corresponding learner's area of the member management database 136, and a web server 132 that provides the learning pages stored in the content database 134 to the learner's computer 110 connected through the Internet 102, provides the corresponding learning function according to the learner's operation delivered from the learner's computer 110, passes the learner's voice data to the sound analysis server 138 with a request for sound analysis when such data is received, and receives the analysis result and delivers it to the learner's computer 110. The sound analysis server 138 analyzes pronunciation by comparing the learner's voice data with standard sound data, analyzes rhythm by comparing the learner's voice data with the rhythm of the standard sound data, and analyzes the sound source by comparing the learner's voice data with a predetermined reference value to judge whether the source of the sound is the abdomen or the vocal cords.

The learner's computer 110 downloads and executes a client module for language learning from the language learning service system 130, displays the learning page provided by the system, displays the time-limit mark on the screen when the learner selects recording, receives and stores the learner's voice data, transmits the recorded voice data to the language learning service system 130 when the learner selects upload, and receives the sound analysis result from the system and displays it on the screen.

The learning manager or teacher connects to the language learning service system 130 through the Internet 102 using the computer 120 and guides or manages the learning process using the learner's learning information and records.

When accessing and logging in to the language learning service system 130 according to the present invention, one can log in either in a learner mode for carrying out language learning or in a learning manager mode for managing and guiding a learner's learning process; the two modes differ in the screens provided and the areas that can be accessed.

FIG. 2 is a flowchart illustrating the manager-side processing procedure in the language learning method according to the present invention.

Referring to FIG. 2, when a registered learning manager logs in to the system 130 providing the language learning service according to the present invention, a manager basic screen is provided (201, 202). The manager basic screen offers a login log, a lesson log, a member list, and a contents view.

When the learning manager selects the login log, learner login information is displayed (203, 204). The learner login information may display an ID, login time, logout time, and so on, so that the learner's log information can be checked.

When the learning manager selects the lesson log, the learner's lesson log information is displayed (205, 206). The lesson log displays the learner's name and ID, the learning process (progress), and the learning time so that the learner's study can be grasped. When the learning manager selects the contents view, the corresponding content is displayed (207, 208).

When the learning manager selects the member list, the member list is displayed (209, 210). The member list displays the sequence number, ID, name, study progress/status, contact information, registration date, modification, deletion, and comments. When a specific member is selected from the member list, that member's learning information screen is displayed (211, 212).

When the learning manager selects 'view by learning area' on the member's learning information screen, grades for each learning area of the corresponding learner are displayed as shown in FIG. 10 (213, 214), and when 'view by page' is selected, the learner's ability at each level is displayed as shown in FIG. 11 (215, 216). Referring to FIG. 10, the learning-area view screen displays the learner's learning areas (writing, pronunciation, reading, listening, etc.) and a grade graph for each area. Referring to FIG. 11, the page view screen displays the level, an ability evaluation graph, and the average score.

Then, when the learning manager selects 'grades by page' on the member's learning information screen, the grades-by-page screen is displayed as shown in FIG. 12 (217, 218). Referring to FIG. 12, the grades-by-page screen displays the learned pages together with each page's ability evaluation graph, average score, detail button, and sound preview. Selecting a page displays the corresponding learning page (219, 220). If the detail button is selected on the grades-by-page screen, the number of attempts, scores, answers entered by the learner, and so on are displayed as shown in FIG. 13, and if the play button of the sound preview is selected, the pronunciation or voice uploaded by the learner can be heard repeatedly (221, 222). Accordingly, by grasping which answers the learner entered for which problems and how the learner pronounces, the learning manager can guide and counsel the learning more accurately.

FIG. 3 is a flowchart illustrating the processing procedure when a learner selects 'learning' in the language learning method according to the present invention.

Referring to FIG. 3, when a learner subscribed to the language learning service system 130 of the present invention logs in, a learner basic screen is provided as shown in FIG. 14 (301, 302). At least a 'learn' button and a 'learning information' button are provided on the learner basic screen.

When the learner selects 'learn', a learning page screen is provided as shown in FIG. 15 so that learning can proceed (303, 304). The learning page shown in FIG. 15 is one example of the learning pages stored in the content DB; it displays a help page, the learning content picture, an answer input area, a microphone button, a play button, an upload button, and a previous-page button (<). In FIG. 15 a button (>) for moving to the next page is also shown, but in learner mode this button is displayed only when the learning of the corresponding page has succeeded and all answers are correct. After studying the learning page screen as described above, the learner inputs an answer (305).

After entering the answer, when the learner clicks the 'view results' button, the entered answer is compared with the pre-registered correct answer. If there is an incorrect answer, the number of attempts is checked against a predetermined count (six in the embodiment of the present invention): if it is within six, re-learning is requested, and once it reaches six, the correct answer is shown and the page is learned again (306-312). Thus, in the present invention, when a learner inputs an incorrect answer, the learner is guided to solve the problem without immediately being given the correct answer; if incorrect answers appear three times, the number of incorrect answers is indicated, and only after six attempts is the correct answer shown and the page re-learned, so that learners come to solve problems by themselves.
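The answer-checking flow described above can be sketched as follows. The function name and the returned labels are illustrative; only the three-try hint and six-try reveal thresholds come from the description.

```python
def grade(answer, correct, attempt):
    """Decide the next step after a submitted answer.
    attempt: how many answers (including this one) the learner has
    submitted for this page. Labels are illustrative, not from the patent."""
    if answer == correct:
        return "advance"            # the '>' button appears; next page unlocked
    if attempt >= 6:
        return "show_answer"        # reveal the correct answer, then re-learn
    if attempt >= 3:
        return "show_wrong_count"   # tell the learner how many tries were wrong
    return "retry"                  # request re-learning without any hint
```

A correct answer on any attempt unlocks the next page; a wrong sixth attempt triggers the answer reveal.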

In addition, when the learner enters the correct answer, the go button (>) is displayed on the screen so that the learner can move to the next learning page; if the previous learning is not completed, moving to the next learning page is impossible, which makes effective learning possible (313, 314). This feature of the present invention is a learning method arrived at through scientific research and analysis of language learning, and provides the advantage of increasing the learner's learning effect.

FIG. 4 is a flowchart illustrating the processing procedure when a learner selects 'learning information' in the language learning method according to the present invention.

When the learner logs in and selects 'learning information' on the learner basic screen, the member's learning information screen is displayed, with a report card button (315-1), study time button (315-2), study contents button (315-3), and learning counseling button (315-4) (315). When the learner selects the report card on his or her learning information screen, a learning grade screen including 'view by learning area', 'view by page', and 'grades by page' buttons is displayed.

If 'view by learning area' is selected, the learner's grades are displayed by learning area as shown in FIG. 10, and if 'view by page' is selected, the learner's ability at each level is displayed as shown in FIG. 11 (316-319).

Next, when the learner selects 'grades by page' on his or her learning information screen, information for each page is displayed as shown in FIG. 12. Selecting a page on this screen displays the corresponding learning page for review, and selecting the detail button displays the number of attempts, scores, answers entered, and so on, as shown in FIG. 13. Accordingly, learners can study more accurately by finding out which answers they entered for which problems (320-325).

Meanwhile, when the learner selects the study time button 315-2, a learning-time calendar is provided; when the study contents button 315-3 is selected, a study contents replay screen is provided as shown in FIG. 19; and selecting the learning counseling button 315-4 provides a bulletin board for learning counseling (326-328). Clicking 'view' on the study contents replay screen shows the corresponding content again.

FIG. 5 is a flowchart illustrating an embodiment of a language learning method that applies a time limit to learner voice recording according to the present invention.

When the learner selects 'learn' on the screen of the computer 110, the language learning service system 130 displays a learning page (for example, FIG. 15) so that learning proceeds according to the learner's progress, recalling the previous learning contents (401). The learning pages are stored in the learning content database 134, and the entire learning process is divided into levels, each composed of a plurality of learning pages. In addition, to arouse the learner's interest, learning screens in the form of various quizzes and games may be provided after a certain number of learning pages in each learning process.

When the learner clicks on the content picture in the learning page, the native speaker's pronunciation of words or phrases related to the picture is output (402, 403).

The learner may select or input an answer on the learning page (404, 405). The answer input method may be implemented in various ways, such as typing a word on the keyboard or using direction keys set according to the learning content. When the learner clicks on the dictionary in the learning screen, Korean words or phrases for the corresponding content are displayed (406, 407).

When the learner clicks the microphone button, the elapsed-time display of the time-limit mark begins to operate, and within this time limit the learner speaks the sentence he or she has heard or seen into the microphone; when the time limit expires, the elapsed-time display of the time-limit mark stops and the recording ends. In the embodiment of the present invention, when one sentence consists of approximately five words, the time limit for one sentence is about 1.5 seconds, and the time limit for four sentences is about 10 seconds.

When upload is selected, the recorded learner voice data is transmitted to the language learning service system 130 (408-414). Because the learner is given a time limit for recording, the learning effect is improved, and the time limit can be adjusted by the system operator as needed. The learning manager 120 may listen to the learner's recorded voice and guide the learning more accurately.
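The time-limited recording loop can be sketched as follows. This is a simplified stand-in that assumes chunked audio capture with a countdown callback for the time-limit mark; read_chunk, chunk_seconds, and on_tick are illustrative names, not from the patent.

```python
import math

def record_with_limit(read_chunk, limit_seconds, chunk_seconds=0.1, on_tick=None):
    """Capture audio chunks until the time limit elapses, reporting the
    remaining time through on_tick (a stand-in for the on-screen time-limit
    mark). read_chunk() returns the next chunk of audio data."""
    n_chunks = math.ceil(limit_seconds / chunk_seconds)
    recorded = []
    for i in range(n_chunks):
        if on_tick:
            on_tick(limit_seconds - i * chunk_seconds)  # update the countdown
        recorded.append(read_chunk())                   # capture one chunk
    return recorded
```

Counting whole chunks rather than accumulating float durations keeps the stop condition exact; the recorded chunks would then be concatenated and uploaded to the sound analysis server.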

When the learner clicks the printer button in the learning screen, a writing exercise book corresponding to the current learning page is printed so that the learner can write directly in it (415, 416). Thus, according to the present invention, the learner can actually write out the learning content offline, adding direct writing practice to simply listening.

FIG. 6 is a schematic diagram showing the overall procedure of the language learning method using sound analysis according to the present invention.

Referring to FIG. 6, when a learner logs in to the web server 132 of the language learning service system 130 through the computer 110, a learning page is provided; after the learner studies the learning page, the voice recording system is started and a voice recording system screen is provided as shown in FIG. 17 (S1 to S4).

When the learner records through the microphone function on the voice recording system screen and uploads the voice data, the recorded voice data is transmitted to the sound analysis server 138 and evaluated (S5, S7).

The learner's voice data is automatically analyzed by the sound analysis server 138. In the exemplary embodiment of the present invention, the analysis is divided into pronunciation analysis, rhythm analysis, and sound source analysis; the pronunciation analysis may include accent analysis or intonation analysis. The sound analysis server 138 analyzes pronunciation by comparing the learner's voice data with standard sound data, analyzes rhythm by comparing the learner's voice data with the rhythm of the standard sound data, and analyzes the sound source by comparing the learner's voice data with a predetermined reference value to determine whether the source of the sound is the abdomen or the vocal cords, thereby inducing abdominal vocalization.

Meanwhile, in some cases the learner's recorded data is transmitted to the learning manager 120, who listens to the recording of the learner 110, analyzes the sound manually, and adds the analysis result to the sound analysis server 138 (S6, S8 to S10). The supplementary analysis by the learning manager may include consonant/vowel analysis and linking-sound analysis, and may be added as a technology complementary to the automatic analysis.

The analysis results from the sound analysis server 138 are comprehensively evaluated and then processed into spectrum, graph, or graphic form so that the learner can understand them intuitively, and are displayed on the voice recording system screen as shown in FIG. 18 (S11 to S13). Referring to FIG. 18, the learner's voice spectrum is displayed on the left side of the screen, and the evaluation result is displayed graphically on the upper right.

The learner's evaluation result is divided into pronunciation analysis, rhythm analysis, and sound source analysis in the per-page evaluation result area and displayed as scores. As shown in FIG. 20, when the learner clicks the play button in the per-page results, the recorded voice can be heard again (S14).

As described above, according to the present invention, the learner's voice is analyzed in detail, divided into pronunciation analysis, rhythm analysis, and sound source analysis, so that the learner can accurately recognize his or her shortcomings, and the learning manager can scientifically grasp the weaknesses of the students in his or her charge to ensure accurate pronunciation.

FIG. 7 is a flowchart illustrating the detailed procedure of the rhythm analysis in sound analysis according to the present invention.

Referring to FIG. 7, first, the standard sound waveform is loaded into a memory buffer (S701). The standard sound waveform is analyzed to find the first non-silent point, and if none of the next three points is silent, it is determined as the start point (S702). In the standard sound waveform, a point where three or more silent points are consecutive and the start point and end point are at least 10 pixels apart is determined as the end point of a peak (S703).

Then, the learner's sound waveform is loaded into the memory buffer (S704). The learner's waveform is analyzed to find the first non-silent point, and if none of the next three points is silent, it is set as the start point (S705). In the learner's waveform, a point where three or more silent points are consecutive and the start point and end point are at least 10 pixels apart is determined as the end point of a peak (S706).

Next, it is checked whether the start points of the standard sound waveform and the learner's waveform coincide; if they do, the ratio of the standard sound waveform area to the learner's waveform area is calculated, and for unvoiced portions, the end points of the standard sound and the learner's waveform are compared to check whether unvoiced-sound processing applies (S707 to S709).

The number of peaks is compared to calculate a matching rate for a predetermined section, and an index is obtained while the peaks are put into a hash table in order to find accent sections (S710 and S711). The comparison start point and comparison end point are found in the hash table to obtain the ratio of matching accents (S712). The score is then calculated by averaging the waveform area ratio between the standard sound and the learner, the matching rate of the predetermined section, and the accent matching ratio (S713).
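The peak detection and scoring steps above (S701 to S713) can be sketched as follows. This is a minimal illustration, assuming mono waveforms given as lists of integer samples and a hypothetical silence threshold, and treating the flowchart's pixel widths as sample counts; the function and parameter names are not from the patent.

```python
def find_start(wave, silence=100):
    """First non-silent point whose next three points are also non-silent (S702/S705)."""
    for i in range(len(wave) - 3):
        if all(abs(wave[i + j]) > silence for j in (0, 1, 2, 3)):
            return i
    return None

def find_peaks(wave, start, silence=100, min_width=10):
    """Peaks are ended by three or more consecutive silences and kept only
    if start and end points are at least min_width samples apart (S703/S706)."""
    peaks, peak_start, quiet = [], None, 0
    for i in range(start, len(wave)):
        if abs(wave[i]) <= silence:
            quiet += 1
            if quiet == 3 and peak_start is not None:
                end = i - quiet + 1
                if end - peak_start >= min_width:
                    peaks.append((peak_start, end))
                peak_start = None
        else:
            if peak_start is None:
                peak_start = i
            quiet = 0
    if peak_start is not None and len(wave) - peak_start >= min_width:
        peaks.append((peak_start, len(wave)))
    return peaks

def rhythm_score(standard, learner, silence=100):
    """Average the area ratio, peak matching rate, and accent ratio (S707-S713)."""
    s0, l0 = find_start(standard, silence), find_start(learner, silence)
    if s0 is None or l0 is None:
        return 0.0
    s_peaks = find_peaks(standard, s0, silence)
    l_peaks = find_peaks(learner, l0, silence)
    if not s_peaks or not l_peaks:
        return 0.0
    def area(wave, peaks):
        return sum(abs(x) for a, b in peaks for x in wave[a:b])
    a_s, a_l = area(standard, s_peaks), area(learner, l_peaks)
    area_ratio = min(a_s, a_l) / max(a_s, a_l)                       # S707-S708
    match_rate = min(len(s_peaks), len(l_peaks)) / max(len(s_peaks), len(l_peaks))  # S710
    accents = {i: b - a for i, (a, b) in enumerate(s_peaks)}         # hash table of peak widths (S711)
    hits = sum(1 for i, (a, b) in enumerate(l_peaks)
               if i in accents and min(b - a, accents[i]) / max(b - a, accents[i]) > 0.5)
    accent_ratio = hits / len(accents)                               # S712
    return round(100 * (area_ratio + match_rate + accent_ratio) / 3, 1)  # S713
```

Comparing a waveform against itself yields the maximum score, which is a quick sanity check on the averaging in S713.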

FIG. 8 is a flowchart illustrating a detailed procedure of the pronunciation analysis in the sound analysis according to the present invention.

Referring to FIG. 8, the original sentence file is converted into an XML file so that the speech recognition engine can recognize it (S801). An option table is created by entering up to five specific words or phrases, applying an option for recognizing words that are pronounced similarly because of linking sounds (S802).

Then, the learner's sound data is loaded (S803). The sentences recognized by the speech recognition engine are separated into words, and only words with two or more letters are kept (S804).

Then, the number of recognized words is extracted (S805). Punctuation marks (".", "?", "!") and duplicate words are removed from the extracted words (S806). An index is obtained while the original sentence is put into a hash table (S807).

An index is likewise obtained while the recognized recording sentence is put into another hash table (S808).

The indexes of the hash tables are compared while the original sentence and the recorded sentence are compared in sequence (S809). If a recognized word is found, the sequential index of the recognized sentence is extracted and compared against duplicate sentences (S810 to S812). A match is counted when two or more words, including the immediately following connected word, coincide; the match count is incremented accordingly, and the pronunciation analysis score is calculated (S813 to S815).
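The word-level matching of this pronunciation analysis (S804 to S815) can be sketched roughly as below, using Python dicts as the hash tables. The XML conversion, the option table, and the consecutive-word bonus of S813 are omitted for brevity, and all names are illustrative rather than the patent's.

```python
def prepare(sentence):
    """Split into words, strip punctuation, drop one-letter words
    and duplicates (S804-S806)."""
    words, seen = [], set()
    for w in sentence.lower().split():
        w = w.strip('.?!,"')
        if len(w) >= 2 and w not in seen:
            seen.add(w)
            words.append(w)
    return words

def word_index(words):
    """Dict (hash table) mapping each word to its position (S807-S808)."""
    return {w: pos for pos, w in enumerate(words)}

def pronunciation_score(original, recognized):
    """Score the share of original words found in the recognized sentence (S809-S815)."""
    orig, recog = prepare(original), prepare(recognized)
    recog_idx = word_index(recog)
    matches = sum(1 for w in orig if w in recog_idx)
    return round(100 * matches / max(len(orig), 1))
```

For example, if the recognizer drops one of three scored words, the score falls to roughly two-thirds of the maximum.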

FIG. 9 is a flowchart illustrating a detailed procedure of the sound source analysis in the sound analysis according to the present invention.

Referring to FIG. 9, the learner voice data to be analyzed is loaded (S901). Low-frequency noise and high-frequency noise are removed from the voice data (S902).

For frequency analysis, a fast Fourier transform (FFT) is applied to the learner's voice data as preprocessing (S903).

Frequencies in a specific band are analyzed, and a histogram analysis of the analyzed frequencies determines whether the specific frequency band (for English, approximately the 2500 Hz to 3500 Hz band, whose level is high when the sound originates from the abdomen) exceeds a predetermined ratio (50%) (S904 to S906). If the specific frequency band exceeds that ratio, the source of the learner's voice is determined to be the abdomen; otherwise, it is determined to be the throat (S907). If necessary, the proportion of the 2500 Hz to 3500 Hz band may be converted into a score and displayed.
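A minimal sketch of this band-ratio decision (S903 to S907), assuming the learner's voice is a mono NumPy array sampled at a hypothetical 16 kHz and using the 50% threshold stated above; the noise removal of S902 is omitted and the names are illustrative:

```python
import numpy as np

def sound_source(voice, sr=16000, band=(2500, 3500), threshold=0.5):
    """Classify the voice source as 'abdomen' or 'throat' by the share of
    spectral energy in the 2500-3500 Hz band after an FFT (S903-S907)."""
    spectrum = np.abs(np.fft.rfft(voice))          # FFT preprocessing (S903)
    freqs = np.fft.rfftfreq(len(voice), d=1.0 / sr)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spectrum.sum()
    ratio = float(spectrum[in_band].sum() / total) if total else 0.0  # S904-S906
    label = "abdomen" if ratio >= threshold else "throat"             # S907
    return label, round(ratio, 3)
```

A pure tone inside the band classifies as abdominal, one outside it as throat, which mirrors the decision rule of S907.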

The present invention has been described above with reference to one embodiment shown in the drawings, but those skilled in the art will understand that various modifications and equivalent other embodiments are possible therefrom.

FIG. 1 is a schematic diagram showing the overall configuration of a language learning service system according to the present invention;

FIG. 2 is a flowchart showing a manager-side processing procedure in the language learning method according to the present invention;

FIG. 3 is a flowchart illustrating a processing procedure when a learner selects 'learning' in the language learning method according to the present invention;

FIG. 4 is a flowchart illustrating a processing procedure when a learner selects 'learning information' in the language learning method according to the present invention;

FIG. 5 is a flowchart showing an embodiment of a language learning method applying a time limit to learner voice recording according to the present invention;

FIG. 6 is a flowchart illustrating a sound analysis-based language learning procedure according to the present invention;

FIG. 7 is a flowchart showing a detailed procedure of the rhythm analysis in sound analysis according to the present invention;

FIG. 8 is a flowchart illustrating a detailed procedure of the pronunciation analysis in sound analysis according to the present invention;

FIG. 9 is a flowchart showing a detailed procedure of the sound source analysis in sound analysis according to the present invention;

FIG. 10 is an example of a learning area view screen according to the present invention;

FIG. 11 is an example of a page view screen according to the present invention;

FIG. 12 is an example of a grade-by-page screen according to the present invention;

FIG. 13 is an example of a detail screen according to the present invention;

FIG. 14 is an example of a learner basic screen according to the present invention;

FIG. 15 is an example of a learning page screen according to the present invention;

FIG. 16 is an example of a learning content review screen according to the present invention;

FIG. 17 is an example of a voice recording system screen according to the present invention;

FIG. 18 is an example of a voice recording system screen displaying a sound analysis result according to the present invention;

FIG. 19 is an example of sound analysis result evaluation items according to the present invention;

FIG. 20 is another example of sound analysis result evaluation items according to the present invention.

 DESCRIPTION OF THE REFERENCE NUMERALS

102: Internet 110: Learner Computer

120: learning manager computer 130: language learning service system

132: web server 134: content DB

136: member management DB 138: sound analysis server

Claims (11)

1. A language learning method in which a learner computer and a language learning service system are connected over the Internet to provide language learning to a learner, the method comprising: when the learner accesses and logs in to the language learning service system using the learner computer, the language learning service system providing a learner page; when the learner selects learning on the learner page screen, the language learning service system providing a learning page according to the learner's previous learning progress; when the learner studies the learning page and inputs an answer, the language learning service system evaluating the answer against the correct answer and providing the next learning page only when the learning result is greater than or equal to a predetermined reference value; when the learner selects recording on the learning page, the language learning service system operating a time-limit mark for a predetermined time and recording the voice of the learner, who speaks a foreign language from memory or while reading it within the limited time; the language learning service system performing sound analysis on the recorded learner's voice, the sound analysis comprising a rhythm analysis step of analyzing rhythm by comparing the learner's voice data with the rhythm of standard sound data, a pronunciation analysis step of analyzing pronunciation by comparing the learner's voice data with the standard sound data, and a sound source analysis step of analyzing the source of sound by determining against a predetermined reference value whether the source of the learner's voice is the abdomen or the vocal cords; and the language learning service system storing and managing the sound analysis result in a database and providing the analysis result to the learner, wherein the sound source analysis step comprises: importing the learner's voice data and removing noise; preprocessing the learner's voice data by applying an FFT for frequency analysis; and analyzing the frequency characteristics of the FFT-converted data, determining and displaying the source of sound as the abdomen when the ratio of a specific frequency band is above a certain ratio, and determining it to be the throat when the ratio is below the certain ratio, whereby abdominal vocalization can be induced when pronouncing a foreign language.

2. The method of claim 1, wherein the recording of the learner's voice comprises: selecting a recording button on the learning page screen; displaying a time-limit mark for a predetermined time when the recording button is selected and counting the elapsed time; receiving and recording the learner's voice when the time-limit display of the time-limit mark starts; ending the time-lapse display of the time-limit mark and stopping the recording when the predetermined time has elapsed; and transmitting the recorded learner voice data to the sound analysis server when an upload button is selected.

3. The method of claim 1, wherein, in the recording of the learner's voice, the language learning service system provides a voice recording system screen for recording within a limited time and, when upload is selected, transmits the recorded learner voice data to a sound analysis server.

4. (deleted)

5. The method of claim 1, wherein the rhythm analysis step comprises: analyzing the standard sound waveform to find the first non-silent point, setting it as a start point if none of the next three consecutive points is silent, and determining as the end point of a peak a point where three or more consecutive silences occur and the start and end points are at least 10 pixels apart; analyzing the learner's waveform in the same manner to find its start point and determine the end points of its peaks; checking whether the start points of the standard sound waveform and the learner's waveform coincide and, if so, calculating the ratio of the standard sound waveform area to the learner's waveform area, and, for unvoiced portions, comparing the end points of the standard sound and the learner's waveform to check whether unvoiced-sound processing applies; comparing the number of peaks to calculate a matching rate for a predetermined section, and obtaining an index while putting the peaks into a hash table in order to find accent sections; finding a comparison start point and a comparison end point in the hash table and calculating the ratio of matching accents; and calculating a score by averaging the waveform area ratio between the standard sound and the learner, the matching rate of the predetermined section, and the accent matching ratio.

6. The method of claim 1, wherein the pronunciation analysis step comprises: converting the original sentence file into an XML file recognizable by a speech recognition engine; creating an option table by entering specific words and phrases so that words pronounced similarly because of linking sounds can be recognized; loading the learner's recorded sentences, separating the sentences recognized by the speech recognition engine into words, and keeping only words with two or more letters; extracting the number of recognized words; removing punctuation marks and duplicate words from the extracted words; obtaining an index while putting the original sentence into a hash table; obtaining an index while putting the recorded sentence into another hash table; comparing the indexes of the hash tables while comparing the original sentence with the recorded sentence in sequence; and calculating a pronunciation analysis score according to the degree of word matching.

7. (deleted)

8. The method of claim 1, wherein the specific frequency band is 2500 Hz to 3500 Hz.

9. The method of claim 1, wherein: the learning page displays a help unit, a content picture, an answer input unit, a microphone button, an upload button, a previous-page button (<), and a dictionary; when the learner clicks the content picture on the learning page screen, a native speaker's pronunciation of the word or sentence associated with the picture is output; when the learner clicks the dictionary on the learning page screen, the Korean word or sentence for the corresponding content is displayed to enhance the learner's understanding and to train expressing the Korean in English; when the learner clicks the microphone button on the learning page screen and selects the upload button after recording the learner's voice through the microphone, the recorded learner voice data is transmitted to the server; and when the learner selects a printer on the learning page screen, a writing exercise book suited to the current learning page is printed.

10. (deleted)

11. (deleted)
KR20080027328A 2008-03-25 2008-03-25 Language training method and system based sound analysis on internet KR100995847B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR20080027328A KR100995847B1 (en) 2008-03-25 2008-03-25 Language training method and system based sound analysis on internet
PCT/KR2009/001394 WO2009119991A2 (en) 2008-03-25 2009-03-19 Method and system for learning language based on sound analysis on the internet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR20080027328A KR100995847B1 (en) 2008-03-25 2008-03-25 Language training method and system based sound analysis on internet

Publications (2)

Publication Number Publication Date
KR20090102088A KR20090102088A (en) 2009-09-30
KR100995847B1 true KR100995847B1 (en) 2010-11-23

Family

ID=41114439

Family Applications (1)

Application Number Title Priority Date Filing Date
KR20080027328A KR100995847B1 (en) 2008-03-25 2008-03-25 Language training method and system based sound analysis on internet

Country Status (2)

Country Link
KR (1) KR100995847B1 (en)
WO (1) WO2009119991A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101411039B1 (en) * 2012-02-07 2014-07-07 에스케이씨앤씨 주식회사 Method for evaluating pronunciation with speech recognition and electronic device using the same

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101631939B1 (en) * 2009-12-02 2016-06-20 엘지전자 주식회사 Mobile terminal and method for controlling the same
KR101671586B1 (en) * 2014-11-27 2016-11-01 김무현 Method of creating and spreading project audio text using smart phone
KR101681673B1 (en) * 2015-01-09 2016-12-01 이호진 English trainning method and system based on sound classification in internet
KR102105889B1 (en) * 2018-09-10 2020-04-29 신한대학교 산학협력단 Apparatus for Providing Learning Service
KR102129825B1 (en) * 2019-09-17 2020-07-03 (주) 스터디티비 Machine learning based learning service system for improving metacognitive ability
CN112837679A (en) * 2020-12-31 2021-05-25 北京策腾教育科技集团有限公司 Language learning method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100568167B1 (en) * 2000-07-18 2006-04-05 한국과학기술원 Method of foreign language pronunciation speaking test using automatic pronunciation comparison method
KR20040040979A (en) * 2002-11-08 2004-05-13 주식회사 유니북스 Method and System for Providing Language Training Service by Using Telecommunication Network
KR20050061227A (en) * 2003-12-18 2005-06-22 주식회사 와이비엠시사닷컴 Service method of training pronounciation using internet
KR20050062898A (en) * 2003-12-19 2005-06-28 주식회사 언어과학 System and method for combining studying language with searching dictionary
KR100687442B1 (en) * 2006-03-23 2007-02-27 장성옥 A foreign language studying method and system for student-driven type

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101411039B1 (en) * 2012-02-07 2014-07-07 에스케이씨앤씨 주식회사 Method for evaluating pronunciation with speech recognition and electronic device using the same

Also Published As

Publication number Publication date
WO2009119991A4 (en) 2010-03-04
WO2009119991A3 (en) 2009-12-30
KR20090102088A (en) 2009-09-30
WO2009119991A2 (en) 2009-10-01

Similar Documents

Publication Publication Date Title
CN110782921B (en) Voice evaluation method and device, storage medium and electronic device
US7299188B2 (en) Method and apparatus for providing an interactive language tutor
US8392190B2 (en) Systems and methods for assessment of non-native spontaneous speech
Lynch et al. Listening
CN101105939B (en) Sonification guiding method
CN112307742B (en) Session type human-computer interaction spoken language evaluation method, device and storage medium
US20170287356A1 (en) Teaching systems and methods
CN111833853B (en) Voice processing method and device, electronic equipment and computer readable storage medium
KR100995847B1 (en) Language training method and system based sound analysis on internet
CN109074345A (en) Course is automatically generated and presented by digital media content extraction
CN112487139B (en) Text-based automatic question setting method and device and computer equipment
CN101551947A (en) Computer system for assisting spoken language learning
JP2001159865A (en) Method and device for leading interactive language learning
Peabody Methods for pronunciation assessment in computer aided language learning
Athanaselis et al. Making assistive reading tools user friendly: A new platform for Greek dyslexic students empowered by automatic speech recognition
Ahsiah et al. Tajweed checking system to support recitation
CN109191349A (en) A kind of methods of exhibiting and system of English learning content
Van Moere et al. 21. Technology and artificial intelligence in language assessment
Bernstein et al. ARTIFICIAL INTELLIGENCE FORSCORING ORAL READING FLUENCY
CN113205729A (en) Foreign student-oriented speech evaluation method, device and system
Hönig Automatic assessment of prosody in second language learning
US20220230626A1 (en) Creative work systems and methods thereof
Szyszka Pronunciation learning strategies
KR100701270B1 (en) Online Lecture and Evaluation System and Method of Foreign Languages
Filighera et al. Towards A Vocalization Feedback Pipeline for Language Learners

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
N231 Notification of change of applicant
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment

Payment date: 20131108

Year of fee payment: 4

FPAY Annual fee payment

Payment date: 20141029

Year of fee payment: 5

FPAY Annual fee payment

Payment date: 20151116

Year of fee payment: 6

LAPS Lapse due to unpaid annual fee