WO2012046901A1 - Music-based language-learning method, and learning device using same - Google Patents

Music-based language-learning method, and learning device using same Download PDF

Info

Publication number
WO2012046901A1
Authority
WO
WIPO (PCT)
Prior art keywords
learning
speaker
music
content
pronunciation
Prior art date
Application number
PCT/KR2010/007017
Other languages
French (fr)
Korean (ko)
Inventor
박상철
Original Assignee
Park Sang Cheol
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Park Sang Cheol filed Critical Park Sang Cheol
Priority to SG2012090643A priority Critical patent/SG186705A1/en
Priority to CN2010800686571A priority patent/CN103080991A/en
Priority to JP2013531460A priority patent/JP2013541732A/en
Publication of WO2012046901A1 publication Critical patent/WO2012046901A1/en

Links

Images

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00Teaching not covered by other main groups of this subclass
    • G09B19/06Foreign languages
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/06Electrically-operated educational appliances with both visual and audible presentation of the material to be studied

Definitions

  • The present invention relates to a music-based language learning method and a learning apparatus using the same, and more particularly to an audiovisual language learning method in which music consisting of lyrics and accompaniment, and language consisting of text and pronunciation, are matched with a related image or video to create user-customized learning multimedia content that is then played for listening and viewing, and to a learning device using the same.
  • Conventional online foreign-language learning over the Internet records instructors' lectures and delivers the recordings, so learners receive one-way content, which lowers learning efficiency.
  • The existing language learning method likewise does not depart from the offline form of providing voice files of native speakers' pronunciation or video lessons, and is content-oriented, relying on repeated study of the provided materials.
  • Such methods fall short of satisfying learners who prefer customized content they can reconfigure themselves and study anytime, anywhere.
  • The present invention has been made to solve the above problems, and an object thereof is to provide an audiovisual, music-based language learning method, and a learning device using the same, that generate and reproduce customized learning content in which music consisting of lyrics and accompaniment is matched with language consisting of text and pronunciation, together with an associated image or video.
  • Another object of the present invention is to induce continuous interest in language learning by harnessing the way music is unconsciously absorbed through rhythm so that language is learned repeatedly, and, by providing combinations of related images and videos so that the learner does not become bored, to improve the vocabulary ability that is the basis of language acquisition through its connection with songs.
  • A music-based language learning method for achieving the above object comprises: designating pre-stored words/vocabulary or sentences and accompaniment music through the input unit of a language learning apparatus; generating and storing music-based customized learning content by matching the designated accompaniment and words/vocabulary or sentences with text, native-speaker pronunciation, and a related image or video; and reproducing the customized learning content in the control unit, outputting the text and the related image or video through the display unit, and outputting the native speaker's pronunciation and the accompaniment through the speaker.
  • According to another aspect, the music-based language learning method comprises: outputting a list of contents, each content being learning-music content in which a plurality of words/vocabulary combined by part of speech or semantic unit suitable for the learning level are made into songs with accompaniment; and reproducing, in the control unit, the selected content differently for each learning function, outputting text and related images or videos through the display unit, and outputting the native speaker's pronunciation and the accompaniment through the speaker.
  • The music-based language learning apparatus comprises an input unit, a display unit, a speaker, a memory unit, and a controller;
  • the memory unit stores a plurality of words/vocabulary, their meanings, text and pronunciation, accompaniments, and related images or videos; and
  • the controller reproduces the word/vocabulary customized learning content stored in the memory unit, outputting its pronunciation and accompaniment as sound through the speaker and its text and image or video through the display unit.
  • According to the present invention, when the learner selects music and words/vocabulary, customized learning content is created that matches the music, the words/vocabulary, and a related image; using it, the learner can practice pronunciation, memorize words/vocabulary through games, collect the words/vocabulary he or she has designated and learn them to the accompaniment (background music), and learn by singing a song written with the designated words/vocabulary.
  • FIG. 1 is a block diagram of a language learning apparatus according to an embodiment of the present invention.
  • FIG. 2 is a configuration diagram of learning information stored in the learning information storage area of the memory unit shown in FIG. 1;
  • FIG. 3 is a flowchart of a language learning method according to a first embodiment of the present invention.
  • FIG. 4 is a flowchart of a language learning method according to a second embodiment of the present invention.
  • FIGS. 7 to 9 are exemplary screens output to the display unit in the playback step of FIG. 3.
  • FIGS. 10 to 19 are exemplary screens output to the display unit for each step of FIG. 4.
  • 18: evaluation unit  MIC: microphone
  • FIG. 1 is a block diagram of a language learning apparatus according to a first embodiment of the present invention.
  • The language learning apparatus 10 includes a control unit 11, an input unit 12, a display unit 13, a voice processing unit 14, a communication unit 15, a memory unit 16, a custom learning generation unit 17, and an evaluation unit 18.
  • the controller 11 controls the overall operation for performing language learning according to the present invention.
  • the memory unit 16 includes a program storage area for storing a program for performing language learning, a temporary storage area for temporarily storing data generated during the program execution, and learning information for storing learning information necessary for the language learning. It consists of a storage area.
  • The learning content data stored in the memory unit 16 may be configured differently depending on how language learning is provided: when the learning apparatus operates independently, the content database is built into the apparatus and is expanded asynchronously (upgraded or updated); when the apparatus has a communication environment with a content database server, the database resides on the server and can be expanded synchronously, with only the needed content downloaded to the learning apparatus and used in real time.
  • A pitch extraction program is stored in the program storage area of the memory unit 16; it extracts the pitch of the native speaker's or learner's pronunciation and displays it as a curved graph.
  • FIGS. 5 and 6 are curve graphs extracted with two different pitch extraction programs for the English words polite, current, severe, and remote, shown to demonstrate that the two different programs draw the same curve graph.
  • The evaluation unit 18 compares the curve graphs of the native speaker and the learner drawn by the pitch extraction program and assigns a score according to their similarity.
  • The pitch extraction program extracts the pitch, volume, and play-time information of the learner's stored pronunciation to generate an utterance curve graph, in which the pitch determines the height of the curve, the volume its thickness, and the play time its length.
  • The evaluation unit 18 calculates a score by evaluating the similarity of the two utterance curve graphs.
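The patent does not specify how curve similarity is converted into a score. A minimal sketch, assuming the curve is represented by pitch samples (height), volume samples (thickness), and a duration (length), and scoring by mean relative deviation; every function name and the scoring formula here are illustrative, not taken from the source:

```python
def utterance_curve(pitches, volumes, duration):
    """An utterance as a curve: pitch -> height, volume -> thickness,
    play time -> length (the mapping described in the text)."""
    return {"pitch": list(pitches), "volume": list(volumes),
            "duration": float(duration)}

def _resample(values, n):
    # Linearly interpolate the sample list onto n evenly spaced points,
    # so two curves of different lengths can be compared point by point.
    if len(values) == 1:
        return values * n
    out = []
    for i in range(n):
        pos = i * (len(values) - 1) / (n - 1)
        lo = int(pos)
        hi = min(lo + 1, len(values) - 1)
        frac = pos - lo
        out.append(values[lo] * (1 - frac) + values[hi] * frac)
    return out

def similarity_score(native, learner, max_score=100.0):
    """Score the learner's curve against the native speaker's
    (assumed metric: mean relative deviation of pitch, volume, duration)."""
    n = len(native["pitch"])
    lp = _resample(learner["pitch"], n)
    lv = _resample(learner["volume"], n)
    def dev(a, b):
        return sum(abs(x - y) / max(abs(x), 1e-9) for x, y in zip(a, b)) / len(a)
    d = (dev(native["pitch"], lp) + dev(native["volume"], lv)
         + abs(native["duration"] - learner["duration"])
         / max(native["duration"], 1e-9)) / 3.0
    return max(0.0, max_score * (1.0 - d))
```

An identical pronunciation scores the maximum; any deviation in pitch, volume, or length lowers the score.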
  • When the learner studies words included in the content and then plays a test game on those words, the evaluation unit 18 calculates a score from the game result, that is, the numbers of correct and incorrect answers.
  • The input unit 12 includes, as means for controlling the operation of the language learning apparatus 10, a plurality of keys, a mouse, or a virtual-key input system on a screen supporting a touch interface, and outputs the data produced when the learner presses a key or manipulates the mouse or touch interface to the control unit 11.
  • The display unit 13 is a liquid crystal display (LCD), a screen supporting a touch interface, or a CRT monitor.
  • the display unit 13 displays various language learning information as text, images, and videos under the control of the controller 11.
  • The voice processor 14 is connected to the microphone MIC and the speaker SP; it processes analog voice input through the microphone MIC into PCM, EVRC, or MP3 format and outputs it to the controller 11, which stores it in the recording information 16e of the memory unit 16.
  • The voice processor 14 also converts voice data (pronunciations) or song files stored in the memory unit 16 and outputs them as sound through the speaker SP.
  • The communication unit 15 connects to the Internet under the control of the control unit 11 and receives the learning information provided by a learning information server (not shown).
  • Through it, data related to the word e-dictionary 16a, the sentence e-dictionary 16b, the content function information 16c, and the like of FIG. 2 are downloaded and updated.
  • The personalized learning generation unit 17 generates music-based personalized learning content by matching the music and words/vocabulary selected by the learner with relevant images or videos, and stores it in the personalized learning content information 16d of the memory unit 16.
  • The voice data for the words/vocabulary designated by the user can be combined in various ways according to the length of the voice for each word/vocabulary (one-beat type, two-beat type, etc.) and the pitch type (basic type, variation a, variation b, etc.), so that even with the same accompaniment, different music-based customized learning content is generated depending on the lyrics (words/vocabulary).
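The combinations described above can be enumerated mechanically. A sketch, with the slot durations taken from the 1.2 s / 2.4 s examples given later in the text and the type names paraphrased from this paragraph (all identifiers are illustrative):

```python
from itertools import product

# Beat types mapped to slot lengths in seconds (values from the text's examples).
BEAT_TYPES = {"one_beat": 1.2, "two_beat": 2.4}
# Pitch types named in the text: basic type, variation a, variation b.
PITCH_TYPES = ["basic", "variation_a", "variation_b"]

def content_variants(words, accompaniment):
    """Enumerate the distinct customized-content variants a single
    accompaniment can yield for one word list (2 beat x 3 pitch = 6)."""
    variants = []
    for beat, pitch in product(BEAT_TYPES, PITCH_TYPES):
        variants.append({
            "accompaniment": accompaniment,
            "lyrics": list(words),
            "beat_type": beat,
            "slot_seconds": BEAT_TYPES[beat],
            "pitch_type": pitch,
        })
    return variants
```

With the same accompaniment, each distinct beat/pitch combination produces a different piece of music-based content, as the paragraph states.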
  • The language learning apparatus 10 may be implemented as a dedicated terminal for each learning purpose or as a learner's own terminal (a mobile phone including a smartphone, or various portable devices supporting voice and video), and the display unit 13 preferably includes a screen supporting a touch interface so that it also functions as the input unit 12.
  • The language learning apparatus 10 is executed in either a user learning mode, which is the main menu, or a recommended learning mode, which is a quick menu.
  • In the user learning mode, the learner selects words/vocabulary and an accompaniment, and the language can be learned using music-based customized learning content newly created by matching the selection with an image or video.
  • In the recommended learning mode, pre-stored content (finished learning content prepared by the learning information provider, or customized learning content that the learner has generated and stored through the user learning mode) may be output for each learning function to learn the language.
  • FIG. 2 is a configuration diagram of learning information stored in the learning information storage area of the memory unit shown in FIG. 1.
  • As shown, the learning information storage area of the memory unit 16 stores the word e-dictionary 16a, the sentence e-dictionary 16b, the content function information 16c, the customized learning content information 16d, the recording information 16e, and the like.
  • the word e dictionary 16a is a dedicated dictionary of the language learning apparatus 10.
  • A word can be searched through the word e-dictionary 16a, and searched words are stored in a history.
  • the word e dictionary 16a stores a plurality of words, multimedia files of words, and accompaniment music of various genres prepared to match these files.
  • the word e dictionary 16a includes music information 16aa, text information 16ab, voice information 16ac, image information 16ad, and the like.
  • The music information 16aa is information on a number of songs consisting of accompaniment (background music) or melody (hereafter, accompaniment), each configured with a playback length of approximately three minutes, a length familiar to the general music listener.
  • The text information 16ab includes words and their meanings.
  • The voice information 16ac includes voice files of the pronunciation of each word and its meaning, and is stored in correspondence with the words of the text information 16ab.
  • The image information 16ad includes picture files conveying the meaning of each word, and is stored in correspondence with the words of the text information 16ab.
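The word e-dictionary record described above can be sketched as a small data structure. Field and class names are illustrative; the patent only specifies that text, voice, and image information are stored in correspondence with each word, and that searched words go into a history:

```python
from dataclasses import dataclass, field

@dataclass
class WordEntry:
    """One record of the word e-dictionary (16a): the word and meaning
    (text info), plus voice and image files stored in correspondence."""
    word: str
    meaning: str
    voice_file: str   # pronunciation of the word and its meaning
    image_file: str   # picture conveying the word's meaning

@dataclass
class WordEDictionary:
    entries: dict = field(default_factory=dict)
    history: list = field(default_factory=list)  # searched words are kept here

    def add(self, entry: WordEntry) -> None:
        self.entries[entry.word] = entry

    def search(self, word: str):
        """Look up a word; per the text, the search is recorded in history."""
        self.history.append(word)
        return self.entries.get(word)
```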
  • The sentence e-dictionary 16b allows sentences to be searched and listed, and stores a plurality of sentences, categorized into idioms, proverbs, and conversations, together with accompaniment music of various genres prepared to match them.
  • the sentence e dictionary 16b includes music information 16ba, text information 16bb, voice information 16bc, image information 16bd, and the like.
  • The music information 16ba is information on a plurality of songs consisting of accompaniment, each configured with a playback length of approximately three minutes, a length familiar to the general music listener.
  • The text information 16bb includes sentences (idioms, proverbs, conversations) and their translations.
  • The voice information 16bc includes voice files of the pronunciation of each sentence and its translation, and is stored in correspondence with the sentences of the text information 16bb.
  • The image information 16bd includes picture files conveying the meaning of each sentence, and is stored in correspondence with the sentences of the text information 16bb.
  • The content function information 16c stores, in advance and by learning level, the functions the learner selects to learn the language: the video function 16ca, the interval practice function 16cb, the pronunciation practice function 16cc, the multimedia book function 16cd, the wordbook function 16ce, the example sentence function 16cf, and the game function 16cg.
  • The content is learning-music content in which words/vocabulary combined by part of speech or semantic unit are made into songs with accompaniment.
  • The basic form of the song alternates a word/vocabulary item with its meaning, with an interval of 0.5 to 0.7 seconds between outputs.
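The basic alternating form can be laid out as a timing schedule. A sketch, assuming a 0.6 s gap (the text gives the 0.5-0.7 s range) and a 1.2 s slot per item (from the one-beat example elsewhere in the text); both values are configurable assumptions:

```python
def song_schedule(pairs, slot=1.2, gap=0.6):
    """Lay out the basic song form: each word and its meaning alternate,
    separated by a short gap. Returns (start_time, text) events."""
    events, t = [], 0.0
    for word, meaning in pairs:
        events.append((round(t, 2), word))
        t += slot + gap
        events.append((round(t, 2), meaning))
        t += slot + gap
    return events
```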
  • The video function 16ca outputs a video file in which an image representing each word included in the content is matched with typography (a text image having a visual effect).
  • The interval practice function 16cb divides the content into predetermined sections (for example, eight sections) and outputs a pre-recorded native speaker's song (voice data from the content database).
  • One song of about three minutes can thus be learned in small units, with the learner singing along section by section.
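The section division described above is a simple partition of the song's timeline. A sketch, assuming equal-length sections (the text gives only the three-minute song length and the eight-section example):

```python
def split_into_sections(song_seconds=180.0, n_sections=8):
    """Divide one song (about three minutes per the text) into equal
    practice sections, as in the interval-practice function."""
    length = song_seconds / n_sections
    return [(round(i * length, 2), round((i + 1) * length, 2))
            for i in range(n_sections)]
```

Each (start, end) pair is one sing-along unit the learner practices before moving on.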
  • The pronunciation practice function 16cc divides the words/vocabulary contained in the content into predetermined units and then outputs the native speaker's pronunciation (voice data provided by the memory unit) together with the image.
  • Because it follows the rhythm, the learner easily imitates the native speaker's tone, including word stress, and the visual enjoyment of the images conveying the meaning of each word/vocabulary on the display unit 13 lets the learner absorb many words in a short time with interest.
  • The multimedia book function 16cd outputs the images related to the words/vocabulary in the content one cut at a time on the screen of the display unit together with the voice file; the language learning apparatus 10 responds each time the screen is touched by outputting the voice or switching the screen.
  • The wordbook function 16ce allows the headwords of the words/vocabulary included in the content and their corresponding information (etymology, derivatives, synonyms, antonyms, idioms, etc.) to be viewed in the form of an e-book.
  • The example sentence function 16cf outputs sentences made from the headwords of the words/vocabulary included in the content; the learner can study them matched with the native speaker's pronunciation (voice data provided by the memory unit), or collect only the sentences and study those.
  • The game function 16cg simultaneously outputs a correct answer and wrong answers for the language included in the content on the display unit and lets the learner select one; the evaluation unit 18 calculates a score from the numbers of correct and incorrect answers.
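The patent says only that the score follows the counts of correct and incorrect answers, without giving a formula. One plausible sketch is a simple percentage of correct answers (an assumption, not the patented formula):

```python
def game_score(correct, incorrect, max_score=100):
    """Score the test game from the counts of correct and incorrect
    answers (percentage-based; the exact formula is an assumption)."""
    total = correct + incorrect
    if total == 0:
        return 0
    return round(max_score * correct / total)
```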
  • The customized learning content information 16d stores, as multimedia content, the music-based customized learning content generated by the personalized learning generation unit 17 by matching the words/vocabulary or sentences and accompaniment designated by the learner with the associated images.
  • The recording information 16e stores, as voice data, the learner's pronunciation recorded while the learner plays the customized learning content or previously provided learning content and repeats after it.
  • FIG. 3 is a flowchart of a language learning method according to a first embodiment of the present invention.
  • In the mode selection window output to the display unit 13, the mode to learn in is selected through the input unit 12 (S302).
  • The user learning mode is selected to create new content; the recommended learning mode is selected to learn with pre-stored content (finished learning content provided by the learning information provider, or customized learning content that the learner has generated and stored through the user learning mode).
  • Next, the word/vocabulary or sentence is designated: the learner selects the desired words/vocabulary or sentences through the input unit 12 (S304).
  • After the words/vocabulary or sentences are designated, the accompaniment music to be used for the song is designated through the input unit 12 (S306).
  • When the learner selects the word option, the custom learning generation unit 17 generates music-based customized learning content by matching the designated accompaniment and words with the pronunciations and images; when the learner selects the word + meaning option, it matches the designated accompaniment and words with the meanings, pronunciations, and images; in either case the result is stored in the customized learning content information 16d of the memory unit 16 under a list name (S308).
  • Likewise, when the learner selects the sentence option, the custom learning generation unit 17 matches the designated accompaniment and sentences with the pronunciations and images, and when the learner selects the sentence + translation option it matches the sentences with their translations, pronunciations, and images, storing the resulting music-based customized learning content in the customized learning content information 16d of the memory unit 16 under the list name.
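The generation-and-store step (S308) can be sketched as assembling one record per option. Field names, the option strings, and the store layout are all illustrative; the patent specifies only which assets get matched under each option and that the result is saved under a list name:

```python
def generate_custom_content(accompaniment, items, option, list_name, store):
    """Match the designated accompaniment with words/sentences, their
    pronunciations, and images into one customized-content record, then
    store it under the list name (mirrors step S308; a sketch only).

    Each item is a dict with 'text', 'voice', 'image', and, for the
    +meaning / +translation options, a 'meaning' key."""
    tracks = []
    for it in items:
        track = {"text": it["text"],
                 "pronunciation": it["voice"],
                 "image": it["image"]}
        # Only the word+meaning / sentence+translation options carry meanings.
        if option in ("word+meaning", "sentence+translation"):
            track["meaning"] = it["meaning"]
        tracks.append(track)
    content = {"accompaniment": accompaniment, "option": option,
               "tracks": tracks}
    store[list_name] = content
    return content
```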
  • When the customized learning content generated with the word option is played, the display unit 13 displays the image of each word together with its text at, for example, 1.2-second or 2.4-second intervals (depending on whether the word's length is one beat or two), and, as shown in FIGS. 7 to 9, the pronunciation of the word is output through the speaker SP at 1.2-second or 2.4-second intervals in time with the designated accompaniment.
  • When content generated with the word + meaning option is played, the display unit 13 likewise displays the image of each word with its text at, for example, 1.2-second or 2.4-second intervals (depending on whether the word's length is one beat or two), and the word's pronunciation and meaning are output through the speaker SP at, for example, 0.6-second or 1.2-second intervals.
  • When the customized learning content generated with the sentence option is played, the display unit 13 outputs the image of each sentence together with its text, and the sentence's pronunciation is output through the speaker SP in time with the designated accompaniment.
  • The output interval between a given sentence and the next is, for example, the length (time) of that sentence plus 2 seconds.
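The sentence timing above reduces to a running sum. A sketch computing each sentence's start time from the previous sentence's length plus the 2-second pause (the pause value is the example from the text):

```python
def sentence_schedule(sentence_lengths, pause=2.0):
    """Start times when each sentence plays for its own length and the
    next begins after a fixed pause (2 s in the text's example)."""
    starts, t = [], 0.0
    for length in sentence_lengths:
        starts.append(round(t, 2))
        t += length + pause
    return starts
```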
  • Typography (a text image with a visual effect) may also be applied.
  • Pitch and volume information is extracted from the pronunciation of the sentence and reflected in the text: the height of the text rises and falls with the pitch, producing a different curve in the text, and the thickness of the text varies with the degree of volume when output to the display unit 13.
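The pitch-to-height and volume-to-thickness mapping can be sketched as a per-sample style computation. The base size, base weight, and scaling factors below are assumptions; the patent states only the qualitative mapping:

```python
def typography_styles(samples, base_height=24, base_weight=400):
    """Map each (pitch, volume) sample of a pronunciation onto text style:
    higher pitch raises the glyph height, louder volume thickens the
    stroke (scaling factors are illustrative, not from the patent)."""
    if not samples:
        return []
    max_p = max(p for p, _ in samples) or 1.0
    max_v = max(v for _, v in samples) or 1.0
    styles = []
    for pitch, volume in samples:
        styles.append({
            # Scale between 0.5x and 1.5x of the base values.
            "height_px": round(base_height * (0.5 + pitch / max_p), 1),
            "weight": int(base_weight * (0.5 + volume / max_v)),
        })
    return styles
```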
  • When the customized learning content generated with the sentence + translation option is played, the display unit 13 outputs the image of each sentence together with its text, and the sentence's pronunciation and the translation voice are output through the speaker SP in time with the designated accompaniment.
  • When the learner repeats after the content, the learner's pronunciation is input through the microphone MIC and stored as voice data in the recording information 16e of the memory unit 16 via the voice processing unit 14 (S312).
  • The controller 11 may output the recorded learner's pronunciation through the speaker SP together with the native speaker's pronunciation, so that the pronunciation difference can be compared audibly (S314).
  • The pitch extraction program extracts the pitch, volume, and play-time information of the native speaker's and learner's pronunciations to generate utterance curve graphs, in which the pitch is shown as height, the volume as thickness, and the play time as length; these are output to the display unit 13 so that the pronunciation difference between the native speaker and the learner can be compared visually (S316).
  • The evaluation unit 18 calculates a score by comparing the similarity of the native speaker's and learner's utterance curve graphs.
  • FIG. 4 is a flowchart illustrating a language learning method according to a second embodiment of the present invention.
  • After step S302, the display unit 13 outputs a learning level selection window (see FIG. 10), and the learner selects his or her learning level through the input unit 12 (S402).
  • The learning levels may be classified, once the composition of the content database is determined, by learning target (elementary school student, middle school student, high school student, TOEIC/TOEFL) or by degree of learning (beginner, intermediate, advanced).
  • The display unit 13 outputs a plurality of content lists corresponding to the selected learning level, and the learner selects one content through the input unit 12 (S404).
  • This content is multimedia content in which words/vocabulary combined by part of speech or semantic unit are made into songs with accompaniment.
  • The basic form of the song alternates a word/vocabulary item with its meaning, with an interval of 0.5 to 0.7 seconds between outputs.
  • After the content is selected, the display unit 13 outputs a list of learning functions corresponding to the content (see FIG. 11), and the learner selects the function with which to learn the selected content through the input unit 12 (S406).
  • The functions for learning the content include video, section practice, pronunciation practice, multimedia book, wordbook, example sentences, and games.
  • The selected content is then reproduced through the speaker SP and the display unit 13, differently for each learning function (S408).
  • In section practice, the content is divided into predetermined sections (for example, eight sections), and the pre-recorded native speaker's song (voice data from the content database) is output through the speaker SP together with the accompaniment, while an image and text representing each word are output to the display unit 13 (see FIG. 13).
  • The learner listens to and sings along with the native speaker's song; to avoid confusion, the color of the text displayed on the display unit 13 differs before the song starts, while the native speaker sings, and while the learner sings.
  • In pronunciation practice, the content is divided into predetermined units and the native speaker's pronunciation (voice data provided by the memory unit) is output; options such as word, word + meaning, no accompaniment, and screen off can be selected.
  • When word/vocabulary is selected, the display unit 13 outputs the related image of each word/vocabulary together with its text at a predetermined interval (for example, 2.4 seconds), and the accompaniment and the native speaker's pronunciation are output through the speaker SP at the same predetermined interval (for example, 2.4 seconds).
  • If word/vocabulary + meaning is selected, the display unit 13 outputs the related image of each word/vocabulary together with its text at a predetermined interval (for example, 2.4 seconds) (see FIG. 14), and the native speaker's pronunciation and meaning voice files are output through the speaker SP at predetermined intervals (for example, 1.2 seconds) together with the accompaniment.
  • The learner learns (memorizes) words and their meanings by taking in the native speaker's pronunciation and meaning voice files along with the accompaniment.
  • If no accompaniment is selected, the voice files are output through the speaker SP without accompaniment, and the image and text are output to the display unit 13.
  • In the multimedia book function, the related images of the words/vocabulary included in the content are output one by one on the display unit 13, and the native speaker's pronunciation is output through the speaker SP accordingly.
  • A word is output one cut at a time by the learner's selection, or the same word is output repeatedly.
  • Options of the multimedia book function 16cd include native speaker pronunciation, song, mute, information, and a full view.
  • With the native speaker pronunciation option, the native speaker's pronunciation of the word is output through the speaker SP each time the learner turns a cut.
  • With the song option, a song of the corresponding word/vocabulary is output through the speaker SP each time the screen of the display unit is turned by one cut.
  • With the information option, an information window for the corresponding word/vocabulary (etymology, derivatives, synonyms, antonyms, idioms, etc.) is output to the display unit 13 (see FIGS. 15 and 16).
  • With the full view option, screenshots corresponding to the words/vocabulary constituting the content are output to the display unit (see FIG. 17).
  • In the wordbook function, the headwords of the words/vocabulary contained in the selected content and their corresponding information (etymology, derivatives, synonyms, antonyms, idioms, etc.) are output through the display unit 13 and the speaker SP (see FIG. 18).
  • The learner can check only the unknown words/vocabulary, and only the checked headwords and their corresponding information are then output through the display unit 13 and the speaker SP.
  • In the example sentence function, sentences made from the headwords of the words/vocabulary included in the selected content are output through the display unit 13 and the speaker SP.
  • the learner may check only the sentence to be learned and output only the checked sentence and its translation through the display unit 13 and the speaker SP.
  • The sentences can be matched with the accompaniment and the native speaker's pronunciation (voice data provided by the memory unit), and the learner can study by collecting only the sentences.
  • In the game function, the learner clicks what he or she thinks is the correct answer within the time limit, and after the game ends, the evaluation unit 18 calculates a score from the numbers of correct and incorrect answers.
  • The learner repeats the native speaker's pronunciation according to the selected learning function; the learner's pronunciation or meaning is input through the microphone MIC, processed by the voice processor 14, and stored as voice data in the recording information 16e of the memory unit 16 (S410).
  • In pronunciation practice, for example, the accompaniment and the native speaker's pronunciation are output through the speaker SP at a predetermined interval (for example, 2.4 seconds); the learner repeats the pronunciation within the following interval (for example, 1.2 seconds), and it is input through the microphone MIC and stored in the recording information 16e of the memory unit 16.
  • the controller 11 outputs the recorded pronunciation of the learner through the speaker SP together with the native speaker's pronunciation, so that the pronunciation difference can be compared audibly (S412).
  • The pitch extraction program extracts the pitch, volume, and play-time information of the native speaker's and learner's pronunciations to generate utterance curve graphs, in which the pitch is shown as height, the volume as thickness, and the play time as length; these are output to the display unit 13 so that the pronunciation difference between the native speaker and the learner can be compared visually (S414).
  • the evaluation unit 18 compares the similarity of the utterance curve graph of the native speaker and the learner to calculate a score (S416).
  • the content stored in the language learning device 10 consists of learning music content in which a plurality of words/vocabulary combined by part of speech or semantic unit are made into songs with accompaniment; various learning functions can be used to learn (memorize) the words/vocabulary and sentences, with their pronunciation and meaning (definition or translation), as songs.
  • when the learner selects music and words/vocabulary, customized learning content is created by matching that music with the words/vocabulary and related images.
  • the customized learning content is accompanied by the music designated by the learner.
  • the words/vocabulary designated by the learner become the lyrics, and the related images are turned into video to generate multimedia content resembling a music video.
  • the learner practices pronunciation with this content and memorizes words/vocabulary through music and games, as if enjoying a song he or she wrote; by sustaining interest in language learning and avoiding boredom, this improves the vocabulary ability that is the foundation of language learning.
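The game evaluation above derives a score from the counts of correct and incorrect answers, but the patent does not specify the formula. A minimal sketch, assuming a simple proportion-of-answers scheme (the function name and 100-point scale are illustrative choices, not from the source):

```python
def game_score(correct: int, incorrect: int, max_score: int = 100) -> int:
    """Score a test game from the counts of correct and incorrect answers.

    The patent says only that the score follows from the two counts;
    scoring by the fraction of answers that were correct is one
    plausible interpretation.
    """
    total = correct + incorrect
    if total == 0:
        return 0
    return round(max_score * correct / total)

print(game_score(8, 2))  # 8 of 10 answers correct
```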


Abstract

The present invention relates to a music-based language-learning method and to a learning device using same, and more particularly, to an audio-visual language-learning method and to a learning device using same. According to the method, user-customized multimedia learning content is generated and played to be listened to and watched, wherein music, which mainly consists of a song having lyrics and accompaniment, and language, which consists of text and the pronunciation thereof, are matched to a related image or video in the multimedia content. The music-based language-learning method according to the present invention comprises the steps of: selecting a previously stored musical accompaniment and a word/vocabulary or sentence using an input unit of a learning device; generating and storing music-based content for customized learning by matching the musical accompaniment and the word/vocabulary or sentence selected through the input unit to text, pronunciation by a native speaker, and a related image or video using a customized learning content generation unit; and reproducing the customized learning content using a control unit for outputting the text and the related image or video through a display unit and outputting the pronunciation of the native speaker and the musical accompaniment through a speaker.

Description

Music-based language-learning method and learning device using the same
The present invention relates to a music-based language-learning method and a learning device using the same, and more particularly to an audiovisual language-learning method and learning device in which music, centered on songs consisting of lyrics and accompaniment, and language, consisting of text and its pronunciation, are matched with related images or videos to generate user-customized multimedia learning content, which is then played back for viewing and listening.
As modern society has globalized, foreign languages such as English, Japanese, and Chinese characters have come to occupy a very important place in social life.
Accordingly, language education is being conducted from an early age, and various methods for effective language learning are being developed.
For example, in learning a foreign language, the most common method is to register at and attend a language academy and listen to an instructor's lectures, but this is expensive and time-constrained.
Online foreign-language learning over the Internet records an instructor's lectures and delivers the recordings through the Internet, but because learners merely receive the recorded content one-way, learning efficiency suffers.
Recently, as the importance of foreign-language education has grown in step with the global era, various teaching methods have been tried; however, the development of vocabulary ability, the foundation of language acquisition, has largely depended on individual effort, so the effect on learning words and idioms has been insignificant.
In addition, existing language-learning methods have not moved beyond offline-style learning that provides recorded native-speaker pronunciation as audio files or the learning process as video. As content-driven methods aimed at repetitive study of pre-supplied material, they fall short of satisfying learners who prefer to study anytime, anywhere with customized learning content they can reconfigure themselves.
Moreover, learning methods that claim to offer customized content merely rearrange content designed under the existing content-driven approach; they can hardly be considered user-driven learning content in which the user participates in creating his or her own material, and they have not been widely adopted by learners.
The present invention was devised to solve the problems described above. Its object is to provide an audiovisual music-based language-learning method, and a learning device using the same, that generate customized learning content in which music consisting of lyrics and accompaniment, and language consisting of text and pronunciation, are matched with related images or videos, and that play this content back for viewing and listening.
Another object of the present invention is to provide a music-based language-learning method and learning device that graft onto language learning the property of music of being absorbed unconsciously through rhythm and repeated listening and viewing, and that merge in related images and videos, thereby sustaining interest in language learning, avoiding boredom by linking the material like a single song, and improving the vocabulary ability that is the foundation of language acquisition.
A music-based language-learning method according to an embodiment of the present invention for achieving the above objects comprises: designating pre-stored words/vocabulary or sentences and accompaniment music through the input unit of a language learning device;
generating and storing music-based customized learning content in a customized-learning generation unit by matching the words/vocabulary or sentences and the accompaniment designated through the input unit with text, related images or videos, and native-speaker pronunciation; and
playing back the customized learning content in a control unit, outputting the text and the related images or videos through a display unit and outputting the native-speaker pronunciation and the accompaniment through a speaker.
Also, in the music-based language-learning method according to the present invention, when a learning level is selected through the input unit of the language learning device, the control unit outputs a list of content consisting of learning music content in which a plurality of words/vocabulary, combined by part of speech or semantic unit to suit that level, are made into songs with accompaniment;
when one item in the content list is selected through the input unit, the control unit outputs a list of learning functions for that content; and
when one learning function in the list is selected through the input unit, the control unit plays the content differently for each learning function, outputting text and related images or videos through the display unit and outputting the native-speaker pronunciation and accompaniment through the speaker.
A music-based language learning device according to the present invention is a language learning device comprising an input unit, a display unit, a speaker, a memory unit, and a control unit;
the memory unit stores text and pronunciation for a plurality of words/vocabulary and their meanings, together with accompaniment and related images or videos;
a customized-learning generation unit is further provided, which matches the words/vocabulary and accompaniment selected from the memory unit via the input unit with images or videos to generate music-based word/vocabulary customized learning content and stores it in the memory unit; and
the control unit plays the word/vocabulary customized learning content stored in the memory unit, outputting its pronunciation and accompaniment as sound through the speaker and outputting the text and images or videos through the display unit.
According to the solution described above, when a learner selects music and words/vocabulary, customized learning content is generated by matching that music with the words/vocabulary and related images; the learner can use it to practice pronunciation, memorize words/vocabulary through music and games, and learn the language by gathering the designated words/vocabulary and singing them to the accompaniment (background music), as if singing a song written with words/vocabulary the learner chose himself or herself.
Furthermore, by merging music and related images and videos into language learning, continuous interest is induced and, because the material is connected like a single song, boredom is avoided, improving the word/vocabulary ability that is the foundation of language acquisition.
FIG. 1 is a block diagram of a language learning device according to an embodiment of the present invention;
FIG. 2 is a diagram of the learning information stored in the learning-information storage area of the memory unit shown in FIG. 1;
FIG. 3 is a flowchart of a language-learning method according to a first embodiment of the present invention;
FIG. 4 is a flowchart of a language-learning method according to a second embodiment of the present invention;
FIGS. 5 and 6 are curve graphs produced by pitch-extraction programs applied to the present invention;
FIGS. 7 to 9 are example screens output to the display unit in the playback step of FIG. 3;
FIGS. 10 to 19 are example screens output to the display unit at each step of FIG. 4.
* Description of reference numerals
10: language learning device 11: control unit
12: input unit 13: display unit
14: voice processing unit 15: communication unit
16: memory unit 17: customized-learning generation unit
18: evaluation unit MIC: microphone
SP: speaker
Hereinafter, the configuration and operation of embodiments of the present invention will be described with reference to the accompanying drawings.
FIG. 1 is a block diagram of a language learning device according to a first embodiment of the present invention.
As shown, the language learning device 10 comprises a control unit 11, an input unit 12, a display unit 13, a voice processing unit 14, a communication unit 15, a memory unit 16, a customized-learning generation unit 17, and an evaluation unit 18.
The control unit 11 controls the overall operation for performing language learning according to the present invention.
The memory unit 16 consists of a program storage area storing the programs for performing language learning, a temporary storage area for temporarily holding data generated while those programs run, and a learning-information storage area storing the learning information needed for language learning.
The learning content data stored in the memory unit 16 may be organized in two ways: the learning device may operate independently, with the content database built into the device and expanded asynchronously (the learner upgrades or updates the database when it changes), or a content database server may be provided so that the database is expanded synchronously in a connected environment (the database resides on the server, and only the content usable in real time over the connection is downloaded to the device). The configuration may vary according to how the language-learning service is provided.
A pitch-extraction program is stored in the program storage area of the memory unit 16; it extracts the pitch of the native speaker's or learner's pronunciation and displays it as a curve graph.
FIGS. 5 and 6 are curve graphs obtained by pronouncing the English words polite, current, severe, and remote and extracting the pitch with two different pitch-extraction programs; the two different programs draw the same curve graphs.
The evaluation unit 18 compares the curve graphs of the native speaker and the learner drawn by the pitch-extraction program and assigns a score according to their similarity.
That is, when the learner repeats, at a predetermined interval, the native-speaker pronunciation (voice data provided by the memory unit) output through the speaker SP, and this is recorded and stored, the pitch-extraction program extracts the pitch, volume, and playing-time information of the stored pronunciation to generate an utterance curve graph in which pitch is represented as height, volume as thickness, and playing time as length, so that the pronunciation difference between native speaker and learner can be compared visually; the evaluation unit 18 evaluates the similarity of the two utterance curve graphs and calculates a score.
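The patent describes the utterance curve (pitch as height, volume as thickness, duration as length) but leaves the similarity metric unspecified. A minimal sketch, assuming curves are sampled as (pitch, volume) pairs and scored by a normalized mean absolute difference (the metric, function name, and resampling step are all illustrative assumptions):

```python
def similarity_score(native, learner, max_score=100):
    """Compare two utterance curves and return a 0..max_score value.

    Each curve is a list of (pitch_hz, volume) samples. The metric here
    -- resample to equal length, then average the relative differences
    of pitch and volume -- is one plausible choice, not the patent's.
    """
    # Resample the learner curve to the native curve's length so the
    # two can be compared point by point (a duration mismatch is thereby
    # folded into the comparison).
    n = len(native)
    resampled = [learner[int(i * len(learner) / n)] for i in range(n)]

    def rel_diff(a, b):
        return abs(a - b) / max(abs(a), abs(b), 1e-9)

    errs = [(rel_diff(p1, p2) + rel_diff(v1, v2)) / 2
            for (p1, v1), (p2, v2) in zip(native, resampled)]
    return round(max_score * (1 - sum(errs) / n))

native = [(220.0, 0.8), (240.0, 0.9), (200.0, 0.7)]
learner = [(218.0, 0.8), (236.0, 0.9), (205.0, 0.7)]
print(similarity_score(native, learner))  # a near-match scores close to 100
```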
The evaluation unit 18 also calculates a score according to the number of correct and incorrect answers when the learner studies the words contained in the content and then plays a test game on those words.
The input unit 12, as the input means for controlling the operation of the language learning device 10, comprises a number of keys, a mouse, or a virtual-key input system on a touch-capable screen, and outputs to the control unit 11 the data entered by key press, mouse, or touch operation.
The display unit 13 is a liquid crystal display (LCD), a touch-capable screen, or a CRT monitor, and displays various language-learning information as text, images, and video under the control of the control unit 11.
The voice processing unit 14 is connected to a microphone MIC and a speaker SP; it processes analog speech input through the microphone MIC into PCM, EVRC, or MP3 format and outputs it to the control unit 11, which stores it in the recording information 16e of the memory unit 16.
The voice processing unit 14 also converts the voice data (pronunciation) or song files stored in the memory unit 16 and outputs them as sound through the speaker SP.
The communication unit 15, under the control of the control unit 11, connects to the Internet and downloads and updates learning information provided by a learning-information server (not shown), for example data for the word e-dictionary 16a, the sentence e-dictionary 16b, and the content function information 16c of FIG. 2.
The customized-learning generation unit 17 matches the music and words/vocabulary selected by the learner with related images or videos to generate music-based customized learning content, and stores it in the customized learning content information 16d of the memory unit 16.
Here, the voice data for the words/vocabulary the user designates can be combined in many ways according to the length of each utterance (one-beat type, two-beat type, etc.) and its pitch pattern (basic type, variant a, variant b, etc.); as a result, even with the same accompaniment, different music-based customized learning content is generated depending on the lyrics (words/vocabulary).
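The combination of per-word beat lengths with a shared accompaniment can be pictured as laying word clips onto a beat grid. A sketch under stated assumptions — the 0.5 s beat duration, the function name, and the tuple layout are invented for illustration; only the one-beat/two-beat and basic/variant classifications come from the text:

```python
# Hypothetical beat grid for the customized-learning generation unit (17).
BEAT_SEC = 0.5  # assumed duration of one beat; not specified in the patent

def schedule_words(words):
    """Place word clips on the accompaniment's beat grid.

    `words` is a list of (word, beats, pitch_type) tuples; returns
    (start_time_sec, word, pitch_type) placements. Different beat
    lengths shift every later word, so the same accompaniment yields
    different content for different lyrics.
    """
    timeline, t = [], 0.0
    for word, beats, pitch_type in words:
        timeline.append((t, word, pitch_type))
        t += beats * BEAT_SEC
    return timeline

print(schedule_words([("polite", 2, "basic"),
                      ("current", 1, "variant-a"),
                      ("severe", 2, "variant-b")]))
```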
The language learning device 10 may be implemented as a dedicated terminal for each learning audience or on a terminal the learner already owns (a mobile phone including a smartphone, or any of various portable devices supporting audio and video); the display unit 13 is preferably a touch-capable screen so that it also serves as the input unit 12.
The language learning device 10 runs in two modes: a user learning mode (main menu) and a recommended learning mode (quick menu). In the user learning mode, the learner selects words/vocabulary and accompaniment and learns with newly generated music-based customized learning content in which the selection is matched with related images or videos. In the recommended learning mode, the learner studies with content information already stored in the device (finished learning content supplied by a learning-information provider, or customized learning content the learner previously generated and stored through the user learning mode), output differently for each learning function.
FIG. 2 is a diagram of the learning information stored in the learning-information storage area of the memory unit shown in FIG. 1.
As shown, the learning-information storage area of the memory unit 16 stores a word e-dictionary 16a, a sentence e-dictionary 16b, content function information 16c, customized learning content information 16d, recording information 16e, and so on.
The word e-dictionary 16a is the dedicated dictionary of the language learning device 10; words can be looked up through it, and searched words are saved in a history.
The word e-dictionary 16a stores a large number of words, multimedia files for those words, and accompaniment music of various genres prepared to be matched with those files.
More specifically, the word e-dictionary 16a includes music information 16aa, text information 16ab, voice information 16ac, image information 16ad, and so on.
The music information 16aa is information on a number of pieces consisting of accompaniment (background music) or melody (hereinafter collectively, accompaniment); each piece is composed to play for roughly three minutes, a length familiar to ordinary music listeners.
The text information 16ab contains words and their meanings; the image information 16ad contains picture files representing the meaning of each word and is stored in correspondence with the words of the text information 16ab.
The voice information 16ac contains voice files such as the pronunciation of each word and of its meaning, and is stored in correspondence with the words of the text information 16ab.
The sentence e-dictionary 16b allows sentences to be searched and listed; it stores a large number of sentences categorized as idioms, proverbs, and conversation, multimedia files for those sentences, and accompaniment music of various genres prepared to be matched with those files.
More specifically, the sentence e-dictionary 16b includes music information 16ba, text information 16bb, voice information 16bc, image information 16bd, and so on.
The music information 16ba is information on a number of pieces consisting of accompaniment; each piece is composed to play for roughly three minutes, a length familiar to ordinary music listeners.
The text information 16bb contains sentences (idioms, proverbs, conversation) and their translations; the image information 16bd contains picture files representing the meaning of each sentence and is stored in correspondence with the sentences of the text information 16bb.
The voice information 16bc contains voice files such as the pronunciation of each sentence and of its meaning, and is stored in correspondence with the sentences of the text information 16bb.
The content function information 16c, with which the learner studies in the recommended learning mode, is pre-stored by learning level and includes a video function 16ca, a section-practice function 16cb, a pronunciation-practice function 16cc, a multimedia-book function 16cd, a wordbook function 16ce, an example-sentence function 16cf, and a game function 16cg.
Here the content consists of learning music content in which a plurality of words/vocabulary, combined by part of speech or semantic unit, are made into songs with accompaniment. The basic form of a song alternates each word/vocabulary item with its meaning, with an interval of 0.5 to 0.7 seconds between them.
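The alternating word/meaning form above can be sketched as a simple playback timeline. Only the 0.5-0.7 s gap comes from the patent; the 0.6 s representative value, the 1.0 s per-item duration, and the function name are illustrative assumptions:

```python
def song_sequence(pairs, gap=0.6):
    """Interleave each word with its meaning on a playback timeline.

    `pairs` is a list of (word, meaning) tuples; returns (time_sec, text)
    events. The patent specifies a 0.5-0.7 s interval between word and
    meaning; ITEM_SEC is an invented placeholder for each sung item.
    """
    ITEM_SEC = 1.0
    events, t = [], 0.0
    for word, meaning in pairs:
        events.append((round(t, 2), word))
        t += ITEM_SEC + gap
        events.append((round(t, 2), meaning))
        t += ITEM_SEC + gap
    return events

print(song_sequence([("polite", "courteous"), ("remote", "far away")]))
```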
The video function 16ca outputs a video file in which an image representing each word in the content is matched with typography (text images given visual effects).
The section-practice function 16cb divides the content into predetermined sections (for example, eight) and outputs the pre-recorded native speaker's song (voice data from the content database); by singing along the designated sections, the learner can learn a song of about three minutes in small units.
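The section division for sing-along practice can be sketched as computing section boundaries. An equal split is an assumption — the patent says only that the roughly three-minute song is divided into predetermined sections (eight in the example):

```python
def section_bounds(total_sec=180.0, sections=8):
    """Split a song into equal practice sections.

    Returns (start_sec, end_sec) pairs. Equal-length sections and the
    180 s default are illustrative; the patent gives only "about three
    minutes" and "e.g. eight sections".
    """
    step = total_sec / sections
    return [(round(i * step, 2), round((i + 1) * step, 2))
            for i in range(sections)]

bounds = section_bounds()
print(bounds[0], bounds[-1])  # first and last practice sections
```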
The pronunciation-practice function 16cc divides the words/vocabulary in the content into predetermined groups and outputs them with the native-speaker pronunciation (voice data provided by the memory unit) and images. Because the learner repeats the pronunciation in rhythm, it is easy to imitate the native speaker's delivery, including word stress; the images showing the meanings of the words/vocabulary on the display unit 13 add visual enjoyment, so many words can be learned in a short time with concentration and interest.
The multimedia-book function 16cd outputs, one frame at a time, images related to the words/vocabulary in the content together with their voice files on the screen of the display unit; the language learning device 10 responds to each touch of the screen by playing the voice or switching the screen.
The wordbook function 16ce lets the learner browse, in e-book form, the headwords of the words/vocabulary in the content and their associated information (etymology, derivatives, synonyms, antonyms, idioms, etc.).
The example-sentence function 16cf outputs sentences built from the headwords of the words/vocabulary in the content; the learner can study them with the accompaniment matched to the native-speaker pronunciation (voice data provided by the memory unit), and can study the sentences collected on their own.
The game function 16cg outputs the correct answer and wrong answers for an item simultaneously on the display unit and has the learner select one; the evaluation unit 18 calculates a score according to the number of correct and incorrect answers.
The customized learning content information 16d stores, as multimedia content, the music-based customized learning content that the customized-learning generation unit 17 generates by matching the words/vocabulary or sentences and accompaniment designated by the learner with related images.
The recording information 16e stores, as voice data, the learner's pronunciation recorded while playing customized learning content the learner generated, or pre-supplied learning content, and repeating after it.
FIG. 3 is a flowchart of a language learning method according to a first embodiment of the present invention.
As shown, after the language learning device 10 is launched, the learner selects the desired learning mode through the input unit 12 in the mode selection window shown on the display unit 13 (S302).
For example, a learner who wants to study by designating words/vocabulary and an accompaniment selects the user learning mode, whereas a learner who wants to study with pre-stored content (finished learning content supplied by a learning-content provider, or personalized learning content previously created and saved through the user learning mode) selects the recommended learning mode.
When the user learning mode is selected, words/vocabulary or sentences are designated in one of two ways: first, while the text of words/vocabulary or sentences is shown on the display unit 13, the learner selects the desired items through the input unit 12, builds a list, and names it; second, the learner searches the word e-dictionary 16a or the sentence e-dictionary 16b, builds a list from the results, and names it (S304).
Next, after the words/vocabulary or sentences are designated, the accompaniment music to be used for the song is designated through the input unit 12 (S306).
If the learner selects the word option, the personalized learning generation unit 17 matches the designated accompaniment and words with their pronunciations and images to generate music-based personalized learning content; if the learner selects the word+meaning option, it matches the designated accompaniment and words with their meanings, pronunciations, and images. The generated content is stored in the personalized learning content information 16d of the memory unit 16 under the name of the list (S308).
Likewise, if the learner selects the sentence option, the personalized learning generation unit 17 matches the designated accompaniment and sentences with their pronunciations and images to generate music-based personalized learning content; if the learner selects the sentence+translation option, it matches the designated accompaniment and sentences with their translations, pronunciations, and images. The generated content is stored in the personalized learning content information 16d of the memory unit 16 under the name of the list.
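The generation step described above can be sketched as follows. This is a minimal illustration only: the function name `build_custom_content`, the dictionary keys, and the audio/image file-path patterns are assumptions for clarity, not identifiers from the patent.

```python
# Hypothetical sketch of the personalized-content generation step (S308):
# each designated word is matched with a pronunciation file and a related
# image, and the result is bundled with the chosen accompaniment.

def build_custom_content(list_name, words, accompaniment, option="word"):
    """Bundle designated words with pronunciation audio, related images,
    and (optionally) meaning audio, under the given list name."""
    entries = []
    for word in words:
        entry = {
            "text": word,
            "pronunciation": f"audio/{word}.mp3",  # native-speaker audio
            "image": f"images/{word}.png",         # related image
        }
        if option == "word+meaning":
            entry["meaning"] = f"audio/{word}_meaning.mp3"  # meaning audio
        entries.append(entry)
    return {"name": list_name, "accompaniment": accompaniment,
            "entries": entries}

content = build_custom_content("My List", ["apple", "river"], "waltz.mid",
                               option="word+meaning")
```

The sentence and sentence+translation options would follow the same pattern, with translation audio in place of meaning audio.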
When playback of the personalized learning content is then selected through the input unit 12, the control unit 11 plays the music-based personalized learning content through the speaker SP and the display unit 13 (S310).
When content generated with the word option is played, the display unit 13 shows the image of each word together with its text at intervals of, for example, 1.2 or 2.4 seconds, depending on whether the word spans one beat or two (see FIGS. 7 to 9), while the speaker SP outputs the pronunciation of each word in time with the designated accompaniment at the same 1.2- or 2.4-second intervals.
When content generated with the word+meaning option is played, the display unit 13 likewise shows the image of each word with its text at 1.2- or 2.4-second intervals, depending on whether the word spans one beat or two, while the speaker SP outputs the pronunciation and the meaning of each word in time with the designated accompaniment at intervals of, for example, 0.6 or 1.2 seconds.
When content generated with the sentence option is played, the display unit 13 shows the image of each sentence together with its text, while the speaker SP outputs the pronunciation of the sentence in time with the designated accompaniment.
Here, the output interval between one sentence and the next is the length (duration) of the current sentence plus, for example, 2 seconds.
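The timing rules above can be sketched directly from the figures given in the text (1.2 seconds per beat for words; sentence duration plus 2 seconds between sentences). The function names and input shapes are illustrative assumptions.

```python
# Illustrative timing sketch for playback (S310), using the example values
# from the description: words are spaced one or two beats apart at 1.2 s per
# beat, and each sentence is followed by its own duration plus a 2 s gap.

def word_interval(beats, beat_seconds=1.2):
    """Interval between successive words, from the beat count of the word."""
    return beats * beat_seconds

def sentence_start_times(durations, gap=2.0):
    """Start time of each sentence: previous start + its duration + gap."""
    starts, t = [], 0.0
    for d in durations:
        starts.append(t)
        t += d + gap
    return starts
```

For example, a one-beat word yields a 1.2 s interval and a two-beat word a 2.4 s interval, while sentences of 3.0, 4.0, and 2.5 seconds start at 0, 5, and 11 seconds respectively.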
A typography function (text images with visual effects) can be applied to the sentences: pitch and volume information is extracted from the sentence's pronunciation and reflected in the text, so that the height of each character varies with the pitch, giving the text a curved contour, and the thickness of the text varies with the volume, before the result is output on the display unit 13.
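The typography mapping can be sketched as a per-character layout computation. The per-character (pitch, volume) input format, the pixel scaling factor, and the font-weight scale are assumptions not specified in the text.

```python
# A minimal sketch of the typography idea: pitch is mapped to the vertical
# position of each character (curved contour) and volume to its stroke
# weight (thickness). Scaling factors here are illustrative assumptions.

def typeset(chars, pitches, volumes, px_per_semitone=2.0, base_weight=400):
    """Return a per-character layout: higher pitch -> higher text,
    louder volume -> heavier (thicker) text."""
    glyphs = []
    for ch, p, v in zip(chars, pitches, volumes):
        glyphs.append({
            "char": ch,
            "y_offset": p * px_per_semitone,       # pitch -> height
            "weight": base_weight + int(v * 100),  # volume -> thickness
        })
    return glyphs

layout = typeset("hi", [0, 3], [0.5, 1.0])
```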
When content generated with the sentence+translation option is played, the display unit 13 shows the image of each sentence together with its text, while the speaker SP outputs the pronunciation and the translation of the sentence in time with the designated accompaniment.
The learner thus hears the native speaker's pronunciation or meaning (translation) with the accompaniment, sees the text and related images, and repeats them aloud, experiencing three-dimensional 3-Tier learning (three dimensions, or three modes) through three sensory organs.
In other words, the learner acquires the language spontaneously and in a self-directed way, as naturally as singing a song.
Meanwhile, the learner's pronunciation is input through the microphone MIC, passes through the voice processing unit 14, and is recorded in the recording information 16e of the memory unit 16 as voice data (S312).
The control unit 11 can output the recorded learner's pronunciation through the speaker SP together with the native speaker's pronunciation so that the difference in pronunciation can be compared audibly (S314).
When the pronunciation of a word or sentence has been stored, a pitch extraction program extracts pitch, volume, and duration information from both the native speaker's and the learner's pronunciation and generates utterance curve graphs, in which pitch is rendered as height, volume as thickness, and duration as length; the graphs are output on the display unit 13 so that the difference between the native speaker's and the learner's pronunciation can be compared visually (S316).
The evaluation unit 18 calculates a score by comparing the similarity of the native speaker's and the learner's utterance curve graphs.
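One way the curve comparison could work is sketched below. The text does not give the similarity formula, so the sampled (pitch, volume) representation, the distance measure, and the 100-point scale are all assumptions; a real implementation would likely need time alignment (e.g. dynamic time warping) as well.

```python
# Hypothetical sketch of the scoring step: each utterance is reduced to an
# equal-length list of (pitch, volume) samples plus a total duration, and
# the score falls as the mean difference between the two curves grows.

def similarity_score(native, learner):
    """native/learner: ([(pitch, volume), ...], duration).
    Returns a 0-100 score; higher means more similar."""
    n_samples, n_dur = native
    l_samples, l_dur = learner
    diffs = [abs(a[0] - b[0]) + abs(a[1] - b[1])
             for a, b in zip(n_samples, l_samples)]
    mean_diff = sum(diffs) / len(diffs) + abs(n_dur - l_dur)
    return max(0.0, 100.0 - 10.0 * mean_diff)  # assumed 100-point scale

perfect = similarity_score(([(60, 0.8), (62, 0.9)], 1.5),
                           ([(60, 0.8), (62, 0.9)], 1.5))
```

Identical curves score 100; the score decreases with any pitch, volume, or duration mismatch.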
According to the language learning method of the first embodiment described above, music-based personalized learning content is generated by matching learner-designated words/vocabulary and an accompaniment with images, so that the learner can study (memorize) words and their meanings like a song, using material of the learner's own choosing.
FIG. 4 is a flowchart of a language learning method according to a second embodiment of the present invention.
When the recommended learning mode is selected in step S302, a learning-level selection window is shown on the display unit 13 (see FIG. 10), and the learner selects his or her learning level through the input unit 12 (S402).
Here, the learning level determines the composition of the content database and can be divided by learning target (elementary school, middle school, high school, TOEIC/TOEFL) or by degree of learning (beginner, intermediate, advanced).
The display unit 13 then shows a list of content corresponding to the selected learning level, and the learner selects one item through the input unit 12 (S404).
This content consists of multimedia content in which many words/vocabulary items, grouped by part of speech or by semantic unit, are set to a song with accompaniment; the basic form of the song alternates each word/vocabulary item with its meaning, with an interval of 0.5 to 0.7 seconds between the two.
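The basic alternating form of the song can be sketched as a simple event timeline. The event-tuple layout and the fixed word/meaning lengths are assumptions; only the 0.5–0.7 second gap comes from the text.

```python
# Sketch of the song's basic form: each word/vocabulary item is followed by
# its meaning after a gap in the stated 0.5-0.7 s range (0.6 s here).

def alternating_timeline(pairs, word_len=1.2, meaning_len=1.2, gap=0.6):
    """pairs: list of (word, meaning). Returns (start_time, label) events
    alternating word and meaning, separated by the chosen gap."""
    events, t = [], 0.0
    for word, meaning in pairs:
        events.append((t, word))
        t += word_len + gap      # gap between the word and its meaning
        events.append((t, meaning))
        t += meaning_len + gap   # gap before the next word
    return events

timeline = alternating_timeline([("apple", "사과"), ("river", "강")])
```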
After the content is selected, the display unit 13 shows the list of learning functions available for that content (see FIG. 11), and the learner selects the function with which to study the selected content through the input unit 12 (S406).
The functions available for studying the content include video, section practice, pronunciation practice, multimedia book, wordbook, example sentences, and game.
When one of these learning functions is selected, the selected content is played through the speaker SP and the display unit 13 in a manner that differs by learning function (S408).
When the video function 16ca is selected, a video file in which the content is matched with images or typography representing each word/vocabulary item is output (see FIG. 12).
When the section practice function 16cb is selected, the content is divided into predetermined sections (for example, eight), and a pre-recorded native speaker's song (voice data from the content database) is output through the speaker SP with the accompaniment, while an image and text representing each word are shown on the display unit 13 (see FIG. 13).
The learner studies by listening to the native speaker's song and singing along; to avoid confusion during learning, the color of the text on the display unit 13 differs before the song starts, while the native speaker sings, and while the learner sings along.
When the pronunciation practice function 16cc is selected, the content is divided into predetermined units and the native speaker's pronunciation (voice data provided by the memory unit) is output; word, word+meaning, no-accompaniment, and screen-off options can be selected.
When the word/vocabulary option is selected, the display unit 13 shows the related image of each word/vocabulary item with its text at a predetermined interval (for example, 2.4 seconds), and the speaker SP outputs the accompaniment and the native speaker's pronunciation at the same interval.
When the word/vocabulary+meaning option is selected, the display unit 13 shows the related image of each word/vocabulary item with its text at a predetermined interval (for example, 2.4 seconds) (see FIG. 14), and the speaker SP outputs the native speaker's pronunciation and the meaning audio file in time with the accompaniment at a predetermined interval (for example, 1.2 seconds).
The learner then studies (memorizes) the words and meanings by repeating the native speaker's pronunciation and meaning audio in time with the accompaniment.
When the no-accompaniment option is selected, the audio file is output through the speaker SP without accompaniment, and the image and text are shown on the display unit 13.
When the screen-off option is selected, only the audio file and the accompaniment are output through the speaker SP.
When the multimedia book function 16cd is selected, the related images of the words/vocabulary in the content are shown on the display unit 13 one cut at a time, and the native speaker's pronunciation is output through the speaker SP accordingly.
At the learner's choice, the words may be output one cut at a time, or the same word may be output repeatedly.
The options of the multimedia book function 16cd are native speaker pronunciation, song, mute, information, and view all.
When native speaker pronunciation is selected, the native speaker's pronunciation of the corresponding word is output through the speaker SP each time the learner turns to the next cut.
When song is selected, the song for the corresponding word/vocabulary item is output through the speaker SP each time the learner turns the display screen to the next cut.
When mute is selected, no sound is output even when the learner touches the screen of the display unit 13.
When information is selected, an information window for the corresponding word/vocabulary item (etymology, derivatives, synonyms, antonyms, idioms, etc.) is shown on the display unit 13 (see FIGS. 15 and 16).
When view all is selected, screenshots corresponding to all the words/vocabulary making up a piece of content are shown on the display unit (see FIG. 17).
With view all, the learner can scroll the screen to find the desired word/vocabulary item quickly.
When the wordbook function 16ce is selected, the headwords of the words/vocabulary in the selected content and their associated information (etymology, derivatives, synonyms, antonyms, idioms, etc.) are output through the display unit 13 and the speaker SP (see FIG. 18).
The learner may also check only unfamiliar words/vocabulary, so that only the checked headwords and their information are output through the display unit 13 and the speaker SP.
When the example-sentence function 16cf is selected, sentences built from the headwords of the words/vocabulary in the selected content are output through the display unit 13 and the speaker SP.
The learner may also check only the sentences to be studied, so that only the checked sentences and their translations are output through the display unit 13 and the speaker SP.
Through the example-sentence function 16cf, the learner can study with the accompaniment matched to the native speaker's pronunciation (voice data provided by the memory unit), or study only the collected sentences.
When the game function 16cg is selected, correct and incorrect answers for the language in the content are shown simultaneously on the display unit 13 (see FIG. 19).
The learner clicks what he or she believes is the correct answer in time with the beat; after the game ends, the evaluation unit 18 calculates a score from the numbers of correct and incorrect answers.
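A minimal sketch of this scoring rule follows. The text only says the score depends on the counts of correct and incorrect answers; the proportional 0–100 scale used here is an assumption.

```python
# Hypothetical game scoring (evaluation unit 18): score is derived from the
# counts of correct and incorrect answers on an assumed 0-100 scale.

def game_score(correct, incorrect):
    """Percentage of answers that were correct, rounded to an integer."""
    total = correct + incorrect
    if total == 0:
        return 0  # no answers given
    return round(100 * correct / total)

score = game_score(correct=8, incorrect=2)  # 8 of 10 right
```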
Next, the learner repeats the native speaker's pronunciation according to the selected learning function; the learner's pronunciation or meaning (translation) is input through the microphone MIC, passes through the voice processing unit 14, and is recorded in the recording information 16e of the memory unit 16 as voice data (S410).
When the word/vocabulary option is selected in the pronunciation practice function 16cc, the accompaniment and the native speaker's pronunciation are output through the speaker SP at a predetermined interval (for example, 2.4 seconds); the learner repeats the pronunciation at a predetermined offset (for example, 1.2 seconds) after the native speaker, and the repetition is input through the microphone MIC and stored in the recording information 16e of the memory unit 16.
The control unit 11 outputs the recorded learner's pronunciation through the speaker SP together with the native speaker's pronunciation so that the difference in pronunciation can be compared audibly (S412).
The pitch extraction program extracts pitch, volume, and duration information from the native speaker's and the learner's pronunciation and generates utterance curve graphs, in which pitch is rendered as height, volume as thickness, and duration as length; the graphs are output on the display unit 13 so that the difference between the native speaker's and the learner's pronunciation can be compared visually (S414).
The evaluation unit 18 then calculates a score by comparing the similarity of the native speaker's and the learner's utterance curve graphs (S416).
According to the language learning method of the second embodiment described above, content consisting of learning music content, in which many words/vocabulary items grouped by part of speech or by semantic unit are set to a song with accompaniment and stored in the language learning device 10, can be used through a variety of learning functions to study (memorize) the pronunciation and meaning (sense or translation) of words/vocabulary and sentences like a song.
According to the present invention as described above, when a learner selects music and words/vocabulary, personalized learning content is generated by matching that music, the words/vocabulary, and related images: the music designated by the learner becomes the accompaniment, the designated words/vocabulary become the lyrics, and the related images are converted into video, producing multimedia content like a music video. Using this content, the learner practices pronunciation and memorizes (learns) the words/vocabulary through music and games; just as one enjoys a song one has written oneself, the invention sustains interest in language learning without boredom and thereby improves the word/vocabulary ability that is the foundation of language acquisition.

Claims (15)

  1. A music-based language learning method comprising:
    designating pre-stored words/vocabulary or sentences and accompaniment music through the input unit of a language learning device;
    generating and storing, in a personalized learning generation unit, music-based personalized learning content by matching the words/vocabulary or sentences and the accompaniment designated by the input unit with text, related images or video, and a native speaker's pronunciation; and
    playing the personalized learning content by a control unit, wherein the text and related images or video are output through a display unit and the native speaker's pronunciation and the accompaniment are output through a speaker.
  2. The method of claim 1, wherein, when the text of a sentence is output on the display unit, the control unit applies typography to the sentence, making the text thicker or thinner according to the volume and raising or lowering the height of the text according to the pitch, so as to give the sentence a lively appearance.
  3. A music-based language learning method comprising:
    when a learning level is selected through the input unit of a language learning device, outputting, by a control unit, a list of content for that learning level, the content consisting of learning music content in which many words/vocabulary items, grouped by part of speech or by semantic unit, are set to a song with accompaniment;
    when one item of the content list is selected through the input unit, outputting, by the control unit, a list of learning functions for that content; and
    when one learning function of the list is selected through the input unit, playing the content by the control unit in a manner that differs by learning function, wherein text and related images or video are output through a display unit and a native speaker's pronunciation and the accompaniment are output through a speaker.
  4. The method of claim 3, wherein the learning function comprises one or more of:
    a video function that outputs a video file in which images and typography representing each word/vocabulary item in the content are matched;
    a section practice function that divides the content into predetermined sections and outputs a native speaker's song with the accompaniment through the speaker;
    a pronunciation practice function that divides the content into predetermined units and outputs the native speaker's pronunciation through the speaker;
    a multimedia book function that outputs the images of the words/vocabulary in the content on the display unit one cut at a time, with the native speaker's pronunciation output through the speaker accordingly;
    a wordbook function that outputs the headwords of the words/vocabulary in the content and their associated information (etymology, derivatives, synonyms, antonyms, idioms, etc.) through the display unit and the speaker;
    an example-sentence function that outputs sentences built from the headwords of the words/vocabulary in the content through the display unit and the speaker; and
    a game function that outputs correct and incorrect answers for the language/vocabulary in the content simultaneously on the display unit.
  5. The method of claim 1 or claim 3, further comprising:
    storing the learner's pronunciation in a memory unit when the learner repeats the pronunciation; and
    outputting the stored learner's pronunciation through the speaker together with the native speaker's pronunciation.
  6. The method of claim 5, wherein the control unit extracts pitch, volume, and duration information from the native speaker's and the learner's pronunciation and outputs it on the display unit as curve graphs.
  7. The method of claim 6, wherein the control unit outputs the native speaker's and the learner's curve graphs on the display unit with pitch rendered as height, volume as thickness, and duration as length in the utterance curve graph.
  8. The method of claim 7, wherein an evaluation unit compares the native speaker's and the learner's curve graphs with each other and calculates a score according to their similarity.
  9. The method of claim 1 or claim 3, wherein, after the pronunciation of a word or sentence is output through the speaker, its meaning is output through the speaker 0.5 to 0.7 seconds later.
  10. A learning device using a music-based language learning method, the device comprising an input unit, a display unit, a speaker, a memory unit, and a control unit, wherein:
    the memory unit stores text and pronunciation for many words/vocabulary items and their meanings, together with accompaniments and related images or video;
    a personalized learning generation unit is further provided, which matches the words/vocabulary and accompaniment selected from the memory unit by the input unit with images or video to generate music-based word/vocabulary personalized learning content and stores it in the memory unit; and
    the control unit plays the word/vocabulary personalized learning content stored in the memory unit, outputting its pronunciation and accompaniment as sound through the speaker and the text and images or video through the display unit.
  11. The device of claim 10, wherein:
    the memory unit further stores text and pronunciation for many sentences and their translations, together with accompaniments and related images or video, and the personalized learning generation unit matches the sentences and accompaniment selected from the memory unit by the input unit with images or video to generate music-based sentence personalized learning content and stores it in the memory unit; and
    the control unit plays the sentence personalized learning content stored in the memory unit, outputting its pronunciation and accompaniment as sound through the speaker and the text and images or video through the display unit.
  12. The learning device of claim 11,
    wherein the memory unit further stores, for each learning level, content composed of learning music content in which a plurality of words/vocabulary items, combined by part of speech or by semantic unit, are made into songs with accompaniment, and the control unit outputs that content to the display unit and the speaker differently for each learning function.
  13. The learning device of any one of claims 10 to 12,
    wherein the memory unit further stores a pitch-extraction program that extracts pitch, volume, and playback-time information from the native speaker's and the learner's pronunciations and outputs them as curve graphs on the display unit; and
    an evaluation unit is further provided that compares the native speaker's and the learner's curve graphs with each other and scores the learner according to their similarity.
  14. The learning device of claim 13,
    wherein the control unit outputs the utterance curve graph to the display unit such that pitch is represented by its height, volume by its thickness, and playback time by its length.
  15. The learning device of claim 10,
    wherein the display unit is a screen supporting a touch interface so that it also serves as the input unit, and the language learning device is a dedicated terminal for each learning target.
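
Claims 10 and 11 describe a custom-learning generation unit that matches user-selected words or sentences with an accompaniment and related media, then stores the resulting content back in the memory unit. The data flow can be sketched as follows; the record fields and the dictionary layout of the memory unit are illustrative assumptions, not the patent's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class LearningContent:
    """One unit of music-based customized learning content (illustrative schema)."""
    text: str           # word/vocabulary item or sentence
    meaning: str        # meaning or translation
    pronunciation: str  # identifier of the recorded pronunciation
    accompaniment: str  # identifier of the accompaniment track
    media: list = field(default_factory=list)  # related images/videos

def create_custom_content(memory, selected_ids, accompaniment_id):
    """Match the items selected via the input unit with an accompaniment and
    their related media, then store the records back in the memory unit."""
    contents = []
    for item_id in selected_ids:
        item = memory["items"][item_id]
        contents.append(LearningContent(
            text=item["text"],
            meaning=item["meaning"],
            pronunciation=item["pronunciation"],
            accompaniment=memory["accompaniments"][accompaniment_id],
            media=item.get("media", []),
        ))
    memory.setdefault("custom_content", []).extend(contents)
    return contents
```

The control unit of the claims would then iterate over `memory["custom_content"]`, routing pronunciation and accompaniment to the speaker and text and media to the display.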
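
Claim 13's pitch-extraction program and evaluation unit can be sketched as below. This is an approximation only: the patent does not specify its algorithms, so autocorrelation pitch tracking, RMS volume, and a mean-absolute-difference similarity score stand in for whatever the actual device uses:

```python
import numpy as np

def extract_features(signal, sr, frame_len=1024, hop=512):
    """Per-frame pitch (Hz, autocorrelation peak), per-frame volume (RMS),
    and total playback time (s) of a mono signal."""
    pitches, volumes = [], []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        volumes.append(float(np.sqrt(np.mean(frame ** 2))))  # volume as RMS energy
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        lo, hi = int(sr / 400), int(sr / 60)  # plausible voice range: 60-400 Hz
        lag = lo + int(np.argmax(ac[lo:hi]))
        pitches.append(sr / lag)
    return np.array(pitches), np.array(volumes), len(signal) / sr

def similarity_score(native, learner):
    """0-100 score from the mean absolute difference of two curves, each
    normalised by its own peak so that only the shapes are compared."""
    a = native / (float(np.max(np.abs(native))) or 1.0)
    b = learner / (float(np.max(np.abs(learner))) or 1.0)
    n = min(len(a), len(b))
    return 100.0 * (1.0 - float(np.mean(np.abs(a[:n] - b[:n]))))
```

In this sketch the native speaker's and the learner's recordings would each be run through `extract_features`, and `similarity_score` applied separately to the pitch and volume curves before combining them into the displayed score.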
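
Claim 14 maps the three extracted quantities onto drawing parameters for the utterance curve graph: pitch to height, volume to thickness, and playback time to horizontal length. A hypothetical helper performing that mapping (the scaling constants and function name are arbitrary choices for illustration):

```python
def curve_graph_params(pitches, volumes, duration,
                       max_height=100.0, max_thickness=10.0):
    """Map features to drawing parameters: pitch -> vertical height,
    volume -> line thickness, playback time -> horizontal extent."""
    peak_pitch = max(pitches) or 1.0
    peak_volume = max(volumes) or 1.0
    heights = [max_height * p / peak_pitch for p in pitches]
    thicknesses = [max_thickness * v / peak_volume for v in volumes]
    xs = [duration * i / max(len(pitches) - 1, 1) for i in range(len(pitches))]
    return xs, heights, thicknesses
```

A renderer on the display unit would then draw one variable-width polyline per speaker from these three lists, letting the learner visually compare contour, loudness, and timing against the native speaker.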
PCT/KR2010/007017 2010-10-07 2010-10-14 Music-based language-learning method, and learning device using same WO2012046901A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
SG2012090643A SG186705A1 (en) 2010-10-07 2010-10-14 Music-based language-learning method, and learning device using same
CN2010800686571A CN103080991A (en) 2010-10-07 2010-10-14 Music-based language-learning method, and learning device using same
JP2013531460A JP2013541732A (en) 2010-10-07 2010-10-14 Music-based language learning method and learning apparatus utilizing the same

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020100097679A KR101025665B1 (en) 2009-10-16 2010-10-07 Music-based language learning method and learning device using it
KR10-2010-0097679 2010-10-07

Publications (1)

Publication Number Publication Date
WO2012046901A1 true WO2012046901A1 (en) 2012-04-12

Family

ID=45928582

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2010/007017 WO2012046901A1 (en) 2010-10-07 2010-10-14 Music-based language-learning method, and learning device using same

Country Status (5)

Country Link
JP (1) JP2013541732A (en)
KR (1) KR101025665B1 (en)
CN (1) CN103080991A (en)
SG (1) SG186705A1 (en)
WO (1) WO2012046901A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019035033A1 (en) * 2017-08-16 2019-02-21 Panda Corner Corporation Methods and systems for language learning through music

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101112422B1 (en) * 2011-07-07 2012-02-27 박상철 Accompaniment and Speech Matching Methods for Word Learning Music Files
US9679496B2 (en) * 2011-12-01 2017-06-13 Arkady Zilberman Reverse language resonance systems and methods for foreign language acquisition
JP5811837B2 (en) 2011-12-27 2015-11-11 ヤマハ株式会社 Display control apparatus and program
JP6295531B2 (en) * 2013-07-24 2018-03-20 カシオ計算機株式会社 Audio output control apparatus, electronic device, and audio output control program
CN105139311A (en) * 2015-07-31 2015-12-09 谭瑞玲 Intelligent terminal based English teaching system
CN105224073B (en) * 2015-08-27 2018-02-27 华南理工大学 A kind of point based on Voice command reads wrist-watch and its reading method
CN106897304B (en) * 2015-12-18 2021-01-29 北京奇虎科技有限公司 Multimedia data processing method and device
CN111279404B (en) * 2017-10-05 2022-04-05 弗伦特永久公司 Language fluent system
CN108039180B (en) * 2017-12-11 2021-03-12 广东小天才科技有限公司 Method for learning achievement of children language expression exercise and microphone equipment
CN109147422B (en) * 2018-09-03 2022-03-08 北京美智达教育咨询有限公司 English learning system and comprehensive learning method thereof
KR102237118B1 (en) * 2019-05-09 2021-04-08 (주)해피마인드 Method, system and recording medium for learning memory based on brain science
CN111951626A (en) * 2019-05-16 2020-11-17 上海流利说信息技术有限公司 Language learning apparatus, method, medium, and computing device
CN110362675A (en) * 2019-07-22 2019-10-22 田莉 A foreign language teaching content display method and system
CN111460227A (en) * 2020-04-13 2020-07-28 赵琰 Production method, video product and use method of video containing body movements
CN111460220A (en) * 2020-04-13 2020-07-28 赵琰 Method for making word flash card video and video product
CN112000254B (en) * 2020-07-22 2022-09-13 完美世界控股集团有限公司 Corpus resource playing method and device, storage medium and electronic device
KR102651200B1 (en) 2022-01-07 2024-03-26 주식회사 킨트 Voice matching system
KR102651201B1 (en) 2022-01-13 2024-03-26 주식회사 킨트 System for Music matching and method therefor

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040092829A (en) * 2003-04-29 2004-11-04 인벤텍 베스타 컴파니 리미티드 System and method for playing vocabulary explanations using multimedia data
KR200371317Y1 (en) * 2004-09-18 2004-12-29 김영운 Apparatus for Learning Foreign Language Prosody
KR20050105299A (en) * 2004-04-28 2005-11-04 주식회사 톡톡채널 A language prosody learning device in use of body motions and senses and a method using thereof
KR100568167B1 (en) * 2000-07-18 2006-04-05 한국과학기술원 Foreign language pronunciation test method using automatic phonetic comparison method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001331092A (en) * 2000-05-22 2001-11-30 Sousei Denshi:Kk Language learning system
JP2004302286A (en) * 2003-03-31 2004-10-28 Casio Comput Co Ltd Information output device, information output program
JP2005172858A (en) * 2003-12-05 2005-06-30 Nariko Matsuda Method for providing language learning material, and language learning material
JP2005266092A (en) * 2004-03-17 2005-09-29 Nec Corp Vocalization learning method and learning system
JP2005352047A (en) * 2004-06-09 2005-12-22 Victor Co Of Japan Ltd Learning device
JP2010128284A (en) * 2008-11-28 2010-06-10 Kazuo Kishida Learning system
JP4581052B2 (en) * 2009-06-12 2010-11-17 サン電子株式会社 Recording / reproducing apparatus, recording / reproducing method, and program



Also Published As

Publication number Publication date
SG186705A1 (en) 2013-02-28
CN103080991A (en) 2013-05-01
KR101025665B1 (en) 2011-03-30
JP2013541732A (en) 2013-11-14

Similar Documents

Publication Publication Date Title
WO2012046901A1 (en) Music-based language-learning method, and learning device using same
US5533903A (en) Method and system for music training
US5010495A (en) Interactive language learning system
KR100900085B1 (en) Foreign language learning control method
Ergasheva et al. The principles of using computer technologies in the formation and development of students' language skills.
US5273433A (en) Audio-visual language teaching apparatus and method
JP2001159865A (en) Method and device for leading interactive language learning
WO2020159073A1 (en) Conversation-based foreign language learning method using reciprocal speech transmission through speech recognition function and tts function of terminal
KR100900081B1 (en) Foreign language learning control method
WO2009119991A4 (en) Method and system for learning language based on sound analysis on the internet
US20040248068A1 (en) Audio-visual method of teaching a foreign language
KR20000001064A (en) Foreign language conversation study system using internet
JP6656529B2 (en) Foreign language conversation training system
JP2003228279A (en) Language learning apparatus using voice recognition, language learning method and storage medium for the same
JP2001022265A (en) Language study system using digital movie software
KR101180846B1 (en) Method for Music-based Language Training and On-Line Training System thereof
WO2011129665A2 (en) Language learning apparatus using video, and method for controlling same
KR20140028527A (en) Apparatus and method for learning word by using native speaker's pronunciation data and syllable of a word
KR20170097419A (en) Korean language learning system and Korean language learning method using the same
JP2001337594A (en) Method for allowing learner to learn language, language learning system and recording medium
Asri et al. The Advantages Of Elsa Speak To Enhance Speaking Skill In Senior High School
Mole Deaf and multilingual
KR102478016B1 (en) Apparatus and Method for Providing Customized Langauge studying service
Evzhenko et al. DEVELOPMENT OF LISTENING SKILLS IN LESSONS OF FOREIGN LANGUAGE AND IN STUDENTS' INDEPENDENT WORK
JP2656402B2 (en) Information processing device and English conversation learning system using information processing device

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase (Ref document number: 201080068657.1; Country of ref document: CN)
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 10858177; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2013531460; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 10858177; Country of ref document: EP; Kind code of ref document: A1)