KR20170009487A - Chunk-based language learning method and electronic device to do this - Google Patents

Chunk-based language learning method and electronic device to do this

Info

Publication number
KR20170009487A
Authority
KR
South Korea
Prior art keywords
chunk
icon
sub
target
chunks
Prior art date
Application number
KR1020150101631A
Other languages
Korean (ko)
Inventor
박상준
Original Assignee
박상준
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 박상준 filed Critical 박상준
Priority to KR1020150101631A priority Critical patent/KR20170009487A/en
Publication of KR20170009487A publication Critical patent/KR20170009487A/en

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/06 Foreign languages
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

According to an aspect of the present invention, there are provided a chunk-based language learning method and an electronic device for performing the chunk-based language learning method. The electronic device includes: a storage unit for storing text data in which a plurality of words are divided into chunks, and audio data matched to the chunks included in the text data; a display unit for outputting a chunk icon representing the chunk and a connection icon connecting the chunks; an audio output unit for outputting audio; an input unit for receiving a user input for selecting at least one of the chunk icon and the connection icon; and a control unit for determining a target chunk based on the selected icon and outputting the audio data matched to the target chunk through the audio output unit.

Description

TECHNICAL FIELD [0001] The present invention relates to a chunk-based language learning method and an electronic device for performing the same.

More particularly, the present invention relates to a chunk-based language learning method for learning a language in chunk units, and an electronic device for performing the same.

Two of the most important elements of language learning are vocabulary and word order. Since a sentence, which can be regarded as the minimum unit for expressing meaning, consists of an arrangement of individual words each carrying its own meaning, words are the core and foundation of language learning. Word order concerns arranging those words in the correct sequence: only when the words are placed in the right order can meaning be communicated accurately, and changing the order of the words can even convey the opposite meaning.

Therefore, no matter how extensive one's knowledge of words is, simply listing known words does not produce meaningful speech or writing. In particular, despite abundant English vocabulary knowledge, Korean learners often find speaking or writing English difficult. This is because the word order of Korean differs from that of English.

As a solution, grammar instruction dealing with the form of sentences is offered, but much of that grammatical knowledge does not actually help with speaking or writing English. This is because grammar in formal education is treated as examination knowledge and cannot effectively contribute to resolving word order differences. In word order learning, systematic training based on major sentence patterns is more important than knowledge of grammatical rules.

In proper word learning, it is effective to learn not only the meaning of each individual word but also the chunks that contain the word and its collocation information. A chunk is a group of one or more words forming a meaning unit. When we utter a sentence, we do not think of the whole sentence at once and say it in one breath; rather, we think in appropriate meaning units and pause between them. For example, a person who wants to say the sentence "Today, at school, during class, I fell asleep while reading a book" thinks of it in separated units such as "Today / at school / during class / reading a book / fell asleep".

In this sense, chunks can be seen as units of thought, or breath units in speech, and they are a very important unit in language learning because they carry collocation information about the words used together. A collocation refers to a combination of words that are customarily used together in a language. For example, the correct English expression for taking one's own life is "commit suicide": the verb "commit" is used rather than "do", "take", or "get", so "commit" and "suicide" can be seen as being in a collocation relationship.

However, existing language learning methods emphasize the importance of word learning but do not consider the collocations between words, so learners find it difficult to think in chunk units and, accordingly, difficult to learn to produce correct sentences.

An object of the present invention is to provide a chunk-based language learning method capable of learning a language based on a chunk unit and an electronic apparatus performing the same.

It is to be understood that the present invention is not limited to the above-described embodiments and that various changes and modifications may be made without departing from the spirit and scope of the present invention as defined by the following claims.

According to an aspect of the present invention, there is provided an electronic device comprising: a storage unit for storing text data in which a plurality of words are divided into chunks, and audio data matched to the chunks included in the text data; a display unit for outputting a chunk icon representing the chunk and a connection icon connecting the chunks; an audio output unit for outputting audio; an input unit for receiving a user input for selecting at least one of the chunk icon and the connection icon; and a control unit for determining a target chunk based on the selected icon and outputting the audio data matched to the target chunk through the audio output unit.

According to another aspect of the present invention, there is provided a chunk-based language learning method comprising: storing text data in which a plurality of words are divided into chunks, and audio data matched to the chunks included in the text data; displaying a chunk icon representing the chunk and a connection icon connecting the chunks; receiving a user input for selecting at least one of the chunk icon and the connection icon; determining a target chunk based on the selected icon; and outputting the audio data matched to the target chunk as voice.

The solutions of the present invention are not limited to the above-mentioned solutions, and solutions not mentioned herein will be clearly understood by those skilled in the art to which the present invention belongs from this specification and the accompanying drawings.

According to the present invention, a language can be learned based on a chunk unit.

The effects of the present invention are not limited to the above-mentioned effects, and the effects not mentioned can be clearly understood by those skilled in the art from the present specification and the accompanying drawings.

FIG. 1 is a block diagram of a chunk-based language learning electronic device according to an embodiment of the present invention.
FIG. 2 is a diagram illustrating text data divided into chunk units according to an embodiment of the present invention.
FIG. 3 illustrates a database for chunk-based language learning according to an embodiment of the present invention.
FIG. 4 is a diagram illustrating a chunk-based language learning screen according to a first embodiment of the present invention.
FIG. 5 is a diagram illustrating a chunk-based language learning screen according to a second embodiment of the present invention.
FIG. 6 is a diagram illustrating a chunk-based language learning screen according to a third embodiment of the present invention.
FIG. 7 is a diagram illustrating a chunk-based language learning screen according to a fourth embodiment of the present invention.
FIG. 8 is a diagram illustrating a chunk-based language learning screen according to a fifth embodiment of the present invention.
FIG. 9 is a diagram illustrating a chunk-based language learning screen according to a sixth embodiment of the present invention.
FIG. 10 is a diagram illustrating a chunk-based language learning screen according to a seventh embodiment of the present invention.
FIG. 11 is a diagram illustrating a chunk-based language learning screen according to an eighth embodiment of the present invention.
FIG. 12 is a diagram illustrating a chunk-based language learning screen according to a ninth embodiment of the present invention.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory; they are intended to illustrate the present invention, not to limit its scope, and should be interpreted to include modifications or variations that do not depart from the spirit of the invention.

The terms used in this specification are general terms selected in consideration of their functions in the present invention, but their meanings may vary according to the intention of those skilled in the art to which the present invention belongs, to precedents, or to the emergence of new technology. Where a specific term is defined and used with an arbitrary meaning, the meaning of that term is described separately. Accordingly, the terms used herein should be interpreted based on their actual meanings and on the content of this entire specification, not simply on their names.

The drawings attached hereto are intended to illustrate the present invention easily, and the shapes shown in the drawings may be exaggerated and displayed as necessary in order to facilitate understanding of the present invention, and thus the present invention is not limited to the drawings.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, detailed descriptions of known configurations or functions related to the present invention will be omitted where they would obscure the gist of the present invention.

According to an aspect of the present invention, there is provided an electronic device comprising: a storage unit for storing text data in which a plurality of words are divided into chunks, and audio data matched to the chunks included in the text data; a display unit for outputting a chunk icon representing the chunk and a connection icon connecting the chunks; an audio output unit for outputting audio; an input unit for receiving a user input for selecting at least one of the chunk icon and the connection icon; and a control unit for determining a target chunk based on the selected icon and outputting the audio data matched to the target chunk through the audio output unit.

Also, the display unit may output the chunk icons in their order within the text and output the connection icons so that they are disposed between the chunk icons. When a chunk icon is selected, the controller determines the chunk corresponding to that chunk icon as the target chunk; when a connection icon is selected, the chunks from the first chunk icon through the chunk icon immediately following the connection icon can be determined as the target chunks.

The display unit may output the chunk icons according to the word order of the chunks in the text and output the connection icon so as to connect at least one chunk icon. When a chunk icon is selected, the chunk corresponding to that chunk icon is determined as the target chunk, and when a connection icon is selected, the chunks corresponding to the chunk icons connected by the connection icon can be determined as the target chunks.

The storage unit may store a graphic icon matching the chunk, and the display unit may display the graphic icon to correspond to the chunk icon.

The graphic icon may be a still image, and the control unit may activate the graphic icon matched with the chunk whose audio is being reproduced during the audio output of the target chunk.

Also, the graphic icon may be a moving picture, and the controller may play the graphic icon matched with the chunk whose audio is being reproduced during the audio output of the target chunk.

The control unit may adjust a display attribute of the chunk icon selected as the target chunk.

The controller may increase the size of the chunk icon according to the number of times it is selected as the target chunk.

At least some of the chunks may be divided into a plurality of sub-chunks. In this case, the control unit displays a sub-chunk icon representing the sub-chunk and, upon receiving a user input for selecting the sub-chunk icon, determines the sub-chunk corresponding to the selected sub-chunk icon as the target chunk and outputs the audio data corresponding to that sub-chunk.

Further, the display unit may display a reproduction icon, and the control unit may output the audio data when the reproduction icon is selected.

The display unit may display a number-of-times setting icon, and the control unit may receive a number of repetitions as a user input through the number-of-times setting icon via the input unit and repeatedly output the audio data that number of times.

When a plurality of chunk icons are selected by the user input, the control unit may successively output, through the audio output unit, the audio data matched to the plurality of target chunks determined according to the selected icons.

The user input may be a drag input.

According to another aspect of the present invention, there is provided a chunk-based language learning method comprising: storing text data in which a plurality of words are divided into chunks, and audio data matched to the chunks included in the text data; displaying a chunk icon representing the chunk and a connection icon connecting the chunks; receiving a user input for selecting at least one of the chunk icon and the connection icon; determining a target chunk based on the selected icon; and outputting the audio data matched to the target chunk as voice.

Also, in the displaying step, a graphic icon may be displayed to correspond to the chunk icon.

The method may further include adjusting a display attribute of the chunk icon selected as the target chunk.

At least some of the chunks may be divided into a plurality of sub-chunks. In the displaying step, a sub-chunk icon representing the sub-chunk is further displayed, and when a user input for selecting the sub-chunk icon is received, the sub-chunk corresponding to the selected sub-chunk icon is determined as the target chunk and audio data corresponding to the sub-chunk may be output.

The chunk learning method is a learning method in which English sentence-building ability, the basis of English proficiency, is built up efficiently in a short period of time. For example, by training learners to divide an English sentence into three parts, a beginning part, a core part, and a formula part, roughly 3,000,000 English sentences can be produced freely with about 500 chunks.

These chunk learning methods are not only applicable to English, but also to various languages such as Japanese, Chinese and German. However, the embodiment of the present invention will be described mainly in English.

In the embodiments of the present invention, a chunk may consist of a subject part, a verb part, a prepositional phrase consisting of a preposition and a noun, a verbal phrase, or a connective expression. The verbal may be, for example, a to-infinitive, a present participle (~ing), or a past participle (p.p.). Thus, an English sentence can be seen as being formed of at least one chunk, each chunk forming a semantic group.

Hereinafter, an electronic apparatus 1000 according to an embodiment of the present invention will be described.

The electronic device 1000 described in the present invention may be provided in the form of a desktop computer, a laptop computer, a tablet PC, or the like. Of course, the electronic device 1000 is not limited to these examples and may be provided in various forms that have an input/output interface and arithmetic processing capability and can perform the chunk-based language learning method.

Hereinafter, an electronic device 1000 according to an embodiment of the present invention will be described with reference to FIG.

FIG. 1 is a block diagram of the electronic device 1000 according to an embodiment of the present invention. Referring to FIG. 1, the electronic device 1000 according to an embodiment of the present invention may include an input unit 1200, an output unit 1500, a communication unit 1100, a storage unit 1300, and a control unit 1400. Hereinafter, each component of the electronic device 1000 will be described.

The input unit 1200 may receive a user input from a user. The user input may take various forms, including key input, touch input, and voice input. Examples of the input unit 1200 capable of receiving such user input include not only a conventional keypad, keyboard, or mouse, but also a touch sensor for sensing a user's touch, a microphone for receiving a voice signal, a camera for recognizing gestures and the like through image recognition, a proximity sensor such as an illuminance sensor or an infrared sensor for sensing user approach, a motion sensor for recognizing a user's movement through an acceleration sensor or a gyro sensor, and various other input means capable of sensing or receiving various forms of user input. Here, the touch sensor may be implemented as a touch panel attached to a display panel, a piezoelectric or capacitive touch sensor that senses a touch through a touch film, or an optical touch sensor that senses a touch optically.

In addition, the input unit 1200 may be implemented in the form of an input interface (a USB port, a PS/2 port, etc.) that connects an external input device receiving user input to the electronic device 1000.

The output unit 1500 can output various information and provide it to the user. The output unit 1500 may include a display unit 1520 for outputting images and an audio output unit 1510 for outputting sound, and may further include a haptic device for generating vibration and various other types of output means. The output unit 1500 may also be implemented in the form of a port-type output interface for connecting the individual output means described above to the electronic device 1000.

For example, the display unit 1520 may display text, still images, and moving images. The display unit 1520 broadly refers to various types of devices capable of performing an image display function, such as a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a flat panel display (FPD), a curved display, a flexible display, a 3D display, a holographic display, or a projector. Such a display may take the form of a touch display integrated with the touch sensor of the input unit 1200.

The communication unit 1100 can communicate with an external device, so that the electronic device 1000 can transmit various information to and receive various information from the external device. Such communication, that is, the transmission and reception of data, can be wired or wireless. The communication unit 1100 may include a wired communication module for connecting to the Internet or the like via a local area network (LAN), a mobile communication module for transmitting and receiving data to and from a mobile communication network via a mobile communication base station, a wireless local area network (WLAN) based communication module, a wireless personal area network (WPAN) based communication module such as Bluetooth or ZigBee, a satellite communication module using a global navigation satellite system (GNSS) such as the Global Positioning System (GPS), or a combination thereof.

The storage unit 1300 may store various types of information. The storage unit 1300 may store data temporarily or semi-permanently. Examples of the storage unit 1300 include a hard disk drive (HDD), a solid state drive (SSD), flash memory, a read-only memory (ROM), and a random access memory (RAM). The storage unit 1300 may be embedded in the electronic device 1000 or provided in a form detachable from the electronic device 1000.

The storage unit 1300 may store an operating system (OS) for driving the electronic device, data for reproducing or outputting content such as voice, images, and moving pictures according to an embodiment of the present invention, and various other data necessary for or used in operation.

The control unit 1400 controls the overall operation of the electronic device. For this purpose, the controller 1400 may perform various computations on information and control the operation of the components of the electronic device. The control unit 1400 may be implemented as a computer or a similar device using hardware, software, or a combination thereof. In hardware, the controller 1400 may be provided in the form of an electronic circuit that processes electrical signals to perform a control function; in software, it may be provided in the form of a program that drives the hardware controller 1400.

The control unit 1400 displays a sentence through chunk icons 1521 and connection icons 1523 using the text data divided in chunk units, and when an icon is selected, outputs the voice data matched to the selected chunk. The specific operation of the control unit 1400 beyond this will become more apparent from the description of the chunk-based language learning method according to the embodiments of the present invention.

On the other hand, in the following description, the operation of the electronic device can be interpreted as being performed by control of the control unit 1400 unless otherwise specified.

Hereinafter, data provided to a chunk-based language learning method according to an embodiment of the present invention will be described.

FIG. 2 is a diagram illustrating text data divided into chunk units according to an embodiment of the present invention.

Referring to FIG. 2, the text data may be divided into chunks. At least one of a chunk delimiter and a sub-chunk delimiter may be provided to divide the text data into chunks.

According to one example, this division may be performed according to a user input that inserts a chunk delimiter or a sub-chunk delimiter into the text data.

Specifically, the user can divide chunks by inserting a chunk delimiter between chunks; a chunk delimiter may be placed between words, between a word and an idiom, or between idioms. When a chunk is long or difficult to pronounce, a sub-chunk delimiter may additionally be inserted between words so that the chunk is divided into units smaller than a chunk. For example, in "Samson fell + in love / with a woman / named Delilah / who lived / in a valley of Sorek.", a sub-chunk delimiter is inserted in the chunk "Samson fell in love" to create the sub-chunks "Samson fell" and "in love".

On the other hand, in the present invention the chunk delimiter is denoted by "/" and the sub-chunk delimiter by "+", but any notation that distinguishes the chunk delimiter from the sub-chunk delimiter may be used. However, it is better to avoid characters such as numbers, ".", or "," that may themselves appear in the text data.
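
As an illustration of the delimiter convention described above, the following is a minimal Python sketch of parsing delimited text data into chunks and sub-chunks. The function name parse_chunks and the list-of-lists representation are assumptions made for illustration, not part of the disclosed embodiment.

```python
# Minimal sketch: parse text data that uses "/" as the chunk delimiter and
# "+" as the sub-chunk delimiter. Names and data layout are illustrative
# assumptions, not the patent's implementation.
def parse_chunks(text: str) -> list[list[str]]:
    """Return a list of chunks, each chunk given as a list of its sub-chunks."""
    chunks = []
    for raw_chunk in text.split("/"):
        sub_chunks = [part.strip() for part in raw_chunk.split("+") if part.strip()]
        if sub_chunks:
            chunks.append(sub_chunks)
    return chunks

sentence = "Samson fell + in love / with a woman / named Delilah / who lived / in a valley of Sorek."
print(parse_chunks(sentence))
# [['Samson fell', 'in love'], ['with a woman'], ['named Delilah'],
#  ['who lived'], ['in a valley of Sorek.']]
```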

Alternatively, the chunk division may be performed by the control unit 1400 determining the part of speech of each word included in the text data and deciding, according to a predetermined rule, whether a word or a group of words constitutes a chunk. For example, if the predetermined rule treats a sequence of words of the form "preposition + article + noun" as a chunk, the control unit 1400 identifies the part "with a woman" as "preposition + article + noun" and recognizes it as a chunk.

Alternatively, chunks may be stored in the storage unit 1300 in advance, and the control unit 1400 may refer to them to extract chunks from a given sentence. For example, if "with a woman" is stored as one chunk in a chunk table, the control unit 1400 can recognize the corresponding part as a chunk when the sentence in FIG. 3 is given.
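
A minimal Python sketch of this chunk-table lookup follows. It assumes a greedy longest-match strategy; the strategy, the CHUNK_TABLE contents, and the function name recognize_chunks are illustrative assumptions, since the patent does not specify a matching algorithm.

```python
# Minimal sketch: recognize chunks in a sentence by matching entries from a
# pre-stored chunk table (greedy longest match; illustrative only).
CHUNK_TABLE = ["Samson fell in love", "with a woman", "named Delilah",
               "who lived", "in a valley of Sorek"]

def recognize_chunks(sentence: str, chunk_table: list[str]) -> list[str]:
    """Greedily match the longest chunk-table entry starting at each word position."""
    words = sentence.rstrip(".").split()
    recognized, i = [], 0
    while i < len(words):
        match = None
        for length in range(len(words) - i, 0, -1):  # try the longest span first
            candidate = " ".join(words[i:i + length])
            if candidate in chunk_table:
                match = candidate
                break
        if match is not None:
            recognized.append(match)
            i += len(match.split())
        else:
            recognized.append(words[i])  # fall back to a single word
            i += 1
    return recognized

sentence = "Samson fell in love with a woman named Delilah who lived in a valley of Sorek."
print(recognize_chunks(sentence, CHUNK_TABLE))
# ['Samson fell in love', 'with a woman', 'named Delilah', 'who lived', 'in a valley of Sorek']
```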

When the controller 1400 divides the chunks by referring to a predetermined rule or a chunk table, a chunk delimiter or a sub-chunk delimiter may be inserted between the divided chunks and sub-chunks in the same way as with user input. Thereafter, chunk delimiters and sub-chunk delimiters may be deleted from or added to the text data in which they have been inserted, so that the text data automatically divided by the control unit 1400 can be manually edited by the user.

FIG. 3 illustrates a database for chunk-based language learning according to an embodiment of the present invention.

Referring to FIG. 3, a chunk-based language learning database may be built by matching a chunk table, generated from the text data divided into chunk units, with an audio table generated through text-to-speech (TTS) conversion. By matching the audio table with the chunk table, an audio file can be matched to each chunk. For example, the audio file matching chunk #2, "with a woman", may be provided as a single file.

In this way, the final chunk audio files can be extracted.

The final chunk audio files may be provided for each chunk or sub-chunk, and such a chunk audio file may be output through the speaker in response to a user input received on an icon displayed on the display unit 1520.
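
As a concrete illustration of this chunk-to-audio matching, here is a minimal Python sketch. The ChunkEntry structure, the file paths, and the function audio_for_chunk are illustrative assumptions rather than the database schema actually used.

```python
# Minimal sketch: each chunk in the chunk table is matched to a TTS-generated
# audio file, so that selecting a chunk icon can look up and play the matching
# chunk audio file. All names and paths are hypothetical.
from dataclasses import dataclass

@dataclass
class ChunkEntry:
    chunk_id: int     # position of the chunk within the sentence
    text: str         # chunk text from the chunk table
    audio_file: str   # TTS-generated audio file matched to this chunk

chunk_table = [
    ChunkEntry(1, "Samson fell in love", "audio/chunk_001.mp3"),
    ChunkEntry(2, "with a woman",        "audio/chunk_002.mp3"),
    ChunkEntry(3, "named Delilah",       "audio/chunk_003.mp3"),
]

def audio_for_chunk(chunk_id: int) -> str:
    """Look up the audio file matched to the given chunk id."""
    for entry in chunk_table:
        if entry.chunk_id == chunk_id:
            return entry.audio_file
    raise KeyError(f"no audio matched to chunk {chunk_id}")

print(audio_for_chunk(2))  # audio/chunk_002.mp3
```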

Hereinafter, a chunk-based language learning screen according to the present invention will be described.

FIGS. 4 to 8 are diagrams illustrating screens for chunk-based language learning according to the first to fifth embodiments of the present invention.

Referring to FIG. 4, the screen of the chunk learning method according to the first embodiment may include a chunk icon 1521 and a connection icon 1523.

The chunk icon 1521 contains a chunk voice file matched through a database for chunk-based language learning, and each chunk icon 1521 may contain text matching the chunk voice file. When the user executes this chunk icon 1521, the audio file of the target chunk can be reproduced. For example, when a user executes a chunk icon 1521 with "Samson fell in love", a chunk voice file that matches "Samson fell in love" can be played.

On the other hand, when no text is written on the chunk icon 1521 and the user executes the chunk icon 1521, the text matched to the chunk voice file being executed may be displayed on the chunk icon 1521.

The connection icon 1523 may be placed between chunk icons 1521, and may carry a symbol such as "+" or "*". When the user executes a connection icon 1523, the chunk audio files embedded in the chunk icons 1521 from the first chunk icon 1521 up to the chunk icon 1521 corresponding to the selected connection icon 1523 are reproduced, and the chunk voice files can be connected and played back seamlessly. For example, when a user executes the connection icon 1523 between "named Delilah" and "who lived", "Samson fell in love with a woman named Delilah who lived" can be played.

In other words, when a chunk icon 1521 is selected, the controller 1400 determines the chunk corresponding to that chunk icon 1521 as the target chunk; when a connection icon 1523 is selected, the chunks from the first chunk icon 1521 through the chunk icon 1521 immediately following the connection icon 1523 can be determined as the target chunks.
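
The following minimal Python sketch illustrates this target-chunk rule. The chunk list and the function target_chunks are hypothetical, and indices are used in place of on-screen icons.

```python
# Minimal sketch of the target-chunk rule: a chunk icon targets only its own
# chunk, while a connection icon targets the chunks from the first chunk
# through the chunk immediately following the icon. Illustrative only.
chunks = ["Samson fell in love", "with a woman", "named Delilah",
          "who lived", "in a valley of Sorek"]

def target_chunks(icon_type: str, index: int) -> list[str]:
    """Return the target chunks for a selected icon.

    For "chunk", index is the position of the selected chunk icon.
    For "connection", index is the position of the chunk icon just before the
    selected connection icon.
    """
    if icon_type == "chunk":
        return [chunks[index]]
    if icon_type == "connection":
        return chunks[: index + 2]
    raise ValueError(f"unknown icon type: {icon_type}")

print(target_chunks("chunk", 2))                 # ['named Delilah']
print(" ".join(target_chunks("connection", 2)))  # Samson fell in love with a woman named Delilah who lived
```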

FIG. 5 is a diagram illustrating a chunk-based language learning screen according to a second embodiment of the present invention.

Referring to FIG. 5, the screen of the chunk learning method according to the second embodiment may include a chunk icon 1521 and a connection icon 1523. However, in the second embodiment, a display method of the chunk icon 1521 may be provided differently.

Since the chunk icon 1521 and the connection icon 1523 in the second embodiment are similar to the chunk icon 1521 and connection icon 1523 in the first embodiment, a description thereof will be omitted.

The chunk icon 1521 can be displayed so that it is enlarged when the user executes it. The enlargement may be maintained while the chunk voice file embedded in the icon is playing, or the icon may be enlarged only while it is being clicked. It is also possible to increase the size of the icon according to the number of times it has been clicked.

FIG. 6 is a diagram illustrating a chunk-based language learning screen according to a third embodiment of the present invention.

Referring to FIG. 6, the screen of the chunk learning method according to the third embodiment may include a chunk icon 1521, a connection icon 1523, and a sub-chunk icon 1522. However, in the first embodiment, only the chunk icon 1521 and the connection icon 1523 are displayed, while in the third embodiment, the sub-chunk icon 1522 can be additionally displayed.

The chunk icon 1521 and the connection icon 1523 in the third embodiment are the same as the chunk icon 1521 and the connection icon 1523 in the first embodiment, and a description thereof will be omitted.

The sub-chunk icon 1522 is displayed when a chunk is divided into sub-chunks, and may be displayed adjacent to the chunk icon 1521. The sub-chunk icon 1522 may carry numbers or letters.

When the user executes a sub-chunk icon 1522, the chunk voice file provided in sub-chunk units can be reproduced. For example, "Samson fell in love" can be divided into the two sub-chunks "Samson fell" and "in love"; when the sub-chunk icon ② is executed, the chunk audio file of "in love" can be played back.

The sub-chunk icon 1522 is substantially the same as the chunk icon 1521; only the size of the chunk unit differs.

FIG. 7 is a diagram illustrating a chunk-based language learning screen according to a fourth embodiment of the present invention.

Referring to FIG. 7, the screen of the chunk learning method according to the fourth embodiment may include a chunk icon 1521, a connection icon 1523, a sub-chunk icon 1522, and a graphic icon 1527. The chunk icon 1521, the connection icon 1523, and the sub-chunk icon 1522 in the fourth embodiment are the same as those in the third embodiment, and a description thereof will be omitted.

The graphic icon 1527 may embed a moving image or an image associated with the chunk icon 1521. The graphic icon 1527 may be displayed so as to match the chunk icon 1521, and may be placed above, below, to the left of, or elsewhere around the chunk icon 1521.

This graphic icon 1527 can be played when the chunk icon 1521 and the connection icon 1523, which match each graphic icon 1527, are executed.

At this time, when a moving image is embedded in the graphic icon 1527, the stopped moving image can be played back when the user executes the chunk icon 1521; when the user executes the connection icon 1523, the moving images matched to the chunk voice files may be played back in order.

When an image is embedded in the graphic icon 1527, the image, displayed in black and white, can be changed to color when the user executes the chunk icon 1521; when the user executes the connection icon 1523, the images matched to the chunk voice files may change to color in order.

On the other hand, when the user has executed the chunk icon 1521 or the connection icon 1523, the graphic icon 1527 may be enlarged and executed.

FIG. 8 is a diagram illustrating a chunk-based language learning screen according to the fifth embodiment of the present invention.

Referring to FIG. 8, the screen of the chunk learning method according to the fifth embodiment may include a chunk icon 1521 and a connection icon 1523. However, the chunk icon 1521 and the connection icon 1523 in the fifth embodiment are similar to the chunk icon 1521 and the connection icon 1523 in the first embodiment, and a description thereof will be omitted.

The connection icon 1523 in the fifth embodiment may be displayed to include a corresponding chunk icon 1521 and a chunk icon 1521 disposed in front of it.

With a user input received only on the connection icon 1523, the chunk voice files embedded in two or more chunk icons 1521 can be output.

FIGS. 9 to 12 are diagrams illustrating screens for chunk-based language learning according to the sixth to ninth embodiments of the present invention.

FIG. 9 is a diagram illustrating a chunk-based language learning screen according to a sixth embodiment of the present invention.

Referring to FIG. 9, the screen of the chunk learning method according to the sixth embodiment may include a chunk icon 1521 and a play icon 1524. However, the chunk icon 1521 in the sixth embodiment is similar to the chunk icon 1521 in the first embodiment, and a description thereof will be omitted.

In the sixth embodiment, a chunk icon 1521 is not executed immediately when it is clicked; instead, the user selects the chunk icons 1521 to be executed and then executes the play icon 1524.

For example, if the user selects "Samson fell in love" and "named Delilah" and then selects the play icon 1524, a voice file of "Samson fell in love named Delilah" can be output.

FIG. 10 is a diagram illustrating a chunk-based language learning screen according to a seventh embodiment of the present invention.

Referring to FIG. 10, the screen of the chunk learning method according to the seventh embodiment may include a chunk icon 1521 and a drag icon 1526. The chunk icon 1521 in the seventh embodiment is the same as the chunk icon 1521 in the first embodiment, and a description thereof will be omitted.

When the drag icon 1526 is dragged to a desired position, a chunk voice file that matches the chunk icon 1521 up to the dragged position can be output.

The drag position can be adjusted at various granularities, such as per chunk icon 1521 or per syllable.
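
A minimal Python sketch of snapping a drag position to such unit boundaries follows; the pixel widths, the unit choice, and the function units_to_play are hypothetical, since the patent does not describe how the drag position is resolved.

```python
# Minimal sketch: snap a horizontal drag position to unit boundaries (chunk
# icons or syllables) and report how many units should be played. Hypothetical.
import bisect

def units_to_play(drag_x: float, unit_widths: list[float]) -> int:
    """Return how many units (chunk icons or syllables) the drag has fully passed."""
    boundaries, total = [], 0.0
    for width in unit_widths:
        total += width
        boundaries.append(total)  # cumulative right edge of each unit
    return bisect.bisect_right(boundaries, drag_x)

chunk_icon_widths = [120.0, 90.0, 100.0, 80.0]  # pixel widths of the chunk icons
print(units_to_play(215.0, chunk_icon_widths))   # 2 -> play the first two chunks
```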

FIG. 11 is a diagram illustrating a chunk-based language learning screen according to an eighth embodiment of the present invention.

Referring to FIG. 11, the screen of the chunk learning method according to the eighth embodiment may include a chunk icon 1521, a drag icon 1526, and a play icon 1524. Since the chunk icon 1521 and the drag icon 1526 in the eighth embodiment are similar to those in the seventh embodiment, part of their description is omitted.

The drag icon 1526 can be dragged to select multiple chunk icons, and after the selection is completed, the play icon 1524 is executed to output the chunk audio files matched to the selected chunk icons 1521.

FIG. 12 is a diagram illustrating a chunk-based language learning screen according to a ninth embodiment of the present invention.

Referring to FIG. 12, the screen of the chunk learning method according to the ninth embodiment may include a chunk icon 1521, a drag icon 1526, a play icon 1524, and a number-of-times setting icon 1525. The chunk icon 1521, the drag icon 1526, and the play icon 1524 in the ninth embodiment are the same as those in the eighth embodiment, and a description thereof will be omitted.

The number-of-times setting icon 1525 is an icon for controlling how many times the chunk voice files embedded in the selected icons are output; the number of outputs can be determined according to the number of times the number-of-times setting icon 1525 is clicked. The current count may be displayed on the number-of-times setting icon 1525. The chunk voice files may be output immediately upon clicking the number-of-times setting icon 1525, or when the play icon 1524 is executed.
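
A minimal Python sketch of this repeat-count behaviour follows, assuming a hypothetical RepeatController in place of the control unit 1400 and a print call standing in for the audio output unit.

```python
# Minimal sketch: each click on the number-of-times setting icon raises the
# repeat count, and pressing play outputs the selected chunk audio files that
# many times. Names are hypothetical stand-ins for the control unit logic.
class RepeatController:
    def __init__(self) -> None:
        self.repeat_count = 1

    def on_count_icon_clicked(self) -> int:
        """Each click raises the repeat count (which could be shown on the icon)."""
        self.repeat_count += 1
        return self.repeat_count

    def on_play_clicked(self, audio_files: list[str]) -> None:
        """Output the selected chunk audio files the configured number of times."""
        for _ in range(self.repeat_count):
            for path in audio_files:
                print(f"playing {path}")  # stand-in for the real audio output unit

ctrl = RepeatController()
ctrl.on_count_icon_clicked()                   # repeat count becomes 2
ctrl.on_play_clicked(["audio/chunk_001.mp3"])  # prints the file twice
```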

On the other hand, the screen of the chunk learning method described above may be provided in various combinations such as omitting some icons or adding some icons.

Hereinafter, an operation method of a chunk-based language learning electronic device according to the present invention will be described.

A method of operating a chunk-based language learning electronic device includes receiving a user input on an icon; activating the icon according to the user input; and reproducing at least one of a voice, an image, and a moving image matched with the activated icon. Here, the icon may include a chunk icon 1521, a sub-chunk icon 1522, a connection icon 1523, a drag icon 1526, a play icon 1524, and a number-of-times setting icon 1525.

The step of receiving the user input may be receiving a drag input or a click input by the user on the icon. At this time, the input of the user may be performed through an input device such as a mouse and a touch pad.

In the step of activating the icon, the size of the icon may be changed according to the number of user inputs, and the size of the icon may increase as the number of user inputs increases. It is also possible that the size of the icon changes only while the user's input is received.

In the step of reproducing at least one of a voice, an image, and a moving image, the voice may be repeatedly played according to the number of inputs received on the icon.

And the image may be displayed in black and white before the user's input is received, and in color after the user's input is received. The moving image may be provided as a still image until the user input is received, and may be provided as a moving image after the user input is received.

Meanwhile, images and moving images can be reproduced in synchronization with the voice, and images and moving images related to the voice can be provided.

The foregoing description is merely illustrative of the technical idea of the present invention, and various changes and modifications may be made by those skilled in the art without departing from the essential characteristics of the present invention. Therefore, the embodiments of the present invention described above can be implemented separately or in combination.

Therefore, the embodiments disclosed in the present invention are intended to illustrate rather than limit the scope of the present invention, and the scope of the technical idea of the present invention is not limited by these embodiments. The scope of protection of the present invention should be construed according to the following claims, and all technical ideas within the scope of equivalents should be construed as falling within the scope of the present invention.

1000: electronic device 1100: communication unit
1200: input unit 1300: storage unit
1400: control unit 1500: output unit
1510: Audio output unit 1520: Display unit
1521: Chunk icon 1523: Connection icon

Claims (17)

1. An electronic device comprising:
a storage unit for storing text data in which a plurality of words are divided into chunks, and audio data matched to the chunks included in the text data;
a display unit for outputting a chunk icon representing the chunk and a connection icon connecting the chunks;
an audio output unit for outputting audio;
an input unit for receiving a user input for selecting at least one of the chunk icon and the connection icon; and
a control unit for determining a target chunk based on the selected icon and outputting the audio data matched to the target chunk through the audio output unit.
2. The electronic device according to claim 1, wherein the display unit outputs the chunk icons in their order within the text and outputs the connection icons so as to be disposed between the chunk icons, and
wherein the control unit determines the chunk corresponding to a chunk icon as the target chunk when the chunk icon is selected, and determines the chunks corresponding to the chunk icons from the first chunk icon through the chunk icon immediately following a connection icon as the target chunks when the connection icon is selected.

3. The electronic device according to claim 1, wherein the display unit outputs the chunk icons according to the word order of the chunks in the text and outputs the connection icon so as to connect at least one chunk icon, and
wherein the control unit determines the chunk corresponding to a chunk icon as the target chunk when the chunk icon is selected, and determines the chunks corresponding to the chunk icons connected by a connection icon as the target chunks when the connection icon is selected.

4. The electronic device according to claim 1, wherein the storage unit stores a graphic icon matched to the chunk, and
the display unit displays the graphic icon so as to correspond to the chunk icon.

5. The electronic device according to claim 4, wherein the graphic icon is a still image, and
the control unit activates the graphic icon matched with the chunk whose audio is being reproduced during the audio output of the target chunk.

6. The electronic device according to claim 4, wherein the graphic icon is a moving picture, and
the control unit plays the graphic icon matched with the chunk whose audio is being reproduced during the audio output of the target chunk.

7. The electronic device according to claim 1, wherein the control unit adjusts a display attribute of the chunk icon selected as the target chunk.

8. The electronic device according to claim 7, wherein the control unit increases the size of the chunk icon according to the number of times it is selected as the target chunk.

9. The electronic device according to claim 1, wherein at least some of the chunks are divided into a plurality of sub-chunks, and
the control unit displays a sub-chunk icon representing the sub-chunk and, upon receiving a user input for selecting the sub-chunk icon, determines the sub-chunk corresponding to the selected sub-chunk icon as the target chunk and outputs audio data corresponding to the sub-chunk.

10. The electronic device according to claim 1, wherein the display unit displays a play icon, and
the control unit outputs the audio data when the play icon is selected.

11. The electronic device according to claim 10, wherein the display unit displays a number-of-times setting icon, and
the control unit receives a number of repetitions as a user input through the number-of-times setting icon via the input unit and repeatedly outputs the audio data the received number of times.

12. The electronic device according to claim 1, wherein a plurality of the chunk icons are selected by the user input, and
the control unit successively outputs, through the audio output unit, the audio data matched to the plurality of target chunks determined according to the selected icons.

13. The electronic device according to claim 12, wherein the user input is a drag input.
14. A chunk-based language learning method comprising:
storing text data in which a plurality of words are divided into chunks, and audio data matched to the chunks included in the text data;
displaying a chunk icon representing the chunk and a connection icon connecting the chunks;
receiving a user input for selecting at least one of the chunk icon and the connection icon;
determining a target chunk based on the selected icon; and
outputting the audio data matched to the target chunk as voice.

15. The chunk-based language learning method according to claim 14, wherein in the displaying step a graphic icon is displayed so as to correspond to the chunk icon.

16. The chunk-based language learning method according to claim 14, further comprising adjusting a display attribute of the chunk icon selected as the target chunk.

17. The chunk-based language learning method according to claim 14, wherein at least some of the chunks are divided into a plurality of sub-chunks,
in the displaying step a sub-chunk icon representing the sub-chunk is further displayed, and
when a user input for selecting the sub-chunk icon is received in the receiving step, the sub-chunk corresponding to the selected sub-chunk icon is determined as the target chunk and audio data corresponding to the sub-chunk is output.
KR1020150101631A 2015-07-17 2015-07-17 Chunk-based language learning method and electronic device to do this KR20170009487A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150101631A KR20170009487A (en) 2015-07-17 2015-07-17 Chunk-based language learning method and electronic device to do this

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150101631A KR20170009487A (en) 2015-07-17 2015-07-17 Chunk-based language learning method and electronic device to do this

Publications (1)

Publication Number Publication Date
KR20170009487A true KR20170009487A (en) 2017-01-25

Family

ID=57991455

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150101631A KR20170009487A (en) 2015-07-17 2015-07-17 Chunk-based language learning method and electronic device to do this

Country Status (1)

Country Link
KR (1) KR20170009487A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180130933A (en) * 2017-05-30 2018-12-10 주식회사 엠글리쉬 Analysis method for chunk and key word based on voice signal of video data, and system thereof
WO2019107604A1 (en) * 2017-11-30 2019-06-06 코리아테스팅 주식회사 Target-assigning firefighting robot in which fire extinguishing ball is automatically loaded


Similar Documents

Publication Publication Date Title
US20200175890A1 (en) Device, method, and graphical user interface for a group reading environment
US20060194181A1 (en) Method and apparatus for electronic books with enhanced educational features
US11854431B2 (en) Interactive education system and method
US20140315163A1 (en) Device, method, and graphical user interface for a group reading environment
CN111462740A (en) Voice command matching for voice-assisted application prototyping for non-speech alphabetic languages
US11210964B2 (en) Learning tool and method
US20140315179A1 (en) Educational Content and/or Dictionary Entry with Complementary Related Trivia
KR101102520B1 (en) The audio-visual learning system of its operating methods that based on hangul alphabet combining the metrics
KR20170009486A (en) Database generating method for chunk-based language learning and electronic device performing the same
US20140278428A1 (en) Tracking spoken language using a dynamic active vocabulary
KR102645880B1 (en) Method and device for providing english self-directed learning contents
KR20170009487A (en) Chunk-based language learning method and electronic device to do this
US20040102973A1 (en) Process, apparatus, and system for phonetic dictation and instruction
KR20040094634A (en) Dynamic pronunciation support for japanese and chinese speech recognition training
US20160267811A1 (en) Systems and methods for teaching foreign languages
US20160307453A1 (en) System and method for auditory capacity development for language processing
KR102618311B1 (en) An apparatus and method for providing conversational english lecturing contents
Amelia Utilizing Balabolka to enhance teaching listening
KR102453876B1 (en) Apparatus, program and method for training foreign language speaking
CN111401082A (en) Intelligent personalized bilingual learning method, terminal and computer readable storage medium
TW201435825A (en) Electronic apparatus, learning method, and computer program product thereof
KR20140122172A (en) System and method for learning language using touch screen
KR102667466B1 (en) Method and apparatus for providing english reading comprehension lecturing contents using image association technique
KR102616915B1 (en) Method and system for providing korean spelling quizzes
KR101191904B1 (en) Sign language translating device

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E902 Notification of reason for refusal
E601 Decision to refuse application