KR20170009487A - Chunk-based language learning method and electronic device to do this - Google Patents
- Publication number
- KR20170009487A (Application No. KR1020150101631A)
- Authority
- KR
- South Korea
- Prior art keywords
- chunk
- icon
- sub
- target
- chunks
- Prior art date
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B19/00—Teaching not covered by other main groups of this subclass
- G09B19/06—Foreign languages
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- Entrepreneurship & Innovation (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Electrically Operated Instructional Devices (AREA)
Abstract
According to an aspect of the present invention, there are provided a chunk-based language learning method and an electronic device for performing the same. The electronic device comprises: a storage unit for storing text data in which a plurality of words are divided into chunks, and speech data matched to the chunks included in the text data; a display unit for outputting a chunk icon representing a chunk and a connection icon connecting chunks; an audio output unit for outputting audio; an input unit for receiving a user input selecting at least one of the chunk icon and the connection icon; and a control unit for determining a target chunk based on the selected icon and outputting the speech data matched to the target chunk through the audio output unit.
Description
The present invention relates, more particularly, to a chunk-based language learning method capable of learning a language in chunk units, and to an electronic device for performing the same.
Two of the most important aspects of language learning are words and word order. A sentence, which can be regarded as the minimum unit of meaningful expression, consists of an arrangement of individual words that each carry a unique meaning, so words are the core and foundation of language learning. Word order concerns arranging those words in the correct sequence: when the order is correct, communication succeeds, and when the order is changed, the meaning can change or even be reversed.
Therefore, no matter how excellent one's knowledge of words is, merely listing known words when speaking or writing does not convey meaning. In particular, despite abundant English vocabulary knowledge, Korean speakers often have difficulty speaking or writing English, because the word order of Korean differs from that of English.
As a solution, grammatical rules dealing with the form of sentences are taught, but much grammatical knowledge does not actually help with speaking or writing English. This is because grammar in education is treated as examination knowledge, and it cannot effectively solve the problem of differing word order. In word-order learning, systematic training based on the major sentence patterns is more important than word-order knowledge derived from grammar.
For effective word learning, it is important to learn not only the meaning of each individual word but also chunks and collocation information. A chunk is a group of one or more words. When we say a sentence, we do not compose and utter it all at once; rather, we think in appropriate units of meaning and break the sentence accordingly. For example, a person who wants to say the sentence "Today, at school, during class, I read a book and fell asleep" thinks of it separated into chunks such as "Today / at school / during class / reading a book / fell asleep".
In this sense, chunks can be seen as units of thinking, or breathing units in speech, and they are a very important unit in language learning because they carry collocation information about words used together. A collocation is a combination of words commonly used together in a language. For example, the correct English expression for suicide is "commit suicide": the verb "commit" is used rather than "do", "take", or "get", so "commit" and "suicide" are in a collocation relationship.
In existing language learning methods, however, the importance of word learning is emphasized while collocation between words is not considered, so it is difficult for learners to think in chunk units and, accordingly, difficult to learn correct sentences.
An object of the present invention is to provide a chunk-based language learning method capable of learning a language based on a chunk unit and an electronic apparatus performing the same.
It is to be understood that the problems to be solved by the present invention are not limited to those described above, and that various changes and modifications may be made without departing from the spirit and scope of the present invention as defined by the following claims.
According to an aspect of the present invention, there is provided an electronic device comprising: a storage unit for storing text data in which a plurality of words are divided into chunks, and speech data matched to the chunks included in the text data; a display unit for outputting a chunk icon representing a chunk and a connection icon connecting chunks; an audio output unit for outputting audio; an input unit for receiving a user input selecting at least one of the chunk icon and the connection icon; and a control unit for determining a target chunk based on the selected icon and outputting the speech data matched to the target chunk through the audio output unit.
According to another aspect of the present invention, there is provided a chunk-based language learning method comprising: storing text data in which a plurality of words are divided into chunks, and speech data matched to the chunks included in the text data; displaying a chunk icon representing a chunk and a connection icon connecting chunks; receiving a user input selecting at least one of the chunk icon and the connection icon; determining a target chunk based on the selected icon; and outputting the speech data matched to the target chunk as voice.
The solutions to the problems of the present invention are not limited to those mentioned above, and solutions not mentioned will be clearly understood from this specification by those skilled in the art to which the present invention belongs.
According to the present invention, a language can be learned based on a chunk unit.
The effects of the present invention are not limited to the above-mentioned effects, and the effects not mentioned can be clearly understood by those skilled in the art from the present specification and the accompanying drawings.
FIG. 1 is a block diagram of a chunk-based language learning electronic device according to an embodiment of the present invention.
FIG. 2 is a diagram illustrating text data divided into chunk units according to an embodiment of the present invention.
FIG. 3 illustrates a database for chunk-based language learning according to an embodiment of the present invention.
FIG. 4 is a diagram illustrating a chunk-based language learning screen according to a first embodiment of the present invention.
FIG. 5 is a diagram illustrating a chunk-based language learning screen according to a second embodiment of the present invention.
FIG. 6 is a diagram illustrating a chunk-based language learning screen according to a third embodiment of the present invention.
FIG. 7 is a diagram illustrating a chunk-based language learning screen according to a fourth embodiment of the present invention.
FIG. 8 is a diagram illustrating a chunk-based language learning screen according to a fifth embodiment of the present invention.
FIG. 9 is a diagram illustrating a chunk-based language learning screen according to a sixth embodiment of the present invention.
FIG. 10 is a diagram illustrating a chunk-based language learning screen according to a seventh embodiment of the present invention.
FIG. 11 is a diagram illustrating a chunk-based language learning screen according to an eighth embodiment of the present invention.
FIG. 12 is a diagram illustrating a chunk-based language learning screen according to a ninth embodiment of the present invention.
Both the foregoing general description and the following detailed description are exemplary and explanatory, and are intended to illustrate the present invention rather than to limit its scope. The present invention should be interpreted to include modifications or variations that do not depart from its spirit.
The terms used herein have been selected, as far as possible, as terms in general use, in consideration of the functions of the present invention, but they may vary according to the intention of those skilled in the art to which the present invention belongs, to custom, or to the emergence of new technology. Where a specific term is defined with an arbitrary meaning, the meaning of that term will be described separately. Accordingly, the terms used herein should be interpreted based on their actual meaning and on the content throughout this specification, rather than simply on their names.
The accompanying drawings are intended to make the present invention easy to understand; the shapes shown in the drawings may be exaggerated where necessary to facilitate understanding, and the present invention is not limited to the drawings.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, detailed descriptions of known configurations or functions related to the present invention will be omitted when it is determined that they would obscure the gist of the present invention.
According to an aspect of the present invention, there is provided an electronic device comprising: a storage unit for storing text data in which a plurality of words are divided into chunks, and speech data matched to the chunks included in the text data; a display unit for outputting a chunk icon representing a chunk and a connection icon connecting chunks; an audio output unit for outputting audio; an input unit for receiving a user input selecting at least one of the chunk icon and the connection icon; and a control unit for determining a target chunk based on the selected icon and outputting the speech data matched to the target chunk through the audio output unit.
Also, the display unit may output the chunk icons in their order within the text and output the connection icons so as to be disposed between the chunk icons. When a chunk icon is selected, the control unit may determine the chunk corresponding to that chunk icon to be the target chunk; when a connection icon is selected, it may determine the chunks from the first chunk icon through the chunk icon immediately after the connection icon to be the target chunks.
Alternatively, the display unit may output the chunk icons according to the word order of the chunks in the text and output a connection icon connecting at least one chunk icon. When a chunk icon is selected, the control unit determines the chunk corresponding to that chunk icon to be the target chunk; when the connection icon is selected, the chunks corresponding to the chunk icons connected by the connection icon may be determined to be the target chunks.
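The two target-chunk determination rules above can be sketched in code. This is an illustrative sketch only: the patent specifies behaviour, not an implementation, and the function and parameter names are assumptions.

```python
# Sketch of the target-chunk rules described above (illustrative only;
# the patent defines no code).
def target_chunks(chunks, selected_index, icon_kind):
    """Return the list of chunks to reproduce for a selected icon.

    chunks         -- chunk strings in their order within the text
    selected_index -- index of the selected icon; a connection icon at
                      index i sits between chunks i and i+1
    icon_kind      -- "chunk" or "connection"
    """
    if icon_kind == "chunk":
        # Selecting a chunk icon targets only that chunk.
        return [chunks[selected_index]]
    # Selecting a connection icon targets the chunks from the first chunk
    # through the chunk immediately after the icon.
    return chunks[: selected_index + 2]

chunks = ["Samson fell in love", "with a woman", "named Delilah"]
assert target_chunks(chunks, 1, "chunk") == ["with a woman"]
assert target_chunks(chunks, 0, "connection") == ["Samson fell in love", "with a woman"]
```

Under the second variant, in which a connection icon links an arbitrary set of chunk icons, the function would instead return exactly the linked chunks rather than a prefix.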
The storage unit may store a graphic icon matching the chunk, and the display unit may display the graphic icon to correspond to the chunk icon.
The graphic icon may be a still image, and the control unit may activate the graphic icon matched with the chunk being audibly reproduced during the audio output of the target chunk.
Also, the graphic icon may be a moving picture, and the control unit may play the graphic icon matched with the chunk being audibly reproduced during the audio output of the target chunk.
The control unit may adjust a display attribute of the chunk icon selected as the target chunk.
The control unit may increase the size of the chunk icon according to the number of times it is selected as the target chunk.
At least some of the chunks may be divided into a plurality of sub-chunks. The control unit may display a sub-chunk icon representing a sub-chunk; upon receiving a user input selecting the sub-chunk icon, it may determine the sub-chunk corresponding to the selected sub-chunk icon to be the target chunk and output the audio data corresponding to that sub-chunk.
Further, the display unit may display a reproduction icon, and the control unit may output the audio data when the reproduction icon is selected.
The display unit may display a repetition-count setting icon, and the control unit may receive a number of repetitions as a user input through the repetition-count setting icon via the input unit and repeatedly output the voice data that number of times.
When a plurality of chunk icons are selected by the user input, the control unit may successively output, through the audio output unit, the audio data matched to the plurality of target chunks determined according to the selected icons.
The user input may be a drag input.
According to another aspect of the present invention, there is provided a chunk-based language learning method comprising: storing text data in which a plurality of words are divided into chunks, and speech data matched to the chunks included in the text data; displaying a chunk icon representing a chunk and a connection icon connecting chunks; receiving a user input selecting at least one of the chunk icon and the connection icon; determining a target chunk based on the selected icon; and outputting the speech data matched to the target chunk as voice.
Also, in the displaying step, a graphic icon may be displayed to correspond to the chunk icon.
The method may further comprise adjusting a display attribute of the chunk icon selected as the target chunk.
At least some of the chunks may be divided into a plurality of sub-chunks. In the displaying step, a sub-chunk icon representing a sub-chunk may further be displayed; when a user input selecting the sub-chunk icon is received, the sub-chunk corresponding to the selected sub-chunk icon is determined to be the target chunk and the voice data corresponding to that sub-chunk may be output.
The chunk learning method is a learning method that efficiently builds, in a short period of time, the English sentence-construction ability that is the foundation of English proficiency. For example, if learners train by dividing English sentences into three parts (a beginning part, a core part, and a closing part), they can freely produce about 3,000,000 English sentences from about 500 chunks.
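The 500-chunk / 3,000,000-sentence figure is combinatorial in nature: if each sentence takes one chunk for each of the three parts, the number of possible sentences grows as the product of the chunks available per part. A rough back-of-the-envelope check, assuming (purely for illustration) an even three-way split of the 500 chunks:

```python
# Back-of-the-envelope check of the 500-chunk claim above. The even split
# across the three sentence parts is an assumption for illustration only.
total_chunks = 500
per_part = total_chunks // 3   # ~166 chunks per sentence part
sentences = per_part ** 3      # one chunk chosen for each part
print(per_part, sentences)     # 166 4574296 -- the claimed order of magnitude
```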
This chunk learning method is applicable not only to English but also to various languages such as Japanese, Chinese, and German. However, the embodiments of the present invention will be described mainly with reference to English.
In the embodiments of the present invention, a chunk may consist of a subject and verb, a phrase consisting of a preposition and a noun, a verb phrase, or a connective. A verbal can be, for example, a to-infinitive, a present participle (~ing), or a past participle (p.p.). Thus, an English sentence can be seen as formed of at least one chunk, each chunk forming a semantic group.
Hereinafter, an
The
Hereinafter, an
FIG. 1 is a block diagram of a chunk-based language learning electronic device according to an embodiment of the present invention.
The
In addition, the
The
For example, the
The
The
The
The
The
On the other hand, in the following description, the operation of the electronic device can be interpreted as being performed by control of the
Hereinafter, data provided to a chunk-based language learning method according to an embodiment of the present invention will be described.
FIG. 2 is a diagram illustrating text data divided into chunk units according to an embodiment of the present invention.
Referring to FIG. 2, the text data may be divided into chunks. At least one of a chunk delimiter and a sub-chunk delimiter may be provided to divide the text data into chunks.
According to one example, the division may be performed according to a user input that inserts a chunk delimiter or a sub-chunk delimiter into the text data.
Specifically, the user can divide chunks by inserting a chunk delimiter between chunks: a chunk delimiter may be created between a word and a word, a word and an idiom, or an idiom and an idiom. When a chunk is long or difficult to pronounce, a sub-chunk delimiter may be created between words so that the chunk is divided into units smaller than a chunk. For example, in "Samson fell + in love / with a woman / named Delilah / who lived / in a valley of Sorek.", a sub-chunk delimiter divides the chunk "Samson fell in love" into "Samson fell" and "in love".
Meanwhile, in the present invention the chunk delimiter is denoted by "/" and the sub-chunk delimiter by "+", but various other notations may be used as long as the chunk delimiter and the sub-chunk delimiter can be distinguished. However, it is better to avoid characters that can appear in the text data itself, such as numbers, ".", or ",".
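A minimal parser for the delimiter scheme described above might look as follows; the function name is illustrative, since the patent only fixes the "/" and "+" notation:

```python
# Split delimited text data into chunks and sub-chunks, using "/" as the
# chunk delimiter and "+" as the sub-chunk delimiter, as described above.
def split_chunks(text):
    """Return a list of chunks, each given as a list of its sub-chunks."""
    return [[sub.strip() for sub in chunk.split("+")]
            for chunk in text.strip().split("/")]

line = "Samson fell + in love / with a woman / named Delilah"
assert split_chunks(line) == [
    ["Samson fell", "in love"],
    ["with a woman"],
    ["named Delilah"],
]
```

A chunk without a "+" simply yields a single sub-chunk, so downstream code can treat every chunk uniformly as a list of sub-chunks.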
Alternatively, the chunk classification may be determined by the
Or chunks may be stored in the
When the
FIG. 3 illustrates a database for chunk-based language learning according to an embodiment of the present invention.
Referring to FIG. 3, a chunk-based language learning database may be provided by matching a chunk table, generated from the text data divided into chunk units, with an audio table generated through text-to-speech (TTS) conversion. By matching the audio table with the chunk table, an audio file can be matched to each chunk. For example, an audio file matching chunk #2, "with a woman", may be provided as a single file.
In this way, the final chunk audio file can be extracted.
The final chunk audio file may be provided as a chunk audio file for each chunk or sub-chunk, and such a chunk audio file may be output through the speaker in accordance with a user input received on an icon displayed on the display unit.
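The chunk-table/audio-table matching can be sketched as a pair of keyed tables. The table contents and file names below are invented for illustration; only the id-based matching reflects the description above.

```python
# Sketch of the database matching above: the chunk table and the audio
# table share chunk ids, so each chunk resolves to one audio file.
chunk_table = {1: "Samson fell in love", 2: "with a woman", 3: "named Delilah"}
audio_table = {1: "chunk_001.mp3", 2: "chunk_002.mp3", 3: "chunk_003.mp3"}

def audio_files_for(chunk_ids):
    """Return the audio files for the given chunks, in text order."""
    return [audio_table[i] for i in sorted(chunk_ids)]

# Chunk #2, "with a woman", is served by a single matching file.
assert audio_files_for([2]) == ["chunk_002.mp3"]
# Several target chunks are queued in text order for successive playback.
assert audio_files_for([3, 1]) == ["chunk_001.mp3", "chunk_003.mp3"]
```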
Hereinafter, a chunk-based language learning screen according to the present invention will be described.
FIGS. 4 to 8 are diagrams illustrating screens for chunk-based language learning according to the first to fifth embodiments of the present invention.
Referring to FIG. 4, the screen of the chunk learning method according to the first embodiment may include a
The
On the other hand, when the text is not written in the
The
On the other hand, when the
FIG. 5 is a diagram illustrating a chunk-based language learning screen according to a second embodiment of the present invention.
Referring to FIG. 5, the screen of the chunk learning method according to the second embodiment may include a
Since the
The
FIG. 6 is a diagram illustrating a chunk-based language learning screen according to a third embodiment of the present invention.
Referring to FIG. 6, the screen of the chunk learning method according to the third embodiment may include a
The
The
When the user has executed the
Substantially, the
FIG. 7 is a diagram illustrating a chunk-based language learning screen according to a fourth embodiment of the present invention.
7, the screen of the chunk learning method according to the fourth embodiment may include a
The
This
At this time, when a moving image is embedded in the
When the
On the other hand, when the user has executed the
FIG. 8 is a diagram illustrating a chunk-based language learning screen according to the fifth embodiment of the present invention.
Referring to FIG. 8, the screen of the chunk learning method according to the fifth embodiment may include a
The
As the user input is received only on the
FIGS. 9 to 12 are diagrams illustrating screens for chunk-based language learning according to the sixth to ninth embodiments of the present invention.
FIG. 9 is a diagram illustrating a chunk-based language learning screen according to a sixth embodiment of the present invention.
Referring to FIG. 9, the screen of the chunk learning method according to the sixth embodiment may include a
The
For example, if you select "Samson fell in love" and "named Delilah" and then select the
FIG. 10 is a diagram illustrating a chunk-based language learning screen according to a seventh embodiment of the present invention.
Referring to FIG. 10, the screen of the chunk learning method according to the seventh embodiment may include a
When the
The drag granularity can be adjusted to various units, such as a chunk icon (1521) unit or a syllable unit.
FIG. 11 is a diagram illustrating a chunk-based language learning screen according to an eighth embodiment of the present invention.
Referring to FIG. 11, the screen of the chunk learning method according to the eighth embodiment may include a
The
FIG. 12 is a diagram illustrating a chunk-based language learning screen according to a ninth embodiment of the present invention.
Referring to FIG. 12, the screen of the chunk learning method according to the ninth embodiment may include a
The number-of-
On the other hand, the screen of the chunk learning method described above may be provided in various combinations such as omitting some icons or adding some icons.
Hereinafter, an operation method of a chunk-based language learning electronic device according to the present invention will be described.
A method of operating a chunk-based language learning electronic device includes receiving a user input on an icon; Activating the icon according to a user input; And reproducing at least one of a voice, an image, and a moving image matched with the activated icon. Here, the icon may include a
The step of receiving the user input may comprise receiving a drag input or a click input from the user on the icon. The user's input may be made through an input device such as a mouse or a touch pad.
In the step of activating the icon, the size of the icon may be changed according to the number of user inputs, and the size of the icon may increase as the number of user inputs increases. It is also possible that the size of the icon changes only while the user's input is received.
In the step of reproducing at least one of the voice, image, or moving picture, the voice may be played repeatedly according to the number of times input is received on the icon.
The image may be displayed in black and white before the user's input is received, and in color afterwards. The moving image may be provided as a still image until the user input is received, and played as a moving image afterwards.
Meanwhile, images and moving images can be reproduced in synchronization with the sounds, so that images and moving images related to each sound are provided.
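The icon-activation rules above (growth with repeated selection, black-and-white until the first selection) can be sketched as a small state object. The field names and the 10%-per-selection growth factor are assumptions for illustration, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class IconState:
    """Display state for an icon under the activation rules above."""
    base_size: int = 20
    times_selected: int = 0

    def select(self):
        # Each user input received on the icon counts as one selection.
        self.times_selected += 1

    @property
    def size(self):
        # Grow 10% per selection (illustrative growth factor).
        return round(self.base_size * (1 + 0.1 * self.times_selected))

    @property
    def grayscale(self):
        # Black-and-white until the first user input is received.
        return self.times_selected == 0

icon = IconState()
assert icon.grayscale and icon.size == 20
icon.select()
assert not icon.grayscale and icon.size == 22
```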
The foregoing description is merely illustrative of the technical idea of the present invention, and various changes and modifications may be made by those skilled in the art without departing from the essential characteristics of the present invention. Therefore, the embodiments of the present invention described above can be implemented separately or in combination.
Therefore, the embodiments disclosed in the present invention are intended to illustrate rather than limit the scope of the present invention, and the scope of the technical idea of the present invention is not limited by these embodiments. The scope of protection of the present invention should be construed according to the following claims, and all technical ideas within the scope of equivalents should be construed as falling within the scope of the present invention.
1000: electronic device 1100: communication unit
1200: input unit 1300:
1400: control unit 1500: output unit
1510: Audio output unit 1520: Display unit
1521: Chunk icon 1523: Connection icon
Claims (17)
A display unit for outputting a chunk icon for displaying the chunk and a connection icon for connecting the chunk;
An audio output unit for outputting audio;
An input unit for receiving a user input for selecting at least one icon of the chunk icon and the connection icon; And
And a controller for determining a target chunk based on the selected icon and outputting audio data matched to the target chunk through the audio output unit
An electronic device.
Wherein the display unit outputs the chunk icons in their order within the text and outputs the connection icons so as to be disposed between the chunk icons,
Wherein the control unit determines the chunk corresponding to a chunk icon to be the target chunk when the chunk icon is selected, and determines the chunks from the first chunk icon through the chunk icon immediately after the connection icon to be the target chunks when the connection icon is selected,
An electronic device.
Wherein the display unit outputs the chunk icons according to the word order of the chunks in the text and outputs the connection icon connecting at least one chunk icon,
Wherein the control unit determines the chunk corresponding to a chunk icon to be the target chunk when the chunk icon is selected, and determines the chunks corresponding to the chunk icons connected by the connection icon to be the target chunks when the connection icon is selected,
An electronic device.
Wherein the storage unit stores a graphic icon matching the chunk,
The display unit displays the graphic icon corresponding to the chunk icon
An electronic device.
Wherein the graphic icon is a still image,
Wherein the control unit activates the graphic icon matched with the chunk being audibly reproduced during the audio output of the target chunk,
An electronic device.
Wherein the graphic icon is a moving picture,
Wherein the control unit plays the graphic icon matched with the chunk being audibly reproduced during the audio output of the target chunk,
An electronic device.
Wherein the control unit adjusts a display attribute of the chunk icon selected as the target chunk,
An electronic device.
Wherein the control unit increases the size of the chunk icon according to the number of times it is selected as the target chunk,
An electronic device.
Wherein at least some of the chunks are divided into a plurality of sub-chunks,
Wherein the control unit displays a sub-chunk icon representing the sub-chunk and, upon receiving a user input selecting the sub-chunk icon, determines the sub-chunk corresponding to the selected sub-chunk icon to be the target chunk and outputs audio data corresponding to the sub-chunk,
An electronic device.
Wherein the display unit displays a playback icon,
Wherein the control unit outputs the audio data when the playback icon is selected,
An electronic device.
Wherein the display unit displays a repetition-count setting icon,
Wherein the control unit receives a number of repetitions as a user input through the repetition-count setting icon via the input unit and repeatedly outputs the voice data that number of times,
An electronic device.
Wherein a plurality of the chunk icons are selected by the user input,
Wherein the control unit successively outputs, through the audio output unit, the audio data matched to the plurality of target chunks determined according to the selected icons,
An electronic device.
Wherein the user input is a drag input,
An electronic device.
Displaying a chunk icon indicating the chunk and a connection icon connecting the chunk;
Receiving a user input for selecting an icon of at least one of the chunk icon and the connection icon;
Determining a target chunk based on the selected icon; And
And outputting audio data matched to the target chunk by voice
A chunk-based language learning method.
In the displaying step, a graphic icon is displayed corresponding to the chunk icon
A chunk-based language learning method.
Further comprising adjusting a display attribute of the chunk icon selected as the target chunk,
A chunk-based language learning method.
Wherein at least some of the chunks are divided into a plurality of sub-chunks,
Wherein, in the displaying step, a sub-chunk icon representing the sub-chunk is further displayed,
Wherein, when a user input selecting the sub-chunk icon is received in the receiving step, the sub-chunk corresponding to the selected sub-chunk icon is determined to be the target chunk and audio data corresponding to the sub-chunk is output,
A chunk-based language learning method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150101631A KR20170009487A (en) | 2015-07-17 | 2015-07-17 | Chunk-based language learning method and electronic device to do this |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20170009487A true KR20170009487A (en) | 2017-01-25 |
Family
ID=57991455
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150101631A KR20170009487A (en) | 2015-07-17 | 2015-07-17 | Chunk-based language learning method and electronic device to do this |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20170009487A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20180130933A (en) * | 2017-05-30 | 2018-12-10 | 주식회사 엠글리쉬 | Analysis method for chunk and key word based on voice signal of video data, and system thereof |
WO2019107604A1 (en) * | 2017-11-30 | 2019-06-06 | 코리아테스팅 주식회사 | Target-assigning firefighting robot in which fire extinguishing ball is automatically loaded |
2015-07-17: application KR1020150101631A filed in KR (publication KR20170009487A); status: not active, application discontinued.
Legal Events
Date | Code | Title | Description |
---|---|---|---
| A201 | Request for examination | |
| E902 | Notification of reason for refusal | |
| E902 | Notification of reason for refusal | |
| E601 | Decision to refuse application | |