KR20110062738A - System and method for operating language training electronic device and real-time translation training apparatus operated thereof - Google Patents
- Publication number
- KR20110062738A (application number KR1020090119555A)
- Authority
- KR
- South Korea
- Prior art keywords
- subtitle
- display area
- caption
- subtitle display
- section
- Prior art date
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B19/00—Teaching not covered by other main groups of this subclass
- G09B19/06—Foreign languages
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/0485—Scrolling or panning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
Abstract
The present invention relates to a caption and audio output method of a language learning application installed and operated in an electronic device terminal such as an MP3 player or a mobile phone.
The present invention extracts and stores subtitle and audio information from the content to be output, and divides the display screen into a plurality of subtitle display areas: a first subtitle display area, a second subtitle display area, ..., a k-th subtitle display area. The subtitle section whose audio is output at the current time t_n is displayed in the first subtitle display area, and the subtitle sections to be output at the next times (t_(n+1), t_(n+2), ...) are displayed in one-to-one correspondence, in order, in the second subtitle display area, the third subtitle display area, ..., the k-th subtitle display area. The subtitle section of the first subtitle display area is output together with synchronized audio; when its playback completes, the subtitle section of the second subtitle display area moves to the first subtitle display area, the subtitle section of the third moves to the second, ..., and the subtitle section of the (k+1)-th subtitle display area moves to the k-th. In addition, the present invention allows the subtitle sections displayed in the plurality of subtitle display areas to be scrolled continuously and played sequentially, or a specific subtitle section to be played repeatedly under user control.
Description
The present invention relates to a language learning application program installed and run on a portable electronic device such as an MP3 player or a mobile phone, and more particularly, to language learning system technology that enables effective foreign language learning as well as simultaneous interpretation training.
The language learning application program according to the present invention can be installed and run not only on MP3 players (for example, an iPod) and mobile phones (for example, an iPhone), but on any electronic device that provides a display screen, such as a liquid crystal display, and can run application software, including general computers, laptops, netbooks, and PDAs. In this specification, such devices are referred to interchangeably as language learning electronic devices, portable electronic devices, or electronic device terminals.
In addition, the display of the language learning electronic device according to the present invention need not be limited to liquid crystal; the invention can be extended to terminals providing a display screen of any kind, including organic light-emitting diode (OLED) displays and electronic paper (E-Paper), as well as to terminals whose screen provides a touch-recognition function.
The language learning system to be mounted in the electronic device terminal according to the present invention enables effective language learning and training by downloading the language learning application program and language training contents from an Internet application store over a wired or wireless Internet connection.
In general, foreign language learning is conducted through repetitive listening and speaking training, and various portable electronic devices such as cassette recorders and Walkmans have long been used for such repetitive practice.
Meanwhile, to enhance the language learning effect, electronic devices that display subtitle sentences corresponding to the audio on the learner's screen are sold in the market, since the learner can study by watching the sentences on the display without carrying a separate textbook or script.
In addition, since most portable terminals such as mobile phones, MP3 players, and PMPs have a liquid crystal display on the front, language learning programs mounted on them adopt a method of outputting audio and subtitles simultaneously. For example, recently released e-books display textbook content on the entire screen and provide a function for reading and listening to the content while turning pages like a conventional paper book.
However, because the conventional technology is constrained by the size of the LCD screen, it provides only a function of displaying sentences in one-to-one correspondence with the audio; even when the LCD display is large, it merely outputs text, and the audio and subtitles fall out of sync, which is inconvenient.
In other words, because the prior art simply displays sentences in one-to-one correspondence with the voice, a learner listening to a long, hard-to-understand sentence tends to skip over it, missing the content and losing the learning effect.
In addition, with an e-book according to the prior art, hurriedly chasing the sentence being listened to with the eyes across the screen easily disturbs concentration; moreover, if the learner looks away for a moment, it is hard to find which sentence is currently being read.
Accordingly, it is a first object of the present invention to provide a language learning system and method capable of dividing subtitle information corresponding to a voice output from an electronic device into semantic units and displaying the subtitle information.
In addition to the first object, a second object of the present invention is to provide a language learning system and method capable of selectively outputting at least one set of translated audio data and displaying the corresponding translated subtitle data in semantic units.
To achieve the above objects, the present invention provides a method of outputting audio and subtitles to the display unit and the audio output unit of an electronic device terminal, comprising the steps of: (a) extracting and storing subtitle and audio information from the content to be output; (b) dividing the screen of the display unit into a first subtitle display area, a second subtitle display area, ..., a k-th subtitle display area, displaying the subtitle section whose audio is output at the current time t_n in the first subtitle display area, and displaying the subtitle sections to be output at the next times (t_(n+1), t_(n+2), ...) in one-to-one correspondence, in order, in the second subtitle display area, the third subtitle display area, ..., the k-th subtitle display area; (c) outputting audio synchronized to the subtitle section of the first subtitle display area; and (d) moving the subtitle section of the second subtitle display area to the first subtitle display area, the subtitle section of the third subtitle display area to the second subtitle display area, ..., and the subtitle section of the (k+1)-th subtitle display area to the k-th subtitle display area.
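Steps (b) through (d) above can be sketched as a sliding window of k display areas over the ordered subtitle sections. This is an illustrative sketch only, not the patented implementation; the class name `CaptionWindow` and its methods are invented for the example.

```python
# Sketch of steps (b)-(d): k display areas form a sliding window over
# the ordered subtitle sections. The first area always holds the section
# whose audio is currently playing; advance() models step (d), the shift
# performed after playback of that section completes.

class CaptionWindow:
    def __init__(self, sections, k):
        self.sections = sections  # subtitle sections in playback order
        self.k = k                # number of subtitle display areas
        self.pos = 0              # index of the section shown in area 1

    def areas(self):
        """Sections currently shown in areas 1..k (None past the end)."""
        window = self.sections[self.pos:self.pos + self.k]
        return window + [None] * (self.k - len(window))

    def advance(self):
        """Step (d): area 2 -> area 1, area 3 -> area 2, and so on."""
        if self.pos < len(self.sections) - 1:
            self.pos += 1
```

A window of k=3 over four sections shows the first three sections at time t_n; after one `advance()`, the second section occupies area 1, modeling the shift at t_(n+1).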
The language learning electronic device according to the present invention includes a display unit for displaying subtitles and an audio output unit for outputting audio. The application program installed and run on the device includes a subtitle-audio information providing unit, which supplies subtitle-audio information comprising audio information and the subtitle information corresponding to it, and a playback processing unit, which outputs the subtitle information and audio information supplied by the providing unit to the display unit and the audio output unit, respectively. The subtitle information is output to a plurality of subtitle display areas that simultaneously arrange and display the plurality of subtitle sections obtained by dividing one sentence into short pieces.
To drive the language learning application program according to the present invention, when a sentence is long it is divided in advance into a plurality of subtitle sections in semantic units and stored as subtitle information in the subtitle-audio information providing unit. Here, a semantic unit may be obtained by dividing the sentence by clause or by phrase.
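As a rough illustration of what "division into semantic units" might look like, the following sketch splits an English sentence at common clause and phrase boundaries. The patent stores hand-prepared divisions with the content; this regex heuristic and its marker list are assumptions for the example, not the system's actual method.

```python
import re

# Illustrative only: split an English sentence at common clause/phrase
# boundaries (relative pronouns and a few prepositions/conjunctions) to
# approximate the "semantic unit" division described in the text.

def split_semantic_units(sentence):
    markers = r"\b(which|where|when|who|that|in|because)\b"
    parts, start = [], 0
    for m in re.finditer(markers, sentence):
        if m.start() > start:
            parts.append(sentence[start:m.start()].strip())
            start = m.start()
    parts.append(sentence[start:].strip())
    return [p for p in parts if p]
```

Applied to the example sentence used later in this description, the heuristic happens to reproduce the four units ① through ④ listed there.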
In the language learning system according to the present invention, the subtitle section corresponding to the audio currently output through the speaker is displayed in the first subtitle display area, and the subtitle sections corresponding to the audio to be output in the following time frames are displayed in order in the second subtitle display area, the third subtitle display area, and so on. When the audio of the current frame has been output, the subtitle section of each (k+1)-th subtitle display area is shifted in sequence to the k-th subtitle display area.
According to a preferred embodiment of the present invention, the subtitle section of the k-th subtitle display area, or the subtitle sections of the p-th through q-th subtitle display areas (q ≥ p), can be repeated by voice command.
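The repeat operation can be sketched as selecting the slice of display areas p through q and replaying it. The function name and the idea of returning a playback order (rather than driving an audio device) are invented for this sketch.

```python
# Sketch of the repeat function: replay the subtitle sections currently
# shown in display areas p through q (1-based, q >= p). Instead of driving
# an audio output, the sketch returns the playback order as a list.

def repeat_sections(areas, p, q, times=1):
    """Playback order for repeating areas p..q, `times` times over."""
    if not (1 <= p <= q <= len(areas)):
        raise ValueError("require 1 <= p <= q <= k")
    return areas[p - 1:q] * times
```

Repeating areas 2–3 twice, for example, yields those two sections in order, twice in a row.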
The present invention visually differentiates the subtitle section of the first subtitle display area, which corresponds to the audio information currently being output, so that it can be distinguished from the other subtitle sections displayed in the second subtitle display area, the third subtitle display area, and so on. Various methods may be used, such as changing the color of the subtitle display area, changing its gray scale, or changing the font; however, the present invention is not necessarily limited thereto.
The language learning system according to the present invention provides a graphical user interface on the display screen, through which the playback processing unit may be set to perform functions including starting, stopping, pausing, and repeating the playback of the subtitle-audio information according to the user's selection.
According to an exemplary embodiment of the present invention, a user may select at least one subtitle display area shown on the display unit through a touch-screen user interface, so that the audio information corresponding to the selected subtitle section is output to the speaker. In addition, the language learning system according to the present invention may set the subtitle sections to be scrollable on the display unit by touching and pushing or pulling the plurality of subtitle display areas up and down through the touch-screen user interface.
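Mapping a touch to the subtitle display area beneath it is a simple coordinate calculation. The layout constants below (area height, top offset) are invented for the sketch; a real device would read them from its UI layout.

```python
# Sketch: map a touch's vertical position to the 1-based index of the
# subtitle display area under it, so the audio synchronized to that
# subtitle section can then be output. Layout values are assumptions.

AREA_HEIGHT = 40   # assumed pixel height of one subtitle display area
TOP_OFFSET = 10    # assumed pixels above the first area

def touched_area(y, k):
    """Index (1..k) of the subtitle display area at vertical position y,
    or None if the touch falls outside the k areas."""
    idx = (y - TOP_OFFSET) // AREA_HEIGHT
    return idx + 1 if 0 <= idx < k else None
```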
The audio information of the language learning system according to the present invention includes the original audio data of the language to be learned and translated audio data rendered from it into at least one other language; which of the original and translated audio data is output is determined by the user's selection at playback time.
The subtitle information of the language learning system according to the present invention likewise includes original subtitle data and translated subtitle data rendered into at least one other language, and which of them is output is determined by the user's selection at playback time. In the language learning system according to the present invention, the original audio data, original subtitle data, translated audio data, and translated subtitle data may all be stored, and the output is determined by the combination of these four kinds of data that the user selects at playback time. The output audio data and subtitle data are, of course, synchronized with each other.
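Selecting among the four stored data kinds amounts to a lookup keyed by (media kind, language). The dictionary keys and file names below are invented for this sketch; the patent does not specify a storage format.

```python
# Sketch of playback-time selection among the four stored data kinds:
# original audio, translated audio, original subtitles, translated
# subtitles. The user's combination picks one audio track and one
# caption track; both are then played in sync. Names are illustrative.

CONTENT = {
    ("audio", "original"): "original.mp3",
    ("audio", "translated"): "translated_ko.mp3",
    ("caption", "original"): "original.srt",
    ("caption", "translated"): "translated_ko.srt",
}

def select_playback(audio_lang, caption_lang):
    """Return the (audio, caption) pair for the user's combination."""
    return (CONTENT[("audio", audio_lang)],
            CONTENT[("caption", caption_lang)])
```

For example, a learner may listen to the original audio while reading the translated subtitles, one of the four possible combinations.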
The present invention lets trainees train naturally and effectively, because reading the subtitle divided into semantic units while listening is intuitive. In addition, because the language learning system according to the present invention makes the subtitle portion currently being played clearly visible during listening, it helps the learner concentrate on listening.
The present invention allows the entire text to be studied even during listening through the scroll function, and when the subtitle corresponding to a part the learner wants to hear is touched, the audio synchronized to that subtitle is output; the trainee can therefore easily select and repeat only the desired part of the output, which improves learning efficiency. In addition, a further learning effect can be expected by selectively outputting translated audio and translated subtitles in various foreign languages.
Hereinafter, the configuration and operation of a language learning system and an application program for driving the same according to the present invention will be described in detail with reference to FIGS. 1 to 8.
According to a first embodiment of the present invention, there is provided a method of outputting audio and subtitles to the display unit and the audio output unit of an electronic device terminal, the method comprising: (a) extracting and storing subtitle and audio information from the content to be output; (b) dividing the screen of the display unit into a first subtitle display area, a second subtitle display area, ..., a k-th subtitle display area, displaying the subtitle section whose audio is output at the current time t_n in the first subtitle display area, and displaying the subtitle sections to be output at the next times (t_(n+1), t_(n+2), ...) in one-to-one correspondence, in order, in the second subtitle display area, the third subtitle display area, ..., the k-th subtitle display area; (c) outputting audio synchronized to the subtitle section of the first subtitle display area; and (d) moving the subtitle section of the second subtitle display area to the first subtitle display area, the subtitle section of the third subtitle display area to the second subtitle display area, ..., and the subtitle section of the (k+1)-th subtitle display area to the k-th subtitle display area.
According to a second embodiment, the present invention provides a method of outputting audio and subtitles to the display unit and the audio output unit of an electronic device terminal, the method comprising: (a) extracting and storing subtitle and audio information from the content to be output; (b) dividing the screen of the display unit into a first subtitle display area, a second subtitle display area, ..., a k-th subtitle display area, displaying the subtitle section whose audio is output at the current time t_n in the first subtitle display area, and displaying the subtitle sections to be output at the next times (t_(n+1), t_(n+2), ...) in one-to-one correspondence, in order, in the second through k-th subtitle display areas; (c) outputting the audio synchronized to the subtitle sections of the first through r-th subtitle display areas (1 ≤ r ≤ k); and (d) moving the subtitle section of the (r+1)-th subtitle display area to the first subtitle display area, the subtitle section of the (r+2)-th subtitle display area to the second subtitle display area, ..., and the subtitle section of the (k+r)-th subtitle display area to the k-th subtitle display area.
According to a preferred embodiment of the present invention, in addition to steps (a), (b), (c), and (d), the method further comprises a scrolling process including: (e) on a user scroll command, scroll-displaying the subtitle sections shown in the second subtitle display area, the third subtitle display area, ..., the k-th subtitle display area by moving the subtitle section of the m-th subtitle display area (2 ≤ m ≤ k) to the (m+1)-th or (m-1)-th subtitle display area; and (f) when the user selects an arbitrary m-th subtitle display area (2 ≤ m ≤ k) among the scroll-displayed subtitle display areas, displaying the subtitle section corresponding to the m-th subtitle display area in the first subtitle display area, then moving the subtitle section of the (m+1)-th subtitle display area to the second subtitle display area, the subtitle section of the (m+2)-th subtitle display area to the third subtitle display area, and so on in sequence.
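Steps (e) and (f) can be sketched as two operations on the window position over the sections: scrolling moves the window without playing audio, and selecting the m-th displayed area jumps playback so that that area's section becomes the area-1 section. Function names are invented for the sketch.

```python
# Sketch of scroll steps (e) and (f). `pos` is the index of the section
# currently shown in subtitle display area 1; areas 2..k show the
# following sections.

def scroll(pos, delta, n_sections):
    """Step (e): move the viewing window by delta sections, clamped."""
    return max(0, min(pos + delta, n_sections - 1))

def select_area(pos, m):
    """Step (f): selecting the m-th display area (1-based) makes its
    section the new area-1 section; return the new window position."""
    return pos + (m - 1)
```

Scrolling two sections ahead and then tapping area 2 leaves playback positioned three sections past the start.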
The present invention also provides an electronic device terminal equipped with a language learning application program driven by the above method, and server system technology for selling or providing the language learning application program over the wired or wireless Internet.
The subtitle sections are formed by dividing one sentence into predetermined semantic units, and the resulting subtitle information is included in the subtitle-audio information.
Claim 1 appended to this specification claims an embodiment in which the screen is divided into a plurality of subtitle display areas and the subtitle section corresponding to the first subtitle display area is output as audio; claim 2 describes an embodiment in which the subtitle sections displayed in the subtitle display areas are output as audio in turn and a plurality of subtitle sections are moved together.
Hereinafter, the spirit of the present invention will be described with reference to the first embodiment.
FIG. 1 is a view showing the operation between the server and the client of the language learning system according to the present invention. Referring to FIG. 1, the client-side electronic device for performing language learning according to the present invention may be any audio device or general electronic device having a display unit, such as liquid crystal, OLED, or electronic paper, that is capable of playing subtitle-audio information (audio information together with subtitles): for example, an MP3 player, a portable multimedia player (PMP), a mobile phone, a smartphone, a netbook, or a computer.
That is, the electronic device terminal downloads the language learning application program and the language training contents from the server over the wired or wireless Internet. As another embodiment of the present invention, the application program and contents may be obtained from an Internet application store. In another embodiment of the present invention, in the case of an MP3 player such as an iPod, the player can be connected by USB to a computer connected to the Internet, and a communication program running on the computer can download the language learning application program and contents and transfer them to the player.
FIG. 2 is a block diagram showing the configuration of the language learning system mounted on the client-side electronic device terminal according to the present invention. Referring to FIG. 2, the electronic device terminal of the present invention includes a display unit, an audio output unit, and the language learning application program mounted on it.
The language learning application according to the present invention includes a subtitle-audio information providing unit and a playback processing unit.
The subtitle-audio information providing unit supplies subtitle-audio information in which the audio information and the corresponding subtitle information are combined, and the playback processing unit outputs the supplied subtitle information and audio information to the display unit and the audio output unit, respectively.
According to the present invention, the subtitle information is divided into a plurality of subtitle sections and displayed in the plurality of subtitle display areas of the display unit.
As a preferred embodiment of the subtitle-audio information applied to the present invention, audio data in MP3 format or in various other formats may be used. Techniques for inserting subtitle information into a general digital audio data file are already known in the art, so a detailed description is omitted. As a preferred embodiment of the present invention, the beginning and end of each subtitle sentence to be displayed may be marked with a frame or time value, so that the sentence for the marked frame or time range is displayed when the audio is output.
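The frame/time marking described above can be sketched as a list of timed records, where the section displayed at playback time t is the one whose [start, end) range contains t. The record layout and the timing values are assumptions for this sketch, not the patent's actual file format.

```python
# Sketch of time-marked subtitle sections: each section carries a start
# and end time (seconds); the section displayed while audio time t plays
# is the one whose [start, end) range contains t. Times are invented.

SECTIONS = [
    {"start": 0.0, "end": 2.1, "text": "I usually walk to school"},
    {"start": 2.1, "end": 4.6, "text": "which is only 100 meters away from my house"},
    {"start": 4.6, "end": 6.8, "text": "where my parents and I live together"},
    {"start": 6.8, "end": 9.0, "text": "in peace, joy, and harmony"},
]

def section_at(t):
    """Subtitle section to display while audio time t is playing."""
    for s in SECTIONS:
        if s["start"] <= t < s["end"]:
            return s["text"]
    return None
```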
The language learning system according to the present invention is characterized in that the subtitle information is divided into a plurality of subtitle sections in a semantic unit and displayed on the plurality of subtitle display areas of the display unit. Key features of the invention in which subtitle information is divided into subtitle sections in semantic units are described in more detail below.
FIGS. 3A and 3B are diagrams illustrating a first embodiment and a second embodiment of a display screen on which the language learning system according to the present invention is executed on an electronic device terminal.
Referring to FIG. 3A, the language learning system according to the present invention divides the display screen into a plurality of subtitle display areas and displays the subtitle sections in them.
For example, the present invention may output audio synchronized with the following English sentence while displaying its subtitles.
"I usually walk to school which is only 100 meters away from my house where my parents and I live together in peace, joy, and harmony."
In Korean word order, this sentence would be translated as one restructured whole: "I usually walk to a school that is only 100 meters from my home, where I live with my parents in peace, joy, and harmony." For simultaneous interpretation, however, it is divided into semantic units:
① I usually walk to school
② which is only 100 meters away from my house
③ where my parents and I live together
④ in peace, joy, and harmony
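The four units above, laid out one per subtitle display area as in FIG. 3A, can be sketched as follows; the plain-text rendering and the ">" marker for the currently playing section are invented purely for illustration.

```python
# The four semantic units of the example sentence, one per subtitle
# display area. The area whose audio is currently playing (here area 1)
# is marked with ">", standing in for the visual differentiation the
# text describes (color, gray scale, or font change).

UNITS = ["I usually walk to school",
         "which is only 100 meters away from my house",
         "where my parents and I live together",
         "in peace, joy, and harmony"]

def render(units, current=1):
    """One line per subtitle display area; '>' marks the playing one."""
    lines = []
    for i, text in enumerate(units, start=1):
        mark = ">" if i == current else " "
        lines.append(f"{mark} [{i}] {text}")
    return "\n".join(lines)
```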
Accordingly, the language learning system according to the present invention displays the subtitle sections divided by semantic units in the plurality of subtitle display areas arranged on the screen.
In addition, the "semantic unit" according to the present invention may also be delimited by the pauses taken when reading aloud. The semantic unit may further vary with the learner's level in the target language: for the same sentence, a beginner-level setting can split the sentence into very short subtitle sections, while an advanced-level setting can divide it into longer ones.
In this way, repeating the listening while visually following the subtitle divided into semantic units naturally cultivates the ability to read and understand the language directly, in its original order. In other words, the learner is trained to omit the step of mentally rearranging the sentence into Korean grammar. This is a training process that is especially necessary for learning languages whose word order differs from Korean, such as English.
Referring back to FIG. 3A, according to the first embodiment of the present invention, the subtitle section corresponding to the currently output audio (time t_n) is displayed in the designated first subtitle display area. The subtitle sections to be reproduced at the next times (t_(n+1), t_(n+2), ...) are displayed in the second, third, and fourth subtitle display areas. That is, the subtitle section displayed in the first subtitle display area is always the one whose audio is currently being output.
That is, when the audio output corresponding to the subtitle section displayed in the first subtitle display area is completed at the current time t_n, the present invention moves the subtitle section of the second subtitle display area into the first subtitle display area, the subtitle section of the third subtitle display area into the second subtitle display area, ..., and the subtitle section of the (k+1)-th subtitle display area into the k-th subtitle display area. The subtitle sections displayed in the first through k-th subtitle display areas may also be scrolled up and down by a user command.
In another embodiment, when the audio output corresponding to the subtitle section displayed in the first subtitle display area is completed at the current time t_n, playback pauses; when a command is then input in this manual mode, the subtitle section of the second subtitle display area moves to the first subtitle display area, the subtitle section of the third subtitle display area to the second, ..., and the subtitle section of the (k+1)-th subtitle display area to the k-th subtitle display area.
As another embodiment of the present invention, the number of subtitle display areas shown on the display unit of the electronic device terminal can be programmed to adjust to the length of the sentence; as a further preferred embodiment, the number of subtitle display areas can be changed by user setting. In this case, the subtitle display area holding the subtitle section currently being output as audio may be placed at the top of the screen.
FIG. 3B is a diagram illustrating a display screen on which the language learning system according to the second embodiment of the present invention is executed on an electronic device terminal.
According to the second embodiment of the present invention, as described in claim 2, the plurality of subtitle display areas (the first subtitle display area, the second subtitle display area, ...) are moved up or down as one block at a time. The audio synchronized to each subtitle section is output in the order of the first subtitle display area, the second subtitle display area, ..., and the subtitle display area whose audio is currently being output is visually highlighted, with the highlight moving sequentially from area to area.
Referring to FIG. 3B, the second embodiment outputs the subtitle sections of several subtitle display areas in sequence and then shifts the whole block of subtitle sections at once.
That is, whereas the first embodiment of the present invention always outputs the audio of the subtitle section in the first subtitle display area, the second embodiment moves sequentially through the subtitle display areas being output, so that a visual effect appears in which the highlighted subtitle display area moves down the screen.
In other words, the second embodiment extracts and stores subtitle-audio information from the content to be output, divides the display screen into the first, second, ..., k-th subtitle display areas, displays the subtitle section whose audio is output at the current time t_n in the first subtitle display area, and displays the subtitle sections to be output at the next times (t_(n+1), t_(n+2), ...) one-to-one, in order, in the second, third, ..., k-th subtitle display areas; up to this point it is the same as the first embodiment.
The second embodiment then differs from the first in that, after the audio synchronized to the subtitle sections corresponding to the first through r-th subtitle display areas (1 ≤ r ≤ k) is output, the subtitle section of the (r+1)-th position is moved to the first subtitle display area, the subtitle section of the (r+2)-th position to the second subtitle display area, ..., and the subtitle section of the (k+r)-th position to the k-th subtitle display area.
At this time, while the subtitle sections corresponding to the r subtitle display areas are output one by one, the subtitle display area whose audio is currently being output can be visually differentiated and thereby highlighted, so that the highlight moves sequentially from the first subtitle display area to the r-th subtitle display area.
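The window-shifting behavior described above can be sketched as follows. This is an illustrative model only; the function name `shift_window` and the plain-list representation of caption sections are assumptions, not the patent's implementation. After the captions occupying the first r display areas have been played, the whole k-area window advances by r positions.

```python
def shift_window(captions, start, k, r):
    """Return the captions shown in display areas 1..k after playing
    r captions from position `start`, plus the new start index.
    The (r+1)-th caption lands in area 1, the (r+2)-th in area 2, etc."""
    new_start = start + r
    window = captions[new_start:new_start + k]
    return window, new_start

captions = [f"sentence {i}" for i in range(1, 11)]  # 10 caption sections
window, pos = shift_window(captions, start=0, k=4, r=2)
print(window)  # areas 1..4 now show sentences 3..6
```

With r = 1 the window advances one caption at a time, matching the case where each subtitle display area is highlighted and played in turn before the block moves.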
As a preferred embodiment of the present invention, a graphical user interface (GUI) is provided so that the user can select a subtitle section displayed on the screen and listen to it selectively. At this time, the user may select one or more subtitle sections. The language learning application according to the present invention can increase training efficiency because the learner can easily repeat listening by selecting only the desired portion.
As a preferred embodiment of the present invention, the subtitle section corresponding to the currently output voice information may be visually differentiated from the other subtitle sections displayed on the display unit. For example, as shown in the figure, the subtitle section currently being played may use a different font size or an emphasis effect on the font.
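A minimal illustration of this visual differentiation, with text markers standing in for the font-size or font-emphasis styling the text describes (the function name `render_areas` and the marker format are assumptions):

```python
def render_areas(window, active):
    """Return one display string per subtitle display area, marking the
    area whose audio is currently being output (markers stand in for
    real font-size or emphasis styling)."""
    return [f">> {text}" if i == active else f"   {text}"
            for i, text in enumerate(window, start=1)]

areas = render_areas(["How are you?", "I am fine."], active=1)
print(areas[0])  # ">> How are you?"
```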
As a preferred embodiment of the present invention, the apparatus further includes a user graphic interface.
In addition, the subtitle sections may be set to enable a scroll function on the display screen. Accordingly, a specific subtitle section can be searched for by scrolling with a finger or a stylus on the display screen.
According to the present invention, a scroll display function is provided for the subtitle sections displayed in the second subtitle display area, the third subtitle display area, ..., the k-th subtitle display area: in response to a user scroll command, the subtitle section of the m-th subtitle display area (2 ≤ m ≤ k) is moved to the (m+1)-th or (m-1)-th subtitle display area.
That is, while scrolling up and down through the caption sections displayed in the second caption display area, the third caption display area, ..., the k-th caption display area, when the user (learner) selects an arbitrary m-th subtitle display area (2 ≤ m ≤ k) from among the scrolled subtitle display areas, the subtitle section corresponding to the m-th subtitle display area jumps to the first subtitle display area, the subtitle section of the (m+1)-th subtitle display area moves to the second subtitle display area, the subtitle section of the (m+2)-th subtitle display area moves to the third subtitle display area, and so on. The subtitle displayed in the first subtitle display area is then synchronized and its audio is output.
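The jump-on-selection behavior can be sketched as follows; the helper name `jump_to_area` and the indexing scheme are illustrative assumptions, not the patent's code. Tapping the m-th display area brings its caption to area 1 so playback resumes there, with captions m+1, m+2, ... filling areas 2, 3, ...

```python
def jump_to_area(captions, start, m, k):
    """Caption shown in area m (2 <= m <= k) becomes the caption of
    area 1; return the new window of areas 1..k and the new start index."""
    assert 2 <= m <= k
    new_start = start + (m - 1)   # area m was (m-1) captions past area 1
    return captions[new_start:new_start + k], new_start

captions = [f"line {i}" for i in range(1, 9)]   # 8 caption sections
window, pos = jump_to_area(captions, start=0, m=3, k=4)
print(window[0])  # the caption that was in area 3 ("line 3") now leads
```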
In a preferred embodiment of the present invention, the voice information within the subtitle voice information may include original sound data of the language to be learned and translation sound data in at least one language for the original sound data. For example, the translation sound data may be in Korean, Japanese, French, and so on. Whether the original sound data and the translation sound data are output may be determined by the user's selection at the time of reproduction. That is, at least one set of translation sound data may be output after the original sound data, or the translation sound data may not be output at all.
As another embodiment of the present invention, the subtitle information includes original sound caption data and translation sound caption data in at least one language for the original sound caption data, and whether the original sound caption data and the translation sound caption data are output can be determined by the user's selection during playback. The language learning system configurations according to the present invention may be implemented in software as an application program included in the electronic device terminal.
FIG. 4 is a view showing an embodiment of the initial screen setting of the language learning system according to the present invention. As a preferred embodiment of the present invention, three modes may be provided: a Superlab mode 401, an automatic mode 402, and a manual mode 403.
The
The
The
FIG. 5A is a diagram illustrating a file automatically selected for language learning according to a first embodiment of the present invention. When the automatic mode 402 is selected, a file for language learning is automatically selected and played.
At this time, the subtitle section corresponding to the audio output at the current time point t(n) may be displayed in the designated first subtitle display area.
That is, the caption section displayed in the first subtitle display area is output as synchronized audio.
If the caption text is long, the undisplayed caption sections may rise sequentially from below during playback. As a preferred embodiment of the present invention, as long as the size of the display screen allows, the number of subtitle display areas can be increased or decreased without particular limitation, and no particular limitation need be placed on the arrangement direction.
In the course of learning in the automatic mode, selecting the play/pause icon 508b pauses or resumes playback.
When a user scroll command is issued, the subtitle sections displayed in the second subtitle display area, the third subtitle display area, ..., the k-th subtitle display area are scrolled by moving the subtitle section of the m-th subtitle display area (2 ≤ m ≤ k) to the (m+1)-th or (m-1)-th subtitle display area. Subsequently, when the user selects an m-th caption display area (2 ≤ m ≤ k) from among the scrolled subtitle display areas, the subtitle section corresponding to the m-th caption display area is moved to the first subtitle display area, the subtitle section of the (m+1)-th subtitle display area to the second subtitle display area, the subtitle section of the (m+2)-th subtitle display area to the third subtitle display area, and so on.
In addition, if you select the
In addition, the language learning program according to the present invention may be set through the repetition learning section setting icon 509 to repeat a desired learning interval.
To indicate that the repetition learning interval has been successfully set, the color of the repetition learning section setting icon 509 may be changed.
In a preferred embodiment of the present invention, when the
FIG. 5B is an exemplary view showing the manual mode according to the present invention.
In the manual mode of language learning according to the present invention, only the subtitle section displayed in the first subtitle display area is output as audio.
In a preferred embodiment of the present invention, selecting the repetition learning section setting icon 509 sets the repetition learning interval.
FIG. 6 shows a setting screen for setting the playback range, whether to repeat playback, whether to shuffle play, whether to play the translation sound data and the translation sound caption data, and so on; according to a preferred embodiment of the present invention, it is called up by the user's selection. The setting screens of FIGS. 2 to 6 described above are embodiments that can be variously changed, and the present invention is not limited thereto.
FIG. 7 is a flowchart of a subtitle scroll player reproduction algorithm according to the present invention. Referring to FIG. 7, first, a list of MP3 files for language learning is obtained (step S1000), and the caption data information of the file to be reproduced is extracted (step S1010).
Subsequently, it is determined whether the extracted caption information is valid (step S1020); if the caption data is abnormal, an error is displayed (step S1060). If there is no abnormality in the extracted caption information, the caption data is stored in memory (step S1030), and the playback program module and the caption list are initialized (step S1040).
Subsequently, it is determined whether audio/subtitle playback is currently in progress (step S1050); if so, the caption scroll function is deactivated (step S1070); otherwise, caption scroll selection is enabled (step S1061).
During playback, an animation is executed so that the caption text is scrolled and displayed according to the time value (step S1080). If the subtitle scroll function is activated, clicking the play button plays the subtitle section of the selected subtitle display area from its start time value (step S1062). At this time, if no subtitle is selected, playback starts from the start time value of the subtitle at which playback was paused.
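The S1000 to S1080 flow above can be condensed into a sketch like the following. The callables `extract` and `is_playing` and the event tuples are assumptions standing in for the real caption-extraction and playback modules; only the branch structure follows the flowchart.

```python
def run_player(file_list, extract, is_playing):
    """Walk the FIG. 7 flow for each file; step numbers from the text."""
    events = []
    for path in file_list:                       # S1000: file list obtained
        captions = extract(path)                 # S1010: extract caption data
        if not captions:                         # S1020: validity check failed
            events.append(("error", path))       # S1060: report bad caption data
            continue
        # S1030/S1040: store captions, initialize player module and caption list
        if is_playing():                         # S1050: playback in progress?
            events.append(("scroll_off", path))  # S1070/S1080: animate by time value
        else:
            events.append(("scroll_on", path))   # S1061/S1062: selectable scroll
    return events

events = run_player(["a.mp3", "bad.mp3"],
                    extract=lambda p: [] if "bad" in p else ["caption"],
                    is_playing=lambda: False)
print(events)  # [('scroll_on', 'a.mp3'), ('error', 'bad.mp3')]
```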
FIG. 8 is a diagram illustrating the screen when the translation icon is executed according to a preferred embodiment of the present invention. The speech information of the language learning system according to the present invention includes original sound data of the language to be learned and translation sound data translated into at least one language for the original sound data, and whether the original sound data and the translation sound data are output is determined by the user's selection at the time of reproduction.
The subtitle information of the language learning system according to the present invention includes original sound subtitle data and translation sound subtitle data translated into at least one language for the original sound subtitle data, and whether the original sound subtitle data and the translation sound subtitle data are output is determined by the user's selection at the time of reproduction. In the language learning system according to the present invention, the original sound data, the original sound caption data, the translation sound data, and the translation sound caption data may all be stored, and the output may be determined by a combination of these four kinds of data selected by the user at the time of reproduction.
That is, the English original sound data and the original subtitle data may be output, or the English original sound data and the translated subtitle data may be output. Depending on the user's (learner's) selection, a combination that outputs the translation sound data together with the English subtitle data is also possible. In every case, the output audio data and subtitle data are synchronized with each other.
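A toy model of this four-way combination follows; the file names are placeholder assumptions, and the point is only that the user's playback-time selection picks one audio track and one caption track independently.

```python
# Four stored data kinds: original/translation audio, original/translation captions.
tracks = {
    ("audio", "original"): "english_audio.mp3",
    ("audio", "translation"): "korean_audio.mp3",
    ("caption", "original"): "english.srt",
    ("caption", "translation"): "korean.srt",
}

def select_output(audio_lang, caption_lang):
    """Return the (audio, caption) pair for the user's chosen combination."""
    return tracks[("audio", audio_lang)], tracks[("caption", caption_lang)]

print(select_output("original", "translation"))
# English audio paired with translated (Korean) subtitles
```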
Referring back to FIG. 8, when the translation icon 511 is selected, the translation subtitle data is displayed.
There is no particular limitation on the structure of the subtitle voice information applicable to the language learning system of the present invention; one such structure is described in detail in Korean Patent No. 297,206 of the present applicant, and various techniques for embedding subtitle information are known in the art.
The foregoing has outlined, rather broadly, the features and technical advantages of the present invention in order that the claims that follow may be better understood. Additional features and advantages that form the subject of the claims of the present invention are described below. It should be appreciated by those skilled in the art that the conception and specific embodiments disclosed may readily be used as a basis for designing or modifying other structures for carrying out purposes similar to those of the invention.
In addition, such modified or equivalent structures may be variously evolved, substituted, and changed by those skilled in the art without departing from the spirit or scope of the invention described in the claims.
The present invention allows trainees to train naturally and effectively, since listening while reading subtitle sections divided by semantic units is intuitive. In addition, since the language learning system according to the present invention clearly indicates the subtitle portion currently being reproduced, it helps the learner concentrate on listening.
The present invention allows the entire sentence to be studied even during listening through the scroll function, and when the subtitle corresponding to the part to be heard is touched, the voice synchronized to that subtitle is output, so that the trainee can easily select and repeat only the desired portion of the output, improving learning efficiency. In addition, the present invention can enhance the learning effect by selectively outputting translation sounds and translation subtitles in various foreign languages.
The present invention can be used industrially: an application downloaded from a wired or wireless Internet application store and installed on a user's mobile phone or MP3 player allows the user to learn a language in the way he or she wishes, presenting a new business model in the wired and wireless Internet market.
FIG. 1 is a view showing the operation between the server and the client of the language learning system according to the present invention.
FIG. 2 is a block diagram showing the configuration of the language learning system mounted on the client-side electronic device terminal according to the present invention.
FIGS. 3A and 3B show a first and a second preferred embodiment of a display screen on which the language learning system according to the present invention is implemented on an electronic device terminal.
FIG. 4 is a diagram showing an embodiment of the initial screen setting of the language learning system according to the present invention.
FIG. 5A illustrates a file automatically selected for language learning in accordance with a preferred embodiment of the present invention.
FIG. 5B illustrates the manual mode in accordance with the present invention.
FIG. 6 is a diagram showing a setting screen for setting the playback range, whether to repeat playback, whether to shuffle play, whether to play the translation sound data and the translation sound caption data, and so on, according to a preferred embodiment of the present invention.
FIG. 7 is a flowchart of processing a caption scroll reproduction algorithm according to a preferred embodiment of the present invention.
FIG. 8 illustrates a translation function in accordance with the present invention.
<Explanation of symbols for the main parts of the drawings>
100: electronic device terminal
110: display unit
120: audio output unit
130: subtitle voice information provider
131: data storage device
140: playback processing unit
141: codec
200: ASP program
300: database server
400: File Server
401: Superlab Mode
402: automatic mode
403: manual mode
508a: reverse icon
508b: Play / Pause Icon
508c: Forward Icon
509: Repeated learning section setting icon
510: progress bar
511: translation icon
515: Enter
Claims (16)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20090119555A KR101158319B1 (en) | 2009-12-04 | 2009-12-04 | System and method for operating language training electronic device and real-time translation training apparatus operated thereof |
PCT/KR2010/001473 WO2011068284A1 (en) | 2009-12-04 | 2010-03-09 | Language learning electronic device driving method, system, and simultaneous interpretation system applying same |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20090119555A KR101158319B1 (en) | 2009-12-04 | 2009-12-04 | System and method for operating language training electronic device and real-time translation training apparatus operated thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20110062738A true KR20110062738A (en) | 2011-06-10 |
KR101158319B1 KR101158319B1 (en) | 2012-06-22 |
Family
ID=44115113
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR20090119555A KR101158319B1 (en) | 2009-12-04 | 2009-12-04 | System and method for operating language training electronic device and real-time translation training apparatus operated thereof |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR101158319B1 (en) |
WO (1) | WO2011068284A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012006024A2 (en) * | 2010-06-28 | 2012-01-12 | Randall Lee Threewits | Interactive environment for performing arts scripts |
US9122656B2 (en) | 2010-06-28 | 2015-09-01 | Randall Lee THREEWITS | Interactive blocking for performing arts scripts |
US9870134B2 (en) | 2010-06-28 | 2018-01-16 | Randall Lee THREEWITS | Interactive blocking and management for performing arts productions |
KR20180056082A (en) | 2016-11-18 | 2018-05-28 | 한국과학기술원 | Metal enhanced fluorescence composite nano structure and method for manufacturing the same, fluorescence material detect method thereof |
KR20190008977A (en) * | 2019-01-18 | 2019-01-25 | (주)뤼이드 | Method for displaying study content and application program thereof |
US10642463B2 (en) | 2010-06-28 | 2020-05-05 | Randall Lee THREEWITS | Interactive management system for performing arts productions |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103902377B (en) * | 2012-12-24 | 2017-11-03 | 联想(北京)有限公司 | Terminal device and its running status synchronous method |
CN104093085B (en) * | 2014-04-22 | 2016-08-24 | 腾讯科技(深圳)有限公司 | Method for information display and device |
CN109272923B (en) * | 2018-09-17 | 2021-02-02 | 深圳市创维群欣安防科技股份有限公司 | Subtitle rolling display method and system based on multi-screen equipment and storage medium |
CN110428674A (en) * | 2019-08-15 | 2019-11-08 | 湖北纽云教育科技发展有限公司 | A kind of application method of listening study device |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100297206B1 (en) * | 1999-01-08 | 2001-09-26 | 노영훈 | Caption MP3 data format and a player for reproducing the same |
KR20000012538A (en) * | 1999-05-12 | 2000-03-06 | 김민선 | Method and storing media for controlling caption function for studying foreign language subscript included in moving picture |
KR20020005523A (en) * | 2001-10-08 | 2002-01-17 | 노영훈 | Audio Player having a Caption Display Function |
JP2007316613A (en) * | 2006-04-26 | 2007-12-06 | Matsushita Electric Ind Co Ltd | Caption display control apparatus |
KR100974002B1 (en) * | 2008-04-25 | 2010-08-05 | 설융석 | System for studying nuance of foreign by playing movie |
-
2009
- 2009-12-04 KR KR20090119555A patent/KR101158319B1/en not_active IP Right Cessation
-
2010
- 2010-03-09 WO PCT/KR2010/001473 patent/WO2011068284A1/en active Application Filing
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012006024A2 (en) * | 2010-06-28 | 2012-01-12 | Randall Lee Threewits | Interactive environment for performing arts scripts |
WO2012006024A3 (en) * | 2010-06-28 | 2012-05-18 | Randall Lee Threewits | Interactive environment for performing arts scripts |
US8888494B2 (en) | 2010-06-28 | 2014-11-18 | Randall Lee THREEWITS | Interactive environment for performing arts scripts |
US9122656B2 (en) | 2010-06-28 | 2015-09-01 | Randall Lee THREEWITS | Interactive blocking for performing arts scripts |
US9870134B2 (en) | 2010-06-28 | 2018-01-16 | Randall Lee THREEWITS | Interactive blocking and management for performing arts productions |
US9904666B2 (en) | 2010-06-28 | 2018-02-27 | Randall Lee THREEWITS | Interactive environment for performing arts scripts |
US10642463B2 (en) | 2010-06-28 | 2020-05-05 | Randall Lee THREEWITS | Interactive management system for performing arts productions |
KR20180056082A (en) | 2016-11-18 | 2018-05-28 | 한국과학기술원 | Metal enhanced fluorescence composite nano structure and method for manufacturing the same, fluorescence material detect method thereof |
KR20190008977A (en) * | 2019-01-18 | 2019-01-25 | (주)뤼이드 | Method for displaying study content and application program thereof |
Also Published As
Publication number | Publication date |
---|---|
KR101158319B1 (en) | 2012-06-22 |
WO2011068284A1 (en) | 2011-06-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101158319B1 (en) | System and method for operating language training electronic device and real-time translation training apparatus operated thereof | |
US10210769B2 (en) | Method and system for reading fluency training | |
US20200175890A1 (en) | Device, method, and graphical user interface for a group reading environment | |
US10222946B2 (en) | Video lesson builder system and method | |
US20140315163A1 (en) | Device, method, and graphical user interface for a group reading environment | |
US11657725B2 (en) | E-reader interface system with audio and highlighting synchronization for digital books | |
US20060194181A1 (en) | Method and apparatus for electronic books with enhanced educational features | |
US20080005656A1 (en) | Apparatus, method, and file format for text with synchronized audio | |
JP2022533310A (en) | A system and method for simultaneously expressing content in a target language in two forms and improving listening comprehension of the target language | |
US20130332859A1 (en) | Method and user interface for creating an animated communication | |
US20200273450A1 (en) | System and A Method for Speech Analysis | |
JP2012133662A (en) | Electronic comic viewer device, electronic comic browsing system, viewer program and recording medium recording viewer program | |
CN103942990A (en) | Language learning device | |
CN109389873B (en) | Computer system and computer-implemented training system | |
CN111711834A (en) | Recorded broadcast interactive course generation method and device, storage medium and terminal | |
JP2003307997A (en) | Language education system, voice data processor, voice data processing method, voice data processing program, and recording medium | |
US10366149B2 (en) | Multimedia presentation authoring tools | |
US20140272823A1 (en) | Systems and methods for teaching phonics using mouth positions steps | |
US20230419847A1 (en) | System and method for dual mode presentation of content in a target language to improve listening fluency in the target language | |
US10460178B1 (en) | Automated production of chapter file for video player | |
CN101493995A (en) | Video interactive teaching system and method | |
KR20030049791A (en) | Device and Method for studying foreign languages using sentence hearing and memorization and Storage media | |
KR101326275B1 (en) | Text and voice synchronizing player | |
KR102645880B1 (en) | Method and device for providing english self-directed learning contents | |
KR20130015918A (en) | A device for learning language considering level of learner and text, and a method for providing learning language using the device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E902 | Notification of reason for refusal | ||
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant | ||
LAPS | Lapse due to unpaid annual fee |