KR20110062738A - System and method for operating language training electronic device and real-time translation training apparatus operated thereof - Google Patents

System and method for operating language training electronic device and real-time translation training apparatus operated thereof

Info

Publication number
KR20110062738A
Authority
KR
South Korea
Prior art keywords
subtitle
display area
caption
subtitle display
section
Prior art date
Application number
KR1020090119555A
Other languages
Korean (ko)
Other versions
KR101158319B1 (en)
Inventor
노영훈
Original Assignee
(주)아이칼리지
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)아이칼리지 filed Critical (주)아이칼리지
Priority to KR20090119555A priority Critical patent/KR101158319B1/en
Priority to PCT/KR2010/001473 priority patent/WO2011068284A1/en
Publication of KR20110062738A publication Critical patent/KR20110062738A/en
Application granted granted Critical
Publication of KR101158319B1 publication Critical patent/KR101158319B1/en

Links

Images

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 - Teaching not covered by other main groups of this subclass
    • G09B19/06 - Foreign languages
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0485 - Scrolling or panning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 - Sound input; Sound output
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 - Electrically-operated educational appliances
    • G09B5/06 - Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11B - INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 - Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 - Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34 - Indicating arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)

Abstract

The present invention relates to a caption and audio output method of a language learning application installed and operated in an electronic device terminal such as an MP3 player or a mobile phone.

The present invention extracts and stores caption and audio information from the content to be output, divides the display screen into a plurality of caption display areas (a first, a second, ..., a k-th caption display area), displays the caption section whose audio is output at the current time t_n in the first caption display area, and displays the caption sections whose audio will be output at the following time points (t_{n+1}, t_{n+2}, ...) in one-to-one correspondence, in order, in the second, third, ..., k-th caption display areas. The caption section of the first caption display area is output together with synchronized audio; the caption section of the second caption display area is then moved to the first caption display area, the caption section of the third caption display area to the second, ..., and the caption section of the (k+1)-th caption display area to the k-th caption display area. In addition, the present invention allows the caption sections displayed in the plurality of caption display areas to be scrolled continuously and played sequentially, or a specific caption section to be played repeatedly under user control.

Description

{SYSTEM AND METHOD FOR OPERATING LANGUAGE TRAINING ELECTRONIC DEVICE AND REAL-TIME TRANSLATION TRAINING APPARATUS OPERATED THEREOF}

The present invention relates to a language learning application program installed and operated in a portable electronic device such as an MP3 player or a mobile phone, and more particularly to language learning system technology that supports effective foreign language learning as well as simultaneous interpretation training.

The language learning application program installed and operated in an electronic device according to the present invention can be applied not only to MP3 players (for example, an iPod) and mobile phones (for example, an iPhone), but also to general computers, laptop computers, netbooks, PDAs, and any other electronic device that provides a display screen such as a liquid crystal display and can run application software. In this specification such devices are referred to collectively as language learning electronic devices, portable electronic devices, or electronic device terminals, and these terms are used interchangeably.

In addition, the display provided with the language learning electronic device according to the present invention need not be limited to a liquid crystal display; the invention can be extended to electronic device terminals that provide a general display screen, including organic light emitting diode (OLED) displays and electronic paper (E-Paper), as well as to terminals with a touch recognition function on the screen.

The language learning system mounted on the electronic device terminal according to the present invention enables effective language learning and training by downloading the language learning application program and language training contents from an Internet application store over a wired or wireless Internet connection.

In general, foreign language learning is conducted through repetitive listening and speaking training, and various portable electronic devices such as cassette recorders and Walkman-type players have been used for such repetitive learning.

Meanwhile, to enhance the language learning effect, electronic devices that display subtitle sentences corresponding to the voice on a screen are sold in the market, since the learner can study while reading the sentences displayed on the screen without carrying a separate textbook or script.

In addition, since most portable terminals such as mobile phones, MP3 players, and PMPs have a liquid crystal display on the front, language learning programs installed on them employ a method of outputting audio and subtitles simultaneously. For example, recently released e-books display textbook contents on the entire screen and provide a function for reading and listening to the contents while turning pages like a conventional paper book.

However, because the conventional technology is limited by the size of the LCD screen, it only displays a sentence in one-to-one correspondence with the voice, and even when the LCD display unit is large it merely outputs text, so the voice and subtitles fall out of sync, which is inconvenient.

In other words, because the prior art simply displays sentences in one-to-one correspondence with the voice, a long sentence that is not easy to understand tends to be skipped over and missed, which reduces the learning effect.

In addition, with an e-book according to the prior art, concentration is easily disturbed because the learner's eyes must hurriedly chase the sentence being listened to on the screen; moreover, if the learner looks away even briefly, it is hard to find which sentence is currently being read.

Accordingly, it is a first object of the present invention to provide a language learning system and method capable of dividing subtitle information corresponding to a voice output from an electronic device into semantic units and displaying the subtitle information.

In addition to the first object, a second object of the present invention is to provide a language learning system and method capable of selectively outputting at least one set of translation sound data and displaying the corresponding translation caption data in semantic units.

In order to achieve the above objects, the present invention provides a method of outputting voice and subtitles to the display unit and the audio output unit of an electronic device terminal, comprising the steps of: (a) extracting and storing caption audio information from the content to be output; (b) dividing the screen of the display unit into a first caption display area, a second caption display area, ..., a k-th caption display area, displaying the caption section whose audio is output at the current time t_n in the first caption display area, and displaying the caption sections whose audio will be output at the next time points (t_{n+1}, t_{n+2}, ...) in one-to-one correspondence, in order, in the second caption display area, the third caption display area, ..., the k-th caption display area; (c) outputting audio synchronized to the caption section of the first caption display area; and (d) moving the caption section of the second caption display area to the first caption display area, the caption section of the third caption display area to the second caption display area, ..., and the caption section of the (k+1)-th caption display area to the k-th caption display area.
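For illustration only, the following is a minimal sketch of how steps (b) through (d) could be realized in software. The names (CaptionSection, ScrollPlayer, audio_out) are hypothetical and not taken from the patent; the sketch assumes only a list of caption sections that each carry the timing of their synchronized audio segment.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CaptionSection:
    text: str          # caption text for one semantic unit
    start_ms: int      # start of the synchronized audio segment
    end_ms: int        # end of the synchronized audio segment

class ScrollPlayer:
    """Plays caption sections across k stacked caption display areas.

    Area 0 corresponds to the first caption display area, whose audio is
    currently output; areas 1..k-1 hold the upcoming caption sections.
    """

    def __init__(self, sections: List[CaptionSection], k: int = 4):
        self.sections = sections
        self.k = k
        self.current = 0   # index of the section shown in the first area

    def visible_areas(self) -> List[str]:
        """Step (b): one-to-one mapping of upcoming sections to areas."""
        window = self.sections[self.current:self.current + self.k]
        return [s.text for s in window]

    def play_current(self, audio_out) -> None:
        """Step (c): output the audio synchronized to the first area.
        `audio_out` is assumed to expose play(start_ms, end_ms)."""
        s = self.sections[self.current]
        audio_out.play(s.start_ms, s.end_ms)

    def advance(self) -> None:
        """Step (d): shift every caption section up by one area."""
        if self.current + 1 < len(self.sections):
            self.current += 1
```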

The language learning electronic device according to the present invention includes a display unit for displaying subtitles and a voice output unit for outputting voice. The application program mounted on and driven by the language learning electronic device includes a caption audio information providing unit that provides caption audio information containing voice information and the caption information corresponding to it, and a reproduction processing unit that outputs the caption information and voice information of the caption audio information provided from the caption audio information providing unit to the display unit and the audio output unit, respectively. The caption information is output to a plurality of caption display areas that simultaneously arrange and display a plurality of caption sections obtained by dividing one sentence into short units.

To drive the language learning application program according to the present invention, when a sentence is long it is divided in advance into a plurality of caption sections by semantic unit and stored as caption information in the caption audio information providing unit. Here, a semantic unit may correspond to dividing the sentence by clause or by phrase.
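As a purely illustrative aside, a sentence could be roughly pre-divided into phrase- and clause-sized units with a simple marker-word heuristic such as the one below. The patent itself assumes the divisions are prepared in advance and stored with the content; the marker list and merging rule here are assumptions for the sketch only.

```python
import re

# Hypothetical marker words that often open a new clause or phrase.
SPLIT_MARKERS = r"\b(which|where|who|that|because|in)\b"

def split_into_semantic_units(sentence: str, max_len: int = 40) -> list[str]:
    """Roughly divide a sentence into phrase/clause-sized caption sections."""
    # Insert a break before each marker word, then tidy the pieces.
    marked = re.sub(SPLIT_MARKERS, r"|\1", sentence)
    units = [u.strip() for u in marked.split("|") if u.strip()]
    # Merge fragments that are too short to stand alone as a caption.
    merged: list[str] = []
    for unit in units:
        if merged and len(merged[-1]) + len(unit) < max_len // 2:
            merged[-1] = f"{merged[-1]} {unit}"
        else:
            merged.append(unit)
    return merged

print(split_into_semantic_units(
    "I usually walk to school which is only 100 meters away from my house "
    "where my parents and I live together in peace, joy, and harmony."))
```

On the example sentence used later in this specification, this heuristic happens to yield the same four semantic units shown in the description, but in practice the split points would be authored or tuned per learner level.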

In the language learning system according to the present invention, the caption section corresponding to the voice currently output through the speaker is displayed in the first caption display area, and the caption sections corresponding to the voice to be output in the following time frames are displayed in order in the second caption display area, the third caption display area, and so on. When the voice of the current frame has been output, the caption section of the (k+1)-th caption display area is shifted in sequence to the k-th caption display area.

According to a preferred embodiment of the present invention, the caption section of the k-th caption display area, or the caption sections of the p-th through q-th caption display areas (q ≥ p), can be repeatedly reproduced as audio on command.

The present invention visually differentiates the caption section of the first caption display area, which corresponds to the audio information currently being output, so that it can be distinguished from the other caption sections displayed in the second caption display area, the third caption display area, and so on. Various differentiation methods may be used, such as changing the color, gray scale, or font of the caption display area, although the present invention is not necessarily limited thereto.

The language learning system according to the present invention provides a graphical user interface on the display screen so that the reproduction processing unit performs functions including starting, stopping, pausing, and section-repeating the playback of the caption audio information according to the user's selection.

According to an exemplary embodiment of the present invention, the user may select at least one caption display area shown on the display unit through a touch-screen user interface so that the voice information corresponding to the selected caption section is output to the speaker. In addition, the language learning system according to the present invention can set the caption sections to be scrollable on the display unit by touching the plurality of caption display areas and pushing or pulling them up or down through the touch-screen user interface.

The voice information of the language learning system according to the present invention includes the original sound data of the language to be learned and translation sound data translated into at least one other language, and whether the original sound data and the translation sound data are output is determined by the user's selection at the time of playback.

The caption information of the language learning system according to the present invention includes original sound caption data and translation sound caption data translated into at least one other language, and whether the original sound caption data and the translation sound caption data are output is determined by the user's selection at the time of playback. In the language learning system according to the present invention, the original sound data, the original sound caption data, the translation sound data, and the translation sound caption data may all be stored, and the output may be determined by a combination of these four kinds of data selected by the user at the time of playback. The output audio data and caption data are, of course, synchronized with each other.
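A minimal sketch of how the four stored kinds of data could be combined at playback time according to the user's selection is shown below; the field and function names are assumptions for illustration, not the patent's API.

```python
from dataclasses import dataclass

@dataclass
class SectionTracks:
    original_audio: str        # e.g. path or offset of the original-language clip
    original_caption: str      # original-language caption text
    translation_audio: str     # e.g. path or offset of the translated clip
    translation_caption: str   # translated caption text

def select_outputs(tracks: SectionTracks,
                   play_translation_audio: bool,
                   show_translation_caption: bool) -> tuple[str, str]:
    """Return the (audio, caption) pair chosen by the user's settings.

    Any combination is allowed, e.g. original audio with translated
    captions, as long as the returned pair is played in synchronization.
    """
    audio = tracks.translation_audio if play_translation_audio else tracks.original_audio
    caption = tracks.translation_caption if show_translation_caption else tracks.original_caption
    return audio, caption
```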

The present invention allows trainees to train naturally and effectively because listening while reading caption sections divided by semantic unit is intuitive. In addition, since the language learning system according to the present invention makes the caption portion currently being reproduced clearly visible during listening, it is effective for concentrating on listening.

The present invention allows the entire sentence to be studied even during listening through the scroll function, and when the caption corresponding to the part to be listened to is touched, the voice synchronized to that caption is output, so the trainee can easily select only the desired part for output and specify the section to repeat, which improves learning efficiency. In addition, a further learning effect can be expected by selectively outputting translation sounds and translation subtitles in various foreign languages.

Hereinafter, the configuration and operation of a language learning system and an application program for driving the same according to the present invention will be described in detail with reference to FIGS. 1 to 8.

According to a first embodiment of the present invention, there is provided a method of outputting voice and subtitles to the display unit and the audio output unit of an electronic device terminal, the method comprising: (a) extracting and storing caption audio information from the content to be output; (b) dividing the screen of the display unit into a first caption display area, a second caption display area, ..., a k-th caption display area, displaying the caption section whose audio is output at the current time t_n in the first caption display area, and displaying the caption sections whose audio will be output at the next time points (t_{n+1}, t_{n+2}, ...) in one-to-one correspondence, in order, in the second caption display area, the third caption display area, ..., the k-th caption display area; (c) outputting audio synchronized to the caption section of the first caption display area; and (d) moving the caption section of the second caption display area to the first caption display area, the caption section of the third caption display area to the second caption display area, ..., and the caption section of the (k+1)-th caption display area to the k-th caption display area.

According to a second embodiment, the present invention provides a method of outputting voice and subtitles to the display unit and the audio output unit of an electronic device terminal, the method comprising: (a) extracting and storing caption audio information from the content to be output; (b) dividing the screen of the display unit into a first caption display area, a second caption display area, ..., a k-th caption display area, displaying the caption section whose audio is output at the current time t_n in the first caption display area, and displaying the caption sections whose audio will be output at the next time points (t_{n+1}, t_{n+2}, ...) in one-to-one correspondence, in order, in the second caption display area, the third caption display area, ..., the k-th caption display area; (c) outputting audio synchronized to the caption sections corresponding to the first through r-th caption display areas (1 ≤ r ≤ k); and (d) moving the caption section of the (r+1)-th caption display area to the first caption display area, the caption section of the (r+2)-th caption display area to the second caption display area, ..., and the caption section of the (k+r)-th caption display area to the k-th caption display area.
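The difference from the first embodiment can be pictured as letting the highlighted (voiced) area walk through the first r visible areas before the whole window shifts by r. The class below is an assumption about one possible implementation, reusing the idea of a caption-section list from the earlier sketch.

```python
from typing import List

class BlockScrollPlayer:
    """Second-embodiment behaviour: the voiced area moves from the first
    to the r-th caption display area, then the window shifts by r."""

    def __init__(self, sections: List[str], k: int = 4, r: int = 4):
        assert 1 <= r <= k
        self.sections = sections
        self.k = k          # number of caption display areas on screen
        self.r = r          # how many areas are voiced before the block moves
        self.base = 0       # index of the section shown in the first area
        self.cursor = 0     # which of the r areas is voiced now (0-based)

    def highlighted_area(self) -> int:
        """1-based index of the caption display area being voiced now."""
        return self.cursor + 1

    def visible_areas(self) -> List[str]:
        return self.sections[self.base:self.base + self.k]

    def advance(self) -> None:
        """Move the highlight to the next area, or shift the block by r."""
        if self.cursor + 1 < self.r:
            self.cursor += 1
        else:
            self.base = min(self.base + self.r, max(len(self.sections) - 1, 0))
            self.cursor = 0
```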

According to a preferred embodiment of the present invention, in addition to steps (a), (b), (c), and (d), a scrolling process is provided that includes: (e) scroll-displaying, in response to a user scroll command, the caption sections shown in the second caption display area, the third caption display area, ..., the k-th caption display area, by moving the caption section of the m-th caption display area (2 ≤ m ≤ k) to the (m+1)-th caption display area or the (m-1)-th caption display area; and (f) when the user selects an arbitrary m-th caption display area (2 ≤ m ≤ k) from the scroll-displayed caption display areas, displaying the caption section corresponding to the m-th caption display area in the first caption display area, then moving the caption section of the (m+1)-th caption display area to the second caption display area, the caption section of the (m+2)-th caption display area to the third caption display area, and so on in sequence.
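Steps (e) and (f) can be pictured as adjusting a preview offset and then re-basing the window on the area the user taps. The method names below are illustrative assumptions only.

```python
class ScrollableWindow:
    """Scroll preview of upcoming caption sections and jump-to-selection.

    `base` is the section shown in the first caption display area; areas
    2..k preview the sections from `base + offset + 1` onwards.
    """

    def __init__(self, sections: list[str], k: int = 4):
        self.sections = sections
        self.k = k
        self.base = 0
        self.offset = 0   # extra scroll applied to areas 2..k only

    def scroll(self, delta: int) -> None:
        """Step (e): move the previewed sections up or down by one area."""
        max_offset = max(len(self.sections) - self.base - self.k, 0)
        self.offset = min(max(self.offset + delta, 0), max_offset)

    def select_area(self, m: int) -> str:
        """Step (f): the section in the m-th area (2 <= m <= k) jumps to
        the first area, where its synchronized audio would be played."""
        assert 2 <= m <= self.k
        self.base = self.base + self.offset + (m - 1)
        self.offset = 0
        return self.sections[self.base]
```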

The present invention also provides an electronic device terminal equipped with a language learning application program operating according to the above driving method, and server system technology for selling or providing the language learning application program over wired or wireless Internet.

The caption sections are formed by dividing a sentence into predetermined semantic units, and this caption information is included in the caption audio information.

Claim 1 appended to this specification claims an embodiment in which the screen is divided into a plurality of caption display areas and the caption section corresponding to the first caption display area is output as audio; a further embodiment, in which the caption sections displayed in a plurality of caption display areas are output as audio in turn and the plurality of caption sections are then moved, is also claimed and described.

Hereinafter, the spirit of the present invention will be described with reference to the first embodiment.

FIG. 1 is a view showing the operation between the server and the client of the language learning system according to the present invention. Referring to FIG. 1, an electronic device terminal 100 is shown as an embodiment of a client-side electronic device for performing language learning according to the present invention; any electronic device capable of reproducing caption information and voice information may be used as the electronic device terminal 100. In the following detailed description of the embodiments of the present invention, note that the electronic device terminal 100 encompasses all electronic devices in which the spirit of the present invention may be implemented.

For example, audio devices and general electronic devices having a display unit such as a liquid crystal display, an OLED, or electronic paper, and capable of playing caption audio information in which captions accompany the voice information, may be used: MP3 players, personal multimedia players (PMPs), mobile phones, smartphones, netbooks, computers, and the like.

That is, the electronic device terminal 100 according to the present invention applies to devices that can output caption information in synchronization with the output of voice information. More preferably, the electronic device terminal 100 according to the present invention is provided with a platform for downloading application programs or contents through wired or wireless Internet. As preferred embodiments of the electronic device terminal 100 according to the present invention, Apple's iPod or iPhone may be used.

As another embodiment of the present invention, the electronic device terminal 100 may be connected to a desktop computer through a USB terminal, and programs and contents downloaded through the Internet may be transferred from the computer to the electronic device terminal.

In a preferred embodiment of the present invention, when the electronic device terminal 100 has a wireless Internet connection, it can directly access the database server 300 and the file server 400 that provide the language learning service according to the present invention by running the communication module (ASP program 200) provided on the platform of the electronic device terminal 100, and can perform operations such as membership registration, authentication, search, and file download.

In another embodiment of the present invention, in the case of an MP3 player such as an iPod, the user connects it by USB to a computer connected to the Internet, runs the communication module (ASP program 200) of the computer, such as a web browser, accesses the database server 300 and the file server 400 that provide the service, performs operations such as membership registration, authentication, search, and file download, and then transfers the necessary applications and contents to the MP3 player.

FIG. 2 is a block diagram showing the configuration of the language learning system mounted on the client-side electronic device terminal according to the present invention. Referring to FIG. 2, the electronic device terminal of the present invention includes a display unit 110 and a voice output unit 120 as hardware. The program is executed by clicking the language learning application icon on the operating platform of the electronic device terminal.

The language learning application according to the present invention includes a caption audio information providing unit 130 and a reproduction processing unit 140. The caption audio information providing unit 130 and the reproduction processing unit 140 that drive the electronic device terminal according to the present invention may be implemented in software or in hardware. The caption audio information providing unit 130 according to the present invention stores the caption audio information in the data storage device 131 and accesses it there.

The display unit 110 included in the electronic device terminal 100 of the language learning system according to the present invention may include a flat display screen such as a liquid crystal display, an organic light emitting diode (OLED) display, or electronic paper (E-Paper). The display unit 110 is divided into a plurality of caption display areas by the reproduction processing unit 140 of the application program mounted on the electronic device terminal 100, and displays the caption sections output by the reproduction processing unit 140 in the respective caption display areas. The caption display areas will be described later with reference to FIG. 3.

The voice output unit 120 of the language learning system according to the present invention includes a voice output device such as an external speaker or earphones, and outputs the voice signal produced by the reproduction processing unit 140.

The caption audio information providing unit 130 of the language learning system according to the present invention stores the application data for providing caption audio information to the reproduction processing unit 140, together with the caption audio information itself, in the data storage device 131 and accesses them there. In addition, the caption audio information providing unit 130 may include a communication module port and a platform for receiving caption audio information over wired or wireless Internet.

The reproduction processing unit 140 of the language learning system according to the present invention may include both hardware and software, and performs a reproduction process that processes the caption audio information of the present invention and outputs the captions, synchronized with the voice, to the display unit 110 and the audio output unit 120.

In a preferred embodiment of the present invention, the reproduction processing unit 140 includes a processing device (not shown) such as a microprocessor or digital signal processor, a storage device (not shown) such as a semiconductor memory, a codec 141, and a reproduction program. The reproduction processing unit 140 processes the caption audio information, transmits the audio information processing signal to the audio output unit 120, arranges a plurality of caption display areas on the display unit 110, and transmits the caption information processing signal to the display unit 110 so that the caption, divided into a plurality of caption sections, is displayed.

According to the present invention, the caption information is divided into a plurality of caption sections that are displayed in the respective caption display areas on the display unit 110. Therefore, the electronic device running the language learning system according to the present invention is characterized in that it can reproduce caption audio information that includes captions.

As a preferred embodiment of the caption audio information applied to the present invention, data in MP3 format or audio data in various other formats may be used. Techniques for inserting caption information into a general digital audio data file are already known in the art, and a detailed description thereof is omitted. As a preferred embodiment of the present invention, the beginning and end of each caption sentence to be displayed may be marked with a frame or time value so that the sentence for the marked frame or time value is displayed when the voice is output.
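One simple way to picture such begin/end time markings is a cue list stored alongside the audio, for example as a small JSON sidecar; the format below is purely an assumption for illustration, not the patent's file format.

```python
import json
import bisect

# Hypothetical sidecar data: each cue marks where a caption section's
# synchronized audio begins and ends, in milliseconds.
CUES_JSON = """
[
  {"start_ms": 0,    "end_ms": 1800, "text": "I usually walk to school"},
  {"start_ms": 1800, "end_ms": 3900, "text": "which is only 100 meters away from my house"},
  {"start_ms": 3900, "end_ms": 5600, "text": "where my parents and I live together"},
  {"start_ms": 5600, "end_ms": 7400, "text": "in peace, joy, and harmony."}
]
"""

cues = json.loads(CUES_JSON)
starts = [c["start_ms"] for c in cues]

def caption_at(position_ms: int) -> str:
    """Return the caption section whose marked interval covers the
    current audio playback position."""
    i = bisect.bisect_right(starts, position_ms) - 1
    if 0 <= i < len(cues) and position_ms < cues[i]["end_ms"]:
        return cues[i]["text"]
    return ""

print(caption_at(4200))   # -> "where my parents and I live together"
```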

The language learning system according to the present invention is characterized in that the caption information is divided into a plurality of caption sections by semantic unit and displayed in the plurality of caption display areas of the display unit. This key feature of the invention, dividing caption information into caption sections by semantic unit, is described in more detail below.

FIGS. 3A and 3B are diagrams illustrating a first embodiment and a second embodiment of a display screen on which the language learning system according to the present invention is executed on an electronic device terminal.

Referring to FIG. 3A, the language learning system according to the present invention divides the display screen into a plurality of caption display areas 301, 302, 303, and 304, and displays in each caption display area a caption section obtained by dividing a caption sentence into semantic units. FIG. 3A shows a display screen including four caption display areas as a preferred embodiment of the present invention.

For example, the present invention may output the following English sentence as voice while displaying the corresponding subtitles in synchronization:

"I usually walk to school which is only 100 meters away from my house where my parents and I live together in peace, joy, and harmony."

According to Korean word order, this sentence would be translated as a single reordered sentence: "I usually walk to a school that is only 100 meters from my home, where I live with my parents in peace, joy, and harmony." If, however, it is divided into semantic units for simultaneous interpretation, it becomes:

① I usually walk to school

② which is only 100 meters away from my house

③ where my parents and I live together

④ in peace, joy, and harmony

Accordingly, the language learning system according to the present invention displays the caption sections divided by semantic unit in the arranged plurality of caption display areas 301, 302, 303, and 304. This division by semantic unit, a distinctive feature of the present invention, uses the basic units of direct reading and direct listening comprehension, which generally correspond to phrases and clauses.

In addition, a "semantic unit" according to the present invention may also be delimited by the pauses made when reading aloud. The semantic unit may further vary with the learner's level in the target language: for the same sentence, a beginner-level setting can split the sentence into very short caption sections, while an advanced-level setting can divide it into longer caption sections.

Repeating the listening in this way while visually following the caption sections divided into semantic units naturally cultivates the ability to read and listen directly in the original word order. In other words, the learner is trained to omit the process of re-translating into Korean grammatical order. This is a learning process that is especially necessary for languages whose word order differs from Korean, such as English.

Referring back to FIG. 3A, according to the first embodiment of the present invention, the caption section corresponding to the currently output voice (time t_n) may be programmed to be displayed in the designated first caption display area 301. According to the first exemplary embodiment of the present invention, the first caption display area 301, whose audio is being output at this point, may be highlighted to differentiate it visually from the other caption display areas. In one embodiment of the differentiation method, the first caption display area may be differentiated from the second, third, ... caption display areas by color, gray scale, font shape, font size, and the like.

The caption sections to be reproduced from the next time point t_{n+1} onwards are displayed in the second to fourth caption display areas 302, 303, and 304; then, at the next time frame, the caption sections are moved up sequentially, from the fourth caption display area 304 to the third caption display area 303, from the third caption display area 303 to the second caption display area 302, and from the second caption display area 302 to the first caption display area 301.

That is, the caption section displayed in the first caption display area 301 disappears upward, and the caption section in the second caption display area 302 rises to the first caption display area 301. If the caption text is long, caption sections not yet displayed rise sequentially from below during playback. As a preferred embodiment of the present invention, as long as the size of the display screen allows, the number of caption display areas can be increased or decreased without particular limitation, and no particular limitation need be placed on the arrangement direction.

That is, when the audio output corresponding to the caption section displayed in the first caption display area is completed at the current time t_n, the present invention moves the caption section of the second caption display area to the first caption display area, the caption section of the third caption display area to the second caption display area, ..., and the caption section of the (k+1)-th caption display area to the k-th caption display area. The caption sections displayed in the first caption display area, the second caption display area, ..., the k-th caption display area may also be scrolled up and down by a user command.

In another embodiment, when the audio output corresponding to the caption section displayed in the first caption display area is completed at the current time t_n, the system remains paused, and when a command is input in manual mode, the caption section of the second caption display area is moved to the first caption display area, the caption section of the third caption display area to the second caption display area, ..., and the caption section of the (k+1)-th caption display area to the k-th caption display area.

As another embodiment of the present invention, the number of caption display areas shown on the display unit of the electronic device terminal can be programmed to adjust to the length of the sentence; that is, as a preferred embodiment, the number of caption display areas can be changed by a user setting. The caption display area in which the caption section currently outputting audio is located may be the topmost caption display area 301, as shown in the drawing, or any other position.

FIG. 3B is a diagram illustrating a display screen on which the language learning system according to the second embodiment of the present invention is executed on an electronic device terminal.

According to the second embodiment of the present invention, as described in claim 2, the plurality of caption display areas, that is, the first caption display area, the second caption display area, and so on, are moved up or down as one block at a time. The audio synchronized to each caption section is output in the order of the first caption display area, the second caption display area, ..., and the visual highlight marking the caption section currently being voiced moves sequentially through the caption display areas.

Referring to FIG. 3B, the second caption display area 302, whose audio is being output at this point, is highlighted to differentiate it visually from the other caption display areas.

That is, in the first embodiment of the present invention the caption section of the first caption display area is always the one output as audio, whereas in the second embodiment the caption display area being voiced moves sequentially, so a visual effect appears in which the highlighted caption display area itself moves.

In other words, the second embodiment of the present invention extracts and stores caption audio information from the content to be output, divides the screen of the display unit into a first caption display area, a second caption display area, ..., a k-th caption display area, displays the caption section whose audio is output at the current time t_n in the first caption display area, and displays the caption sections whose audio will be output at the next time points (t_{n+1}, t_{n+2}, ...) in one-to-one correspondence, in order, in the second caption display area, the third caption display area, ..., the k-th caption display area; up to this point it is the same as the first embodiment.

The second embodiment then differs from the first in that the audio synchronized to the caption sections corresponding to the first through r-th caption display areas (1 ≤ r ≤ k) is output, after which the caption section of the (r+1)-th caption display area is moved to the first caption display area, the caption section of the (r+2)-th caption display area to the second caption display area, ..., and the caption section of the (k+r)-th caption display area to the k-th caption display area.

In the process of outputting the audio of the caption sections corresponding to the r caption display areas one by one, the caption display area currently being voiced can be visually differentiated and thereby highlighted, so the highlighted caption display area is seen moving sequentially from the first caption display area to the r-th caption display area.

As a preferred embodiment of the present invention, a graphical user interface (GUI) is provided so that the user can select a caption section displayed on the screen and listen to it selectively. The user may select one or more caption sections. The language learning application according to the present invention increases training efficiency because it is easy to repeat listening by selecting only the desired portion.

As a preferred embodiment of the present invention, the caption section corresponding to the currently output voice information may be visually differentiated to distinguish it from the other caption sections displayed on the display unit. For example, as shown in the figure, the caption section currently being played may use a different font size or an emphasis effect on the font.

As a preferred embodiment of the present invention, the apparatus further includes a graphical user interface, and the reproduction processing unit 140 of the present invention provides an area 305 with identifiers (icons) through which playback of the caption audio information can be started, stopped, paused, or section-repeated according to the user's selection.

In addition, the caption sections may be set to support a scroll function on the display screen. Accordingly, a specific caption section can be searched for by scrolling with a finger or stylus on the caption display areas 301, 302, 303, and 304 of a display screen with a touch pad function.

According to the present invention, a scroll display function is provided for the caption sections displayed in the second caption display area, the third caption display area, ..., the k-th caption display area: on a user scroll command, the caption section of the m-th caption display area (2 ≤ m ≤ k) is moved to the (m+1)-th or (m-1)-th caption display area.

That is, while the caption sections displayed in the second caption display area, the third caption display area, ..., the k-th caption display area are being scrolled up and down, if the user (learner) selects an arbitrary m-th caption display area (2 ≤ m ≤ k) among the scrolled caption display areas, the caption section corresponding to the m-th caption display area jumps to the first caption display area, the caption section of the (m+1)-th caption display area is moved to the second caption display area, the caption section of the (m+2)-th caption display area to the third caption display area, and so on in sequence, and the audio synchronized to the caption now shown in the first caption display area is output.

In a preferred embodiment of the present invention, the voice information in the caption audio information may include the original sound data of the language to be learned and translation sound data in at least one other language, for example Korean, Japanese, or French. Whether the original sound data and the translation sound data are output may be determined by the user's selection at the time of playback; that is, at least one set of translation sound data may be set to play after the original sound data, or the translation sound data may not be output at all.

As another embodiment of the present invention, the caption information includes original sound caption data and translation sound caption data in at least one other language, and whether the original sound caption data and the translation sound caption data are output can be determined by the user's selection during playback. The components of the language learning system according to the present invention may be implemented in software, as an application program included in the reproduction processing unit 140, or in hardware, in chip form.

FIG. 4 is a view showing an embodiment of the initial setting screen of the language learning system according to the present invention. As a preferred embodiment of the present invention, three modes may be provided: a superlab mode 401, an automatic mode 402, and a manual mode 403.

The superlab mode 401 is a learning method in which the English information coming in through the eyes and ears is edited into units suitable for storing in the brain, that is, semantic units, sense units, or breath units, and the voice and captions are listened to and read simultaneously. In a preferred embodiment of the present invention, the superlab mode displays captions corresponding to the voice 1:1, edited into semantic units of about 30 characters.

The automatic mode 402 is a learning mode in which a plurality of superlab-mode caption areas are arranged, and the captions synchronized with the voice are output automatically and continuously according to pre-programmed settings while the learner views the structure of the entire sentence.

The manual mode 403 is a learning mode in which the user (learner) directly steps through the captions corresponding to the voice, so the speed at which the voice and caption data are output can be adjusted to the user's convenience. Details of the automatic mode and the manual mode are described below with reference to FIGS. 5A and 5B.
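The behavioural difference between the three modes can be summarized as differing triggers for advancing to the next caption section. The enum and handler below are illustrative assumptions about one way to structure this, not the application's actual code.

```python
from enum import Enum, auto

class LearningMode(Enum):
    SUPERLAB = auto()   # one caption section at a time, 1:1 with the voice
    AUTOMATIC = auto()  # multiple areas, advances automatically on audio end
    MANUAL = auto()     # multiple areas, advances only on the user's Enter

class ModeController:
    def __init__(self, player, mode: LearningMode):
        self.player = player          # e.g. the ScrollPlayer sketched earlier
        self.mode = mode

    def on_audio_finished(self) -> None:
        """Called when the audio of the current caption section ends."""
        if self.mode in (LearningMode.SUPERLAB, LearningMode.AUTOMATIC):
            self.player.advance()     # keep playing continuously
        # MANUAL mode: stay paused and wait for the user's command

    def on_enter_pressed(self) -> None:
        """Called when the user clicks the Enter control (manual mode)."""
        if self.mode is LearningMode.MANUAL:
            self.player.advance()
```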

FIG. 5A is a diagram illustrating a screen on which a language learning file is played in the automatic mode according to the first embodiment of the present invention. When the automatic mode 402 is selected and the content to be learned is started, a plurality of caption display areas 501, 502, 503, 504, and 505 are displayed on the display screen. FIG. 5A shows five caption display areas for convenience of explanation, and each caption section obtained by dividing a sentence into semantic units is displayed in the caption display areas 501, 502, 503, 504, and 505.

At this time, the caption section corresponding to the audio output at the current time t_n may be displayed in the designated first caption display area 501. The caption sections to be reproduced at later points in time are displayed in the second to fifth caption display areas 502, 503, 504, and 505; then, at the next time frame t_{n+1}, the caption sections are moved up sequentially, from the fifth caption display area 505 to the fourth caption display area 504, from the fourth caption display area 504 to the third caption display area 503, from the third caption display area 503 to the second caption display area 502, and from the second caption display area 502 to the first caption display area 501.

That is, the caption section displayed in the first caption display area 501 disappears when its audio output at this point ends, and the caption section in the second caption display area 502 automatically rises to the first caption display area 501.

If the caption text is long, caption sections not yet displayed rise sequentially from below during playback. As a preferred embodiment of the present invention, as long as the size of the display screen allows, the number of caption display areas can be increased or decreased without particular limitation, and no particular limitation need be placed on the arrangement direction.

During learning in the automatic mode, selecting the play/pause icon 508b pauses voice playback, and clicking the play/pause icon 508b again resumes it. As a preferred embodiment of the present invention, the caption sections cannot be scrolled manually while the voice is being output in the automatic mode; to scroll, the play/pause icon 508b must be clicked first.

When a user scroll command is issued, the caption sections displayed in the second caption display area, the third caption display area, ..., the k-th caption display area are scrolled by moving the caption section of the m-th caption display area (2 ≤ m ≤ k) to the (m+1)-th or (m-1)-th caption display area. Subsequently, when the user selects an m-th caption display area (2 ≤ m ≤ k) among the scrolled caption display areas, the caption section corresponding to the m-th caption display area is displayed in the first caption display area, the caption section of the (m+1)-th caption display area is moved to the second caption display area, the caption section of the (m+2)-th caption display area to the third caption display area, and so on in sequence.

In addition, selecting the backward icon 508a during learning in the automatic mode returns to the contents of the previous chapter, and selecting the forward icon 508c skips to the next chapter. The caption section of the k-th caption display area, or the caption sections of the p-th through q-th caption display areas (q ≥ p), may be reproduced as audio according to a user command.

In addition, a repetition learning section can be set in the language learning program according to the present invention through the repetition learning section setting icon 509: clicking the icon once defines the start point of the repetition section, and, after listening to the voice, clicking the repetition learning section setting icon 509 again defines the end point of the repetition section.

To indicate that the repetition learning section has been set successfully, the color of the repetition learning section setting icon 509 when a section is set (for example, yellow) differs from its color in the unset state (for example, white), which improves the user's convenience. As a preferred embodiment of the present invention, a progress bar 510 may be provided to display the progress through the content.
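The two-click behaviour of the repetition learning section setting icon 509 (first click marks the start point, second click marks the end point, icon colour reflects the state) could be sketched as follows; the class and colour values are assumptions for illustration.

```python
from typing import Optional

class RepeatSection:
    """Two-click A-B repeat: first click marks the start, second the end."""

    def __init__(self):
        self.start_ms: Optional[int] = None
        self.end_ms: Optional[int] = None

    def on_icon_clicked(self, position_ms: int) -> str:
        """Returns the icon colour to display after the click."""
        if self.start_ms is None:
            self.start_ms = position_ms       # first click: start point set
            return "yellow"                   # section setting in progress
        if self.end_ms is None:
            self.end_ms = position_ms         # second click: end point set
            return "yellow"                   # repeat section is now active
        self.start_ms = self.end_ms = None    # further click: clear the section
        return "white"                        # back to the unset state

    def wrap(self, position_ms: int) -> int:
        """During playback, jump back to the start once the end is passed."""
        if self.start_ms is not None and self.end_ms is not None \
                and position_ms >= self.end_ms:
            return self.start_ms
        return position_ms
```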

In a preferred embodiment of the present invention, selecting the trans icon 511 while the automatic mode is running displays the translation of the language content; for example, Korean translation subtitles can be displayed simultaneously with the English subtitles. As a further preferred embodiment, the language learning content to be studied can be retrieved through the file icon 512.

FIG. 5B is an exemplary view showing the manual mode according to the present invention.

In the manual language learning mode according to the present invention, only the caption section displayed in the first caption display area 501 is output as the current playback voice; when the user (learner) clicks Enter 515, the caption section displayed and waiting in the second caption display area 502 rises to the first caption display area 501 and its audio is output. Individual caption sections can, of course, also be selected and played one by one.

In a preferred embodiment of the present invention, selecting the repetition learning section setting icon 509 repeatedly outputs the caption section reproduced by manual selection. As in the automatic mode, it is possible to jump to the language learning content of the previous or next section (chapter) by clicking the backward icon 508a or the forward icon 508c, and the play/pause icon 508b performs the play and pause functions. The manual-mode learning method according to the present invention has the advantage that, compared with the automatic mode, the user (learner) can freely adjust the output speed of the captions and the voice.

FIG. 6 shows a setting screen according to a preferred embodiment of the present invention for setting the playback range, whether to repeat playback, whether to shuffle play, whether to play the translation sound data and the translation sound caption data, and so on; it is set to be called up by the user's selection. The screens of FIGS. 2 to 6 described above are embodiments that can be variously changed, and the present invention is not limited thereto.

FIG. 7 is a flowchart of the caption scroll player reproduction algorithm according to the present invention. Referring to FIG. 7, first, when the list of MP3 files for language learning is obtained (step S1000), the caption data information of the file to be reproduced is extracted (step S1010).

Next, it is determined whether the extracted caption information is valid (step S1020); if an abnormality is found in the caption data, this is displayed (step S1060). If there is no abnormality in the extracted caption information, the caption data is stored in memory (step S1030), and the playback program module and the caption list are initialized (step S1040).

Next, it is determined whether the player is currently in the audio/caption playback state (step S1050); if it is playing, the caption scroll function is deactivated (step S1070), and otherwise caption scroll selection is enabled (step S1061).

During playback, an animation is executed so that the caption text is scrolled and displayed according to the time value (step S1080). When the caption scroll function is active, clicking the play button plays the caption section of the selected caption display area from its start time value (step S1062); if no caption is selected, playback starts from the start time value of the caption that was showing when paused.
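The flow of FIG. 7 can be followed in a short sketch: extract the caption data, validate it, initialize the player and caption list, and then either disable scrolling during playback or allow a selected caption section to be played from its start time. The function and method names (extract_caption_data, player.load, and so on) are placeholders standing in for the steps named in the flowchart, not calls to any real library.

```python
def run_caption_scroll_player(mp3_files, extract_caption_data, player):
    """Sketch of the playback algorithm of FIG. 7 (steps S1000-S1080)."""
    for path in mp3_files:                       # S1000: file list obtained
        captions = extract_caption_data(path)    # S1010: extract caption data
        if not captions:                         # S1020: validity check
            print(f"caption data error in {path}")   # S1060: report abnormality
            continue
        player.load(captions)                    # S1030: store captions in memory
        player.reset()                           # S1040: init module and caption list

        while not player.finished():
            if player.is_playing():              # S1050: playback state?
                player.set_scroll_enabled(False)     # S1070: disable scrolling
                player.animate_scroll_by_time()      # S1080: scroll with the time value
            else:
                player.set_scroll_enabled(True)      # S1061: allow scroll selection
                if player.play_requested():
                    # S1062: play the selected section from its start time,
                    # or resume from the paused caption if none is selected.
                    player.play_from(player.selected_or_paused_start())
```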

FIG. 8 is a diagram illustrating the screen when the translation icon is executed according to a preferred embodiment of the present invention. The voice information of the language learning system according to the present invention includes the original sound data of the language to be learned and translation sound data translated into at least one other language, and whether the original sound data and the translation sound data are output is determined by the user's selection at the time of playback.

The caption information of the language learning system according to the present invention includes original sound caption data and translation sound caption data translated into at least one other language, and whether the original sound caption data and the translation sound caption data are output is determined by the user's selection at the time of playback. In the language learning system according to the present invention, the original sound data, the original sound caption data, the translation sound data, and the translation sound caption data may all be stored, and the output may be determined by a combination of these four kinds of data selected by the user at the time of playback.

That is, the English original sound data and the original caption data may be output, or the English original sound data and the translated caption data may be output; depending on the user's (learner's) selection, a combination that outputs the translation sound data with the English caption data is also possible. The output audio data and caption data are, of course, synchronized with each other.

Referring back to FIG. 8, when the trans icon 511 is clicked, the original sound caption sections and the translation sound caption sections 608 are displayed, simultaneously or selectively according to the user's selection, in correspondence with the plurality of caption display areas 601, 602, 603, 604, 605, and 606. In this case, the user (learner) may scroll the caption sections displayed in the caption display areas by touching the screen.
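When the trans icon is active, each caption display area can show the translation caption together with the original caption section for the same semantic unit. The rendering helper below is an illustrative assumption about how the per-area text could be built.

```python
def render_areas(original_sections, translation_sections,
                 first_index, k, show_translation):
    """Build the text shown in each of the k caption display areas.

    When `show_translation` is on, the translated caption for the same
    semantic unit is appended under the original caption section.
    """
    lines = []
    for area in range(k):
        i = first_index + area
        if i >= len(original_sections):
            lines.append("")                      # nothing left to preview
            continue
        text = original_sections[i]
        if show_translation and i < len(translation_sections):
            text = f"{text}\n{translation_sections[i]}"
        lines.append(text)
    return lines
```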

There is no particular limitation on the structure of the caption audio information applicable to the language learning system of the present invention, which is described in detail in Korean Patent No. 297,206 held by the present applicant. Various techniques for embedding caption information are known in the art.

The foregoing has outlined, rather broadly, the features and technical advantages of the present invention so that the claims that follow may be better understood. Additional features and advantages that form the subject of the claims of the present invention are described below. It should be appreciated by those skilled in the art that the conception and the specific embodiments disclosed may readily be used as a basis for designing or modifying other structures for carrying out purposes similar to those of the present invention.

In addition, the inventive concepts and embodiments disclosed herein may be used by those skilled in the art as a basis for modifying or designing other structures for carrying out the same purposes as the present invention, and such modified or altered equivalent structures may be variously developed, substituted, and changed without departing from the spirit or scope of the invention described in the claims.

The present invention allows trainees to train naturally and effectively because it is intuitive: they can listen while reading the subtitle sections divided by semantic unit. In addition, since the language learning system according to the present invention clearly indicates which subtitle portion is currently being reproduced during listening, it helps the trainee concentrate on listening.

Through the scroll function, the present invention allows the entire sentence to be studied even while listening, and when the subtitle corresponding to the part to be heard is touched, the voice synchronized to that subtitle is output. The trainee can therefore easily select only the desired part of the output and specify the section to repeat, which improves learning efficiency. In addition, a further learning effect can be expected by selectively outputting translated sounds and translated subtitles in various foreign languages.

The present invention is industrially applicable: an application downloaded from a wired or wireless Internet application store and installed on a user's mobile phone or MP3 player allows the user to learn a language in the way he or she wishes, presenting a new business model in the wired and wireless Internet market.

FIG. 1 is a view showing the operation between the server and the client of the language learning system according to the present invention.

FIG. 2 is a block diagram showing the configuration of the language learning system mounted on the client-side electronic device terminal in accordance with the present invention.

FIGS. 3A and 3B show a first preferred embodiment and a second preferred embodiment of a display screen on which the language learning system according to the present invention is implemented on an electronic device terminal.

FIG. 4 is a diagram showing an embodiment of the initial screen settings of the language learning system according to the present invention.

FIG. 5A illustrates a file automatically selected for language learning in accordance with a preferred embodiment of the present invention.

FIG. 5B illustrates the manual mode in accordance with the present invention.

FIG. 6 is a diagram showing a setting screen for setting the playback range, whether to repeat playback, whether to shuffle playback, whether to play the translated sound data and the translated sound caption data, and so on, according to a preferred embodiment of the present invention.

FIG. 7 is a flowchart of a caption scroll reproduction algorithm according to a preferred embodiment of the present invention.

FIG. 8 illustrates the translation function in accordance with the present invention.

<Explanation of symbols for the main parts of the drawings>

100: electronic device terminal

110: display unit

120: audio output unit

130: subtitle voice information provider

131: data storage device

140: playback processing unit

141: codec

200: ASP program

300: database server

400: File Server

401: Superlab Mode

402: automatic mode

403: manual mode

508a: reverse icon

508b: Play / Pause Icon

508c: Forward Icon

509: Repeated learning section setting icon

510: progress bar

511: Trans icon

515: Enter

Claims (16)

1. A method for outputting audio and subtitles to a display unit and an audio output unit of an electronic device terminal, the method comprising: (a) extracting and storing caption audio information from the content to be output; (b) dividing a subtitle display region into a first subtitle display area, a second subtitle display area, ..., and a k-th subtitle display area, displaying in the first subtitle display area the subtitle section whose audio is output at the present time point t_n, and displaying the subtitle sections to be output at the following time points (t_(n+1), t_(n+2), ...) one-to-one, in sequence, in the second subtitle display area, the third subtitle display area, ..., and the k-th subtitle display area; (c) outputting the audio synchronized to the subtitle section of the first subtitle display area; and (d) moving the subtitle section of the second subtitle display area to the first subtitle display area, the subtitle section of the third subtitle display area to the second subtitle display area, ..., and the subtitle section of the (k+1)-th subtitle display area to the k-th subtitle display area.

2. The method of claim 1, further comprising: (e) scroll-displaying, in response to a user scroll command, the subtitle sections displayed in the second subtitle display area, the third subtitle display area, ..., and the k-th subtitle display area by moving the subtitle section of an m-th subtitle display area (2 ≤ m ≤ k) to the (m+1)-th subtitle display area or the (m-1)-th subtitle display area; and (f) when the user selects an arbitrary m-th subtitle display area (2 ≤ m ≤ k) among the scroll-displayed subtitle display areas, displaying the subtitle section corresponding to the m-th subtitle display area in the first subtitle display area, the subtitle section of the (m+1)-th subtitle display area in the second subtitle display area, the subtitle section of the (m+2)-th subtitle display area in the third subtitle display area, and so on.

3. The method according to any one of the preceding claims, wherein when step (b) is completed, steps (c) and (d) proceed sequentially.

4. The method according to any one of claims 1 to 4, wherein once step (b) is completed, the method waits in a paused state without outputting audio, proceeds to step (c) at least once upon a user command, and after proceeding to step (d), waits again in a paused state.

5. A method wherein the subtitle section shown in the first subtitle display area is displayed so as to be visually differentiated from the subtitle sections displayed in the remaining second, third, ..., k-th subtitle display areas, the differentiation between the first subtitle display area and the second, third, ..., k-th subtitle display areas using any one or a combination of color, gray scale, font shape, and font size.

6. A method for outputting audio and subtitles to a display unit and an audio output unit of an electronic device terminal, the method comprising: (a) extracting and storing caption audio information from the content to be output; (b) dividing a subtitle display region into a first subtitle display area, a second subtitle display area, ..., and a k-th subtitle display area, displaying in the first subtitle display area the subtitle section whose audio is output at the present time point t_n, and displaying the subtitle sections to be output at the following time points (t_(n+1), t_(n+2), ...) one-to-one, in sequence, in the second subtitle display area, the third subtitle display area, ..., and the k-th subtitle display area; (c) outputting the audio synchronized to the subtitle sections corresponding to the first subtitle display area through the r-th subtitle display area (1 ≤ r ≤ k); and (d) moving the subtitle section of the (r+1)-th subtitle display area to the first subtitle display area, the subtitle section of the (r+2)-th subtitle display area to the second subtitle display area, ..., and the subtitle section of the (k+r)-th subtitle display area to the k-th subtitle display area.

7. The method of claim 6, further comprising: (e) scroll-displaying, in response to a user scroll command, the subtitle sections displayed in the second subtitle display area, the third subtitle display area, ..., and the k-th subtitle display area by moving the subtitle section of an m-th subtitle display area (2 ≤ m ≤ k) to the (m+1)-th subtitle display area or the (m-1)-th subtitle display area; and (f) when the user selects an arbitrary m-th subtitle display area (2 ≤ m ≤ k) among the scroll-displayed subtitle display areas, displaying the subtitle section corresponding to the m-th subtitle display area in the first subtitle display area, the subtitle section of the (m+1)-th subtitle display area in the second subtitle display area, the subtitle section of the (m+2)-th subtitle display area in the third subtitle display area, and so on.

8. The method according to any one of claims 6 to 7, wherein when step (b) is completed, steps (c) and (d) proceed sequentially.

9. The method according to any one of claims 6 to 7, wherein once step (b) is completed, the method waits in a paused state without outputting audio, proceeds to step (c) at least once upon a user command, and after proceeding to step (d), waits again in a paused state.

10. The method according to claim 6 or 7, wherein in step (c), the i-th subtitle display area (1 ≤ i ≤ r) corresponding to the subtitle section whose audio is being output is displayed so as to be visually differentiated from the remaining subtitle display areas among the first through r-th subtitle display areas (1 ≤ r ≤ k, r ≠ i), the differentiation between the i-th subtitle display area (1 ≤ i ≤ r) and the remaining first through r-th subtitle display areas (1 ≤ r ≤ k, r ≠ i) using any one or a combination of color, gray scale, font shape, and font size.

11. The method of claim 1, wherein the subtitle section is formed by dividing one sentence into predetermined semantic units, and the caption audio information includes caption information corresponding to the audio information.

12. The method of claim 1, wherein the output voice is any one of the original sound data and the translated sound data, and the output subtitle is any one of the original sound subtitle data and the translated sound subtitle data, wherein the original sound data, the translated sound data, the original sound subtitle data, and the translated sound subtitle data are all synchronized to the same time values or frames, and the output is selectively determined by the user's command.

13. The method of any one of claims 1, 2, 6, and 7, further comprising: (g) setting, by a user command, a p-th subtitle display area as the repeat-start subtitle section and a q-th subtitle display area (q ≥ p) as the repeat-end subtitle section; and (h) moving the subtitle section of the p-th subtitle display area to the first subtitle display area, the subtitle section of the (p+1)-th subtitle display area to the second subtitle display area, the subtitle section of the (p+2)-th subtitle display area to the third subtitle display area, and so on in sequence, and displaying the subtitles repeatedly.

14. A method according to any one of claims 1, 2, 6, and 7, characterized in that an application program for executing the method is provided by a server and downloaded via the Internet.

15. A server system for providing, over the Internet, an application program that executes the method according to any one of claims 1, 2, 6, and 7.

16. An electronic device terminal for executing an application program that executes the method according to any one of claims 1, 2, 6, and 7.
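To make the display-area bookkeeping of claims 1 and 2 easier to follow, the following is a minimal, non-authoritative sketch of the shifting behavior; the class name SubtitleWindow and its methods are assumptions introduced for illustration, not part of the claims.

```python
class SubtitleWindow:
    """k subtitle display areas over a list of subtitle sections.
    Area 1 holds the section whose audio is being output; the remaining
    areas hold the upcoming sections (claim 1, step (b))."""

    def __init__(self, sections, k):
        self.sections = sections  # all subtitle sections in playback order
        self.k = k                # number of subtitle display areas
        self.first = 0            # index of the section shown in area 1
        self.offset = 0           # user scroll applied to areas 2..k only

    def visible(self):
        """Return the sections currently shown in areas 1..k."""
        playing = [self.sections[self.first]]
        start = self.first + 1 + self.offset
        return playing + self.sections[start:start + self.k - 1]

    def advance(self):
        """Claim 1, step (d): when the audio for area 1 finishes, every
        section moves up one area and a new section enters area k."""
        if self.first + 1 < len(self.sections):
            self.first += 1
            self.offset = 0

    def scroll(self, delta):
        """Claim 2, step (e): scroll the sections shown in areas 2..k
        without changing which section is being played in area 1."""
        self.offset = max(0, self.offset + delta)

    def select(self, m):
        """Claim 2, step (f): selecting the m-th displayed area
        (2 <= m <= k) promotes that section to area 1."""
        if 2 <= m <= self.k:
            self.first = min(self.first + self.offset + m - 1,
                             len(self.sections) - 1)
            self.offset = 0
```

In this sketch, advance() corresponds to ordinary playback progression, while scroll() and select() correspond to the user interactions recited in claim 2.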
KR20090119555A 2009-12-04 2009-12-04 System and method for operating language training electronic device and real-time translation training apparatus operated thereof KR101158319B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR20090119555A KR101158319B1 (en) 2009-12-04 2009-12-04 System and method for operating language training electronic device and real-time translation training apparatus operated thereof
PCT/KR2010/001473 WO2011068284A1 (en) 2009-12-04 2010-03-09 Language learning electronic device driving method, system, and simultaneous interpretation system applying same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR20090119555A KR101158319B1 (en) 2009-12-04 2009-12-04 System and method for operating language training electronic device and real-time translation training apparatus operated thereof

Publications (2)

Publication Number Publication Date
KR20110062738A true KR20110062738A (en) 2011-06-10
KR101158319B1 KR101158319B1 (en) 2012-06-22

Family

ID=44115113

Family Applications (1)

Application Number Title Priority Date Filing Date
KR20090119555A KR101158319B1 (en) 2009-12-04 2009-12-04 System and method for operating language training electronic device and real-time translation training apparatus operated thereof

Country Status (2)

Country Link
KR (1) KR101158319B1 (en)
WO (1) WO2011068284A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012006024A2 (en) * 2010-06-28 2012-01-12 Randall Lee Threewits Interactive environment for performing arts scripts
US9122656B2 (en) 2010-06-28 2015-09-01 Randall Lee THREEWITS Interactive blocking for performing arts scripts
US9870134B2 (en) 2010-06-28 2018-01-16 Randall Lee THREEWITS Interactive blocking and management for performing arts productions
KR20180056082A (en) 2016-11-18 2018-05-28 한국과학기술원 Metal enhanced fluorescence composite nano structure and method for manufacturing the same, fluorescence material detect method thereof
KR20190008977A (en) * 2019-01-18 2019-01-25 (주)뤼이드 Method for displaying study content and application program thereof
US10642463B2 (en) 2010-06-28 2020-05-05 Randall Lee THREEWITS Interactive management system for performing arts productions

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103902377B (en) * 2012-12-24 2017-11-03 联想(北京)有限公司 Terminal device and its running status synchronous method
CN104093085B (en) * 2014-04-22 2016-08-24 腾讯科技(深圳)有限公司 Method for information display and device
CN109272923B (en) * 2018-09-17 2021-02-02 深圳市创维群欣安防科技股份有限公司 Subtitle rolling display method and system based on multi-screen equipment and storage medium
CN110428674A (en) * 2019-08-15 2019-11-08 湖北纽云教育科技发展有限公司 A kind of application method of listening study device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100297206B1 (en) * 1999-01-08 2001-09-26 노영훈 Caption MP3 data format and a player for reproducing the same
KR20000012538A (en) * 1999-05-12 2000-03-06 김민선 Method and storing media for controlling caption function for studying foreign language subscript included in moving picture
KR20020005523A (en) * 2001-10-08 2002-01-17 노영훈 Audio Player having a Caption Display Function
JP2007316613A (en) * 2006-04-26 2007-12-06 Matsushita Electric Ind Co Ltd Caption display control apparatus
KR100974002B1 (en) * 2008-04-25 2010-08-05 설융석 System for studying nuance of foreign by playing movie

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012006024A2 (en) * 2010-06-28 2012-01-12 Randall Lee Threewits Interactive environment for performing arts scripts
WO2012006024A3 (en) * 2010-06-28 2012-05-18 Randall Lee Threewits Interactive environment for performing arts scripts
US8888494B2 (en) 2010-06-28 2014-11-18 Randall Lee THREEWITS Interactive environment for performing arts scripts
US9122656B2 (en) 2010-06-28 2015-09-01 Randall Lee THREEWITS Interactive blocking for performing arts scripts
US9870134B2 (en) 2010-06-28 2018-01-16 Randall Lee THREEWITS Interactive blocking and management for performing arts productions
US9904666B2 (en) 2010-06-28 2018-02-27 Randall Lee THREEWITS Interactive environment for performing arts scripts
US10642463B2 (en) 2010-06-28 2020-05-05 Randall Lee THREEWITS Interactive management system for performing arts productions
KR20180056082A (en) 2016-11-18 2018-05-28 한국과학기술원 Metal enhanced fluorescence composite nano structure and method for manufacturing the same, fluorescence material detect method thereof
KR20190008977A (en) * 2019-01-18 2019-01-25 (주)뤼이드 Method for displaying study content and application program thereof

Also Published As

Publication number Publication date
KR101158319B1 (en) 2012-06-22
WO2011068284A1 (en) 2011-06-09

Similar Documents

Publication Publication Date Title
KR101158319B1 (en) System and method for operating language training electronic device and real-time translation training apparatus operated thereof
US10210769B2 (en) Method and system for reading fluency training
US20200175890A1 (en) Device, method, and graphical user interface for a group reading environment
US10222946B2 (en) Video lesson builder system and method
US20140315163A1 (en) Device, method, and graphical user interface for a group reading environment
US11657725B2 (en) E-reader interface system with audio and highlighting synchronization for digital books
US20060194181A1 (en) Method and apparatus for electronic books with enhanced educational features
US20080005656A1 (en) Apparatus, method, and file format for text with synchronized audio
JP2022533310A (en) A system and method for simultaneously expressing content in a target language in two forms and improving listening comprehension of the target language
US20130332859A1 (en) Method and user interface for creating an animated communication
US20200273450A1 (en) System and A Method for Speech Analysis
JP2012133662A (en) Electronic comic viewer device, electronic comic browsing system, viewer program and recording medium recording viewer program
CN103942990A (en) Language learning device
CN109389873B (en) Computer system and computer-implemented training system
CN111711834A (en) Recorded broadcast interactive course generation method and device, storage medium and terminal
JP2003307997A (en) Language education system, voice data processor, voice data processing method, voice data processing program, and recording medium
US10366149B2 (en) Multimedia presentation authoring tools
US20140272823A1 (en) Systems and methods for teaching phonics using mouth positions steps
US20230419847A1 (en) System and method for dual mode presentation of content in a target language to improve listening fluency in the target language
US10460178B1 (en) Automated production of chapter file for video player
CN101493995A (en) Video interactive teaching system and method
KR20030049791A (en) Device and Method for studying foreign languages using sentence hearing and memorization and Storage media
KR101326275B1 (en) Text and voice synchronizing player
KR102645880B1 (en) Method and device for providing english self-directed learning contents
KR20130015918A (en) A device for learning language considering level of learner and text, and a method for providing learning language using the device

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
LAPS Lapse due to unpaid annual fee