KR20120129015A - Method for creating educational contents for foreign languages and terminal therefor - Google Patents

Method for creating educational contents for foreign languages and terminal therefor Download PDF

Info

Publication number
KR20120129015A
Authority
KR
South Korea
Prior art keywords
unit
audio
text
language content
image
Prior art date
Application number
KR1020110047013A
Other languages
Korean (ko)
Inventor
조성진
Original Assignee
조성진
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 조성진 filed Critical 조성진
Priority to KR1020110047013A priority Critical patent/KR20120129015A/en
Publication of KR20120129015A publication Critical patent/KR20120129015A/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/22Social work
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00Teaching not covered by other main groups of this subclass
    • G09B19/06Foreign languages
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/233Processing of audio elementary streams

Abstract

PURPOSE: A language content generating method and a terminal therefor are provided to let users easily generate content by synchronizing the audio data and text data included in language content with each other. CONSTITUTION: An input unit(20) receives user commands. A storage unit(40) stores text files and audio files, as well as the generated language content. When a text file and an audio file are selected, an image output unit(50) displays the text data included in the text file and the audio data included in the audio file. A control unit(10) generates the language content. [Reference numerals] (10) Control unit; (20) Input unit; (30) Communication unit; (40) Storage unit; (50) Image output unit; (60) Voice output unit; (70) Audio reading unit; (80) Image reading unit; (90) Interface unit

Description

Language content generation method and terminal for the same {METHOD FOR CREATING EDUCATIONAL CONTENTS FOR FOREIGN LANGUAGES AND TERMINAL THEREFOR}

The present invention relates to a method for generating language content and a terminal therefor, and more particularly, to a method and terminal with which anyone can easily generate language content from text and audio data.

Various types of language content are used to enhance the learning effect. However, because general language content is difficult for the general public to produce, suppliers and consumers are clearly divided: the amount of content is limited and does not sufficiently reflect consumer demand. There was also the problem that it is difficult to provide language content at low cost under the law of supply and demand.

In addition, since learning proceeds according to a format determined by the supplier, the consumer had to follow the learning format prescribed by the supplier, with the disadvantage that learning efficiency was low.

Accordingly, the present invention has been made to solve the above problems, and an object of the present invention is to provide a language content generation method and a terminal that easily synchronize text data and audio data so that language content can be generated easily.

Another object of the present invention is to provide a language content generation method capable of reproducing text / audio / image data and the like in various formats and a terminal therefor.

According to a feature of the present invention for achieving the above object, the present invention provides a method for a terminal to generate language content using text and audio data, comprising the steps of: (A) receiving a selection of a text file and an audio file from a user; (B) dividing the text data included in the selected text file into sentence units; (C) performing waveform analysis on the audio data included in the selected audio file to search for blank sections in which the amplitude stays below a reference width for more than a reference time, and partitioning the audio data into a plurality of unit sections based on the found blank sections; (D) sequentially matching, one-to-one, each sentence unit of text divided in step (B) with each audio unit section partitioned in step (C); and (E) generating one content item comprising the text data divided into sentence units and the audio data divided into unit sections, wherein the generated content includes the matching relationship between each sentence unit of text and each audio unit section.

The reference width may be set to 1/1000 or less of the maximum amplitude of the audio data included in the audio file.
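As a concrete illustration, the reference width described above can be derived directly from the audio samples. The following is a minimal sketch only; the function name, the `ratio` parameter, and the integer-sample representation are assumptions for illustration, not part of the disclosure:

```python
def reference_width(samples, ratio=0.001):
    """Amplitude threshold below which a sample counts as blank (silence).

    Per the description above, the threshold is taken as 1/1000 of the
    maximum amplitude found in the audio data; `ratio` is illustrative.
    """
    peak = max(abs(s) for s in samples)
    return peak * ratio
```

For example, 16-bit PCM samples peaking at 1000 would yield a threshold of 1.0.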

The method may further include (F) matching an image to each sentence unit of text divided in step (B) or to each audio unit section partitioned in step (C). In that case, in step (E), the content may further include the image corresponding to each sentence unit of text or each audio unit section.

In addition, step (F) may be performed by matching an image of a partial area of the enlarged or reduced image data included in one image file to each sentence unit of text or each audio unit section.

In this case, the language content generation method may further include step (G1): when a command to play language content is input, outputting one or more audio unit sections included in the selected language content and simultaneously outputting the sentence unit text corresponding to the reproduced audio unit section.

The method may alternatively further include step (G2): when a command to play language content is input, outputting one or more audio unit sections included in the selected language content and simultaneously outputting at least one of the sentence unit text corresponding to the reproduced audio unit section and the image matched to that audio unit section or to its corresponding sentence unit text.

The language content generating method may further include (H) displaying the audio data waveform of the audio file selected in step (A) and the text data of the text file selected in step (A), with the text data shown divided into the sentence units of step (B) and the audio data waveform shown divided into the unit sections of step (C); and (I) when a command to modify a sentence unit of the displayed text data or a unit section of the displayed audio data is received, modifying and redisplaying the text data and the audio data waveform accordingly.

The present invention also provides a terminal for generating language content using text and audio data, the terminal comprising: an input unit for receiving a user's commands; a storage unit for storing one or more text files and audio files, as well as the generated language content; an image output unit for displaying, when one text file and one audio file stored in the storage unit are selected through the input unit, the text data included in the text file and the audio data waveform included in the audio file; and a control unit which divides the text data displayed on the image output unit into sentence units, analyzes the waveform of the audio data displayed on the image output unit to search for blank sections in which the amplitude stays below a reference width for more than a reference time, divides the audio data into a plurality of unit sections based on those blank sections, and generates one item of language content by sequentially matching each sentence unit of text with each audio unit section one-to-one and storing the result.

The terminal further includes a voice output unit through which audio data is output, and when a language content reproduction command is input through the input unit, the control unit outputs one or more of the audio unit sections included in the language content through the voice output unit and displays the sentence unit text matching the output audio unit section on the image output unit.

When an image corresponding to at least one of the sentence units of text or the audio unit sections is selected through the input unit, the control unit matches the selected image to that sentence unit of text or audio unit section and stores the matching in the language content.

The terminal further includes a voice output unit through which audio data is output, and when a language content reproduction command is input through the input unit, the control unit outputs one or more of the audio unit sections included in the language content through the voice output unit and displays on the image output unit at least one of the sentence unit text and the image matched to the output audio unit section.

The terminal may further include an audio reader configured to record an external sound to generate an audio file, and an image reader configured to photograph an external image and generate an image file.

Furthermore, the terminal may further include a communication unit for uploading the generated language content to the server or downloading language content from the server.

The language content generation method and the terminal therefor according to the present invention have the following effects.

That is, since audio data and text data included in the language content can be easily synchronized, anyone can easily generate the content.

In addition, according to the method for generating a language content according to the present invention and a terminal therefor, there is an advantage in that various language contents can be generated, utilized, and shared.

Furthermore, according to the method of generating language content according to the present invention and the terminal therefor, one content item can be reproduced in various ways, so a user can select the learning format that suits them and thereby enhance learning efficiency.

1 is a conceptual diagram schematically showing the configuration of a language content generation system according to an embodiment of the present invention.
2 is a block diagram schematically illustrating a configuration of a language content generation terminal according to an embodiment of the present invention.
3 is a flowchart illustrating a method of generating a language content in accordance with an embodiment of the present invention.
4 is an exemplary diagram illustrating a process of matching text data and audio data in a language content generating method according to an exemplary embodiment of the present invention.
5 is an exemplary diagram illustrating a process of matching text data and image data in a language content generating method according to an exemplary embodiment of the present invention.
6 is an exemplary view of a content reproduction method generated by a language content generation method according to an embodiment of the present invention.

Hereinafter, a method and a terminal for generating a language content according to an embodiment of the present invention will be described in detail with reference to the accompanying drawings. Advantages and features of the present invention, and methods of achieving the same will become apparent with reference to the embodiments described below in detail in conjunction with the accompanying drawings.

The present invention is not limited to the embodiments disclosed herein but may be embodied in many different forms; the embodiments are provided to fully disclose the scope of the invention to those skilled in the art, and the invention is defined only by the scope of the claims.

Like reference numerals refer to like elements throughout the specification.

1 is a conceptual diagram schematically showing the configuration of a language content generation system according to an embodiment of the present invention, and FIG. 2 is a block diagram schematically showing the configuration of a language content generation terminal according to an embodiment of the present invention. As shown in FIG. 1, the language content generation system according to an exemplary embodiment of the present invention includes a content providing server 100. The content providing server 100 is an information processing device connected to a plurality of content generating terminals 300 through a network 200 to provide services in response to their requests: it searches for and provides the information requested by a content generating terminal 300, or receives data, processes it in a program installed on the content providing server 100, and transmits the result back to the content generating terminal 300. In particular, in the present invention, the content providing server 100 includes a database in which language content is stored; when language content is received from a content generating terminal 300, it is accumulated in the database, and when a request for language content is received from another content generating terminal 300, the stored language content is provided to that terminal. The network 200 may be the general internet.

Meanwhile, the content generation terminal 300 is a client that connects to the content providing server 100 by wire or wirelessly through the network 200 to receive the necessary services: it receives and plays the required language content from the content providing server 100, and also creates new language content and uploads it back to the content providing server 100. The content generation terminal 300 may be a smartphone, a tablet computer, a general personal computer, or the like. To enable the content generation terminal 300 not only to reproduce language content provided by the content providing server 100 but also to generate new language content, an application program is installed on the content generation terminal 300. The application program may be provided directly through the content providing server 100 or downloaded from an app store or the like.

Looking at the configuration of the content generation terminal 300 in more detail, as shown in FIG. 2, it first includes a control unit 10. The control unit 10 is a data processing device in charge of overall control of the content generation terminal 300, interpreting commands and controlling the processing, operation, and comparison of data. In particular, it lets the user select the text, audio, and image data necessary for generating language content and, according to the application program, generates language content in which text, audio, and images can be reproduced together in an organic way.

In addition, the content generation terminal 300 is provided with an input unit 20. The input unit 20 is a means for receiving commands or information from the user of the content generating terminal 300 and includes a keypad, a touch panel, and the like. The user inputs text data necessary for generating language content through the input unit 20, or selects text, audio, images, and so on.

In addition, the content generation terminal 300 is provided with a communication unit 30. The communication unit 30 transmits and receives data by connecting to the network 200 in a wired or wireless manner, and the control unit 10 exchanges data with the content providing server 100 through it. In particular, through the communication unit 30 the terminal obtains the text, audio, and image data necessary for generating language content, or language content already generated by other users.

The data provided through the communication unit 30 is stored in the storage unit 40. The storage unit 40 is a data storage device that stores the text, audio, and image files for generating language content according to commands of the control unit 10, and also stores newly generated language content alongside them. The application program may also be stored in the storage unit 40. The data stored in the storage unit 40 is read and managed by the control unit 10 as necessary.

Meanwhile, the image output unit 50 of the content generation terminal 300 is an image output means such as a liquid crystal display, and shows the data processing result of the control unit 10. In particular, when the language content is reproduced, the text data or the image data included in the language content is output in accordance with the output time of the audio data. In addition, the audio output unit 60 of the content generating terminal 300 is an audio output means including a speaker, and outputs audio data in synchronization with text or image data displayed on the image output unit 50.

In addition, the content generation terminal 300 may further include an audio reader 70 and an image reader 80. The audio reading unit 70 includes a device that converts sound waves into an electric signal, such as a microphone, and the image reading unit 80 is an optical device such as a digital camera that takes pictures and saves them to the storage unit 40 as electrical signals. The audio reader 70 and the image reader 80 may be used as means for obtaining audio data and image data for generating language content. In addition, the control unit 10 may extract the text portion from an image photographed by the image reader 80 and use the extracted text when generating language content.

In addition, the content generation terminal 300 may further include an interface unit 90. The interface unit 90 is a means for connecting the content generating terminal 300 with an external device, and may be a wired or short-range wireless data input/output device, for example a universal serial bus (USB) module or a Bluetooth module. Through the interface unit 90, the content generation terminal 300 may obtain the text, audio, and image data required for language content generation from an external device.

Hereinafter, a language content generation method using the language content generation system and the language content generation terminal described above will be described in detail with reference to the accompanying drawings. FIG. 3 is a flowchart illustrating a method for generating language content according to an exemplary embodiment of the present invention, FIG. 4 is an exemplary diagram illustrating the process of matching text data and audio data in that method, and FIG. 5 is an exemplary diagram illustrating the process of matching text data and image data in that method.

First, as shown in FIG. 3, the method for generating language content according to an embodiment of the present invention begins with selecting a text file and an audio file to be used for generating the language content (S100). The text file is composed of one or more sentences, and the audio file includes audio data corresponding to the selected text file. For example, if the audio file contains voice data of English sentences from the fairy tale "The Little Mermaid," the text file may include the text of those English sentences, their Korean translation, or both. The text file and the audio file are selected by the user through the input unit 20 from among the files stored in the storage unit 40; such files may be downloaded from the content providing server 100, generated through the audio reading unit 70, the image reading unit 80, or the input unit 20, or received through the interface unit 90.

The text data included in the selected text file is then structured in units of sentences (S110). Here, the text file may be divided into sentence units based on the period "." or based on a line break (new line) character in the text file. In a text file with alternating Korean and foreign sentences, a Korean sentence and the corresponding foreign sentence may be grouped as one unit; in this case, two sentences form one unit when dividing the text file. Each sentence unit divided in this way is structured by assigning it a distinct identification code.
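The sentence-unit structuring of step S110 can be sketched as follows. This is an assumed implementation (splitting on a period followed by whitespace, or on line breaks, and assigning letter identification codes as in the later FIG. 4 example):

```python
import re

def split_sentences(text):
    """Divide text data into sentence units on periods or line breaks,
    assigning each unit a sequential identification code ("A", "B", ...)."""
    parts = [s.strip() for s in re.split(r'(?<=\.)\s+|\n+', text) if s.strip()]
    return {chr(ord('A') + i): s for i, s in enumerate(parts)}
```

A Korean/foreign alternating file would need a variant that groups pairs of sentences into one unit, per the paragraph above.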

In addition, the controller 10 performs a step of structuring the audio data included in the selected audio file (S120). The audio file is represented as the amplitude of the audio data over time, and the waveform is searched for sections in which the absolute value of the amplitude stays at or below a preset reference width for at least a preset reference time; the audio data is then structured into audio units divided before and after each such section. The reference width could be set to 0, but the audio file may contain noise, so it may instead be set to a small value close to 0 but with a nonzero absolute value. For example, 1/1000 of the maximum signal level of the audio data included in the audio file may be set as the reference width. The reference time, the basis for confirming that the amplitude stays below the reference width for a sufficient duration, may be set to 0.1 seconds, for example.

Checking for blanks in the audio data using the reference width and the reference time, and treating the audio before and after each identified blank as separate units, is based on a characteristic of general language content: a pause is left between sentences when they are read aloud, so the audio data can be partitioned and structured in sentence units using the spaces between sentences. The audio file is then structured by assigning a unique identification code to each partitioned audio data area.
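The blank-section search of step S120 can be sketched as a single pass over the sampled waveform. All names and the list-of-samples representation are assumptions for illustration:

```python
def find_blank_sections(samples, sample_rate, ref_width, ref_time=0.1):
    """Return (start, end) sample indices of every stretch in which the
    absolute amplitude stays at or below ref_width for at least ref_time
    seconds. The audio is then partitioned before and after each blank."""
    min_len = int(ref_time * sample_rate)
    blanks, start = [], None
    for i, s in enumerate(samples):
        if abs(s) <= ref_width:
            if start is None:
                start = i  # a candidate blank begins here
        else:
            if start is not None and i - start >= min_len:
                blanks.append((start, i))
            start = None
    # close a blank that runs to the end of the file
    if start is not None and len(samples) - start >= min_len:
        blanks.append((start, len(samples)))
    return blanks
```

Each gap between consecutive blanks then becomes one audio unit section, matching the dashed-dotted partition lines of FIG. 4.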

For example, when a text file and an audio file corresponding thereto are selected to generate new language content, a waveform of an audio file may appear at the top and a text file at the bottom as shown in FIG. 4.

In this case, the control unit 10 of the content generation terminal 300 analyzes the waveform of the audio file displayed at the top, searches for blanks using the reference width and reference time described above, and divides the sections before and after each found blank. As shown by the dashed-dotted lines in the figure, the regions before and after each blank are partitioned, and each partitioned region other than the blanks is structured as a separate unit. That is, the portion of the selected audio file (Mermaid.wma) from 0 seconds to about 14 seconds becomes one unit, the portion between about 15 and 26 seconds another unit, and the portion between about 26 and 35 seconds yet another unit. The blank portion itself may be included in the audio unit before it, in the audio unit after it, or in neither.

However, to reduce errors, the user may change or modify the partition positions of the unit sections partitioned by the content generation terminal 300: the position of a dashed-dotted line on the audio waveform shown in FIG. 4 can be moved, or a new dashed-dotted line formed, so that the audio data is partitioned exactly at sentence boundaries. The user can do this while listening to the contents of the file, moving a cursor along the waveform to the right at various speeds.

The text file selected in FIG. 4 (Little_Mermaid.txt) is divided into units at each period "." and displayed in sentence units.

Each unit of text data divided into sentence units and audio data divided into a plurality of unit sections are sequentially matched with each other and stored as a structured text-audio file (S130).

That is, in the example shown in FIG. 4, the audio data divided into three units and the text data divided into three sentence units are sequentially matched with each other. When the identification code "A" is assigned to the first audio unit section, from 0 seconds to about 14 seconds, the same identification code "A" is assigned to the first sentence, "You can never be a mermaid again," so that the two are matched. Alternatively, each sentence of the text data divided into sentence units may be tagged into the corresponding area of the audio data.

In this way, text files and audio files structured to match each other using the same identification codes are stored together, or the sentence-unit text is tagged into each unit section of the audio file and stored with it.
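The sequential one-to-one matching of step S130 then amounts to zipping the two structured sequences under shared identification codes. The record layout below is an assumption for illustration:

```python
def match_units(sentence_units, audio_sections):
    """Match each sentence unit to the audio unit section in the same
    ordinal position, storing both under one identification code."""
    if len(sentence_units) != len(audio_sections):
        raise ValueError("sentence and audio unit counts must agree")
    return {
        chr(ord('A') + i): {"text": s, "audio": a}
        for i, (s, a) in enumerate(zip(sentence_units, audio_sections))
    }
```

In the FIG. 4 example, the three sentences would be matched to the sections of roughly (0, 14), (15, 26), and (26, 35) seconds.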

In an embodiment of the present invention, an image file may additionally be matched to the text or audio data divided into a plurality of units, and the matched image data may be structured and stored together as a structured text-audio-image file (S140).

For example, in the embodiment shown in FIG. 4, when an image is to be associated with each sentence unit of text, the desired image may be selected via the camera icon shown for each sentence unit.

Here, the image corresponding to each sentence unit text may correspond to a different image file for each sentence unit, but as shown in FIG. 5, different portions of one image may correspond to each sentence unit text.

That is, as shown in FIG. 5, when the entire image I of the image file selected by the user contains a plurality of different pictures, as in a cartoon page, or when the image is so large that displaying it all at once would shrink its contents too much, a partial region of the one image may be matched to each sentence unit of text or each audio unit section.

This may be done by selecting a partial region S in the entire image I, and in particular by determining an enlargement/reduction ratio for the selected partial region S. That is, when the control unit 10 outputs the image selected by the user to the image output unit 50 of the content generation terminal 300, the user may display the entire image I on the image output unit 50, or may enlarge the entire image I so that only the partial region S is displayed. When the partial region S of the desired size and position is displayed, a screenshot of that region is acquired, and the control unit 10 stores the image of the acquired partial region S in the storage unit 40, tagged with the identification code of the sentence unit text or audio unit section to which the image is matched, so that the correspondence between them is recorded. The image need not be matched to a sentence unit of the text file; it may instead be matched to the audio data.
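The bookkeeping for a matched partial region can be sketched as follows; the record layout, coordinate convention, and names are assumptions for illustration:

```python
def match_image_region(unit_code, region, scale=1.0):
    """Record that a partial region S of the full image I corresponds to
    the sentence/audio unit with the given identification code.

    region is (left, top, right, bottom) in pixels of the full image;
    scale is the enlargement/reduction ratio chosen when framing S.
    """
    left, top, right, bottom = region
    if right <= left or bottom <= top:
        raise ValueError("region must have positive width and height")
    return {"unit": unit_code, "region": region, "scale": scale}
```

Storing the region and scale rather than a cropped copy would also allow the animated pan/zoom playback described later for the image viewing method.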

Through the above-described process, the data contained in the text file, the audio file, and optionally the image file is divided into a plurality of units, and the divided units are matched with each other to generate one item of language content.

The newly created language content is reproduced in such a way that mutually matched data are synchronized and output together. For example, while the second sentence in FIG. 4, "The Prince must marry you and give you a soul," is displayed on the image output unit 50, the voice output unit 60 plays the unit section corresponding to about 15 to 26 seconds. Corresponding text and audio are thus output simultaneously so that the user can learn the sentence. In addition, when the image of the partial region S shown in FIG. 5 has been stored as matching the third sentence unit of FIG. 4, the image output unit 50 displays the sentence "Without his love, you die and you are nothing." together with the image of the selected partial region S.

In addition, the user of the content generation terminal 300 may optionally upload the newly created language content to the content providing server 100 through the communication unit 30 so that other users can download it for free or for a fee, which can also generate profit.

The language content generated by the above-described embodiment of the present invention includes text and audio data synchronized with each other in units of sentences and additionally includes an image. Using this, language content reproduction may be performed in a plurality of different formats. Hereinafter, a method of playing back language content generated by an embodiment of the present invention will be described with reference to the accompanying drawings. 6 is an exemplary view of a content reproduction method generated by a language content generation method according to an embodiment of the present invention.

As illustrated in FIG. 6, when a user selects a language content in the storage unit 40 that stores language content including structured text-audio-image data, the selected content may be reproduced in various ways.

First, identification code A represents a novel-book viewing method: text data is displayed continuously, a highlight or underline mark is shown on the sentence corresponding to the currently reproduced unit section of audio data, and the unit sections of audio are played sequentially without interruption. At least part of the image corresponding to each sentence unit of text may be displayed alongside the text.

Identification code B represents a sentence viewing method in which text data is displayed one sentence unit at a time and the corresponding unit section of audio data is reproduced. The unit section of audio corresponding to the displayed sentence can be listened to repeatedly, and the user can choose to move to the previous or next sentence. If there is an image corresponding to the displayed sentence, it may be displayed together with the text.

The identification code C represents a case in which the language content is reproduced by the image viewing method, in which the images corresponding to the unit sections of the reproduced audio data are sequentially displayed on the screen. As shown in FIG. 5, when partial regions S of the entire image I correspond to the unit sections of the audio data, each time the unit section of the reproduced audio data changes, the view may move from the partial region corresponding to the previous unit section to the partial region corresponding to the new unit section, and the panning and enlarging/reducing may be rendered as an animation effect over the entire image I.
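The pan-and-zoom animation between partial regions can be realized by interpolating the view rectangle from the previous unit section's region to the new one. A minimal sketch, assuming rectangles are given as (x, y, width, height) and alpha is the normalized animation progress from 0.0 to 1.0 (the function name is an assumption):

```python
def interpolate_region(r0, r1, alpha):
    """Linearly interpolate between two view rectangles (x, y, w, h).

    alpha runs from 0.0 (previous unit section's region) to
    1.0 (new unit section's region); rendering this rectangle on
    successive frames produces the pan/zoom animation effect.
    """
    return tuple(a + (b - a) * alpha for a, b in zip(r0, r1))
```

Easing curves could replace the linear blend for a smoother effect, but linear interpolation already yields the move-and-scale behavior described above.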

The identification code D represents a case in which the language content is reproduced by the cartoon viewing method, in which the images corresponding to the unit section of the currently reproduced audio data and the unit sections immediately before and after it are collected and displayed together in a cartoon (comic-strip) layout while the audio is reproduced.

Because the same language content can be reproduced in these various ways, the user can select a preferred format, according to convenience or interest, in which to learn.

It will be understood by those skilled in the art that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. It is therefore to be understood that the above-described embodiments are illustrative in all aspects and not restrictive. The scope of the present invention is defined by the appended claims rather than by the foregoing detailed description, and all changes or modifications derived from the meaning and scope of the claims and their equivalents should be interpreted as falling within the scope of the present invention.

100: content providing server 200: network
300: content generation terminal 10: control unit
20: input unit 30: communication unit
40: storage unit 50: image output unit
60: audio output unit 70: audio reading unit
80: image reading unit 90: interface unit

Claims (13)

A method for generating language content using text and audio data in a terminal, the method comprising:
(A) receiving a selection of a text file and an audio file from a user;
(B) dividing the text data included in the selected text file into sentence units;
(C) performing waveform analysis on the audio data included in the selected audio file to search for blank sections in which the amplitude remains below a reference width for more than a reference time, and partitioning the audio data into a plurality of unit sections based on the found blank sections;
(D) sequentially matching, one-to-one, each sentence unit text divided in step (B) with each audio unit section partitioned in step (C); and
(E) generating one piece of content including the text data divided into sentence units and the audio data partitioned into unit sections, wherein the generated content includes the matching relationship between each sentence unit text and each audio unit section.
The method of claim 1,
wherein the reference width is less than 1/1000 of the maximum amplitude of the audio data included in the audio file.
The method of claim 1, further comprising:
(F) matching an image to each sentence unit text divided in step (B) or to each audio unit section partitioned in step (C),
wherein in step (E),
the content further includes the image corresponding to each sentence unit text or to each audio unit section.
The method of claim 3,
wherein step (F) comprises
matching an enlarged or reduced image of a partial region of the image data included in one image file to each sentence unit text or each audio unit section.
The method of claim 1, further comprising:
(G1) when a language content reproduction command is input, outputting one or more audio unit sections included in the selected language content, and outputting the sentence unit text corresponding to each reproduced audio unit section simultaneously with that audio unit section.
The method of claim 3, further comprising:
(G2) when a language content reproduction command is input, outputting one or more audio unit sections included in the selected language content, and outputting, simultaneously with each reproduced audio unit section, at least one of the sentence unit text corresponding to that audio unit section and the image corresponding to that audio unit section or to that sentence unit text.
The method of claim 1, further comprising:
(H) displaying the audio data waveform of the audio file selected in step (A) together with the text data of the text file selected in step (A), the text data being displayed divided into the sentence units of step (B) and the waveform being displayed divided into the unit sections of step (C); and
(I) upon receiving a command for modifying a sentence unit of the text data or a unit section of the audio data displayed in step (H), modifying and redisplaying the text data and the audio data waveform accordingly.
A terminal for generating language content using text and audio data, the terminal comprising:
an input unit to receive user commands;
a storage unit for storing one or more text files and audio files, and for storing generated language content;
an image output unit configured to display, when one text file and one audio file stored in the storage unit are selected through the input unit, the text data included in the text file and the waveform of the audio data included in the audio file; and
a control unit configured to divide the text data displayed on the image output unit into sentence units, analyze the waveform of the audio data displayed on the image output unit to search for blank sections in which the amplitude remains below a reference width for more than a reference time, partition the audio data into a plurality of unit sections based on the found blank sections, generate one piece of language content by sequentially matching each sentence unit text one-to-one with each audio unit section, and store the language content in the storage unit.
The terminal of claim 8,
further comprising a voice output unit for outputting audio data,
wherein, when a language content reproduction command is input through the input unit, the control unit outputs one or more audio unit sections included in the language content through the voice output unit and displays the sentence unit text matched with each output audio unit section on the image output unit.
The terminal of claim 8,
wherein, when an image corresponding to at least one of each sentence unit text or each audio unit section is selected through the input unit, the control unit matches the selected image with the corresponding sentence unit text or audio unit section and stores it in the language content.
The terminal of claim 10,
further comprising a voice output unit for outputting audio data,
wherein, when a language content reproduction command is input through the input unit, the control unit outputs one or more audio unit sections included in the language content through the voice output unit and displays at least one of the sentence unit text and the image matched with each output audio unit section on the image output unit.
The terminal of claim 8,
further comprising an audio reading unit configured to record external sound to generate an audio file, and an image reading unit configured to photograph an external image to generate an image file.
The terminal of claim 8,
further comprising a communication unit for uploading the language content to a server or downloading the language content from the server.
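The waveform analysis recited in the claims, locating blank sections in which the amplitude stays below a reference width for longer than a reference time and partitioning the audio data at those sections, can be sketched as follows. This is an illustrative reading of the claim language, not the applicant's code; the function name, parameters, and sample representation are assumptions.

```python
def partition_by_silence(samples, sample_rate, ref_amplitude, ref_time):
    """Split audio samples into unit sections at blank (silent) sections.

    A blank section is a run of samples whose absolute amplitude stays
    below ref_amplitude for at least ref_time seconds; each unit section
    spans from the end of one blank section to the start of the next.
    Returns a list of (start_index, end_index) sample-index pairs.
    """
    min_run = int(round(ref_time * sample_rate))
    sections = []
    in_silence = False
    silence_start = 0
    unit_start = 0
    for i, s in enumerate(samples):
        if abs(s) < ref_amplitude:
            if not in_silence:
                in_silence = True
                silence_start = i
        else:
            if in_silence and i - silence_start >= min_run:
                # Silence was long enough: close the current unit section
                # and start the next one at the first loud sample.
                if silence_start > unit_start:
                    sections.append((unit_start, silence_start))
                unit_start = i
            in_silence = False
    if unit_start < len(samples):
        sections.append((unit_start, len(samples)))
    return sections
```

Claim 2's refinement would then set ref_amplitude to less than 1/1000 of the maximum absolute sample value in the file; short pauses inside a sentence fall below min_run and do not split the section.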
KR1020110047013A 2011-05-18 2011-05-18 Method for creating educational contents for foreign languages and terminal therefor KR20120129015A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020110047013A KR20120129015A (en) 2011-05-18 2011-05-18 Method for creating educational contents for foreign languages and terminal therefor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020110047013A KR20120129015A (en) 2011-05-18 2011-05-18 Method for creating educational contents for foreign languages and terminal therefor

Related Child Applications (1)

Application Number Title Priority Date Filing Date
KR1020130073289A Division KR20130076852A (en) 2013-06-25 2013-06-25 Method for creating educational contents for foreign languages and terminal therefor

Publications (1)

Publication Number Publication Date
KR20120129015A true KR20120129015A (en) 2012-11-28

Family

ID=47513577

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020110047013A KR20120129015A (en) 2011-05-18 2011-05-18 Method for creating educational contents for foreign languages and terminal therefor

Country Status (1)

Country Link
KR (1) KR20120129015A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014148665A2 (en) * 2013-03-21 2014-09-25 디노플러스(주) Apparatus and method for editing multimedia content
WO2014148665A3 (en) * 2013-03-21 2015-05-07 디노플러스(주) Apparatus and method for editing multimedia content
KR101523258B1 (en) * 2013-05-08 2015-05-28 문지원 System Providing Mobile Leading Contents
KR101958981B1 (en) * 2017-09-19 2019-03-15 문수산 Method of learning foreign languages and apparatus performing the same
WO2019059507A1 (en) * 2017-09-19 2019-03-28 문수산 Foreign language learning method and device for implementing same
KR20190093777A (en) * 2018-01-15 2019-08-12 주식회사 젠리코 System of providing educational contents for foreign languages
KR102082851B1 (en) * 2019-04-05 2020-04-23 송승헌 Server and method for providing daily translation course
KR102082845B1 (en) * 2019-04-05 2020-04-23 송승헌 Server and method for generating daily audio book of chinese voice

Legal Events

Date Code Title Description
A201 Request for examination
E601 Decision to refuse application
A107 Divisional application of patent