KR20120129015A - Method for creating educational contents for foreign languages and terminal therefor - Google Patents
Method for creating educational contents for foreign languages and terminal therefor
- Publication number
- KR20120129015A (Application KR1020110047013A)
- Authority
- KR
- South Korea
- Prior art keywords
- unit
- audio
- text
- language content
- image
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/22—Social work
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B19/00—Teaching not covered by other main groups of this subclass
- G09B19/06—Foreign languages
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/233—Processing of audio elementary streams
Abstract
Description
The present invention relates to a method for generating language content and a terminal therefor, and more particularly, to a method and terminal with which anyone can easily generate language content using text and audio data.
In order to enhance the learning effect, various types of language content are used. However, because language content is difficult for the general public to produce, suppliers and consumers are sharply divided: the amount of available content is limited and does not sufficiently reflect consumer demand. There was also the problem that, under the law of supply and demand, it is difficult to provide language content at low cost.
In addition, even when using language content, learning proceeds in the format determined by the supplier, so the consumer had to follow the supplier's prescribed learning format, with the disadvantage that learning efficiency was low.
Accordingly, the present invention has been made to solve the above problems. An object of the present invention is to provide a language content generation method, and a terminal therefor, that synchronize text data and audio data so that language content can be generated easily.
Another object of the present invention is to provide a language content generation method, and a terminal therefor, capable of reproducing text, audio, and image data in various formats.
According to a feature of the present invention for achieving the above objects, the present invention provides a method, performed by a terminal, for generating language content using text and audio data, comprising the steps of: (A) receiving a text file and an audio file from a user; (B) dividing the text data included in the selected text file into sentence units; (C) performing waveform analysis on the audio data included in the selected audio file to search for blank sections in which the amplitude remains below a reference width for longer than a reference time, and partitioning the audio data into a plurality of unit sections based on the found blank sections; (D) sequentially matching, one-to-one, each sentence-unit text separated in step (B) with each audio unit section partitioned in step (C); and (E) generating one content item including the text data divided into sentence units and the audio data divided into unit sections, the generated content including the matching relationship between each sentence-unit text and each audio unit section.
The reference width may be set to 1/1000 or less of the maximum amplitude of the audio data included in the audio file.
The method may further include (F) matching an image to each sentence-unit text divided in step (B) or to each audio unit section partitioned in step (C); in that case, in step (E), the content may further include the image corresponding to each sentence-unit text or each audio unit section.
In addition, step (F) may be performed by matching an enlarged or reduced partial region of the image data included in one image file to each sentence-unit text or each audio unit section.
In this case, the language content generation method may further include (G1) outputting, when a language content reproduction command is input, one or more audio unit sections included in the selected language content while simultaneously outputting the sentence-unit text corresponding to the reproduced audio unit section.
The language content generation method may also further include (G2) outputting, when a language content reproduction command is input, one or more audio unit sections included in the selected language content while simultaneously outputting at least one of the sentence-unit text corresponding to the reproduced audio unit section, or the image matched to that audio unit section or to its corresponding sentence-unit text.
The language content generation method may further include: (H) displaying together the audio data waveform of the audio file selected in step (A) and the text data of the text file selected in step (A), the text data being displayed divided into the sentence units of step (B) and the waveform divided into the unit sections of step (C); and (I) modifying and redisplaying the text data and the audio data waveform upon receiving a command to modify a sentence unit of the text data or a unit section of the audio data displayed in step (H).
The present invention also provides a terminal for generating language content using text and audio data, the terminal comprising: an input unit for receiving a user's commands; a storage unit for storing one or more text files and audio files and for storing generated language content; an image output unit for displaying, when one text file and one audio file stored in the storage unit are selected through the input unit, the text data included in the text file and the waveform of the audio data included in the audio file; and a control unit configured to divide the text data displayed on the image output unit into sentence units, to analyze the waveform of the displayed audio data to search for blank sections in which the amplitude remains below a reference width for longer than a reference time, to partition the audio data into a plurality of unit sections based on the found blank sections, to generate one language content item by sequentially matching each sentence-unit text and each audio unit section one-to-one, and to store the language content in the storage unit.
The terminal may further include a voice output unit through which audio data is output; when a language content reproduction command is input through the input unit, the control unit outputs one or more of the audio unit sections included in the language content through the voice output unit and displays the sentence-unit text matched to the output audio unit section on the image output unit.
When an image corresponding to at least one of a sentence-unit text or an audio unit section is selected through the input unit, the control unit may match the selected image to that sentence-unit text or audio unit section and store the match as part of the language content.
The terminal may further include a voice output unit through which audio data is output; when a language content reproduction command is input through the input unit, the control unit outputs one or more of the audio unit sections included in the language content through the voice output unit and displays at least one of the sentence-unit text or the image matched to the output audio unit section on the image output unit.
The terminal may further include an audio reading unit configured to record external sound to generate an audio file, and an image reading unit configured to photograph an external image to generate an image file.
Furthermore, the terminal may further include a communication unit for uploading generated language content to a server or downloading language content from the server.
The language content generation method and terminal according to the present invention have the following effects.
First, since the audio data and text data included in language content can be easily synchronized, anyone can easily generate such content.
In addition, according to the language content generation method and terminal of the present invention, various kinds of language content can be generated, utilized, and shared.
Furthermore, since one content item can be reproduced in various ways, a learner can select the learning format that suits them, enhancing learning efficiency.
FIG. 1 is a conceptual diagram schematically showing the configuration of a language content generation system according to an embodiment of the present invention.
FIG. 2 is a block diagram schematically illustrating the configuration of a language content generation terminal according to an embodiment of the present invention.
FIG. 3 is a flowchart illustrating a method of generating language content according to an embodiment of the present invention.
FIG. 4 is an exemplary diagram illustrating the process of matching text data and audio data in the language content generation method according to an embodiment of the present invention.
FIG. 5 is an exemplary diagram illustrating the process of matching text data and image data in the language content generation method according to an embodiment of the present invention.
FIG. 6 is an exemplary view of a method of reproducing content generated by the language content generation method according to an embodiment of the present invention.
Hereinafter, a method and a terminal for generating a language content according to an embodiment of the present invention will be described in detail with reference to the accompanying drawings. Advantages and features of the present invention, and methods of achieving the same will become apparent with reference to the embodiments described below in detail in conjunction with the accompanying drawings.
The present invention is not limited to the embodiments disclosed herein and may be embodied in many different forms; the embodiments are provided so as to fully convey the scope of the invention to those skilled in the art, and the invention is defined only by the scope of the claims.
Like reference numerals refer to like elements throughout the specification.
FIG. 1 is a conceptual diagram schematically showing the configuration of a language content generation system according to an embodiment of the present invention, and FIG. 2 is a block diagram schematically showing the configuration of a language content generation terminal according to an embodiment of the present invention. As shown in FIG. 1, the language content generation system according to an exemplary embodiment of the present invention includes a content providing server 100, a network 200, and a content generation terminal 300.
On the other hand, the
Looking at a more specific configuration of the
In addition, the
In addition, the
As such, the data provided through the
Meanwhile, the
In addition, the
In addition, the
Hereinafter, a language content generation method using the language content generation system and language content generation terminal according to the embodiment of the present invention described above will be described in detail with reference to the accompanying drawings. FIG. 3 is a flowchart illustrating the method for generating language content according to an embodiment of the present invention, FIG. 4 is an exemplary diagram illustrating the process of matching text data and audio data in that method, and FIG. 5 is an exemplary diagram illustrating the process of matching text data and image data in that method.
First, as shown in FIG. 3, the method for generating language content according to an embodiment of the present invention begins with selecting a text file and an audio file to be used for generating the language content (S100). The text file is composed of one or more sentences, and the audio file includes audio data corresponding to the selected text file. For example, if the audio file contains voice data of English sentences from the fairy tale "The Little Mermaid," the text file may include the English text of those sentences, a Korean translation of them, or both. Here, the text file and the audio file may be selected by the user through the input unit.
The text data included in the selected text file is then structured in sentence units (S110). Here, the text may be divided into sentence units based on the period character "." or based on line break (newline) characters in the text file. In a text file in which Korean sentences and foreign-language sentences alternate, a Korean sentence and its corresponding foreign sentence may also be grouped as one unit; in this case, two sentences form one unit when dividing the text file. Each sentence unit divided in this way is structured by assigning it a unique identification code.
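As an illustrative sketch (the patent does not prescribe an implementation), the sentence-unit structuring of step S110 could look like the following Python, where the period/newline split rule and the letter identification codes are assumptions made for this example:

```python
import re

def split_sentences(text: str) -> list:
    """Divide text into sentence units on '.' or line breaks and assign
    each unit a unique identification code (pairing of alternating
    Korean/foreign sentences is not shown in this sketch)."""
    # Split after a period followed by whitespace, or at line breaks.
    raw = re.split(r'(?<=\.)\s+|\n+', text.strip())
    units = [s.strip() for s in raw if s.strip()]
    # Assign identification codes "A", "B", "C", ... to the units in order.
    return [{"id": chr(ord("A") + i), "text": s} for i, s in enumerate(units)]
```

Each unit carries the identification code that is later shared with its matched audio unit section.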
In addition, the
Checking for blanks in the audio data using the reference width and the reference time, and treating the audio before and after an identified blank as separate units, relies on a characteristic of typical language content: because the reader pauses between sentences, blank spaces occur between them, so the audio data can be partitioned and structured in sentence units using those spaces. Each partitioned audio data section is then structured by assigning it a unique identification code.
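The blank-section search and partitioning can be sketched as follows; the reference width of 1/1000 of the maximum amplitude follows the description, while the `ref_time` default and the returned (start, end) sample-index pairs are assumptions of this sketch:

```python
def partition_audio(samples, sample_rate, ref_time=0.5):
    """Partition audio samples into unit sections separated by blank
    sections: stretches where the amplitude stays at or below the
    reference width for longer than the reference time."""
    # Reference width: 1/1000 of the maximum amplitude, as described.
    ref_width = max(abs(s) for s in samples) / 1000.0
    min_len = int(ref_time * sample_rate)  # blank must last this many samples

    sections, start, quiet_run = [], 0, 0
    for i, s in enumerate(samples):
        if abs(s) <= ref_width:
            quiet_run += 1
            continue
        # A sufficiently long blank ends the current unit section.
        if quiet_run >= min_len and i - quiet_run > start:
            sections.append((start, i - quiet_run))
            start = i
        quiet_run = 0
    sections.append((start, len(samples)))
    return sections
```

Each returned pair bounds one audio unit section; in this sketch the blank itself is assigned to neither neighboring section.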
For example, when a text file and an audio file corresponding to it are selected to generate new language content, the waveform of the audio file may appear at the top of the screen and the text file at the bottom, as shown in FIG. 4.
In this case, the
However, in order to reduce errors, the user may change or modify the partition position of each unit section partitioned by the control unit.
The text file (Little_Mermaid.txt) selected in FIG. 4 is divided into units based on the period "." and displayed in sentence units.
Each unit of the text data divided into sentence units and of the audio data divided into a plurality of unit sections is then sequentially matched one-to-one and stored as a structured text-audio file (S130).
That is, in the example shown in FIG. 4, the audio data divided into three unit sections and the text data divided into three sentence units are sequentially matched with each other. For example, when the identification code "A" is assigned to the first audio unit section (from 0 seconds to about 14 seconds), the same identification code "A" is assigned to the first sentence, "You can never be a mermaid again," so that the two are matched with each other. Alternatively, each sentence of the sentence-divided text data may be tagged into the corresponding region of the audio data.
In this way, the text file and audio file, structured to match each other using shared identification codes, are stored together; alternatively, the sentence-unit text may be tagged into each unit section of the audio file and stored with it.
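The sequential one-to-one matching of step S130 can then be sketched as below; the dictionary layout of the stored content is a hypothetical format, since the patent does not prescribe how the matching relationship is serialized:

```python
def build_language_content(sentence_units, audio_sections):
    """Match each sentence-unit text with each audio unit section
    one-to-one, in order, recording the relationship through a shared
    identification code."""
    if len(sentence_units) != len(audio_sections):
        raise ValueError("sentence units and audio sections must pair 1:1")
    content = []
    for i, (text, section) in enumerate(zip(sentence_units, audio_sections)):
        code = chr(ord("A") + i)  # shared identification code, e.g. "A"
        content.append({"id": code, "text": text, "audio_section": section})
    return content
```

A content item produced this way bundles the sentence-divided text, the partitioned audio, and their matching relationship, as step (E) requires.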
In an embodiment of the present invention, an image file may additionally be matched to the text or audio data divided into a plurality of units, and the matched image data may be structured and stored together as a structured text-audio-image file (S140).
For example, in the embodiment shown in FIG. 4, when an image is to be associated with each sentence-unit text, a camera icon may be selected for the desired sentence-unit text.
Here, a different image file may correspond to each sentence-unit text, but as shown in FIG. 5, different portions of one image may also correspond to the individual sentence-unit texts.
That is, as shown in FIG. 5, when the entire image I of the image file selected by the user contains a plurality of different pictures, as in a cartoon, or when the image is so large that displaying it all at once would shrink its contents too much, a partial region of the one image may be matched to each sentence-unit text or each audio unit section.
This may be done by selecting a partial region S within the entire image I, and in particular the enlargement/reduction ratio of the selected partial region S may also be set. That is, when the
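Selecting a partial region S of the whole image I together with an enlargement/reduction ratio might be recorded as in the sketch below; the (left, top, width, height) pixel layout and the record fields are assumptions made for illustration:

```python
def match_image_region(region, unit_id, zoom=1.0):
    """Associate a partial region S of one image I with a sentence-unit
    text or audio unit section, keeping the chosen enlargement/reduction
    ratio so playback can pan and zoom to the region."""
    left, top, width, height = region
    return {
        "unit_id": unit_id,  # identification code of the matched unit
        "region": (left, top, width, height),
        "zoom": zoom,        # enlargement (>1) or reduction (<1) ratio
        # Size at which the region is displayed after applying the ratio.
        "display_size": (round(width * zoom), round(height * zoom)),
    }
```

At playback, records like these would let the terminal move between regions of the same image as the reproduced unit section changes.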
Through the above-described process, the data contained in the text file, the audio file, and optionally an image file are divided into a plurality of units, and the divided units are matched with one another to generate one language content item.
The newly created language content is reproduced in such a manner that data matched to each other are synchronized and output together. For example, while the text of the second sentence in FIG. 4, "The Prince must marry you and give you a soul," is displayed on the image output unit, the matched audio unit section is output at the same time.
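Synchronized reproduction can be sketched as follows, with `show_text` and `play_audio` standing in for the terminal's image output unit and voice output unit (both callables are assumptions of the sketch):

```python
def play_content(content, play_audio, show_text):
    """Reproduce language content by displaying each sentence-unit text
    while the audio unit section matched to it is output."""
    for unit in content:
        show_text(unit["text"])            # display the matched sentence
        play_audio(unit["audio_section"])  # output the matched audio section
```

The different viewing formats described below would each supply different display behavior over the same matched content.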
In addition, the user in the
The language content generated by the above-described embodiment of the present invention includes text and audio data synchronized with each other in sentence units, and may additionally include images. Using this, the language content may be reproduced in a plurality of different formats. Hereinafter, a method of playing back language content generated by an embodiment of the present invention will be described with reference to the accompanying drawings. FIG. 6 is an exemplary view of a method of reproducing content generated by the language content generation method according to an embodiment of the present invention.
As illustrated in FIG. 6, when a user selects a language content in the
First, identification code A represents a novel-book viewing method: the text data is displayed continuously, a highlight or underline mark is applied to the sentence corresponding to the currently reproduced unit section of the audio data, and the unit sections are played back sequentially without interruption. Here, at least part of the image corresponding to each sentence-unit text may be displayed alongside the text.
Identification code B represents a sentence viewing method, in which the text data is displayed one sentence unit at a time while the corresponding unit section of the audio data is reproduced. The unit section for the displayed sentence can be listened to repeatedly, and the user can choose to move to the previous or next sentence. If an image corresponds to the displayed sentence, it may be displayed together with the text.
Identification code C represents reproduction by the image viewing method, in which the images corresponding to the unit sections of the reproduced audio data are displayed on the screen in sequence. When partial regions S of one image correspond to the individual audio unit sections, as shown in FIG. 5, each time the reproduced unit section changes, the view moves from the partial region for the previous unit section to the partial region for the next, and this movement and enlargement/reduction may be presented as an animation effect over the entire image I.
Identification code D represents reproduction by the cartoon viewing method, in which the image corresponding to the currently reproduced unit section and the images for the unit sections before and after it are collected and displayed in comic-strip form while the audio is reproduced.
Since the same language content can be reproduced in these various ways, the user can select the desired format according to convenience or interest.
It will be understood by those skilled in the art that the present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The above-described embodiments are therefore to be understood as illustrative in all aspects and not restrictive. The scope of the present invention is defined by the appended claims rather than by the foregoing detailed description, and all changes or modifications derived from the meaning and scope of the claims and their equivalents should be interpreted as falling within the scope of the present invention.
100: content providing server 200: network
300: content generation terminal 10: control unit
20: input unit 30: communication unit
40: storage unit 50: image output unit
60: audio output unit 70: audio reading unit
80: image reading unit 90: interface unit
Claims (13)
(A) receiving a text file and an audio file from the user;
(B) dividing the text data included in the selected text file into sentence units;
(C) performing waveform analysis on the audio data included in the selected audio file to search for blank sections in which the amplitude remains below a reference width for longer than a reference time, and partitioning the audio data into a plurality of unit sections based on the found blank sections;
(D) sequentially matching, one-to-one, each sentence-unit text separated in step (B) with each audio unit section partitioned in step (C); and
(E) generating one content item including the text data divided into sentence units and the audio data divided into unit sections, wherein the generated content includes the matching relationship between each sentence-unit text and each audio unit section.
The reference width is set to 1/1000 or less of the maximum amplitude of the audio data included in the audio file.
The method comprises:
(F) matching an image corresponding to each sentence unit text classified in step (B) or matching an image corresponding to each audio unit section partitioned in step (C),
In the step (E),
The content may further include an image corresponding to each sentence unit text or an image corresponding to each audio unit section.
Step (F) is,
Matching an enlarged or reduced partial region of the image data included in one image file to each sentence-unit text or each audio unit section.
The language content generation method,
(G1) outputting, when a language content reproduction command is input, one or more audio unit sections included in the selected language content while simultaneously outputting the sentence-unit text corresponding to the reproduced audio unit section.
The language content generation method,
(G2) outputting, when a language content reproduction command is input, one or more audio unit sections included in the selected language content while simultaneously outputting at least one of the sentence-unit text corresponding to the reproduced audio unit section, or the image matched to that audio unit section or to its corresponding sentence-unit text.
The language content generation method,
(H) displaying together the audio data waveform of the audio file selected in step (A) and the text data of the text file selected in step (A), the text data being displayed divided into the sentence units separated in step (B) and the waveform being displayed divided into the unit sections partitioned in step (C); and
(I) modifying and redisplaying the text data and the audio data waveform upon receiving a command to modify a sentence unit of the text data or a unit section of the audio data displayed in step (H).
An input unit to input a user command;
A storage unit for storing one or more text files and audio files, and storing generated language content;
An image output unit configured to display text data included in the text file and audio data waveforms included in the audio file when one text file and one audio file stored in the storage unit are selected through the input unit; And
A control unit configured to divide the text data displayed on the image output unit into sentence units, to analyze the waveform of the audio data displayed on the image output unit to search for blank sections in which the amplitude remains below a reference width for longer than a reference time, to partition the audio data into a plurality of unit sections based on the found blank sections, to generate one language content item by sequentially matching each sentence-unit text and each audio unit section one-to-one, and to store the language content in the storage unit.
The terminal comprises:
Further comprising a voice output unit for outputting audio data,
The control unit,
When a language content reproduction command is input through the input unit, outputs one or more audio unit sections included in the language content through the voice output unit and displays the sentence-unit text matched to the output audio unit section on the image output unit.
The control unit,
When an image corresponding to at least one of a sentence-unit text or an audio unit section is selected through the input unit, matches the selected image to that sentence-unit text or audio unit section and stores the match as part of the language content.
The terminal comprises:
Further comprising a voice output unit for outputting audio data,
The control unit,
When a language content reproduction command is input through the input unit, outputs one or more of the audio unit sections included in the language content through the voice output unit and displays at least one of the sentence-unit text or the image matched to the output audio unit section on the image output unit.
The terminal comprises:
Further comprising an audio reading unit configured to record external sound to generate an audio file, and an image reading unit configured to photograph an external image to generate an image file.
The terminal comprises:
Further comprising a communication unit for uploading language content to a server or downloading language content from the server.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020110047013A KR20120129015A (en) | 2011-05-18 | 2011-05-18 | Method for creating educational contents for foreign languages and terminal therefor |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020110047013A KR20120129015A (en) | 2011-05-18 | 2011-05-18 | Method for creating educational contents for foreign languages and terminal therefor |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020130073289A Division KR20130076852A (en) | 2013-06-25 | 2013-06-25 | Method for creating educational contents for foreign languages and terminal therefor |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20120129015A true KR20120129015A (en) | 2012-11-28 |
Family
ID=47513577
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020110047013A KR20120129015A (en) | 2011-05-18 | 2011-05-18 | Method for creating educational contents for foreign languages and terminal therefor |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20120129015A (en) |
-
2011
- 2011-05-18 KR KR1020110047013A patent/KR20120129015A/en active Application Filing
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014148665A2 (en) * | 2013-03-21 | 2014-09-25 | 디노플러스(주) | Apparatus and method for editing multimedia content |
WO2014148665A3 (en) * | 2013-03-21 | 2015-05-07 | 디노플러스(주) | Apparatus and method for editing multimedia content |
KR101523258B1 (en) * | 2013-05-08 | 2015-05-28 | 문지원 | System Providing Mobile Leading Contents |
KR101958981B1 (en) * | 2017-09-19 | 2019-03-15 | 문수산 | Method of learning foreign languages and apparatus performing the same |
WO2019059507A1 (en) * | 2017-09-19 | 2019-03-28 | 문수산 | Foreign language learning method and device for implementing same |
KR20190093777A (en) * | 2018-01-15 | 2019-08-12 | 주식회사 젠리코 | System of providing educational contents for foreign languages |
KR102082851B1 (en) * | 2019-04-05 | 2020-04-23 | 송승헌 | Server and method for providing daily translation course |
KR102082845B1 (en) * | 2019-04-05 | 2020-04-23 | 송승헌 | Server and method for generating daily audio book of chinese voice |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20240107127A1 (en) | Video display method and apparatus, video processing method, apparatus, and system, device, and medium | |
US8719029B2 (en) | File format, server, viewer device for digital comic, digital comic generation device | |
WO2019114516A1 (en) | Media information display method and apparatus, storage medium, and electronic apparatus | |
CN106688035B (en) | Speech synthesis device and speech synthesis method | |
KR20120129015A (en) | Method for creating educational contents for foreign languages and terminal therefor | |
JP2012133662A (en) | Electronic comic viewer device, electronic comic browsing system, viewer program and recording medium recording viewer program | |
CN112188266A (en) | Video generation method and device and electronic equipment | |
CN109324811A (en) | It is a kind of for update teaching recorded broadcast data device | |
EP2747464A1 (en) | Sent message playing method, system and related device | |
KR20200045852A (en) | Speech and image service platform and method for providing advertisement service | |
KR20130076852A (en) | Method for creating educational contents for foreign languages and terminal therefor | |
WO2019146466A1 (en) | Information processing device, moving-image retrieval method, generation method, and program | |
JP2012178028A (en) | Album creation device, control method thereof, and program | |
KR20090000745A (en) | Sound book production system and the method which use the internet | |
KR101124798B1 (en) | Apparatus and method for editing electronic picture book | |
US10678842B2 (en) | Geostory method and apparatus | |
KR101753986B1 (en) | Method for providing multi-language lylics service, terminal and server performing the method | |
KR101781516B1 (en) | System and method for providung contents background service | |
US10714146B2 (en) | Recording device, recording method, reproducing device, reproducing method, and recording/reproducing device | |
WO2019069997A1 (en) | Information processing device, screen output method, and program | |
CN114822492B (en) | Speech synthesis method and device, electronic equipment and computer readable storage medium | |
KR102295826B1 (en) | E-book service method and device for providing sound effect | |
JP2019144817A (en) | Motion picture output device, motion picture output method, and motion picture output program | |
JP7128222B2 (en) | Content editing support method and system based on real-time generation of synthesized sound for video content | |
CN110268467B (en) | Display control system and display control method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E601 | Decision to refuse application | ||
A107 | Divisional application of patent |