KR20170057473A - Device and method for contents production - Google Patents
- Publication number
- KR20170057473A (Application No. KR1020150152059A)
- Authority
- KR
- South Korea
- Prior art keywords
- content
- terminal
- music
- screen transition
- source
- Prior art date
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/765—Interface circuits between an apparatus for recording and another apparatus
- H04N5/77—Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
Abstract
A content production method for a terminal is disclosed. In an embodiment, source content is mapped to a music section that is set based on a timestamp corresponding to a point at which the harmonic characteristics of music content change, a predefined screen transition is applied to the source content, and user content is generated based on the application of the screen transition.
Description
The following embodiments relate to a content production method.
A user can capture and store photos or videos on a terminal. Some applications collect the photos or videos stored in memory in time-series order and continuously display them in a slideshow format on an electronic photo frame or on a smartphone idle screen.
As related prior art, Korean Patent Laid-Open Publication No. 10-2010-0051997 (entitled "Moving Image Production System and Method", applicant: Kim Jin Soo) discloses a moving image production system including: an image input unit for loading a stored image file; a control unit, connected to the image input unit, for selecting an image file and controlling operations related to image editing, frame generation, and storage; an editing unit, connected to the control unit, for performing editing operations such as changing the position, size, rotation, and transparency of an image selected through the image input unit; a frame generation unit, connected to the control unit, for forming at least one intermediate frame between a start frame and an end frame according to the editing operations of the editing unit; a storage unit, connected to the control unit and the image input unit, for storing the original image or each frame generated by the frame generation unit, or for storing consecutive frames in the form of a moving image file; and an uploading unit for uploading the moving image file.
The embodiments can generate user content by analyzing music so that the screen of the user content switches in time with the rhythm of the music. The embodiments also provide a preview function for the user content and various editing functions. In addition, the embodiments can generate user content automatically, enhancing user convenience, and provide a function for easily sharing the generated user content with other users.
A content production method according to one aspect includes: mapping source content to a music section set based on a timestamp corresponding to a point at which the harmonic characteristics of music content change; applying a predefined screen transition to the source content; and generating user content based on the application of the screen transition.
Applying the predefined screen transition to the source content may include determining a start time of the screen transition using the end time of the music section and the time-length information of the screen transition.
Applying the predefined screen transition to the source content may include applying the screen transition so that the next source content is exposed at the end of the music section.
Applying the predefined screen transition to the source content may include: comparing the time-length information of the screen transition with the time corresponding to the next music section; and, if the time-length information of the screen transition is longer than the time corresponding to the next music section, applying the screen transition so that the source content remains exposed during the next music section.
The method may further include, when there is a selection input for the music content, estimating a beat rate of the music content based on frequency characteristics of the music content and obtaining the timestamp based on the estimated beat rate.
The method may further include, when a plurality of source contents are selected, grouping the plurality of source contents.
The method may further include: checking whether the source content includes face information; and, when face information is included, inserting additional content corresponding to the face information into the source content.
The method may further include: exposing an editing interface for editing the user content on a display of the terminal; and changing the exposure order of the plurality of source contents included in the user content based on a selection input on the editing interface.
The source content may be at least one of a plurality of contents arranged according to a predetermined criterion.
The predetermined criterion may include at least one of an anniversary date and a date on which at least a predetermined number of contents were generated.
A terminal according to one aspect includes: a display; a memory storing source content and music content; and a processor that generates user content based on the source content and the music content. The processor receives the source content, maps the source content to a music section set based on a timestamp corresponding to a point at which the harmonic characteristics of the music content change, applies a predefined screen transition to the source content, and generates the user content based on the application of the screen transition.
The processor may determine a starting point of the screen transition using the end time of the music section and the time length information of the screen transition.
The processor may apply the screen transition so that the next source content is exposed at the end of the music section.
The processor may compare the time-length information of the screen transition with the time corresponding to the next music section and, if the time-length information of the screen transition is longer than the time corresponding to the next music section, apply the screen transition so that the source content remains exposed during the next music section.
When there is a selection input for the music content, the processor may estimate a beat rate of the music content based on its frequency characteristics and obtain the timestamp based on the estimated beat rate.
The processor may group the plurality of source contents when a plurality of source contents is selected.
The processor may check whether the source content includes face information and, if face information is included, insert additional content corresponding to the face information into the source content.
The processor may expose an edit interface for editing the user content to the display and change an exposure order of a plurality of source contents included in the user content based on a selection input for the edit interface.
The source content may be at least one of a plurality of contents arranged according to a predetermined criterion.
The predetermined criterion may include at least one of an anniversary date and a date on which at least a predetermined number of contents were generated.
A content production application according to one aspect is stored in a memory of a terminal and executed by a processor of the terminal, and performs: mapping source content to a music section set based on a timestamp corresponding to a point at which the harmonic characteristics of music content change; applying a predefined screen transition to the source content; and generating user content based on the application of the screen transition.
FIG. 1 is a flowchart illustrating a content production method according to an embodiment.
FIGS. 2 and 3 are diagrams illustrating a content production method according to an embodiment.
FIGS. 4 to 8 are diagrams illustrating a content production method using a content production application according to an embodiment.
FIG. 9 is a block diagram illustrating a terminal according to an embodiment.
Hereinafter, embodiments will be described in detail with reference to the accompanying drawings.
Various modifications may be made to the embodiments described below. The embodiments described below are not intended to be limiting and should be understood to include all modifications, equivalents, and alternatives thereto.
The terms used herein are used only to describe specific embodiments and are not intended to limit the embodiments. Singular expressions include plural expressions unless the context clearly dictates otherwise. In this specification, terms such as "comprises" or "having" indicate the presence of the stated features, integers, steps, operations, elements, components, or combinations thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or combinations thereof.
Unless defined otherwise, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which the embodiments belong. Terms such as those defined in commonly used dictionaries are to be interpreted as having meanings consistent with their meaning in the context of the related art, and are not to be interpreted in an idealized or overly formal sense unless expressly defined as such herein.
In the following description with reference to the accompanying drawings, the same components are denoted by the same reference numerals regardless of the figure, and redundant explanations thereof are omitted. In describing the embodiments, a detailed description of related prior art is omitted where it is determined that it could unnecessarily obscure the gist of the embodiments.
FIG. 1 is a flowchart illustrating a content production method according to an embodiment.
The content production method according to one embodiment can be performed by the terminal.
Referring to FIG. 1, source content may be input (110). More specifically, source content stored in the memory of the terminal can be input to the processor of the terminal. The source content may include, for example, static images (e.g., photos) and/or dynamic images (e.g., videos). When a plurality of source contents are input to the processor, the processor can group them through a collage function and edit them into one piece of source content. For example, the processor may estimate the similarity of a plurality of pictures selected by the user and group the pictures estimated to have high similarity to generate one piece of source content.
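The grouping step above can be sketched as follows. This is an illustrative example, not part of the disclosed embodiments: the similarity measure here compares hypothetical tag lists, whereas a real implementation would compare image features.

```python
# Illustrative sketch: group source contents whose pairwise similarity to a
# group's first member exceeds a threshold, as a stand-in for the collage step.
def similarity(a, b):
    # Toy Jaccard similarity over hypothetical tag lists (an assumption;
    # the patent does not specify the similarity measure).
    shared = len(set(a) & set(b))
    union = len(set(a) | set(b))
    return shared / union if union else 0.0

def group_similar(contents, threshold=0.5):
    groups = []
    for item in contents:
        for group in groups:
            if similarity(item["tags"], group[0]["tags"]) >= threshold:
                group.append(item)
                break
        else:
            groups.append([item])  # start a new group
    return groups
```

For instance, two beach photos sharing most tags would be merged into one group, while an unrelated city photo would start a group of its own.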
According to one embodiment, the terminal can check whether the source content includes face information. Here, if the source content includes face information, the terminal may insert additional content into the source content corresponding to the face information. For example, the terminal can insert a hat or a mustache into a human face included in a photograph using a sticker inserting function, and insert glasses or the like into an animal face included in the photograph.
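The sticker-insertion step above can be modeled minimally. The placement rule below is hypothetical (the patent does not specify coordinates): given detected face bounding boxes as (x, y, w, h) with y growing downward, a "hat" sticker of a given height is anchored directly above each face box.

```python
# Hypothetical placement rule for face-based additional content: a sticker of
# height sticker_h sits directly above each detected face box (x, y, w, h).
def sticker_positions(face_boxes, sticker_h):
    return [(x, y - sticker_h) for (x, y, w, h) in face_boxes]
```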
The terminal maps the source content to a music section set in the music content based on the timestamp (120). Here, the timestamp may correspond to a point at which the harmonic characteristics of the music content change. The harmonic characteristics of the music content may change on a strong or weak beat, and the terminal can determine the beat at which such a change occurs as a downbeat. Once the downbeat is determined, the terminal can acquire the timestamp. The procedure for obtaining the timestamp is described below.
The terminal can acquire a feature set of the music content based on its frequency characteristics. For example, the terminal can convert the music content into a spectrogram and convert the spectrogram into a mel-frequency spectrogram. The terminal can then extract the peak values of the mel-frequency spectrogram to acquire the feature set.
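The peak-picking step can be sketched as follows. This is a minimal illustration, not the disclosed implementation: a plain list of per-frame magnitudes stands in for one band of the mel-frequency spectrogram, and strict local maxima are kept as the feature set.

```python
# Keep the indices of strict local maxima as a stand-in for the
# peak-picked feature set of the mel-frequency spectrogram.
def pick_peaks(frames):
    return [i for i in range(1, len(frames) - 1)
            if frames[i] > frames[i - 1] and frames[i] > frames[i + 1]]
```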
The terminal may determine the beat rate of the music content using the feature set and a plurality of predefined beat grids. Here, the beat rate may be expressed in bpm (beats per minute). Each of the plurality of beat grids may correspond to one beat rate; for example, the first beat grid may correspond to 60 bpm and the second beat grid to 70 bpm. The terminal can compute the correlation between each of the plurality of beat grids and the feature set and identify the beat grid with the highest correlation. The terminal may then determine the beat rate corresponding to the identified beat grid as the beat rate of the music content.
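The beat-grid comparison can be sketched as follows. The grid representation is an assumption (the patent only says each grid corresponds to one beat rate): here each candidate bpm maps to the frame indices where its grid expects a beat, and the bpm whose grid overlaps the peak frames most is chosen.

```python
# Hedged sketch: pick the bpm whose beat grid correlates best (here, simple
# set overlap) with the peak frames of the feature set.
def estimate_beat_rate(peak_frames, beat_grids):
    def correlation(grid_frames):
        return len(set(peak_frames) & set(grid_frames))
    return max(beat_grids, key=lambda bpm: correlation(beat_grids[bpm]))
```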
The terminal can derive a corrected beat grid from the beat grid corresponding to the determined beat rate and obtain its self-similarity, that is, the autocorrelation of each beat of the corrected beat grid. By peak-picking the autocorrelation values, the terminal can extract the beats having peak values and, using the beat indices of the extracted beats, identify the downbeat index that matches a predetermined condition. Specifically, the terminal can count, for each remainder of the beat index divided by a predetermined number (for example, 8), how many extracted beats have that remainder, and determine the remainder with the largest count as the downbeat index. The terminal can then generate a downbeat grid based on the downbeat index and obtain the timestamps using the downbeat grid. Based on the timestamps, the terminal can set a plurality of music sections in the music content.
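The modulo-counting rule above can be sketched directly. The uniform beat spacing used to turn downbeat indices into timestamps is an assumption for illustration.

```python
from collections import Counter

# Among the beats that survive peak picking, the remainder
# (beat index mod period) occurring most often is the downbeat index.
def downbeat_index(peak_beat_indices, period=8):
    counts = Counter(i % period for i in peak_beat_indices)
    return counts.most_common(1)[0][0]

# Every beat whose index has the downbeat remainder becomes a timestamp,
# assuming (for illustration) a constant seconds-per-beat spacing.
def downbeat_timestamps(downbeat_idx, n_beats, seconds_per_beat, period=8):
    return [round(i * seconds_per_beat, 6)
            for i in range(n_beats) if i % period == downbeat_idx]
```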
The terminal may apply a predefined screen transition to the source content (130). For example, the terminal can apply the screen transition to the source content based on theme information selected by the user. A screen transition represents a visual effect in which the source content currently exposed on the display disappears and the next source content is exposed. For example, the screen transitions may include: a first transition in which the source content fades out and the next source content fades in; a second transition in which the source content slides out to the left of the display and the next source content appears from the right; a third transition in which the source content slides out to the right of the display and the next source content appears from the left; and a fourth transition in which the source content disappears toward the top of the display and the next source content appears from the bottom. These transition types are only examples according to the embodiment, and the types of screen transitions are not limited to the above description.
The terminal can determine the start point of the screen transition using the end point of the music section and the time-length information of the screen transition. The time length of the first transition described above may be 0.4 seconds, and that of the second transition 0.5 seconds: the time taken for the source content to fade out, the next source content to fade in, and the next source content to be fully exposed on the display may be 0.4 seconds; likewise, the time taken for the source content to slide out to the left, the next source content to appear from the right, and the next source content to be fully exposed may be 0.5 seconds. The terminal can determine the start point of the screen transition so that the moment the next source content is fully exposed coincides with the end point of the music section. This is described in detail with reference to FIG. 2.
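The scheduling rule above reduces to one subtraction: the transition must finish exactly at the section's end, so it starts at the end time minus the transition's length.

```python
# The transition ends exactly at the music section's end point, so its
# start point is the end point minus the transition's time length.
def transition_start(section_end, transition_length):
    return section_end - transition_length
```

For a section ending at 1.0 s and a 0.4 s transition, the transition starts at 0.6 s.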
In addition, the terminal can compare the time-length information of the screen transition applied to the source content with the time corresponding to the next music section. Here, the next music section is the music section adjacent to the one associated with the source content. If the time-length information of the screen transition is longer than the time corresponding to the next music section, the terminal may keep the source content exposed during the next music section. According to an exemplary embodiment, the next music section may be shorter than the screen transition: in the example above, when the second transition (0.5 seconds) is applied to the source content and the time corresponding to the next music section is 0.3 seconds, the transition is longer than the next section. In this case, since the second transition is still in progress during the next music section, the terminal can apply the second transition so that the source content remains exposed on the display during the next music section as well. This is described in detail with reference to FIG. 3.
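The comparison above can be sketched as a small predicate. Representing a music section as a (start, end) pair in seconds is an assumption for illustration.

```python
# The transition spills into the next section when its length exceeds that
# section's duration, keeping the current source content exposed there.
def spills_into_next(transition_length, next_section):
    start, end = next_section
    return transition_length > (end - start)
```

With a 0.5 s transition and a next section spanning 1.0 s to 1.3 s (0.3 s long), the source content stays exposed into that section.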
The terminal may generate the user content based on the application of the screen transition (140).
When the user content is created, the terminal can expose an editing interface for editing the user content on the display. Based on a selection input on the editing interface, the terminal can change the exposure order of the plurality of source contents included in the user content. For example, when the user content is generated, the user content may be exposed on one side of the editing interface displayed on the terminal's display, and an edit button for editing the user content may be exposed on the other side. When there is a selection input on the edit button, the terminal can display the plurality of source contents included in the user content. Here, when one of the source contents is dragged and dropped at a specific position, the exposure order of the selected source content can be changed.
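The drag-and-drop reordering above can be modeled minimally: the dragged source content is removed from its position and reinserted at the drop position.

```python
# Reorder the list of source contents by moving the dragged item to the
# drop position; the original list is left unmodified.
def reorder(contents, drag_index, drop_index):
    items = list(contents)
    items.insert(drop_index, items.pop(drag_index))
    return items
```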
According to one embodiment, a terminal may generate user content using music content and source content (e.g., pictures and / or movies). At this time, the terminal can generate the user content so that the screen is switched according to the rhythm of the music content.
FIGS. 2 and 3 are diagrams illustrating a content production method according to an embodiment.
Referring to FIG. 2, a plurality of music sections are set in the music content based on the timestamps. As in the example shown in FIG. 2, the terminal can analyze the music content to obtain timestamps t1, t2, t3, and t4, at which the harmonic characteristics of the music may change.
According to one embodiment, when a plurality of music sections are set, the terminal can map a source content to each music section. For example, the terminal can map the first source content to the first music section, the second source content to the second music section, and so on.
The terminal can determine the start point of the screen transition using the time-length information of the screen transition and the end point of the music section. That is, the terminal can determine the start point of the screen transition so that the end point of the screen transition coincides with the end point of the music section. For example, assume that t1 is 1 second and the time length of the screen transition applied in the first music section is 0.4 seconds; the terminal can then set the screen transition to start at 0.6 seconds so that it finishes exactly at t1.
In the embodiment described above, the terminal first maps a plurality of source contents to a plurality of music sections and then applies a screen transition to each of the source contents. Alternatively, the terminal may map the source content to a music section and apply the screen transition to it, then map the next source content to the next music section and apply the screen transition to it. That is, according to another embodiment, the terminal can associate the music sections with the source contents one by one in time-series order.
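The time-series association can be sketched as a simple pairing. Representing sections as (start, end) pairs, and dropping contents beyond the number of sections, are assumptions for illustration.

```python
# Pair music sections with source contents one by one in time-series order;
# extra contents without a section are dropped (an assumption).
def associate(sections, contents):
    return list(zip(sections, contents))
```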
User content including the plurality of source contents can then be generated.
Referring to FIG. 3, a plurality of music sections are set in the music content based on the timestamps. As shown in the example of FIG. 3, the terminal analyzes the music content to obtain timestamps t1, t2, t3, t4, and t5, which divide the music content into a plurality of music sections.
According to one embodiment, a music section may be narrow. For example, in the case of music content in which the beats change rapidly and regularly, the music sections may be narrow. In this case, the time-length information of the screen transition may be longer than the time corresponding to a music section. In the example shown in FIG. 3, assume that t1 = 0.2 seconds, t2 = 0.3 seconds, t3 = 0.4 seconds, and t4 = 0.55 seconds, and that the time length of the screen transition applied to the source content is longer than some of these sections; the terminal can then apply the screen transition so that the source content remains exposed during the following music section as well.
FIGS. 4 to 8 are diagrams illustrating a content production method using a content production application according to an embodiment.
The content creation application can be stored in the memory of the terminal and executed by the processor. When the content creation application is executed, the content creation application may expose the user interface to the display to receive a selection of the source content.
Referring to FIG. 4, a plurality of source contents are displayed on the display of the terminal. The source contents can be sorted and displayed according to a specific criterion. For example, if there is a selection input for a specific criterion, the source contents can be arranged and displayed according to the selected criterion.
The user can select each of the plurality of source contents through the check box displayed on the upper right of each of the plurality of source contents. Also, although not shown in FIG. 4, the user can select background music of the user content from a plurality of music contents.
The content production application can map the selected source content to a music section of the selected music content. Here, the music section may be set based on a timestamp corresponding to a point at which the harmonic characteristics of the selected music content change. In one embodiment, if there is a selection input for music content, the content production application may analyze the selected music content to obtain its timestamps and set a plurality of music sections using them. Alternatively, the content production application can preprocess a plurality of music contents to obtain the timestamps of each in advance and store them in memory; when there is a selection input for a music content, the application can then retrieve the timestamps of the selected music content from memory.
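The two strategies above (compute on first selection vs. precompute and look up) can be sketched with a small cache. The `analyze` callable stands in for the music-analysis step and is an assumption.

```python
# Timestamps are looked up in the store when available and computed
# (then cached) on first selection otherwise.
class TimestampStore:
    def __init__(self, analyze):
        self._analyze = analyze   # hypothetical: music_id -> list of timestamps
        self._cache = {}

    def timestamps(self, music_id):
        if music_id not in self._cache:
            self._cache[music_id] = self._analyze(music_id)
        return self._cache[music_id]
```

Precomputing simply means populating `_cache` ahead of time, so a later selection input becomes a pure memory lookup.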
Once the source content is mapped to the music section, the content production application can apply a predefined screen transition to the source content. The application can apply the screen transition so that the next source content is exposed at the end of the music section; in other words, it can apply the screen transition so that the next source content is exposed in correspondence with the timestamp. For example, the application can determine the start time of the screen transition using the end time of the music section and the time-length information of the screen transition, so that a longer transition starts earlier and a shorter one starts later.
When a screen transition is applied, the content creation application can create user content.
Referring to FIG. 5, the generated user content can be exposed on the display of the terminal.
When there is an input on the play button or on the user content, the terminal can play the user content.
The identification information of the music content included in the user content can also be displayed.
Referring to FIG. 6, additional content may be inserted into the source content.
The content production application can check whether the source content includes face information. When the source content includes face information, the application can insert additional content into the source content. For example, in the case of a photograph including a face, the application can insert a sticker such as a hat or a mustache onto the detected face.
Although not shown in FIG. 6, if face information is not included in the source content, the content production application may insert additional content into a specific area of the source content. For example, in the case of a landscape photograph, the application can insert a decorative flower sticker as additional content into the landscape photograph.
Also, although not shown in FIG. 6, the content production application can extract the creation date information and weather information of the source content and insert additional content corresponding to them into the source content. Here, the creation date information may include information on the season in which the source content was created. For example, if the source content was created in spring, the application may insert a sun sticker as additional content into a specific area of the source content.
Referring to FIG. 7, a content creation application may expose an edit interface for editing user content to a display.
The editing interface may include an area in which the user content is exposed and an area in which the style information of the user content is displayed.
The style information of the user content may include theme information of the user content and background music information. As shown in FIG. 7, the theme information applied to the user content and the background music information included in the user content can be displayed. When there is a selection input for the theme information, a plurality of theme information can be displayed. When there is a selection input for any one of a plurality of theme information, the content production application can change the theme information applied to the user content to the selected theme information.
When there is a selection input for the background music, a plurality of music contents can be displayed. If there is a selection input for any one of the plurality of music contents, the content production application can change the music content included in the user content.
If there is a selection input for detailed editing, the screen shown in FIG. 8 may be displayed on the display of the terminal, as described below with reference to FIG. 8.
Referring to FIG. 8, a plurality of source contents included in the user content are displayed on the display. When there is a selection input for the detailed editing item included in the editing interface, the content production application can display the plurality of source contents contained in the user content. The user may delete one or more of the source contents included in the user content, or change the position of one or more of them; for example, the user can reposition a source content by dragging and dropping it.
FIG. 9 is a block diagram illustrating a terminal according to an embodiment.
Referring to FIG. 9, a terminal 900 according to an embodiment includes a display, a memory, and a processor. The memory can store the source content and the music content. The processor can generate the user content based on the source content and the music content.
In the case of certain music content, the interval between timestamps may be shorter than the time-length information of the screen transition. In this case, the end point of a music section and the screen-switching point may not coincide; the processor can then apply the screen transition so that the source content remains exposed during the following music section as well.
The descriptions given with reference to FIGS. 1 to 8 also apply to the terminal of FIG. 9, so a detailed description is omitted here.
The apparatus described above may be implemented as a hardware component, a software component, and/or a combination of hardware and software components. For example, the apparatus and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field-programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. The processing device may execute an operating system (OS) and one or more software applications running on the operating system. The processing device may also access, store, manipulate, process, and generate data in response to the execution of the software. For ease of understanding, the processing device may be described as being used singly, but those skilled in the art will recognize that it may include a plurality of processing elements and/or multiple types of processing elements. For example, the processing device may comprise a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.
The software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or command the processing device independently or collectively. The software and/or data may be embodied permanently or temporarily in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or transmitted signal wave, so as to be interpreted by the processing device or to provide instructions or data to the processing device. The software may be distributed over networked computer systems and stored or executed in a distributed manner. The software and data may be stored on one or more computer-readable recording media.
The method according to an embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be specially designed and configured for the embodiments, or may be known and available to those skilled in the art of computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine language code such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
While the embodiments have been described with reference to a limited number of drawings, various modifications and variations are possible from the above description. For example, appropriate results may be achieved even if the described techniques are performed in a different order than the described method, and/or components of the described systems, structures, devices, circuits, and the like are combined in a different form than described, or are replaced or substituted by other components or equivalents.
Therefore, other implementations, other embodiments, and equivalents to the claims are also within the scope of the following claims.
Claims (21)
Associating source content with a music section that is set based on a time stamp corresponding to a time point at which a melody characteristic of music content changes;
Applying a predefined screen transition to the source content; And
Generating user content based on the application of the screen transition,
A method for content creation of a terminal.
Wherein applying the predefined screen transition to the source content comprises:
Determining a starting point of the screen transition using an end point of the music section and time length information of the screen transition,
A method for content creation of a terminal.
Wherein applying the predefined screen transition to the source content comprises:
Applying the screen transition so that the next source content is exposed at the end point of the music section,
A method for content creation of a terminal.
Wherein applying the predefined screen transition to the source content comprises:
Comparing time length information of the screen transition with a time corresponding to a next music section; And
Applying the screen transition so that the source content is exposed in the next music section when the time length information of the screen transition is longer than the time corresponding to the next music section,
A method for content creation of a terminal.
Further comprising estimating a beat rate of the music content based on a frequency characteristic of the music content and obtaining the time stamp based on the estimated beat rate when there is a selection input for the music content,
A method for content creation of a terminal.
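Purely as an illustration of the beat-rate estimation step in the claim above: the disclosure does not specify the analysis method, so the autocorrelation approach below — picking the dominant period of an onset-strength envelope and deriving beat time stamps from it — is an assumed stand-in:

```python
def estimate_beat_period(onset_env, min_lag=2, max_lag=50):
    """Pick the lag (in frames) maximizing the autocorrelation of an
    onset-strength envelope; this lag is the estimated beat period."""
    n = len(onset_env)
    best_lag, best_score = min_lag, float("-inf")
    for lag in range(min_lag, min(max_lag, n - 1) + 1):
        score = sum(onset_env[i] * onset_env[i - lag] for i in range(lag, n))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

def beat_timestamps(period_frames, n_frames, frame_rate):
    """Convert the estimated period into time stamps (in seconds) at
    every beat, which can then delimit the music sections."""
    return [f / frame_rate for f in range(0, n_frames, period_frames)]
```

For a clean impulse train with a period of 10 frames, the estimator recovers a period of 10; production systems would typically use a dedicated library rather than this toy loop.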
Further comprising grouping a plurality of source contents when the plurality of source contents are selected,
A method for content creation of a terminal.
Further comprising:
Checking whether the source content includes face information; And
Inserting additional content into the source content corresponding to the face information when the face information is included,
A method for content creation of a terminal.
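The face-information step above can be sketched with assumed callbacks standing in for whatever face detector and decoration source the terminal actually uses; both `detect_faces` and `overlay_for` are hypothetical names, not identifiers from the disclosure:

```python
def insert_face_overlays(source_items, detect_faces, overlay_for):
    """For each source content item, check whether it contains face
    information and, if so, attach additional content (e.g. a sticker
    or caption) corresponding to each detected face."""
    decorated = []
    for item in source_items:
        faces = detect_faces(item)
        if faces:
            # copy the item, adding one overlay per detected face
            item = dict(item, overlays=[overlay_for(f) for f in faces])
        decorated.append(item)
    return decorated
```

Items without face information pass through unchanged, matching the conditional wording of the claim.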
Further comprising:
Exposing an editing interface for editing the user content on a display of the terminal; And
Changing an exposure order of a plurality of source contents included in the user content based on a selection input to the editing interface,
A method for content creation of a terminal.
Wherein the source content comprises:
At least one of a plurality of contents arranged according to a predetermined criterion,
A method for content creation of a terminal.
Wherein the predetermined criterion includes at least one of an anniversary and a date on which a predetermined number or more of the plurality of contents are generated,
A method for content creation of a terminal.
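The predetermined criterion in this claim — an anniversary, or a date on which at least a threshold number of contents were generated — can be illustrated as a date-selection sketch; the function name, data shapes, and threshold parameter are assumptions for illustration:

```python
from collections import Counter

def candidate_dates(content_dates, anniversaries, min_count):
    """Select the dates meeting the criterion: the date is an
    anniversary, or a predetermined number (`min_count`) or more
    contents were generated on that date."""
    per_day = Counter(content_dates)  # contents generated per date
    return sorted(d for d in per_day
                  if d in anniversaries or per_day[d] >= min_count)
```

Contents generated on the selected dates would then serve as source-content candidates for the user content.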
display;
A memory for storing the source content and the music content; And
A processor for generating user content based on the source content and the music content,
Wherein the processor is configured to:
Associate the source content with a music section that is set based on a time stamp corresponding to a time point at which a melody characteristic of the music content changes, apply a predefined screen transition to the source content, and generate user content based on the application of the screen transition,
Terminal.
Wherein the processor is configured to:
Determine a starting point of the screen transition using an end point of the music section and time length information of the screen transition,
Terminal.
Wherein the processor is configured to:
Apply the screen transition so that the next source content is exposed at the end point of the music section,
Terminal.
Wherein the processor is configured to:
Compare time length information of the screen transition with a time corresponding to a next music section, and apply the screen transition so that the source content is exposed in the next music section when the time length information of the screen transition is longer than the time corresponding to the next music section,
Terminal.
Wherein the processor is configured to:
Estimate a beat rate of the music content based on a frequency characteristic of the music content when there is a selection input for the music content, and obtain the time stamp based on the estimated beat rate,
Terminal.
Wherein the processor is configured to:
Group a plurality of source contents when the plurality of source contents are selected,
Terminal.
Wherein the processor is configured to:
Check whether the source content includes face information, and insert additional content into the source content corresponding to the face information when the face information is included,
Terminal.
Wherein the processor is configured to:
Expose an editing interface for editing the user content on the display, and change an exposure order of a plurality of source contents included in the user content based on a selection input to the editing interface,
Terminal.
Wherein the source content comprises:
At least one of a plurality of contents arranged according to a predetermined criterion,
Terminal.
Wherein the predetermined criterion includes at least one of an anniversary and a date on which a predetermined number or more of the plurality of contents are generated,
Terminal.
A content production application stored in a memory of a terminal and executed by a processor of the terminal, the application performing:
Associating source content with a music section that is set based on a time stamp corresponding to a time point at which a melody characteristic of the music content changes;
Applying a predefined screen transition to the source content; And
Generating user content based on the application of the screen transition
Content creation application.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150152059A KR101970383B1 (en) | 2015-10-30 | 2015-10-30 | Device and method for contents production |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150152059A KR101970383B1 (en) | 2015-10-30 | 2015-10-30 | Device and method for contents production |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20170057473A true KR20170057473A (en) | 2017-05-25 |
KR101970383B1 KR101970383B1 (en) | 2019-04-18 |
Family
ID=59050705
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150152059A KR101970383B1 (en) | 2015-10-30 | 2015-10-30 | Device and method for contents production |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101970383B1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102521903B1 (en) | 2022-11-28 | 2023-04-17 | (주)비디오몬스터 | System and method for automatic video editing using auto labeling |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102308369B1 (en) * | 2020-12-18 | 2021-10-06 | 주식회사 스파크엑스 (SPARKX Co.,Ltd.) | Automatic video editing system through artificial intelligence sound source analysis |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100613859B1 (en) * | 2004-06-07 | 2006-08-21 | (주)잉카엔트웍스 | Apparatus and method for editing and providing multimedia data for portable device |
KR100860510B1 (en) * | 2007-04-23 | 2008-09-26 | 엠엠비 주식회사 | Method for creating slide show having visual effect in mobile device |
KR20090001592A (en) * | 2007-04-30 | 2009-01-09 | 삼성전자주식회사 | Image production system, apparatus and method using user data of mobile terminal |
KR20100051997A (en) * | 2008-11-10 | 2010-05-19 | 김진수 | Dynamic image producing system and the method |
KR101315970B1 (en) * | 2012-05-23 | 2013-10-08 | (주)엔써즈 | Apparatus and method for recognizing content using audio signal |
JP2015099603A (en) * | 2010-02-24 | 2015-05-28 | 大日本印刷株式会社 | Image display system |
- 2015-10-30: KR KR1020150152059A, patent KR101970383B1, active IP Right Grant
Also Published As
Publication number | Publication date |
---|---|
KR101970383B1 (en) | 2019-04-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104380380B (en) | Video editing method and digital device therefor | |
JP4811433B2 (en) | Image selection apparatus, image selection method, and program | |
TWI579838B (en) | Automatic generation of compilation videos | |
US8319086B1 (en) | Video editing matched to musical beats | |
CN101300567B (en) | Method for media sharing and authoring on the web | |
US11438510B2 (en) | System and method for editing video contents automatically technical field | |
TW201545120A (en) | Automatic generation of compilation videos | |
JP2017503394A (en) | VIDEO PROCESSING METHOD, VIDEO PROCESSING DEVICE, AND DISPLAY DEVICE | |
JP2007533271A (en) | Audio-visual work and corresponding text editing system for television news | |
WO2016029745A1 (en) | Method and device for generating video slide | |
JP2005065244A (en) | Method and apparatus for reviewing video | |
EP3046107B1 (en) | Generating and display of highlight video associated with source contents | |
WO2018076174A1 (en) | Multimedia editing method and device, and smart terminal | |
WO2012160771A1 (en) | Information processing device, information processing method, program, storage medium and integrated circuit | |
JP5017424B2 (en) | Electronic apparatus and image processing method | |
JP4892074B2 (en) | Electronic device, image output method and program | |
KR101970383B1 (en) | Device and method for contents production | |
US20120251081A1 (en) | Image editing device, image editing method, and program | |
KR101850285B1 (en) | Device and method for generating video script, and video producing system and method based on video script, computer program media | |
JP2007267356A (en) | File management program, thumb nail image display method, and moving image reproduction device | |
JP2008084021A (en) | Animation scenario generation method, program and device | |
JP5225330B2 (en) | Electronic apparatus and image processing method | |
JP2014016670A (en) | Image processing device and image processing program | |
US9122923B2 (en) | Image generation apparatus and control method | |
WO2015131651A1 (en) | Slide generation method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
AMND | Amendment | ||
E601 | Decision to refuse application | ||
AMND | Amendment | ||
J201 | Request for trial against refusal decision | ||
J301 | Trial decision |
Free format text: TRIAL NUMBER: 2017101002486; TRIAL DECISION FOR APPEAL AGAINST DECISION TO DECLINE REFUSAL REQUESTED 20170522 Effective date: 20190123 |
S901 | Examination by remand of revocation | ||
GRNO | Decision to grant (after opposition) |