KR20130093186A - Apparatus for making a moving image with interactive character - Google Patents
- Publication number
- KR20130093186A (application KR1020120014555A / KR20120014555A)
- Authority
- KR
- South Korea
- Prior art keywords
- information
- motion
- character
- icon
- input
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/85403—Content authoring by describing the content as an MPEG-21 Digital Item
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/144—Movement detection
- H04N5/145—Movement estimation
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Computer Security & Cryptography (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
A video production apparatus using an interactive character according to the present invention includes: a resource storage unit that stores resource information including character information, motion information defining a predetermined movement for each individual motion of the character, and motion icon information corresponding to the motion information; a screen processor that outputs the character, motion icons, and tool icons for making an image to a screen display unit; a preprocessor that, when a dubbing motion icon is selected and voice data is input through a microphone, controls the input voice data to be stored in association with that dubbing motion icon; and an image generator that, when an image production signal is input from the user, sequentially generates sync information consisting of the times at which each dubbing motion icon is individually selected and the times at which narration is input through the microphone, and generates image information in which the character's individual motions and comments and the narration information are sequentially linked by the sync information. According to the present invention, the user can create a video directly and easily, and the intuitive experience provided to the user further inspires interest in video production and supports highly creative activities such as producing a fairy tale.
Description
The present invention relates to an apparatus for producing a video, and more particularly, to a video production apparatus using an interactive character that provides an intuitive experience by allowing motions, dialogue, narration, and the like to be applied to a character effectively and variably through a user-oriented interface environment on a portable terminal such as a smartphone.
In recent years, activities in which individuals produce videos, such as UCC, and upload them through spaces such as SNS have become widespread. Such video production is usually performed with a personal recording device such as a camcorder, or with a device and application software capable of capturing images, and requires specialized knowledge of the device or software, which limits its universal use regardless of gender or age.
In addition, with the spread of SNS, personal mobile terminals have developed into smartphones, and it has become common to use them to share digital photos or videos taken by the user or to send them to others.
Accordingly, many tools and techniques for making or editing videos have been disclosed, but most of them merely delete, extract, or edit existing video data to the extent necessary; moreover, because they are implemented as complicated programs requiring considerable resources, they present many problems in user convenience and ease of use.
Furthermore, conventional methods merely let the user purchase emoticons and characters already produced by an item provider and transmit them to others through MMS, which is far removed from genuine creative video activity.
Therefore, there is a great need for a method that allows a user to easily and effectively produce a video directly, based on the user's intuitive perception, and for a device in which such a method is implemented.
The present invention was devised to solve the above problems and needs. It is an object of the present invention to provide a video production apparatus that allows a user to effectively create a video using a character, by making characters, character motions, comments, and storytelling easy to apply to a video through an authoring environment that the user can recognize intuitively.
Other objects and advantages of the invention are described below and will be appreciated through the embodiments of the invention. Further, the objects and advantages of the present invention can be realized by the features recited in the claims and combinations thereof.
An apparatus for producing a video using the interactive character of the present invention for achieving the above object includes: a resource storage unit that stores resource information including character information, motion information defining a predetermined movement for each individual motion of the character, and motion icon information corresponding to the motion information; a screen processor that outputs the character, motion icons, and tool icons for making an image to a screen display unit; a preprocessor that, when a dubbing motion icon is selected and voice data is input through a microphone, controls the input voice data to be stored in association with that dubbing motion icon; and an image generation unit that, when an image production signal is input from the user, sequentially generates sync information consisting of the times at which each dubbing motion icon is individually selected and the times at which narration is input through the microphone, and generates image information in which the character's individual motions and comments and the narration information are sequentially linked by the sync information.
The present invention may further include an image reproducing unit that outputs the image information according to the sync information when a reproduction signal is input from a user.
In addition, the present invention may further include a conversion unit that converts comment information or narration information input from the user through the microphone into text information, and the image generation unit may be configured to output the comment information or narration information converted into text information around the character at the corresponding time using the sync information.
Preferably, the preprocessor of the present invention controls the input voice data to be stored in association with the dubbing motion icon only when voice data of a reference level or higher is detected.
The video production apparatus using the interactive character according to the present invention is configured so that the user can apply motion information, which predefines various movements of the character, to the character through a simple interface environment of icon selection, allowing the user to create a video directly.
According to the present invention, the user can easily create a video directly, and the intuitive experience provided in relation to video production further inspires interest in it; since the user can create and modify the dialogue and comments for each motion of the character at any time, the user's authoring activity can be implemented more effectively and simply.
According to the present invention, through such an authoring environment the user can directly generate not only dialogue matching the character's movements but also narrations such as storytelling, providing an intuitive user experience that goes beyond simply directing motions and enabling creative activities such as directly creating simple oral fairy tales.
The following drawings attached to this specification illustrate preferred embodiments of the present invention and, together with the detailed description, serve to further the understanding of the technical spirit of the present invention; the present invention should not be construed as limited to the matters depicted in these drawings.
FIG. 1 is a block diagram showing a configuration according to a preferred embodiment of the present invention;
FIG. 2 is a flowchart illustrating the processing of a video production method according to a preferred embodiment of the present invention;
FIG. 3 is a view illustrating video production and sync information generation according to a preferred embodiment of the present invention;
FIGS. 4 and 5 are diagrams illustrating an interface environment for video production according to a preferred embodiment of the present invention.
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. Prior to this, the terms and words used in the present specification and claims should not be construed as limited to their ordinary or dictionary meanings, but should be interpreted according to meanings and concepts consistent with the technical idea of the present invention, based on the principle that an inventor may appropriately define the concepts of terms in order to best describe the invention.
Therefore, the embodiments described in this specification and the configurations shown in the drawings are merely the most preferred embodiments of the present invention and do not represent all of its technical ideas; it should be understood that various equivalents and modifications may replace them.
FIG. 1 is a block diagram showing the configuration of a video production apparatus (hereinafter, "production apparatus") 100 using the interactive character of the present invention, and FIG. 2 is a flowchart showing the processing by which the production apparatus of the present invention produces a video.
As shown in FIG. 1, the production apparatus 100 of the present invention includes a resource storage unit 110, a screen processing unit 120, a screen display unit 130, an audio input unit 140, a conversion unit 145, a preprocessing unit 150, an image generation unit 160, an image playback unit 170, and a transmission unit 180.
First, the components of the production apparatus 100 of the present invention shown in FIG. 1 should be understood as functional divisions rather than physically distinct elements.
That is, each component corresponds to a logical element that performs a function in realizing the technical idea of the present invention; even if components are integrated or separated, they fall within the scope of the present invention as long as the functions of the invention are achieved, and components performing the same or similar functions should be construed as within that scope regardless of whether their names match.
First, the present invention provides a plurality of character information selectable by the user, as well as motion information defined so that predetermined two-dimensional or three-dimensional movements are performed for each character's behaviors and motions. Such motion information may be classified into various categories according to the emotional state of the character, and may be structured so that the action motions of each category are further subdivided.
The subdivided motion information may be defined as a two- or three-dimensional object structure within the character data so that natural motion is implemented according to, for example, 45 joint movements. Such motion information is exposed through an interface environment that allows a user to easily apply it to a character via motion icons symbolizing each motion.
Through this environment, when a user selects a character and then selects one or more of the various motion icons, which are categorized in a folder or tree structure, the character performs the action corresponding to each selected motion icon.
In addition, data that predefines the selection of backgrounds, props, and the like, and the relative positional relationships between each character, prop, and background, may of course also be structured.
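The categorized resource structure described above (characters, motions grouped by emotional category, and icons bound to motions) can be sketched roughly as follows. This is an illustrative model only; every name in it is an assumption for the sketch, not part of the disclosed apparatus:

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative sketch of the resource information held by the resource
# storage unit: motions grouped by category, icons bound to motions.

@dataclass
class MotionInfo:
    motion_id: str
    category: str                      # e.g. an emotional state such as "greeting"
    joint_keyframes: dict = field(default_factory=dict)  # joint -> keyframe list

@dataclass
class MotionIcon:
    icon_id: str
    motion: MotionInfo
    dubbed_audio: Optional[bytes] = None  # filled in once the user dubs a comment

@dataclass
class ResourceStorage:
    characters: dict = field(default_factory=dict)
    icons: dict = field(default_factory=dict)

    def icons_by_category(self, category: str) -> list:
        # Folder/tree-style browsing: all icons whose motion falls in a category.
        return [i for i in self.icons.values() if i.motion.category == category]

store = ResourceStorage()
store.icons["face"] = MotionIcon("face", MotionInfo("m_face", "greeting"))
print([i.icon_id for i in store.icons_by_category("greeting")])  # ['face']
```

Selecting an icon from such a structure would then drive the character's predefined joint movements; the 45-joint detail mentioned above would live in `joint_keyframes`.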
To this end, the resource storage unit 110 of the present invention stores resource information including the character information, the motion information, the motion icon information corresponding to the motion information, and tool icon information for making an image.
The character information, motion information, motion icon information, or tool icon information may be modified or updated, and the apparatus may be configured to load additional information and data about characters, motions, tool icons, and the like by accessing a server on which a service according to the present invention operates.
The screen processing unit 120 of the present invention outputs the character, the motion icons, and the tool icons for making an image to the screen display unit 130. In this way, the user is provided with an interface environment in which a character can be selected and its motions applied simply by selecting icons. As illustrated in FIG. 4, the character, the motion icons, and the tool icons are output together on the screen display unit 130.
To this end, when a dubbing motion icon is selected and voice data is input through a microphone, the preprocessing unit 150 of the present invention controls the input voice data to be stored in association with the selected dubbing motion icon.
In this processing, after the comment generation signal is input (S220) through a touch of the dubbing motion icon, the voice data subsequently input through the microphone is stored in association with that dubbing motion icon.
In this regard, it is desirable that, after the dubbing motion icon is selected, the subsequent processing be performed automatically when voice data is detected, reducing the user's manual manipulation and enabling more efficient and intuitive comment creation. In this case, it is even more preferable that the subsequent processing be performed automatically only when the detected voice data is of a reference level or higher, so that simple device operation sounds and noises are effectively filtered out.
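The reference-level check above can be sketched as a simple RMS gate over the incoming samples; the threshold value and all function names here are assumptions made for illustration:

```python
# Sketch of the "reference level" check: only associate recorded audio with
# the dubbing motion icon when its level exceeds a threshold, filtering out
# button-press sounds and background noise.

REFERENCE_LEVEL = 500  # assumed amplitude threshold

def rms(samples: list) -> float:
    """Root-mean-square level of raw PCM samples."""
    if not samples:
        return 0.0
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def store_dub(icon: dict, samples: list) -> bool:
    """Attach the recording to the selected dubbing motion icon only if loud enough."""
    if rms(samples) < REFERENCE_LEVEL:
        return False          # treat as noise; keep waiting for real speech
    icon["dubbed_audio"] = samples
    return True

icon = {"icon_id": "face", "dubbed_audio": None}
print(store_dub(icon, [10, -12, 8]))        # quiet click -> False
print(store_dub(icon, [900, -850, 1000]))   # speech-level input -> True
```

A real device would run this gate continuously on the microphone stream, so the user never has to press a separate "record" control after choosing the icon.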
In the following description, a motion icon associated with a comment (dialogue) is referred to as a dubbing motion icon to distinguish it from a general motion icon.
Various interface means 30 related to the use of a smartphone, including those for video production, may of course be configured to be output to the screen display unit 130.
With the configuration of the present invention described above, when the character performs a specific motion, the voice stored directly by the user is fused to that motion, so that the character can perform the motion interactively together with the comment at any time. The information associated with a motion can of course be updated or modified.
When the image production signal is input from the user through selection of a tool icon, the image generation unit 160 of the present invention begins generating sync information as described below.
When the user selects a specific motion icon, the character performs the corresponding operation; when that operation is completed, the character performs the operations corresponding to subsequently input motion icons in order. Until the signal completing image generation is input, the selected motions, comments, and narrations continue to be accumulated in this way.
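The sequential behavior just described, where selections queue up and each motion starts only after the previous one completes, can be sketched as follows (illustrative names; the real device would be driven by motion-completion events rather than a simple loop):

```python
from collections import deque

# Sketch of sequential motion execution: icon selections are queued while the
# current motion plays, and each queued motion starts only after the previous
# one completes.

class MotionQueue:
    def __init__(self):
        self.pending = deque()
        self.performed = []

    def select(self, icon_id: str):
        """Called whenever the user touches a motion icon."""
        self.pending.append(icon_id)

    def run_until_done(self):
        # In the device this would be driven by motion-complete callbacks;
        # here we just drain the queue in selection order.
        while self.pending:
            self.performed.append(self.pending.popleft())

q = MotionQueue()
for icon in ["face", "hand", "hug"]:
    q.select(icon)
q.run_until_done()
print(q.performed)  # ['face', 'hand', 'hug']
```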
Hereinafter, more preferable processing for image generation will be described with reference to FIG. 3.
FIG. 3 is a diagram illustrating the process in which the motions, comments (dialogue), narrations, and the like of a character are connected in sequence to produce a video.
As described above, the image generation unit 160 of the present invention generates time information for the point at which each motion icon or dubbing motion icon is individually selected and for the points at which narration input through the microphone starts and ends. At this time, these time points, arranged in sequence, constitute the sync information. To this end, the image generation unit 160 records each selection and input event as it occurs after the image production signal is input.
When the sync information is generated in this way, the operations and audio in the video can be clearly ordered along the time axis. Since the motion and dialogue (comment) of the character corresponding to each motion icon or dubbing motion icon are already stored in association with each icon, and the narration information is also stored, a single video can be generated by controlling the corresponding information or data to be output in the order of the sync information.
In this case, a single video may be generated by fusing the sync information with the information associated with it, thereby producing a simpler, lower-capacity video.
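A minimal sketch of this sync-information approach: every icon selection and narration boundary becomes a time-stamped event, and the ordered event list is all that must be stored or transmitted, since the heavy motion and audio data are already held per icon. Field and function names are assumptions for illustration:

```python
# Sketch of sync-information generation: each icon selection or narration
# start/end is time-stamped, and playback simply replays the events in time
# order against the stored resources.

def make_sync_event(t: float, kind: str, ref: str) -> dict:
    # kind: "motion", "dub_motion", "narration_start", "narration_end"
    return {"t": t, "kind": kind, "ref": ref}

def build_sync_info(events: list) -> list:
    """Sync info is just the event list ordered by time; fusing it with the
    already-stored motion/comment/narration data yields a low-capacity video."""
    return sorted(events, key=lambda e: e["t"])

events = [
    make_sync_event(3.0, "dub_motion", "face"),
    make_sync_event(1.0, "narration_start", "nar_1"),
    make_sync_event(2.0, "narration_end", "nar_1"),
]
sync_info = build_sync_info(events)
print([e["ref"] for e in sync_info])  # ['nar_1', 'nar_1', 'face']
```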
For example, after the video production signal is input from the user, when the #1 narration is input, time information for the start and end of the #1 narration is generated and the input narration data is stored. Thereafter, when the user selects the #1 dubbing motion icon, the character performs the #1 operation corresponding to that icon, and at the same time the #1 comment previously designated and stored by the user is output. The image generation unit 160 also generates time information for this selection.
After that, when the #2 motion icon is selected, the character performs the corresponding #2 motion. After completion of the #2 motion, when the #3 dubbing motion icon is selected, the character performs the #3 motion together with the #3 comment. The image generation unit 160 generates time information for each of these selections as well.
In this process, as shown in FIG. 5, the video may be generated while pause, stop, narration input and end, fast-forward, rewind, and the like are performed through the tool icons.
For example, assume the user has dubbed "I am happy to see you again" onto the "face" motion icon, which is defined to perform a [face raising] operation, and saved it.
In addition, assume that the "hug" motion icon has been dubbed with "I love you" and stored, while the "hand" motion icon has not been dubbed.
After the user inputs the video production signal, the user inputs a narration through the narration input and end buttons, and the image generation unit 160 generates the corresponding time information and stores the narration data.
Then, when the user selects the "face" dubbing motion icon, the prince and/or princess character performs the [face raising] motion from t3 to t5 while the stored comment "I am happy to see you again" is output.
In the middle of this processing, from t4, the user can again input Vivaldi's Four Seasons as a sound effect until t6 using the narration input and end buttons. After that, when the user selects the "hand" motion icon, the predefined [hand holding] motion is performed from t7 to t8.
After that, when the user selects the "hug" dubbing motion icon, the characters such as the prince perform the [hugging] motion through predetermined motion vectors and the like, while the stored comment "I love you" is output. Thereafter, the series of processing ends when the video production completion signal is input from the user.
The image generation unit 160 generates image information in which the character's individual motions and comments and the narration information are sequentially linked by the sync information generated as above.
In addition, in the present invention, when a video playback signal is input from the user, the image playback unit 170 outputs the image information according to the sync information.
In the present invention, a comment or narration input from the user through the microphone during the dubbing described above may be converted into text information by the conversion unit 145, and the converted text may be output around the character at the corresponding time using the sync information.
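The text-output step can be sketched as below; `transcribe` is a stand-in stub for the conversion unit's speech-to-text, since no particular recognizer is specified in the disclosure, and all other names are likewise assumptions:

```python
# Sketch of the text-overlay step: a (hypothetical) speech-to-text function
# turns each dubbed comment into text, and the sync information decides when
# it is shown near the character.

def transcribe(audio_ref: str) -> str:
    # Placeholder for the conversion unit's speech-to-text; a real device
    # would call an actual recognizer here.
    fake_results = {"ment_face": "I am happy to see you again"}
    return fake_results.get(audio_ref, "")

def text_overlays(sync_info: list) -> list:
    """(time, text) pairs to draw around the character during playback."""
    return [(e["t"], transcribe(e["audio"]))
            for e in sync_info if e["kind"] == "dub_motion"]

sync_info = [{"t": 3.0, "kind": "dub_motion", "audio": "ment_face"},
             {"t": 7.0, "kind": "motion", "audio": None}]
print(text_overlays(sync_info))  # [(3.0, 'I am happy to see you again')]
```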
In addition, the present invention can further induce interest in video production and can be configured so that the user himself becomes the main character of the oral fairy tale. To this end, the apparatus may further include a component that synthesizes an image of the user's face onto the character, using the face region information of the character and the image position information corresponding to the face region.
In addition, when the video is transmitted to an external server or another client, video information generated by a standard video codec may be transmitted; however, as described above, it is more preferable to transmit the generated sync information (including the comment and narration information), which significantly lowers the transmission capacity. In this case, the character information, motion information, motion icon information, and the like must already be stored in the receiving client in order for the video to be played back there from the sync information alone.
When the sync information is transmitted to a receiving client, the character, motion, and motion icon information it references are checked against the storage medium of the receiving client; if the necessary information or data is not present in the receiving client, an interface window may be popped up to induce the data to be downloaded through a connection to a specific server.
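The receiving-side check can be sketched as a simple set difference between the resources the sync information references and those stored locally; all names here are illustrative assumptions:

```python
# Sketch of the receiving-client check: before playing transmitted sync
# information, verify that every referenced character/motion/icon resource
# is already stored locally; otherwise prompt the user to download the
# missing ones from the service server.

def missing_resources(sync_info: list, local_store: set) -> set:
    needed = {e["ref"] for e in sync_info}
    return needed - local_store

def can_play(sync_info: list, local_store: set) -> bool:
    missing = missing_resources(sync_info, local_store)
    if missing:
        # In the device this would pop up an interface window offering to
        # fetch the missing data from the server.
        print("download needed:", sorted(missing))
        return False
    return True

sync_info = [{"ref": "char_prince"}, {"ref": "icon_face"}]
print(can_play(sync_info, {"char_prince"}))               # False
print(can_play(sync_info, {"char_prince", "icon_face"}))  # True
```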
When an image transmission request signal directed to a receiving client that holds all of the character, motion, and motion icon information is input, the transmission unit 180 of the present invention transmits only the generated sync information to the receiving client.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to those details, and various changes and modifications within the scope of the appended claims will be apparent to those skilled in the art.
It is to be understood that both the foregoing general description and the detailed description are exemplary and explanatory, and are intended to provide further explanation of the invention as claimed; various modifications may be made by those of ordinary skill in the art.
100: video production device 110: resource storage unit
120: screen processing unit 130: screen display unit
140: audio input unit 145: conversion unit
150: preprocessing unit 160: image generation unit
170: image playback unit 180: transmission unit
Claims (4)
A resource storage unit for storing resource information including character information, motion information defined to perform a predetermined movement for each individual motion of the character, and motion icon information corresponding to the motion information;
A screen processor for outputting the character, a motion icon, and a tool icon for making an image to a screen display unit;
A preprocessor configured to control input voice data to be stored in association with a dubbing motion icon when the dubbing motion icon is selected and the voice data is input through a microphone; And
An image generator which, when an image production signal is input from a user, sequentially generates sync information consisting of time information at which the at least one dubbing motion icon is individually selected and time information at which narration is input through the microphone, and generates image information in which the information about the individual motions and comments of the character and the narration information are sequentially linked by the sync information.
The apparatus of claim 1, further comprising a video playback unit for outputting the video information according to the sync information when a playback signal is input from a user.
The apparatus of claim 1, further comprising a conversion unit for converting comment information or narration information input from the user through the microphone into text information,
wherein the image generation unit outputs the comment information or narration information converted into the text information around the character at the corresponding time using the sync information.
The apparatus of claim 1, wherein the preprocessor controls the input voice data to be stored in association with the dubbing motion icon when voice data of a reference level or higher is detected.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020120014555A KR20130093186A (en) | 2012-02-14 | 2012-02-14 | Apparatus for making a moving image with interactive character |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020120014555A KR20130093186A (en) | 2012-02-14 | 2012-02-14 | Apparatus for making a moving image with interactive character |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20130093186A true KR20130093186A (en) | 2013-08-22 |
Family
ID=49217532
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020120014555A KR20130093186A (en) | 2012-02-14 | 2012-02-14 | Apparatus for making a moving image with interactive character |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20130093186A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102180576B1 (en) * | 2020-05-18 | 2020-11-18 | 주식회사 일루니 | Method and apparatus for providing re-programmed interactive content based on user playing |
KR102213618B1 (en) * | 2020-09-03 | 2021-02-09 | 주식회사 웨인힐스벤처스 | Multimedia automatic generation system for automatically generating multimedia suitable for user's voice data by using artificial intelligence |
KR102263659B1 (en) * | 2019-12-16 | 2021-06-09 | 민광윤 | Web server for generating mommy's fairy tale using story contents application |
CN114286155A (en) * | 2021-12-07 | 2022-04-05 | 咪咕音乐有限公司 | Picture element modification method, device, equipment and storage medium based on barrage |
- 2012-02-14: KR KR1020120014555A patent/KR20130093186A/en — not_active (Application Discontinuation)
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102263659B1 (en) * | 2019-12-16 | 2021-06-09 | 민광윤 | Web server for generating mommy's fairy tale using story contents application |
KR102180576B1 (en) * | 2020-05-18 | 2020-11-18 | 주식회사 일루니 | Method and apparatus for providing re-programmed interactive content based on user playing |
WO2021235636A1 (en) * | 2020-05-18 | 2021-11-25 | 주식회사 일루니 | Method and apparatus for providing interactive content reprogrammed on basis of playing of user |
US11402975B2 (en) | 2020-05-18 | 2022-08-02 | Illuni Inc. | Apparatus and method for providing interactive content |
KR102213618B1 (en) * | 2020-09-03 | 2021-02-09 | 주식회사 웨인힐스벤처스 | Multimedia automatic generation system for automatically generating multimedia suitable for user's voice data by using artificial intelligence |
WO2022050632A1 (en) * | 2020-09-03 | 2022-03-10 | 주식회사 웨인힐스벤처스 | Multimedia automatic generation system for automatically generating multimedia appropriate for user voice data by using artificial intelligence |
CN114286155A (en) * | 2021-12-07 | 2022-04-05 | 咪咕音乐有限公司 | Picture element modification method, device, equipment and storage medium based on barrage |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102306624B1 (en) | Persistent companion device configuration and deployment platform | |
US11148296B2 (en) | Engaging in human-based social interaction for performing tasks using a persistent companion device | |
US20170206064A1 (en) | Persistent companion device configuration and deployment platform | |
AU2019262848B2 (en) | Interactive application adapted for use by multiple users via a distributed computer-based system | |
US9984724B2 (en) | System, apparatus and method for formatting a manuscript automatically | |
CN110400251A (en) | Method for processing video frequency, device, terminal device and storage medium | |
US20230092103A1 (en) | Content linking for artificial reality environments | |
WO2016011159A9 (en) | Apparatus and methods for providing a persistent companion device | |
US20160045834A1 (en) | Overlay of avatar onto live environment for recording a video | |
CN110782900A (en) | Collaborative AI storytelling | |
US20140028780A1 (en) | Producing content to provide a conversational video experience | |
CN106575361A (en) | Method of providing visual sound image and electronic device implementing the same | |
US10812430B2 (en) | Method and system for creating a mercemoji | |
CN103430217A (en) | Input support device, input support method, and recording medium | |
KR20170057736A (en) | Virtual-Reality EDUCATIONAL CONTENT PRODUCTION SYSTEM AND METHOD OF CONTRLLING THE SAME | |
JP2016038601A (en) | Cg character interaction device and cg character interaction program | |
JP2018078402A (en) | Content production device, and content production system with sound | |
KR20130093186A (en) | Apparatus for making a moving image with interactive character | |
KR20130094058A (en) | Communication system, apparatus and computer-readable storage medium | |
EP4252195A1 (en) | Real world beacons indicating virtual locations | |
US20180276185A1 (en) | System, apparatus and method for formatting a manuscript automatically | |
WO2018183812A1 (en) | Persistent companion device configuration and deployment platform | |
CN110989912A (en) | Entertainment file generation method, device, medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E601 | Decision to refuse application |