KR20130093186A - Apparatus for making a moving image with interactive character - Google Patents

Apparatus for making a moving image with interactive character

Info

Publication number
KR20130093186A
Authority
KR
South Korea
Prior art keywords
information
motion
character
icon
input
Prior art date
Application number
KR1020120014555A
Other languages
Korean (ko)
Inventor
이성도
Original Assignee
주식회사 와이즈게코
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 와이즈게코
Priority to KR1020120014555A
Publication of KR20130093186A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85: Assembly of content; Generation of multimedia applications
    • H04N 21/854: Content authoring
    • H04N 21/85403: Content authoring by describing the content as an MPEG-21 Digital Item
    • H04N 5/00: Details of television systems
    • H04N 5/14: Picture signal circuitry for video frequency region
    • H04N 5/144: Movement detection
    • H04N 5/145: Movement estimation

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

In a video production apparatus using an interactive character according to the present invention, a resource storage unit stores resource information including character information, motion information defined so that a predetermined movement is performed for each individual motion of the character, and motion icon information corresponding to the motion information; a screen processor outputs the character, the motion icons, and tool icons for video production to a screen display unit; a preprocessor, when a dubbing motion icon is selected and voice data is input through a microphone, stores the input voice data in association with that dubbing motion icon; and an image generator, when a video production signal is input by the user, sequentially generates sync information consisting of the times at which each dubbing motion icon is individually selected and the times at which narration is input through the microphone, and generates video information in which the character's individual motions and dialogue for each dubbing motion icon and the narration information are sequentially linked by the sync information. According to the present invention, the user can create a video directly and easily, and the intuitive experience provided to the user further stimulates interest in video production and yields a highly creative effect in applications such as producing oral fairy tales.

Description

Apparatus for making a moving image with interactive character

The present invention relates to an apparatus for producing video, and more specifically to a video production apparatus using an interactive character that lets a user apply motion, dialogue, narration, and the like to a character effectively and flexibly through a user-oriented interface on a portable terminal such as a smartphone, providing the user with an intuitive authoring experience.

In recent years, activities such as individuals making videos (user-created content, UCC) and uploading them through spaces such as SNS have become widespread. Such video production is usually performed with a personal video recording device such as a camcorder, or with a device and driving program capable of capturing images, and requires professional knowledge of the device or application software, so there is a limit to its universal use regardless of gender or age.

In addition, with the spread of SNS, personal mobile terminals have developed into smartphones, and it has become common to use them to take digital photos or videos and to share them or send them to others.

Accordingly, many tools and techniques for making or editing video by the user have been disclosed, but most of them merely delete, extract, or edit existing video data to the extent necessary, and because they are implemented as complicated programs requiring considerable resources, they leave much to be desired in user convenience and ease of use.

In addition, conventional methods remain limited to purchasing emoticons and characters already produced by item providers and transmitting them to others through MMS, which is far from genuine creative video activity.

Therefore, when a user produces a video directly, there is a great need for a method of producing it easily and effectively based on the user's intuitive perception, and for a device in which such a method is implemented.

The present invention was devised to solve the above problems and needs. It is an object of the present invention to provide a video production apparatus that allows a user to effectively create videos using characters, by making characters, their motions, dialogue, and storytelling easy to apply within an authoring environment built around an interactive character that the user can grasp intuitively.

Other objects and advantages of the invention are described below and will be appreciated through the embodiments of the invention. Further, the objects and advantages of the present invention can be realized by the elements shown in the claims and combinations thereof.

A video production apparatus using the interactive character of the present invention for achieving the above object includes: a resource storage unit that stores resource information including character information, motion information defined so that a predetermined movement is performed for each individual motion of the character, and motion icon information corresponding to the motion information; a screen processor that outputs the character, the motion icons, and tool icons for video production to a screen display unit; a preprocessor that, when a dubbing motion icon is selected and voice data is input through a microphone, stores the input voice data in association with that dubbing motion icon; and an image generator that, when a video production signal is input by the user, sequentially generates sync information consisting of the times at which each dubbing motion icon is individually selected and the times at which narration is input through the microphone, and generates video information in which the character's individual motions and dialogue and the narration information are sequentially linked by the sync information.

The present invention may further include a video playback unit that outputs the video information according to the sync information when a playback signal is input by the user.

In addition, the present invention may further include a conversion unit that converts dialogue or narration information input by the user through the microphone into text information, and the image generator may be configured to output the dialogue or narration converted into text around the character at the corresponding time using the sync information.

Preferably, the preprocessor of the present invention stores the input voice data in association with the dubbing motion icon only when voice data of a reference level or more is detected.

The video production apparatus using an interactive character according to the present invention is configured so that the user can apply motion information, which predefines the character's various movements, to the character through a simple interface in which the user merely selects icons, so that anyone can create a video easily.

According to the present invention, the user can not only create a video directly and easily, but the intuitive experience provided during production further stimulates interest in video making; since the user can create and modify the dialogue corresponding to each of the character's motions at any time, the user's authoring activity is implemented more effectively and simply.

According to the present invention, through this authoring environment the user can directly generate not only dialogue matched to the character's movements but also narration such as storytelling, which goes beyond simply directing motion: it provides an intuitive user experience and enables creative activities, such as producing simple oral fairy tales, to be implemented more effectively.

The following drawings attached to this specification illustrate preferred embodiments of the present invention and, together with the detailed description, serve to further the understanding of the technical spirit of the invention; the present invention should not be construed as limited to the matters depicted in these drawings.
FIG. 1 is a block diagram showing a configuration according to a preferred embodiment of the present invention;
FIG. 2 is a flowchart illustrating the processing of a video production method according to a preferred embodiment of the present invention;
FIG. 3 is a diagram illustrating video production and sync information generation according to a preferred embodiment of the present invention; and
FIGS. 4 and 5 are diagrams illustrating the interface environment of video production according to a preferred embodiment of the present invention.

Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. Before that, terms and words used in this specification and claims should not be construed as limited to their ordinary or dictionary meanings; on the principle that an inventor may appropriately define terms to describe his or her invention in the best way, they should be interpreted with meanings and concepts consistent with the technical idea of the present invention.

Therefore, the embodiments described in this specification and the configurations shown in the drawings are merely the most preferred embodiments of the present invention and do not represent all of its technical ideas, and it is to be understood that various equivalents and modifications capable of replacing them exist.

FIG. 1 is a block diagram showing the configuration of a video production apparatus (hereinafter, the production apparatus) 100 utilizing the interactive character of the present invention, and FIG. 2 is a flowchart showing the processing by which the production apparatus of the present invention produces a video.

As shown in FIG. 1, the production apparatus 100 of the present invention includes a resource storage unit 110, a screen processing unit 120, a screen display unit 130, a preprocessor 150, and an image generator 160. Depending on the embodiment, the production apparatus 100 of the present invention may additionally include, in combination, an audio input unit 140, a conversion unit 145, a video playback unit 170, an image conversion unit 180, and a transmission unit 190.

First, the production apparatus 100 of the present invention described above and each of its components should be understood as logically divided components, or as components encompassing such divisions, rather than as physically separated components.

That is, each component corresponds to a logical element that performs its function in order to realize the technical idea of the present invention; even if components are integrated or further separated, they fall within the scope of the present invention as long as the functions of the configuration of the present invention are achieved, and components performing the same or similar functions fall within that scope regardless of whether their names match.

First, the present invention provides a plurality of character information items selectable by the user, together with motion information defined so that predetermined two-dimensional or three-dimensional movements are performed for each character's behaviors and motions. Such motion information may be classified into various categories according to the character's emotional state, and may be structured so that the action motions within each category are further subdivided.

The subdivided motion information may be defined as a two- or three-dimensional object structure within the character data so that natural motion is implemented according to, for example, 45 joint movements. In addition, such motion information is exposed through an interface environment that allows the user to apply it to a character easily via the motion icon symbolizing each motion.

Through this environment, when a user selects a character and then selects one or more of the various motion icons, organized into a folder or tree structure, the character performs the behavior or action corresponding to the selected motion icon.

In addition, data predefining the selection of backgrounds, props, and the like, as well as the relative positional relationships between each character, prop, and background, may of course also be structured.

To this end, the resource storage unit 110 of the present invention stores resource information in which the above-described character information, the motion information defined to produce a predetermined movement for each individual motion of the character, and the motion icon information corresponding to that motion information are linked to one another. In addition, the resource storage unit 110 generates and stores information on the tool icons needed for video production, including dialogue input and narration input, which will be described later (S200).

The character information, motion information, motion icon information, and tool icon information may be modified or updated, and the apparatus may be configured so that further information and data about characters, motions, tool icons, and the like can be loaded into the device by accessing a server on which a service according to the present invention operates.
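
To make the linked structure of this resource information concrete, the following Kotlin sketch models it under stated assumptions; the patent prescribes no data format, and every type and field name here (ResourceStorage, MotionIcon, and so on) is hypothetical:

```kotlin
// Hypothetical sketch of the linked resource model; all names are illustrative.
data class Joint(val id: Int, val rotation: FloatArray)   // one of the e.g. 45 joints

data class Motion(
    val id: String,                    // e.g. "face", "hand", "hug"
    val category: String,              // emotional-state category (folder/tree structure)
    val keyframes: List<List<Joint>>   // 2D/3D joint poses defining the predetermined movement
)

data class MotionIcon(
    val motionId: String,              // the Motion this icon symbolizes
    val imageRes: String,              // bitmap shown on the screen display unit 130
    var dubbedAudio: ByteArray? = null // filled in by the preprocessor 150 for dubbing motion icons
)

data class Character(val name: String, val motions: Map<String, Motion>)

class ResourceStorage {                // resource storage unit 110
    val characters = mutableMapOf<String, Character>()
    val icons = mutableMapOf<String, MotionIcon>()
    val toolIcons = mutableListOf<String>()   // e.g. dialogue input, narration input
}
```

The dubbedAudio field is the hook used by the preprocessor described below when dialogue is dubbed onto an icon.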

The screen processing unit 120 of the present invention outputs the character selected by the user (or a default character), the motion icons, and the tool icons to suitable positions on the screen display unit 130, such as the LCD of a smartphone, as illustrated in FIG. 4 (S210).

The screen display unit 130 not only outputs these icons, but is also implemented as a touch panel so that it serves as an input interface through which the user's selections and the like are entered.

As illustrated in FIG. 4, reference numeral 25 denotes a motion icon in which a behavior or motion of the character is defined. For example, when the user selects the prince (male) among the characters 10 and selects the "hug" icon, or touches the selected prince character, the character performs the motion corresponding to "hug" according to predetermined two- or three-dimensional structure data. Selection of the motion icon 25 may be configured so that subdivided operations are performed according to a touch (tap), a double touch (double tap), and the like, so that more behaviors can be expressed by the character.

Reference numerals 20, 21, and the like in FIG. 4 correspond to tool icons related to dialogue dubbing and video production. The present invention uses these tool icons to configure interactive video production in which the corresponding dialogue is matched to each of the character's action motions.

To this end, when a dubbing motion icon, that is, a motion icon onto which dialogue is to be dubbed, is selected (S215), the preprocessor 150 of the present invention stores the voice data input through the audio input unit 140 such as a microphone (S240), and processes that data so that it is linked to the corresponding motion icon, thereby generating and storing a dubbed motion icon, a motion icon onto which dialogue has been dubbed (S250).

In this processing, the storing of the input voice data and the linking of that voice data to the corresponding motion icon may be performed after a dialogue generation signal is input (S220) through a touch of the tool icon 21 or the like.

In this regard, after the dubbing motion icon is selected, it is desirable to configure the subsequent processing to be performed automatically when voice data is detected, which reduces the user's manual manipulation and makes dialogue creation more efficient and intuitive. In this case, it is even more preferable to configure the subsequent processing to be performed automatically only when the detected voice data has a reference level or more, so that simple device operation sounds or noise are effectively filtered out.
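
As a minimal sketch of this automatic level gate, assuming 16-bit PCM microphone frames and a hypothetical REFERENCE_RMS threshold (the patent does not say how the reference level is measured), and reusing the MotionIcon type from the earlier sketch:

```kotlin
import kotlin.math.sqrt

// Hypothetical reference level; the patent leaves the threshold unspecified.
const val REFERENCE_RMS = 1000.0

// True when a 16-bit PCM frame is loud enough to count as speech, filtering
// out device operation sounds and background noise as described above.
fun isVoiceAboveReference(samples: ShortArray): Boolean {
    val meanSquare = samples.fold(0.0) { acc, s -> acc + s.toDouble() * s } / samples.size
    return sqrt(meanSquare) >= REFERENCE_RMS
}

// Preprocessor sketch (S240-S250): wait for speech above the reference level,
// then store the recording in association with the selected dubbing motion icon.
fun onDubbingIconSelected(icon: MotionIcon, frames: Sequence<ShortArray>) {
    val recorded = mutableListOf<Short>()
    for (frame in frames) {
        if (recorded.isEmpty() && !isVoiceAboveReference(frame)) continue  // gate on level
        recorded.addAll(frame.toList())
    }
    val pcm = ByteArray(recorded.size * 2)              // pack little-endian 16-bit PCM
    recorded.forEachIndexed { i, s ->
        pcm[2 * i] = (s.toInt() and 0xFF).toByte()
        pcm[2 * i + 1] = ((s.toInt() shr 8) and 0xFF).toByte()
    }
    icon.dubbedAudio = pcm
}
```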

In the following description, a motion icon associated with dialogue is referred to as a dubbing motion icon to distinguish it from a general motion icon. When the preprocessor 150 of the present invention generates a dubbing motion icon linked to dialogue as described above, the screen processing unit 120 of the present invention preferably outputs it with an additional mark, such as the letter D shown in FIG. 5, so that the user can easily distinguish it from a general motion icon and identify each icon's characteristics. In FIG. 5, the dubbing motion icon is denoted by reference numeral 27 and the general motion icon by reference numeral 25.

Various other interface elements 30 related to the use of the smartphone, including those for video production, may also be output to the screen display unit 130 in the form of icons or the like.

With the configuration of the present invention described above, when the character performs a specific motion, the voice stored directly by the user is fused to that motion, so that the character can perform the motion interactively together with its dialogue at any time. The information associated with a motion can of course be configured to be updated or modified.

When a video production signal is input from the user through selection of the video generation icon 21 or the like, the image generator 160 classifies and stores the character's motion information from that point on in time order.

When the user selects a specific motion icon, the character performs the corresponding action; when that action is completed, the character performs the action corresponding to the next motion icon input, in order. Until a signal completing the video generation is input, the image generator 160 of the present invention stores all of the character's motion information together with the dialogue information linked to it.

Hereinafter, a more preferable processing for video generation will be described with reference to FIG. 3.

FIG. 3 is a diagram illustrating the process in which a character's motion, dialogue, narration, and the like are connected in time order to produce a video.

As described above, when a video production signal is input from the user through selection of a specific icon 21 (S260), the image generator 160 of the present invention activates the audio input unit 140 such as a microphone so that the voice or audio data input by the user during the video production process can be stored (S270), and generates video information by sequentially linking the individual actions corresponding to the selected motion icons and dubbing motion icons, the dialogue information, and the input narration information, based on the individual times at which each icon was selected and the times at which narration was input (S290).

At this time, the image generator 160 of the present invention may generate a single video from the above-described information using a standardized video codec. However, the present invention proposes not only this generalized method of video production but also a simpler, lower-capacity form of video information with higher usability (authoring, editing, transmission, etc.).

To this end, the image generator 160 of the present invention first generates sync information in which the times at which each motion icon or dubbing motion icon was individually selected and the times at which narration was input through the microphone or the like are sequentially linked (S280).

When sync information is generated in this way, the actions and the audio in the video can be clearly distinguished in time order. Furthermore, since the motion and dialogue of the character corresponding to each motion icon or dubbing motion icon are already stored in association with each icon, and the narration information is also stored, a single video can be generated by controlling the corresponding information or data to be output in the order given by the sync information.
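
One plausible reading of this sync information, sketched below, is an ordered log of timestamped events that reference the stored resources rather than encoded frames; the event types and the SyncInfo container are assumptions, not the patent's specification:

```kotlin
// Hypothetical form of the sync information: an ordered, timestamped event log
// referencing stored resources instead of encoded video frames.
sealed class SyncEvent { abstract val startMs: Long; abstract val endMs: Long }

data class MotionEvent(                 // a motion icon or dubbing motion icon was selected
    override val startMs: Long,
    override val endMs: Long,
    val iconId: String                  // resolves to the motion and any dubbed dialogue
) : SyncEvent()

data class NarrationEvent(              // narration or external sound input through the microphone
    override val startMs: Long,
    override val endMs: Long,
    val audioId: String
) : SyncEvent()

class SyncInfo {                        // generated by the image generator 160 (S280)
    private val events = mutableListOf<SyncEvent>()
    fun record(e: SyncEvent) { events.add(e) }
    fun timeline(): List<SyncEvent> = events.sortedBy { it.startMs }
}
```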

In this case, a single video may be generated by fusing the sync information with the information associated with it, which yields a simpler, lower-capacity video.

For example, after the video production signal is input from the user, when narration #1 is input, time information for its start and end is generated and the input narration data is stored. Thereafter, when the user selects dubbing motion icon #1, the character performs action #1 corresponding to that icon, and at the same time dialogue #1, previously designated and stored by the user, is performed. The image generator 160 of the present invention records the time at which dubbing motion icon #1 was selected and the time at which it ended as sync information.

The image generator 160 of the present invention may also generate sync information for narration #2 input by the user while action #1 and its dialogue are being performed, like layers in a multilayered structure.

After that, when motion icon #2 is selected, the character performs the corresponding action #2. After action #2 is completed, when dubbing motion icon #3 is selected, the character performs action #3 together with dialogue #3. The image generator 160 of the present invention interactively associates the time at which the user selected each specific motion icon or dubbing motion icon with the motion data and audio data linked to the selected icon.

In this process, as shown in FIG. 5, the video may of course be generated while performing pause, stop, narration input and end, fast-forward, rewind, and the like through the detailed icons 40 for video generation. The production of an oral fairy tale animation based on the sync information shown in FIG. 3 will be described as an example.

Assume that the user has dubbed "I am happy to see you again" onto the "face" motion icon, which is defined to perform a [face raising] motion, and saved it. Likewise, assume that the "hug" motion icon is stored with "I love you" dubbed onto it, while the "hand" motion icon is not dubbed.

After the user inputs the video production signal, the user uses the narration input and end buttons to input narration #1, "Prince Thomas and Princess Julie lived in a beautiful country ruled by the Queen of Flowers," from t1 to t2.

Then, when the user selects the "face" dubbing motion icon, the prince and/or princess character performs the [face raising] motion from t3 to t5 while the dubbed dialogue "I am happy to see you again" is performed.

In the middle of this processing, at time t4, the user can again input Vivaldi's Four Seasons as a sound effect, lasting until t6, using the narration input and end button interface. After that, when the user selects the "hand" motion icon, the already-defined [hand holding] motion is performed from t7 to t8.

After that, when the user selects the "hug" dubbing motion icon, the prince and other characters perform the [hugging] motion through predetermined motion vectors and the like, together with the dubbed dialogue "I love you". Thereafter, the series of processing ends with the video production completion signal input by the user.
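
Recorded with the hypothetical SyncEvent types sketched earlier, the timeline of this example would look roughly as follows; all timestamps are symbolic stand-ins for t1 to t8 in FIG. 3, and the hug segment's times (not given in the text) are assumed:

```kotlin
// The oral fairy tale above as a sync log; timestamps are symbolic placeholders.
val t1 = 0L; val t2 = 4_000L; val t3 = 5_000L; val t4 = 6_000L
val t5 = 8_000L; val t6 = 9_000L; val t7 = 10_000L; val t8 = 12_000L

val story = SyncInfo().apply {
    record(NarrationEvent(t1, t2, "narration#1"))   // "Prince Thomas and Princess Julie..."
    record(MotionEvent(t3, t5, "face"))             // [face raising] + "I am happy to see you again"
    record(NarrationEvent(t4, t6, "fourSeasons"))   // overlapping background-music layer
    record(MotionEvent(t7, t8, "hand"))             // [hand holding], no dubbed dialogue
    record(MotionEvent(t8, t8 + 2_000L, "hug"))     // [hugging] + "I love you" (assumed times)
}
```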

The image generator 160 of the present invention generates sync information that associates the time at which each piece of data was input with the type of data input or selected at that time, and produces a video based on that sync information. In this way, video information can be produced by combining the times at which character movements, dialogue, and narration (external sound) were input, or at which their input or selection started and ended, with the corresponding information. Through this method, the present invention can produce a video combining a character's interactive motion and dialogue more simply and effectively.

In addition, in the present invention, when a video playback signal is input from the user, the video playback unit 170 of the present invention can play back a previously produced video generated with a standard video codec (AVI, MPEG-4, ASF, etc.), or can play the video by calling the sync information and outputting each piece of information or data constituting it according to that sync information. With the latter method there is no need to create and play a high-capacity video file encoded with a video codec, so the video can be played more easily and quickly.
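
A minimal sketch of this codec-free playback path, reusing the earlier hypothetical types; performMotion, playAudio, and playNarration are stand-ins for rendering and audio layers that the patent leaves unspecified:

```kotlin
// Codec-free playback sketch (video playback unit 170); the helpers below are
// hypothetical stand-ins for the rendering and audio layers.
fun performMotion(motionId: String, startMs: Long, endMs: Long) =
    println("motion $motionId from ${startMs}ms to ${endMs}ms")
fun playAudio(pcm: ByteArray, atMs: Long) = println("dialogue audio at ${atMs}ms")
fun playNarration(audioId: String, atMs: Long) = println("narration $audioId at ${atMs}ms")

fun play(sync: SyncInfo, resources: ResourceStorage) {
    for (event in sync.timeline()) {                 // output each item in sync order
        when (event) {
            is MotionEvent -> {
                val icon = resources.icons.getValue(event.iconId)
                performMotion(icon.motionId, event.startMs, event.endMs)
                icon.dubbedAudio?.let { playAudio(it, event.startMs) }   // dubbed dialogue
            }
            is NarrationEvent -> playNarration(event.audioId, event.startMs)
        }
    }
}
```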

In the present invention, even when voice output is not desired, dialogue or narration input from the user through the microphone during the dubbing described above can be converted into text by the conversion unit 145, in which a voice-to-text conversion engine is implemented, and the converted information is transmitted to the image generator 160. The image generator 160 is then configured to output the dialogue or narration at its time in the sync information as a speech bubble effect or a moving text effect around the character.
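
Illustratively, and naming no particular speech-to-text engine (the patent does not identify one), the conversion unit's hand-off might look like this, with transcribe and showSpeechBubble as hypothetical stand-ins:

```kotlin
// Silent-playback sketch (conversion unit 145): dubbed audio is transcribed and
// shown as a speech bubble at its sync time; both helpers are hypothetical.
fun transcribe(pcm: ByteArray): String = "..."   // stand-in for a voice-to-text engine
fun showSpeechBubble(text: String, atMs: Long) = println("bubble \"$text\" at ${atMs}ms")

fun playAsText(sync: SyncInfo, resources: ResourceStorage) {
    for (event in sync.timeline()) {
        if (event is MotionEvent) {
            resources.icons.getValue(event.iconId).dubbedAudio
                ?.let { showSpeechBubble(transcribe(it), event.startMs) }
        }
    }
}
```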

In addition, the present invention can be configured to further stimulate interest in video production by letting the user become the main character of the oral fairy tale. To this end, the apparatus may further include an image conversion unit 180, which uses the character's face region information and the image position information corresponding to that face region to extract a face image contained in image data provided by the user or input from the built-in camera module, and converts it into the character's face.

In addition, when the video is transmitted to an external server or another client, the video information generated with a standard video codec may be transmitted, or, as described above and more preferably, the generated sync information (including the dialogue and narration information), which is significantly smaller, may be transmitted. However, for video playback through transmission of sync information to be possible on another client, the character information, motion information, motion icon information, and the like must also be stored on that receiving client.

When the sync information is transmitted to a receiving client, the character, motion, and motion icon information and the like in the receiving client's storage medium are analyzed, and if the necessary information or data is not present on the receiving client, an interface window may be popped up to induce the data to be transmitted through a connection to a specific server.

When a video transmission request signal to a receiving client that holds all of the character, motion, and motion icon information is input, the transmission unit 190 of the present invention transmits only the generated sync information, the stored dialogue information, and the narration information to the receiving client. In this way, the video created by the user can be played on the other client simply by transmitting the sync, dialogue, and narration information, so that playback is implemented more easily and the created video can be shared at a significantly lower data transfer cost.
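
As a sketch only, since the patent specifies no wire format, the transmission unit's payload could be a compact serialization of the sync log plus the dubbed dialogue audio, assuming the receiving client already holds the character and motion resources:

```kotlin
import java.io.ByteArrayOutputStream
import java.io.DataOutputStream

// Hypothetical payload built by the transmission unit 190: sync events plus
// dubbed dialogue/narration audio, but no character or motion resources.
fun encodePayload(sync: SyncInfo, resources: ResourceStorage): ByteArray {
    val out = ByteArrayOutputStream()
    DataOutputStream(out).use { d ->
        val events = sync.timeline()
        d.writeInt(events.size)
        for (e in events) {
            d.writeLong(e.startMs); d.writeLong(e.endMs)
            when (e) {
                is MotionEvent -> {
                    d.writeByte(0); d.writeUTF(e.iconId)
                    val audio = resources.icons.getValue(e.iconId).dubbedAudio
                    d.writeInt(audio?.size ?: 0)
                    audio?.let { d.write(it) }
                }
                is NarrationEvent -> { d.writeByte(1); d.writeUTF(e.audioId) }
            }
        }
    }
    return out.toByteArray()
}
```

Because the payload carries only event timings and voice data, its size scales with the recorded audio rather than with rendered video frames, which is the transfer saving the text describes.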

While the present invention has been described with reference to a limited number of embodiments and drawings, the present invention is not limited thereto, and various changes and modifications are possible by those of ordinary skill in the art within the technical spirit of the invention and the scope of equivalents of the appended claims.

Both the foregoing general description and the detailed description above are exemplary and explanatory, and are intended to provide further explanation of the invention as claimed.

100: video production apparatus 110: resource storage unit
120: screen processing unit 130: screen display unit
140: audio input unit 145: conversion unit
150: preprocessor 160: image generator
170: video playback unit 180: image conversion unit
190: transmission unit

Claims (4)

A resource storage unit for storing resource information including character information, motion information defined so that a predetermined movement is performed for each individual motion of the character, and motion icon information corresponding to the motion information;
A screen processor for outputting the character, a motion icon, and a tool icon for making an image to a screen display unit;
A preprocessor configured to, when a dubbing motion icon is selected and voice data is input through a microphone, control the input voice data to be stored in association with the dubbing motion icon; And
An image generator configured to, when a video production signal is input from a user, sequentially generate sync information comprising time information at which at least one dubbing motion icon is individually selected and time information at which narration is input through the microphone, and to generate video information in which the character's individual motions and dialogue corresponding to the at least one dubbing motion icon and the narration information are sequentially linked by the sync information.
The apparatus of claim 1,
Further comprising a video playback unit that outputs the video information according to the sync information when a playback signal is input from a user.
The apparatus of claim 1, further comprising:
A conversion unit for converting dialogue information or narration information input from the user through the microphone into text information,
Wherein the image generator outputs the dialogue or narration information converted into text information around the character at the corresponding time using the sync information.
The apparatus of claim 1, wherein the preprocessor,
Controls the input voice data to be stored in association with the dubbing motion icon when voice data of a reference level or more is detected.
KR1020120014555A 2012-02-14 2012-02-14 Apparatus for making a moving image with interactive character KR20130093186A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020120014555A KR20130093186A (en) 2012-02-14 2012-02-14 Apparatus for making a moving image with interactive character

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020120014555A KR20130093186A (en) 2012-02-14 2012-02-14 Apparatus for making a moving image with interactive character

Publications (1)

Publication Number Publication Date
KR20130093186A 2013-08-22

Family

ID=49217532

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020120014555A KR20130093186A (en) 2012-02-14 2012-02-14 Apparatus for making a moving image with interactive character

Country Status (1)

Country Link
KR (1) KR20130093186A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102263659B1 (en) * 2019-12-16 2021-06-09 민광윤 Web server for generating mommy's fairy tale using story contents application
KR102180576B1 (en) * 2020-05-18 2020-11-18 주식회사 일루니 Method and apparatus for providing re-programmed interactive content based on user playing
WO2021235636A1 (en) * 2020-05-18 2021-11-25 주식회사 일루니 Method and apparatus for providing interactive content reprogrammed on basis of playing of user
US11402975B2 (en) 2020-05-18 2022-08-02 Illuni Inc. Apparatus and method for providing interactive content
KR102213618B1 (en) * 2020-09-03 2021-02-09 주식회사 웨인힐스벤처스 Multimedia automatic generation system for automatically generating multimedia suitable for user's voice data by using artificial intelligence
WO2022050632A1 (en) * 2020-09-03 2022-03-10 주식회사 웨인힐스벤처스 Multimedia automatic generation system for automatically generating multimedia appropriate for user voice data by using artificial intelligence
CN114286155A (en) * 2021-12-07 2022-04-05 咪咕音乐有限公司 Picture element modification method, device, equipment and storage medium based on barrage

Similar Documents

Publication Publication Date Title
KR102306624B1 (en) Persistent companion device configuration and deployment platform
US11148296B2 (en) Engaging in human-based social interaction for performing tasks using a persistent companion device
US20170206064A1 (en) Persistent companion device configuration and deployment platform
AU2019262848B2 (en) Interactive application adapted for use by multiple users via a distributed computer-based system
US9984724B2 (en) System, apparatus and method for formatting a manuscript automatically
CN110400251A (en) Method for processing video frequency, device, terminal device and storage medium
US20230092103A1 (en) Content linking for artificial reality environments
WO2016011159A9 (en) Apparatus and methods for providing a persistent companion device
US20160045834A1 (en) Overlay of avatar onto live environment for recording a video
CN110782900A (en) Collaborative AI storytelling
US20140028780A1 (en) Producing content to provide a conversational video experience
CN106575361A (en) Method of providing visual sound image and electronic device implementing the same
US10812430B2 (en) Method and system for creating a mercemoji
CN103430217A (en) Input support device, input support method, and recording medium
KR20170057736A (en) Virtual-Reality EDUCATIONAL CONTENT PRODUCTION SYSTEM AND METHOD OF CONTRLLING THE SAME
JP2016038601A (en) Cg character interaction device and cg character interaction program
JP2018078402A (en) Content production device, and content production system with sound
KR20130093186A (en) Apparatus for making a moving image with interactive character
KR20130094058A (en) Communication system, apparatus and computer-readable storage medium
EP4252195A1 (en) Real world beacons indicating virtual locations
US20180276185A1 (en) System, apparatus and method for formatting a manuscript automatically
WO2018183812A1 (en) Persistent companion device configuration and deployment platform
CN110989912A (en) Entertainment file generation method, device, medium and electronic equipment

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E601 Decision to refuse application