KR20140065762A - System for providing character video and method thereof - Google Patents
- Publication number
- KR20140065762A
- Authority
- KR
- South Korea
- Prior art keywords
- user
- character
- face
- image
- facial feature
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present invention relates to a system and method for providing a user-customized character video in real time, and more particularly, to a technique for providing a user-customized character video by synthesizing a face of a user with a character face in real time.
To this end, the system for providing a user-customized character video in real time includes a character video storage unit, a user face extracting unit, a style information input unit, a user face generating unit, a character video selection unit, a synthesis processing unit, and a display unit. The character video storage unit stores character video objects. The user face extracting unit extracts the user's face from a user image. The style information input unit receives, from the user, style information to be applied to the extracted face. The user face generating unit generates a face image of the user to which the style information is applied. The character video selection unit selects one character video to be combined with the generated face image. The synthesis processing unit synthesizes the style-applied user face image into the face region of a character in the selected character video to generate a composite image in real time. The display unit displays the generated composite image.
Description
The present invention relates to a system and a method for providing a user-customized character video in real time and, more particularly, to a system and a method that synthesize the user's face onto a character's face in a video in real time, a technique that delivers a more vivid sense of reality than indirect, empathy-based viewing.
While watching a movie or an animation, a user often wishes to become its hero or heroine, but this is practically impossible.
Accordingly, a variety of chain stores and vending machines that print a user's photograph in various forms have appeared to attract users' interest, and more recently, services have emerged in which a user registers his or her photograph over the Internet and receives it composited into a specific image. For example, Korean Patent Registration No. 1177106, entitled "Digital Image Compositing Method," composites at least one of two or more stored continuous-motion images as a background before photographing, satisfying the user's desire to decorate the captured image in various ways.
However, to create a new video by matching the user's face to an animated character, the user must generally edit it by hand. Existing video editing technology proceeds by having the user turn still images into a moving image, edit the audio, and convert the video's compression format. It also offers only simple functions such as cutting and splicing clips and decorating them with effects, and every editing step is the user's manual work.
As described above, because conventional video editing technology only provides tools for the user to edit a desired video manually, it is difficult to edit a video without expert knowledge.
In addition, changing the image of an animation's main character must be done entirely by hand. For example, a 30-minute animation compressed at 30 frames per second contains 54,000 frames that would each have to be edited manually.
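The frame count cited above can be checked directly: 30 minutes at 30 frames per second is 30 x 60 seconds x 30 frames.

```python
# Frame count for a 30-minute animation at 30 frames per second,
# as cited in the text: every one of these frames would need manual editing.
minutes = 30
fps = 30
frames = minutes * 60 * fps  # seconds in the clip times frames per second
print(frames)  # 54000
```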
Meanwhile, young children are exposed to children's stories such as fairy tales and Aesop's fables through various media, including books, cartoons, animations, and plays. In recent years, with the wide supply of PCs and the growth of PC culture, young children handle PCs proficiently and enjoy games and community culture through them.
In particular, from a learning and entertainment standpoint, young children are still developing perception and emotion, and actively following educational videos contributes to that development. In other words, it is necessary to maximize the effect of a story's theme and lesson by conveying a more realistic feeling than the indirect, experience-based emotion delivered through existing images or media.
Accordingly, an object of the present invention is to contribute to children's perceptual and emotional development by encouraging them to engage actively with educational videos.
Another object of the present invention is to provide a more realistic sense of reality than the indirect, experience-based emotion conveyed through existing images or media.
A further object of the present invention is to intensify the user's imagination and enjoyment, thereby maximizing the transfer of learning and the entertainment effect of the story's theme and lesson.
In addition, the present invention aims to let users feel a sense of wonder and pride at seeing themselves as the main character, or as a friend of the main character, in a favorite video.
To achieve these and other advantages and in accordance with the purpose of the present invention, as embodied and broadly described herein, there is provided a system for providing a user-customized character video in real time, including a character video storage unit, a user face extracting unit, a style information input unit, a user face generating unit, a character video selection unit, a synthesis processing unit, and a display unit. The character video storage unit stores character video objects. The user face extracting unit extracts the user's face from a user image. The style information input unit receives, from the user, style information to be applied to the extracted face. The user face generating unit generates a face image of the user to which the style information is applied. The character video selection unit selects one character video to be combined with the generated face image. The synthesis processing unit synthesizes the style-applied user face image into the face region of a character in the selected character video to generate a composite image in real time. The display unit displays the generated composite image.
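The pipeline of units described above can be sketched in code. This is purely illustrative: the patent discloses no implementation, and all class names, method names, and data shapes below are assumptions chosen to mirror the unit names in the text.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the units named in the text. Faces and frames are
# represented as plain dicts; a real system would hold image and video data.

@dataclass
class CharacterVideoStore:
    """Character video storage unit: holds 2D/3D character videos by name."""
    videos: dict = field(default_factory=dict)

    def add(self, name, video):
        self.videos[name] = video

    def get(self, name):
        return self.videos[name]

def extract_user_face(user_image):
    """User face extracting unit: return the face region of a user image."""
    return user_image["face"]

def apply_style(face, style):
    """User face generating unit: merge style info (hair, hat, etc.) onto the face."""
    return {**face, **style}

def synthesize(character_frame, styled_face):
    """Synthesis processing unit: place the styled face in the frame's face region."""
    frame = dict(character_frame)
    frame["face"] = styled_face
    return frame
```

A usage pass over one stored frame would chain the units in the order the text gives them: extract, style, select, synthesize, display.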
As described above, the present invention can provide a user-customized character video object (animation, game, etc.) by synthesizing the face of the user (child) in real time with the face of a main character such as Pororo, letting the user appear as the protagonist, or as a friend of the protagonist, in the video. This gives a more realistic sense of reality than the indirect, experience-based emotion conveyed through existing video or media, and that sense of reality can maximize the user's imagination and fun, deepening immersion and conveying the learning effect of the story's theme and lesson.
In addition, it provides the experience of appearing in an animation, satisfying both the desire for vicarious fulfillment and the special desire for "something of my own." In other words, the user can feel excitement and pride while watching himself or herself as the hero of a favorite video, an experience that builds self-esteem and confidence. The present invention can also be differentiated from other content because a family can keep such videos as lasting memories of early childhood.
Also, since the user can become a favorite character, or a friend of the main character, in the synthesized video, the invention can contribute to raising the character's recognition.
Also, because each such animation stars a specific child, illegal downloading and pirated copies become meaningless: nobody wants content in which someone else's child is the main character. As a result, the value of the content is preserved for its creators.
The user's facial feature points are extracted and compared with the character's facial feature points to determine the degree of match; when the match degree is below 80%, the user facial feature points are transformed before being synthesized onto the character facial feature points. This makes it possible to synthesize the character's pose and facial expression with high realism and immersion. In other words, a character's facial expression is produced in many variations according to its pose and the story of the video; with this synthesis method, various facial expressions of the child can be generated, and a natural composite image obtained, even from a single photograph that does not contain them.
FIG. 1 is a diagram showing the schematic configuration of a real-time providing system for a user-customized character video according to an embodiment of the present invention.
FIG. 2 is a diagram illustrating a process of generating a user-customized character image according to the present invention.
FIG. 3 is a diagram illustrating an example of a user-customized character image created according to the present invention.
FIG. 4 is a flowchart illustrating a method of providing a user-adapted character image in real time according to an exemplary embodiment of the present invention.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the following description, detailed descriptions of known functions and configurations are omitted where they would obscure the subject matter of the present invention. Further, the specific values given in the description of the embodiments are merely examples, and may be exaggerated for convenience of understanding.
<Description of System>
FIG. 1 is a diagram illustrating a schematic configuration of a system for real-time provision of a customized character video according to an exemplary embodiment of the present invention, and FIG. 2 is a diagram illustrating a real-time synthesis process of a customized character video.
Referring to FIG. 1, a system for real-time provision of a user-customized character video includes a character video storage unit, a user face extracting unit, a style information input unit, a user face generating unit, a character video selection unit, a synthesis processing unit, and a display unit.
The character video storage unit stores 2D or 3D character video objects.
The user face extracting unit extracts the user's face from a user image.
The style information input unit receives, from the user, style information to be applied to the extracted face.
The user face generating unit generates a face image of the user to which the style information is applied.
The character video selection unit selects one character video to be combined with the generated face image of the user.
The synthesis processing unit synthesizes the style-applied user face image into the face region of a character in the selected character video to generate a composite image in real time.
The display unit displays the generated composite image.
As described above, by synthesizing the face of the user (child) in real time with the face of a main character such as Pororo, the present invention provides a user-customized character video object (animation, game, etc.) with the effects already noted: heightened realism and immersion, better transfer of the story's theme and lesson, the vicarious satisfaction of starring in an animation, lasting family memories, improved character recognition, and the discouragement of illegal copying.
Meanwhile, in order to provide a composite image with high realism and immersion, it is preferable that the real-time providing system for a user-customized character video modify the user's facial expression to match the character's facial expression.
To this end, the synthesis processing unit includes a character face extraction module, a user face extraction module, a facial feature point comparison module, a facial feature point transformation module, and a composite image generation module.
The character face extraction module extracts character facial feature points from the face of the character in each scene of the selected character video.
The user face extraction module extracts user facial feature points from the user's face.
The facial feature point comparison module compares the character facial feature points with the user facial feature points and determines their degree of match.
The facial feature point transformation module transforms the user facial feature points to match the character facial feature points based on the comparison.
For example, changes in the shape and position of the eyes, mouth, and nose express a variety of facial expressions, so to maximize the reliability of the composition it is important to extract the shape and position of the eyes and mouth from both the character's face and the child's face and compare them. In the case of the eyes, the shape and position of the eye are extracted from each face, and eye-related information such as the degree of opening of the eyelids, the position of the outer eye corner, and the position of the pupil indicates the eye-related expression. If the match degree is 80% or more, the child's eye can be synthesized onto the character's eye as it is; if it is less than 80%, the shape and position of the eye may be modified, for instance by moving the eye up or down or by raising or lowering the outer corner, to follow the character's eyes. Likewise, in the case of the mouth, the shape and position are extracted from each face, and mouth-related information such as the degree of opening and the positions of the upper and lower lips indicates the mouth-related expression. If the match degree is 80% or more, the child's mouth can be synthesized onto the character's mouth as it is; if it is less than 80%, the child's upper and lower lips may be shifted vertically and horizontally to follow the character's mouth expression, modifying the mouth's shape and position before compositing.
The reason the present invention sets the match-degree threshold at 80% is as follows: after experimenting with reference values such as 70%, 80%, and 90%, 80% proved to be the optimal range, avoiding awkwardness in the synthesized character even when the synthesized pose and facial expression differ sharply.
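The 80% rule described above can be sketched as a small decision function. The similarity measure here (a normalized distance between landmark point lists) is an assumption for illustration only; the patent does not specify how the match degree is computed, nor how the transformation is performed.

```python
import math

def match_degree(char_points, user_points):
    """Return a 0..1 agreement score between two equal-length landmark lists.

    Illustrative measure: 1.0 for identical point sets, falling off with the
    Euclidean distance between them. Not the patent's (unspecified) metric.
    """
    if len(char_points) != len(user_points):
        raise ValueError("landmark lists must align")
    dist = math.sqrt(sum((cx - ux) ** 2 + (cy - uy) ** 2
                         for (cx, cy), (ux, uy) in zip(char_points, user_points)))
    return 1.0 / (1.0 + dist)

def synthesize_feature(char_points, user_points, threshold=0.8):
    """Use the user's feature as-is above the threshold; otherwise warp it
    toward the character's feature before compositing (a stand-in transform)."""
    if match_degree(char_points, user_points) >= threshold:
        return user_points  # e.g. the child's eye pasted onto the character unchanged
    # simple illustrative warp: move each user point halfway toward the character
    return [((cx + ux) / 2, (cy + uy) / 2)
            for (cx, cy), (ux, uy) in zip(char_points, user_points)]
```

The same function would be applied per feature (eyes, mouth), matching the per-feature comparison the text describes.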
The composite image generation module synthesizes the transformed user facial feature points with the character facial feature points to generate the composite image.
As described above, after the user's facial feature points are extracted from a photograph and compared with the character's facial feature points to determine the degree of match, the user facial feature points are transformed before being synthesized onto the character facial feature points whenever the match degree is below 80%. This allows the character's pose and facial expression to be synthesized while still using the child's face, yielding high realism and immersion. In other words, a character's facial expression is produced in many variations according to its pose and the story of the video; with the synthesis method described above, various facial expressions of the child can be generated, and a natural composite image obtained, even from a single photograph that does not contain them.
Meanwhile, in the present invention, it is possible to go beyond having the user become the main character, or a friend of the main character, in a character video stored in the character video storage unit: the story's development can also be changed according to the user's selection.
<Description of Method>
A method for real-time provision of a user-customized character image according to the present invention will be described with reference to an exemplary diagram shown in FIG. 2 and FIG. 3, together with a flowchart shown in FIG. 4, for convenience.
1. Storing a character video object < S410 >
It stores 2D or 3D character video images that children like, such as Pororo.
2. Step of extracting the user's face from the user image < S420 >
The user's face is extracted from the user image. That is, when a child's face is photographed with a terminal such as a mobile phone, PDA, or smartphone, even against a complex background or among several people, the child's face is extracted as shown in FIG. 2(b).
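The extraction step can be sketched as cropping a known face region out of a larger photo. The detection of that region is assumed here (a real system would locate it with a face detector, which this sketch omits), and the nested-list image representation is purely illustrative.

```python
# Minimal sketch of extracting a face region from a larger photo, assuming a
# face bounding box has already been found by some (omitted) face detector.

def crop_face(image, box):
    """image: 2D list of pixel rows; box: (top, left, height, width)."""
    top, left, h, w = box
    return [row[left:left + w] for row in image[top:top + h]]
```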
3. receiving the style information to be applied to the user's face extracted from the user (S430)
Style information to be applied to the face extracted in step S420 is received from the user. Here, style information refers to information that can be applied to a child's face, such as a hairstyle, wearable hats and accessories, and cheek blush.
4. Step S440 of generating a face image of a user to which style information is applied
A face image of the user to which the style information received in step S430 is applied is generated. The user can thus change the extracted child's hairstyle and hair color, or add a hat, as shown in FIG. 2(c).
5. Step S450 of selecting a character image to be combined with a user's face image
The user selects one of the character videos, and the main character or a friend character within it, to be composited with the face image generated in step S440. For example, as shown in FIG. 2(d), the child selects, from among various character videos, the one he or she wants to experience and a character in that video.
6. Step of generating a composite image in real time (S460)
The user's face image, with style information such as a changed hairstyle or an added hat applied, is synthesized into the face region of the character selected in step S450, generating a 2D or 3D composite image in real time.
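The real-time compositing step above can be sketched as pasting the styled face into each frame's face region. Coordinates, the nested-list frames, and the per-frame loop are all illustrative assumptions, not the patent's implementation.

```python
# Sketch of pasting the styled user face into a frame's face region,
# applied frame by frame across the selected character video.

def paste(frame, face, top, left):
    """Overwrite the rectangle of `frame` starting at (top, left) with `face`."""
    out = [row[:] for row in frame]  # copy so the source frame is untouched
    for r, face_row in enumerate(face):
        for c, pixel in enumerate(face_row):
            out[top + r][left + c] = pixel
    return out

def composite_video(frames, face, face_boxes):
    """Apply the paste to every frame, using that frame's face position."""
    return [paste(f, face, top, left) for f, (top, left) in zip(frames, face_boxes)]
```

Tracking a per-frame face position (rather than one fixed box) is what lets the face follow the character through the animation.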
7. Displaying the composite image (S470)
The composite image generated in step S460, that is, the composite image in which the child is the main character as shown in FIG. 2(e), is displayed.
As described above, according to the present invention, the face of the user (child) can be synthesized in real time onto the face of a main character such as Pororo, so the user can be provided with a personalized character video object (animation, game, etc.) and feel a more realistic sense of reality than the indirect, experience-based emotion conveyed through existing images or media.
In addition, the user can feel excitement and pride while watching himself or herself as the hero of a favorite video, and since this experience builds self-esteem and confidence, it can leave a deep and lasting impression.
Meanwhile, in the present invention, the user's facial expression is modified in step S460 to match the character's facial expression, providing a composite image with high realism and immersion. That is, the child's facial expression must be changed to match the character's various appearances, such as front or side views, sleeping, smiling, or crying.
For this, in step S460, the following synthesis method is employed.
First, character facial feature points are extracted from the character's face for each scene of the character video selected by the user (S461), and user facial feature points are extracted from the user's face in the photograph (S462). Here, a facial feature point is an element from which the impression or expression of a face can be recognized, such as the face contour, eyebrows, eyes, nose, and mouth of the character or child, and each feature point includes the shape and position of its element. Next, the character facial feature points and user facial feature points extracted in steps S461 and S462 are compared; in the present invention, their degree of match is determined (S463). Then, based on the comparison in step S463, the user facial feature points are transformed to match the character facial feature points (S464). That is, at this stage the user facial feature points are synthesized as-is onto the character facial feature points when the match degree is 80% or more, and their shape and position are appropriately modified when the match degree is less than 80%.
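Steps S461 through S465 can be sketched end to end. Every function body below is an illustrative stand-in for the corresponding module (the patent specifies the steps, not their algorithms), and the dict-based face records are assumptions.

```python
# End-to-end sketch of S461-S465: extract both landmark sets, compare them,
# transform the user's landmarks where they disagree, and emit the composite.

def extract_landmarks(face):
    """S461/S462: pull the (x, y) feature points from a face record."""
    return face["landmarks"]

def compare(char_pts, user_pts):
    """S463: fraction of points that coincide (a stand-in match degree)."""
    hits = sum(1 for c, u in zip(char_pts, user_pts) if c == u)
    return hits / len(char_pts)

def transform(char_pts, user_pts, degree, threshold=0.8):
    """S464: below the threshold, snap the user points onto the character's."""
    if degree >= threshold:
        return user_pts
    return list(char_pts)

def synthesize_frame(char_face, user_face):
    """S465: combine the (possibly transformed) user points with the character."""
    c = extract_landmarks(char_face)
    u = extract_landmarks(user_face)
    d = compare(c, u)
    return {"landmarks": transform(c, u, d), "texture": user_face["texture"]}
```

Running this once per scene mirrors the per-scene extraction in S461.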
For example, changes in the shape and position of the eyes, mouth, and nose express a variety of facial expressions, so to maximize the reliability of the composition it is important to extract the shape and position of the eyes and mouth from both the character's face and the child's face and compare them. In the case of the eyes, the shape and position of the eye are extracted from each face, and eye-related information such as the degree of opening of the eyelids, the position of the outer eye corner, and the position of the pupil indicates the eye-related expression. If the match degree is 80% or more, the child's eye can be synthesized onto the character's eye as it is; if it is less than 80%, the shape and position of the eye may be modified, for instance by moving the eye up or down or by raising or lowering the outer corner, to follow the character's eyes. Likewise, in the case of the mouth, the shape and position are extracted from each face, and mouth-related information such as the degree of opening and the positions of the upper and lower lips indicates the mouth-related expression. If the match degree is 80% or more, the child's mouth can be synthesized onto the character's mouth as it is; if it is less than 80%, the child's upper and lower lips may be shifted vertically and horizontally to follow the character's mouth expression, modifying the mouth's shape and position before compositing.
The reason the present invention sets the match-degree threshold at 80% is as follows: after experimenting with reference values such as 70%, 80%, and 90%, 80% proved to be the optimal range, avoiding awkwardness in the synthesized character even when the synthesized pose and facial expression differ sharply.
Finally, the transformed user facial feature points are synthesized with the character facial feature points to generate the composite image (S465).
According to the synthesis method described above, a character's facial expression is produced in many variations according to its pose and the story of the video, yet various facial expressions of the child can be generated, and a natural composite image obtained, even from a single photograph that does not contain them, which gives high realism and immersion.
The method for providing a user-customized character video in real time according to the present invention may be implemented in the form of program instructions executable through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be those specially designed and constructed for the present invention, or may be known and available to those skilled in computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine code such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the present invention, and vice versa.
The foregoing description is merely illustrative of the technical idea of the present invention and various changes and modifications may be made by those skilled in the art without departing from the essential characteristics of the present invention. Therefore, the embodiments disclosed in the present invention are intended to illustrate rather than limit the scope of the present invention, and the scope of the technical idea of the present invention is not limited by these embodiments. The scope of protection of the present invention should be construed according to the following claims, and all technical ideas falling within the scope of the same shall be construed as falling within the scope of the present invention.
100: Real-time delivery system of customized character video
Claims (9)
A system for providing a user-customized character video in real time, the system comprising:
A user face extracting unit for extracting a face of a user from a user image;
A style information input unit for inputting style information to be applied to the extracted user's face from a user;
A user face generating unit for generating a face image of a user to which the style information is applied;
A character video selection unit for selecting one character video to be combined with the face image of the user generated by the user;
A synthesis processing unit for synthesizing a user's face image to which the style information is applied in a face region of a character image of the selected character image to generate a synthesized image in real time; And
A display unit for displaying the generated composite image,
Wherein the system comprises:
The synthesis processing unit,
The user's facial expression is modified so as to be the same as the facial expression of the character
Real-time system of customized character video.
The synthesis processing unit,
A character face extraction module for extracting character face feature points from the face of the character;
A user face extraction module for extracting user face feature points from the face of the user;
A facial feature point comparing module for comparing the character facial feature point with the user facial feature point;
A facial feature point transforming module for transforming the user facial feature point so as to be the same as the character facial feature point based on the comparison; And
A synthesized image generation module for synthesizing the modified user facial feature points with the character facial feature points to generate the synthesized image,
Wherein the system comprises:
The facial feature point comparison module comprises:
Comparing the matching degree of the character facial feature point and the user facial feature point,
The facial feature point transforming module comprises:
If the degree of match is less than 80%, the shape and position of the user facial feature point are modified
Real-time system of customized character video.
(b) extracting a user's face from the user image;
(c) receiving style information to be applied to the extracted user's face from a user;
(d) generating a face image of the user to which the style information is applied;
(e) receiving, from the user, a selection of the face image of the user and a character video to be composited;
(f) generating a composite image in real time by synthesizing the user's face image, to which the style information is applied, into a face region of a character in the selected character video; and
(g) displaying the generated composite image,
Wherein, in step (f), the user's facial expression is modified so as to be the same as the facial expression of the character.
A method of providing a user-customized character video in real time.
The step (f) comprises:
(f1) extracting character facial feature points from the face of the character;
(f2) extracting user facial feature points from the face of the user;
(f3) comparing the character facial feature points with the user facial feature points;
(f4) transforming the user facial feature points so as to be the same as the character facial feature points, based on the comparison; and
(f5) generating the composite image by synthesizing the transformed user facial feature points with the character facial feature points.
A method of providing a user-customized character video in real time.
Wherein, in step (f3), the degree of matching between the character facial feature points and the user facial feature points is compared, and
in step (f4), the shape and position of the user facial feature points are modified when the degree of matching is less than 80%.
A method of providing a user-customized character video in real time.
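Step (f4) says only that the user feature points are transformed to be "the same as" the character's. A standard geometric way to approximate this, assumed here rather than taken from the patent, is a least-squares similarity alignment (Kabsch/Procrustes) that fits scale, rotation, and translation between the two point sets:

```python
import numpy as np

def align_points(user_pts, char_pts):
    """Fit scale s, rotation R, and translation t minimizing
    sum ||s * R @ u_i + t - c_i||^2, then apply them to the user
    points. An assumed realization of step (f4)."""
    U = np.asarray(user_pts, float)
    C = np.asarray(char_pts, float)
    mu_u, mu_c = U.mean(axis=0), C.mean(axis=0)
    Uc, Cc = U - mu_u, C - mu_c
    # Optimal rotation via SVD of the cross-covariance (Kabsch).
    W, S, Vt = np.linalg.svd(Uc.T @ Cc)
    R = (W @ Vt).T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = (W @ Vt).T
    s = S.sum() / (Uc ** 2).sum()   # least-squares scale
    return s * Uc @ R.T + mu_c
```

Recovering a known similarity transform round-trips exactly; when the faces differ in shape rather than just pose, the residual after alignment could feed the 80% matching test of step (f3).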
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020120132397A KR20140065762A (en) | 2012-11-21 | 2012-11-21 | System for providing character video and method thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020120132397A KR20140065762A (en) | 2012-11-21 | 2012-11-21 | System for providing character video and method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20140065762A true KR20140065762A (en) | 2014-05-30 |
Family
ID=50892533
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020120132397A KR20140065762A (en) | 2012-11-21 | 2012-11-21 | System for providing character video and method thereof |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20140065762A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20160012362A (en) * | 2014-07-23 | 2016-02-03 | Seoul Metropolitan Government | Security system and the control method thereof |
KR20200025062A (en) * | 2018-08-29 | 2020-03-10 | KT Corp. | Apparatus, method and user device for providing customized character |
KR102399255B1 (en) * | 2021-12-22 | 2022-05-18 | Widit Co., Ltd. | System and method for producing webtoon using artificial intelligence |
KR20230080543A (en) * | 2021-11-30 | 2023-06-07 | Kigle Co., Ltd. | System for creating face avatar |
WO2024085513A1 (en) * | 2022-10-18 | 2024-04-25 | Samsung Electronics Co., Ltd. | Display device and method for operating same |
2012-11-21: Application KR1020120132397A filed in KR as KR20140065762A; status: not active (application discontinued).
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107154069B (en) | Data processing method and system based on virtual roles | |
KR101445263B1 (en) | System and method for providing personalized content | |
Prince | Digital visual effects in cinema: The seduction of reality | |
US20090153552A1 (en) | Systems and methods for generating individualized 3d head models | |
CN106648071A (en) | Social implementation system for virtual reality | |
Ryokai et al. | StoryFaces: pretend-play with ebooks to support social-emotional storytelling | |
KR20140065762A (en) | System for providing character video and method thereof | |
Gress | [digital] Visual Effects and Compositing | |
Van der Laan et al. | Creating aesthetic, institutional and symbolic boundaries in fashion photo shoots | |
Cleland | Image avatars: Self-other encounters in a mediated world | |
Hu | Forming the Spectacle of Body: Analysis of the User-Platform Relationship through Body Performance Videos on TikTok | |
Reinhuber et al. | Layered images: the desire to see more than the obvious | |
Wikayanto et al. | Aesthetic Morphology of Animation | |
Doroski | Thoughts of spirits in madness: Virtual production animation and digital technologies for the expansion of independent storytelling | |
Ng et al. | A pedagogy of craft: Teaching culture analysis with machinima | |
KR102553432B1 (en) | System for creating face avatar | |
Gan | The newly developed form of Ganime and its relation to selective animation for adults in Japan | |
WO2023130715A1 (en) | Data processing method and apparatus, electronic device, computer-readable storage medium, and computer program product | |
KR20120076456A (en) | Avata media production method and device using a recognition of sensitivity | |
Franco Alonso | Differences between Japanese and Western anatomical animation techniques applied to videogames | |
O’Meara | Anna Biller | |
McLean | All for Beauty: Makeup and Hairdressing in Hollywood's Studio Era | |
Arts | AR Cinema: Visual Storytelling and Embodied Experiences with Augmented Reality Filters and Backgrounds | |
Li et al. | The Analysis of Two-Dimensional Animation Lens | |
KR101243832B1 (en) | Avata media service method and device using a recognition of sensitivity |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WITN | Withdrawal due to no request for examination |