KR20140065762A - System for providing character video and method thereof - Google Patents

System for providing character video and method thereof

Info

Publication number
KR20140065762A
Authority
KR
South Korea
Prior art keywords
user
character
face
image
facial feature
Prior art date
Application number
KR1020120132397A
Other languages
Korean (ko)
Inventor
윤형선
Original Assignee
토리인 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 토리인 주식회사
Priority to KR1020120132397A
Publication of KR20140065762A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation
    • G06T13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a system and method for providing a user-customized character video in real time, and more particularly, to a technique for providing a user-customized character video by compositing the user's face onto a character's face in real time.
To this end, the system for providing a user-customized character video in real time includes a character video storage unit, a user face extraction unit, a style information input unit, a user face generation unit, a character video selection unit, a synthesis processing unit, and a display unit. The character video storage unit stores character videos. The user face extraction unit extracts the user's face from a user image. The style information input unit receives, from the user, style information to be applied to the extracted face. The user face generation unit generates a face image of the user with the style information applied. The character video selection unit receives the user's selection of one character video to be combined with the generated face image. The synthesis processing unit composites the styled face image into the face region of a character of the selected character video, generating a composite image in real time. The display unit displays the generated composite image.

Description

TECHNICAL FIELD [0001] The present invention relates to a system and a method for providing a user-customized character video in real time.

The present invention relates to a system and a method for providing a user-customized character video in real time, and more particularly, to a technique that composites the user's face onto a character's face in a video in real time, so that the experience feels more vivid and real than merely empathizing with a character on screen.

Viewers often wish they could become the protagonist of a movie or an animation while watching it, but this is practically impossible.

Accordingly, services designed to attract users' interest, such as franchise photo shops and vending machines that print a user's picture in various forms, are widely available. More recently, services have appeared in which a user registers a photograph over the Internet and receives it composited into a chosen image. For example, Korean Patent Registration No. 1177106, entitled "Digital Image Compositing Method," composites a captured photograph with at least one of two or more stored continuous-motion background images, satisfying the user's desire to decorate the captured image in various ways.

However, to create a new video that matches a user's face onto an animated character, the user has generally had to edit it manually. Existing video editing tools let a user turn still images into a moving picture, edit audio, and convert the video's compression format. They also provide simple functions for cutting and splicing clips and decorating them with various effects, but every step of the editing is manual work performed by the user.

As described above, because conventional video editing technology only provides tools for manual editing, it is difficult to produce the desired video without expert knowledge.

In addition, replacing the face of an animation's main character requires editing every frame by hand. For example, a 30-minute animation encoded at 30 frames per second contains 30 × 60 × 30 = 54,000 frames, each of which would have to be edited manually.

Meanwhile, young children encounter children's stories such as fairy tales, Aesop's fables, and similar tales through various media: books, cartoons, animations, and plays. In recent years, with the wide availability of PCs and the growth of PC culture, even very young children handle PCs skillfully and enjoy games and online communities.

In particular, from a learning and entertainment standpoint, children at this age are still developing perception and emotion, and actively following educational videos can contribute to that development. In other words, delivering a stronger sense of reality than the indirect, second-hand emotional experience offered by existing images and media can maximize the impact of a story's theme and lesson.

(Prior art document) Korean Patent Registration No. 1177106, "Digital Image Compositing Method"

The present invention aims to help children develop perception and emotion, contributing to that development by encouraging active engagement with educational videos.

It is another object of the present invention to provide a stronger sense of reality than the indirect, second-hand emotional experience delivered through existing images or media.

In addition, the present invention aims to heighten the user's imagination, fun, and immersion, thereby maximizing both the delivery of learning content, such as a story's theme and lesson, and the entertainment value.

The present invention also aims to let users feel a novel sense of wonder and pride at seeing themselves appear as the main character, or as a friend of the main character, in a video they love.

To achieve these and other advantages and in accordance with the purpose of the present invention, as embodied and broadly described herein, there is provided a system for providing a user-customized character video in real time, including a character video storage unit, a user face extraction unit, a style information input unit, a user face generation unit, a character video selection unit, a synthesis processing unit, and a display unit. The character video storage unit stores character videos. The user face extraction unit extracts the user's face from a user image. The style information input unit receives, from the user, style information to be applied to the extracted face. The user face generation unit generates a face image of the user with the style information applied. The character video selection unit receives the user's selection of one character video to be combined with the generated face image. The synthesis processing unit composites the styled face image into the face region of a character of the selected character video, generating a composite image in real time. The display unit displays the generated composite image.

As described above, the present invention can provide a user-customized character video (animation, game, etc.) by compositing the face of a user (a child) onto the face of a main character such as Pororo in real time, or by letting the user appear as the main character or as a friend of the main character in the video. This delivers a stronger sense of reality than the indirect, second-hand emotional experience of existing images and media, and that sense of reality can maximize the user's imagination and fun, deepening immersion and helping convey learning content such as the story's theme and lesson.

In addition, the invention offers the experience of appearing in an animation, satisfying both the desire for vicarious fulfillment and the special desire for "something of my own." The user can feel excitement and pride at seeing themselves as the main character of a favorite video, an experience that can build self-esteem and confidence and leave a lasting impression. The invention can also be differentiated from other content because a family can keep the composite videos as memories of their child's early years.

Also, since the user can become a favorite character, or a friend of the main character, in the composite video, the invention can help raise the character's recognition and appeal.

Also, because each video stars a specific child as its main character, illegal downloading and unauthorized reproduction become meaningless: nobody wants content in which someone else's child is the star. As a result, the value of the content is preserved for its creators.

Furthermore, by extracting the user's facial feature points from a photograph, comparing them with the character's facial feature points to determine a degree of match, and, when the match is below 80%, transforming the user's feature points before compositing them onto the character's, the invention can match the character's pose and facial expression while still using the child's own face, yielding high realism and immersion. Because a character's expression varies with its pose and the story of the video, this synthesis method can generate the child's corresponding facial expressions and produce a natural composite video even from a single photograph that does not capture every expression.

FIG. 1 is a diagram showing the schematic configuration of a system for providing a user-customized character video in real time according to an embodiment of the present invention.
FIG. 2 is a diagram illustrating the process of generating a user-customized character video according to the present invention.
FIG. 3 is a diagram illustrating an example of a user-customized character video created according to the present invention.
FIG. 4 is a flowchart illustrating a method of providing a user-customized character video in real time according to an exemplary embodiment of the present invention.

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the following description, detailed explanations of well-known functions and configurations are omitted where they would obscure the subject matter of the invention. Specific values given in the description of the embodiments are merely examples, and exaggerated values may be used for convenience of explanation and understanding.

<Description of System>

FIG. 1 is a diagram illustrating the schematic configuration of a system for providing a user-customized character video in real time according to an exemplary embodiment of the present invention, and FIG. 2 is a diagram illustrating the real-time synthesis process of a user-customized character video.

Referring to FIG. 1, the system 100 for providing a user-customized character video in real time according to the present invention includes a character video storage unit 110, a user face extraction unit 120, a style information input unit 130, a user face generation unit 140, a character video selection unit 150, a synthesis processing unit 160, and a display unit 170, all or part of which may be implemented in a server or in a user terminal. Each component may be realized in software, in hardware, or in a combination of the two, and the components exchange data over logical or physical connections. In some cases they may be distributed across multiple apparatuses that exchange data over a network. The real-time synthesis according to the present invention can be launched by selecting the corresponding application, as shown in FIG. 2(a); this is only one example, and the invention is not limited to such a configuration.

The character video storage unit 110 stores 2D or 3D character videos that children like, such as Pororo.

The user face extraction unit 120 extracts the user's face from a user image. That is, when a child's face is photographed with a terminal such as a mobile phone, PDA, or smartphone, even against a complex background or among several people, the user face extraction unit 120 extracts the child's face as shown in FIG. 2(b).
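
For illustration, a minimal sketch of this extraction step is given below, assuming OpenCV and its bundled Haar cascade as the detector. The patent itself does not prescribe a particular detection algorithm, so the function name and the largest-detection crop logic are only one possible realization.

```python
# Hypothetical sketch of the user face extraction unit 120.
# Assumes OpenCV (pip install opencv-python); the Haar cascade is one
# possible detector, not the method mandated by the patent.
import cv2

def extract_user_face(image_path: str):
    """Detect and crop the largest face in a user photo."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face found in the photo
    # With a complex background or several people in the photo,
    # keep the largest detection as the child's face.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return image[y:y + h, x:x + w]
```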

The style information input unit 130 receives, from the user, style information to be applied to the extracted face. Here, style information means anything that can be applied to the child's face, such as a hairstyle, a hat or accessories to wear, or cheek blush.

The user face generation unit 140 generates a face image of the user with the style information applied. The user can thus change the extracted child's hair style and color, or put a hat on the child, as shown in FIG. 2(c).
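
A sketch of how such a style asset might be applied is shown below. The alpha-blending approach, and the assumption that the asset has already been sized to fit inside the face image, are illustrative, since the patent does not specify how style information is rendered.

```python
# Hypothetical sketch of the user face generation unit 140: overlay a
# style asset (hat, accessory, blush) carrying an alpha channel onto
# the extracted face image. Placement (x, y) and asset sizing are
# assumed to be prepared by the caller.
import numpy as np

def apply_style(face_bgr: np.ndarray, style_rgba: np.ndarray,
                x: int, y: int) -> np.ndarray:
    h, w = style_rgba.shape[:2]
    roi = face_bgr[y:y + h, x:x + w].astype(np.float32)
    rgb = style_rgba[:, :, :3].astype(np.float32)
    alpha = style_rgba[:, :, 3:4].astype(np.float32) / 255.0
    # Standard "over" compositing: asset where opaque, face elsewhere.
    face_bgr[y:y + h, x:x + w] = (alpha * rgb + (1.0 - alpha) * roi).astype(np.uint8)
    return face_bgr
```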

The character video selection unit 150 receives the user's selection of the character video to combine with the generated face image, together with the character in that video (the main character or one of the main character's friends) that the child will replace. For example, as shown in FIG. 2(d), the user selects, from the various character videos displayed on the terminal, the video the child wants to experience and a character within it.

The synthesis processing unit 160 composites the user's face image, with style information such as a new hair style or a hat applied, into the face region of the selected character in the chosen video, generating a 2D or 3D composite image in real time.

The display unit 170 displays the generated composite image. In a user-terminal configuration, the composite video in which the child is the main character is shown on the display unit 170 as in FIG. 2(e).

As described above, the present invention can provide a user-customized character video (animation, game, etc.) by compositing the face of a user (a child) onto the face of a main character such as Pororo in real time, or by letting the user appear as the main character or as a friend of the main character in the video. This delivers a stronger sense of reality than the indirect, second-hand emotional experience of existing images and media, and that sense of reality can maximize the user's imagination and fun, deepening immersion and helping convey learning content such as the story's theme and lesson.

In addition, the invention offers the experience of appearing in an animation, satisfying both the desire for vicarious fulfillment and the special desire for "something of my own." The user can feel excitement and pride at seeing themselves as the main character of a favorite video, an experience that can build self-esteem and confidence and leave a lasting impression. The invention can also be differentiated from other content because a family can keep the composite videos as memories of their child's early years.

Also, since the user can become a favorite character, or a friend of the main character, in the composite video, the invention can help raise the character's recognition and appeal.

Also, because each video stars a specific child as its main character, illegal downloading and unauthorized reproduction become meaningless: nobody wants content in which someone else's child is the star. As a result, the value of the content is preserved for its creators.

Meanwhile, to produce a composite image with high realism and immersion, the system 100 for providing a user-customized character video in real time according to the present invention preferably modifies the user's facial expression to match the character's. That is, the child's facial expression should be changed to follow the character's varied appearances: facing front or to the side, sleeping, smiling, crying, and so on.

To this end, the synthesis processing unit 160 includes a character face extraction module 161, a user face extraction module 162, a facial feature point comparison module 163, a facial feature point transformation module 164, and a composite image generation module 165.

The character face extraction module 161 extracts character facial feature points from the character's face in each scene of the character video selected by the user.

The user face extraction module 162 extracts user facial feature points from the user's face in the photograph. Here, a facial feature point is any element from which the impression or expression of a face can be recognized, such as the face contour, eyebrows, eyes, nose, and mouth of the character or the child; each feature point carries the shape and position of its element.
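
The following sketch shows one way such feature points could be obtained. dlib's pretrained 68-point landmark model is an assumption for illustration, not the extractor specified by the patent.

```python
# Hypothetical feature-point extractor for modules 161/162. Assumes
# dlib and its pretrained 68-point model, which is a separate download
# ("shape_predictor_68_face_landmarks.dat").
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_feature_points(image):
    """Return the 68 facial feature points (face contour, eyebrows,
    eyes, nose, mouth) of the first detected face as a (68, 2) array,
    or None when no face is found."""
    rects = detector(image)
    if not rects:
        return None
    shape = predictor(image, rects[0])
    return np.array([(p.x, p.y) for p in shape.parts()])
```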

The facial feature point comparison module 163 compares the extracted character facial feature points with the user facial feature points. In the present invention, the comparison measures the degree of match between the two sets of feature points.

The facial feature point transformation module 164 transforms the user facial feature points to match the character facial feature points based on the comparison. That is, when the degree of match is 80% or more, the module composites the child's feature points onto the character's as they are; when the match is below 80%, it appropriately transforms the shape and position of the user facial feature points.

For example, changes in the shape and position of the eyes, mouth, and nose express most facial expressions, so to maximize the believability of the composite it is important to extract the shape and position of the eyes and mouth from the character's face and the child's face and compare them. For the eyes, the shape and position are extracted from each face, and eye-related information such as how far the eyelids are open, the position of the outer eye corners, and the position of the pupils indicates the eye expression. If the degree of match is 80% or more, the child's eyes can be composited onto the character's eyes as they are. If it is below 80%, the eye shape and position may be modified, for instance by shifting the eyes up or down to follow the character's gaze, or by raising or lowering the outer corners. Likewise for the mouth, the shape and position are extracted from each face, and mouth-related information such as how far the mouth is open and the positions of the upper and lower lips indicates the mouth expression. If the match is 80% or more, the child's mouth can be composited onto the character's mouth as it is; below 80%, the child's upper and lower lips may be moved up, down, left, or right to follow the character's mouth expression, or the mouth shape and position may otherwise be modified before compositing.
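
As a concrete illustration of the comparison and the 80% rule, the sketch below defines a simple match-degree metric over normalized landmark sets and applies the threshold. The metric and the interpolation weight are assumptions, since the patent states the threshold but not an exact formula.

```python
# Hypothetical match-degree comparison (module 163) and feature-point
# transformation (module 164). The normalization, the residual-based
# percentage, and the 0.5 blend weight are all illustrative choices.
import numpy as np

def match_degree(char_pts: np.ndarray, user_pts: np.ndarray) -> float:
    """Score two (N, 2) landmark sets as a percentage in [0, 100]."""
    def normalize(p):
        p = p - p.mean(axis=0)        # remove position
        return p / np.linalg.norm(p)  # remove scale
    residual = np.linalg.norm(normalize(char_pts) - normalize(user_pts))
    return max(0.0, 100.0 * (1.0 - residual))

def adapt_user_points(char_pts: np.ndarray, user_pts: np.ndarray) -> np.ndarray:
    """Keep the user's points as-is at a match of 80% or more;
    otherwise nudge their shape and position toward the character's
    expression (e.g. eye corners, lip positions)."""
    if match_degree(char_pts, user_pts) >= 80.0:
        return user_pts
    blend = 0.5  # assumed interpolation weight
    return (1.0 - blend) * user_pts + blend * char_pts
```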

In the present invention, the reference level for the degree of match is set at 80% for the following reason: experiments with several candidate thresholds, such as 70%, 80%, and 90%, showed 80% to be the optimal level at which the composite avoids looking awkward even when the composited character's pose and facial expression differ sharply from those in the photograph.

The composite image generation module 165 composites the transformed user facial feature points onto the character facial feature points to generate the composite image.
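
One possible realization of this blending step is sketched below using OpenCV's seamless cloning. The patent only requires that a composite image be produced, so this particular blending operator is an assumption.

```python
# Hypothetical sketch of the composite image generation module 165:
# blend the styled user face into the character's face region of one
# frame. seamlessClone is one blending choice among many.
import cv2
import numpy as np

def composite_face(user_face: np.ndarray, frame: np.ndarray,
                   center_xy: tuple) -> np.ndarray:
    mask = 255 * np.ones(user_face.shape[:2], dtype=np.uint8)
    return cv2.seamlessClone(user_face, frame, mask, center_xy,
                             cv2.NORMAL_CLONE)
```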

As described above, by extracting the user's facial feature points from a photograph, comparing them with the character's facial feature points to determine the degree of match, and transforming the user's feature points before compositing when the match is below 80%, the character's pose and expression can be matched while still using the child's own face, yielding high realism and immersion. Because a character's expression varies with its pose and the story of the video, this synthesis method can generate the child's corresponding expressions and produce a natural composite video even from a single photograph that does not capture every expression.

Meanwhile, the present invention can go beyond letting the user become the main character, or a friend of the main character, in a character video stored in the character video storage unit 110: the development of the story itself can be changed according to the user's selection.

<Description of Method>

A method for providing a user-customized character video in real time according to the present invention is described below with reference to the flowchart of FIG. 4, together with the exemplary diagrams of FIG. 2 and FIG. 3.

1. Storing character videos <S410>

2D or 3D character videos that children like, such as Pororo, are stored.

2. Extracting the user's face from the user image <S420>

The user's face is extracted from the user image. That is, when a child's face is photographed with a terminal such as a mobile phone, PDA, or smartphone, even against a complex background or among several people, the child's face is extracted as shown in FIG. 2(b).

3. Receiving the style information to be applied to the extracted face <S430>

Style information to be applied to the face extracted in step S420 is received from the user. Here, style information means anything that can be applied to a child's face, such as a hairstyle, a hat or accessories to wear, or cheek blush.

4. Generating a face image of the user with the style information applied <S440>

A face image of the user with the style information received in step S430 applied is generated. The user can thus change the extracted child's hair style and color, or put a hat on the child, as shown in FIG. 2(c).

5. Selecting the character video to combine with the user's face image <S450>

The user selects the character video to combine with the face image generated in step S440, together with the character in it (the main character or one of the main character's friends). For example, as shown in FIG. 2(d), the user selects, from among the various character videos, the video the child wants to experience and a character within it.

6. Generating a composite image in real time <S460>

The user's face image, with style information such as a new hair style or a hat applied, is composited into the face region of the character selected in step S450, generating a 2D or 3D composite image in real time.

7. Displaying the composite image <S470>

The composite image generated in step S460, that is, the video in which the child is the main character, is displayed as shown in FIG. 2(e).

As described above, according to the present invention, the face of a user (a child) can be composited in real time onto the face of a main character such as Pororo, so the user can be provided with a personalized character video (animation, game, etc.) and can feel a stronger sense of reality than the indirect, second-hand emotional experience of existing images and media.

In addition, the user can feel excitement and pride at seeing themselves as the main character of a video they usually like, and since this experience builds self-esteem and confidence, it can leave a deep and lasting impression.

Meanwhile, in the present invention, the user's facial expression is modified in step S460 to match the character's, providing a composite image with high realism and immersion. That is, the child's expression should be changed to follow the character's varied appearances: facing front or to the side, sleeping, smiling, crying, and so on.

To this end, the following synthesis method is employed in step S460.

First, character facial feature points are extracted from the character's face in each scene of the selected character video (S461), and user facial feature points are extracted from the user's face in the photograph (S462). Here, a facial feature point is any element from which the impression or expression of a face can be recognized, such as the face contour, eyebrows, eyes, nose, and mouth of the character or the child; each feature point carries the shape and position of its element. Next, the character facial feature points extracted in step S461 are compared with the user facial feature points extracted in step S462; in the present invention, the degree of match between the two sets of feature points is measured (S463). Then, based on this comparison, the user facial feature points are transformed to match the character facial feature points (S464). That is, at this stage the user's feature points are composited onto the character's as they are when the degree of match is 80% or more, and their shape and position are appropriately transformed when the match is below 80%.

For example, changes in the shape and position of the eyes, mouth, and nose express most facial expressions, so to maximize the believability of the composite it is important to extract the shape and position of the eyes and mouth from the character's face and the child's face and compare them. For the eyes, the shape and position are extracted from each face, and eye-related information such as how far the eyelids are open, the position of the outer eye corners, and the position of the pupils indicates the eye expression. If the degree of match is 80% or more, the child's eyes can be composited onto the character's eyes as they are. If it is below 80%, the eye shape and position may be modified, for instance by shifting the eyes up or down to follow the character's gaze, or by raising or lowering the outer corners. Likewise for the mouth, the shape and position are extracted from each face, and mouth-related information such as how far the mouth is open and the positions of the upper and lower lips indicates the mouth expression. If the match is 80% or more, the child's mouth can be composited onto the character's mouth as it is; below 80%, the child's upper and lower lips may be moved up, down, left, or right to follow the character's mouth expression, or the mouth shape and position may otherwise be modified before compositing.

In the present invention, the reference level for the degree of match is set at 80% for the following reason: experiments with several candidate thresholds, such as 70%, 80%, and 90%, showed 80% to be the optimal level at which the composite avoids looking awkward even when the composited character's pose and facial expression differ sharply from those in the photograph.

Finally, the transformed user facial feature points are composited onto the character facial feature points to generate the composite image (S465).
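
Tying steps S461 to S465 together, the sketch below runs the comparison, transformation, and compositing once per frame, reusing the illustrative helpers introduced earlier (extract_feature_points, adapt_user_points, composite_face). Warping the face texture onto the adjusted feature points is noted but omitted for brevity.

```python
# Hypothetical per-frame pipeline for steps S461-S465. Character
# feature points are assumed precomputed per frame (S461); the video
# I/O choices are illustrative.
import cv2

def render_customized_video(video_path, out_path, user_face, user_pts,
                            char_pts_per_frame):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                          fps, size)
    for char_pts in char_pts_per_frame:
        ok, frame = cap.read()
        if not ok:
            break
        adjusted = adapt_user_points(char_pts, user_pts)  # S463-S464
        # A full implementation would warp user_face so its landmarks
        # land on `adjusted`; here the face is only positioned at the
        # character's face centre before blending (S465).
        cx, cy = char_pts.mean(axis=0).astype(int)
        out.write(composite_face(user_face, frame, (int(cx), int(cy))))
    cap.release()
    out.release()
```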

With the synthesis method described above, a character's facial expression varies with its pose and the story of the video, yet the child's corresponding expressions can be generated even from a single photograph that does not capture every expression, yielding a natural composite image with high realism and immersion.

The method of providing a user-customized character video according to the present invention may be implemented as program instructions executable by various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be specially designed and constructed for the present invention, or may be known and available to those skilled in computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine code such as that produced by a compiler, as well as high-level language code executable by a computer using an interpreter. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the present invention, and vice versa.

The foregoing description merely illustrates the technical idea of the present invention, and those skilled in the art may make various changes and modifications without departing from its essential characteristics. The embodiments disclosed herein are therefore intended to illustrate, not to limit, the technical idea of the invention, and the scope of that idea is not limited by them. The scope of protection of the present invention should be construed according to the following claims, and all technical ideas within their equivalent scope should be construed as falling within the scope of the invention.

100: System for providing a user-customized character video in real time

Claims (9)

1. A system for providing a user-customized character video in real time, comprising:
a character video storage unit for storing character videos;
a user face extraction unit for extracting a user's face from a user image;
a style information input unit for receiving, from the user, style information to be applied to the extracted face;
a user face generation unit for generating a face image of the user with the style information applied;
a character video selection unit for receiving the user's selection of one character video to be combined with the generated face image;
a synthesis processing unit for compositing the face image with the style information applied into a face region of a character of the selected character video, to generate a composite image in real time; and
a display unit for displaying the generated composite image.
2. The system according to claim 1, wherein the synthesis processing unit modifies the user's facial expression to match the facial expression of the character.
3. The system according to claim 2, wherein the synthesis processing unit comprises:
a character face extraction module for extracting character facial feature points from the face of the character;
a user face extraction module for extracting user facial feature points from the face of the user;
a facial feature point comparison module for comparing the character facial feature points with the user facial feature points;
a facial feature point transformation module for transforming the user facial feature points to match the character facial feature points based on the comparison; and
a composite image generation module for compositing the transformed user facial feature points onto the character facial feature points to generate the composite image.
4. The system according to claim 3, wherein the facial feature point comparison module compares the degree of match between the character facial feature points and the user facial feature points, and the facial feature point transformation module modifies the shape and position of the user facial feature points when the degree of match is less than 80%.
5. A method for providing a user-customized character video in real time, comprising the steps of:
(a) storing character videos;
(b) extracting a user's face from a user image;
(c) receiving, from the user, style information to be applied to the extracted face;
(d) generating a face image of the user with the style information applied;
(e) receiving from the user a selection of the character video to be combined with the user's face image;
(f) generating a composite image in real time by compositing the face image with the style information applied into a face region of a character of the selected character video; and
(g) displaying the generated composite image.
6. The method according to claim 5, wherein in step (f) the user's facial expression is modified to match the facial expression of the character.
7. The method according to claim 6, wherein step (f) comprises:
(f1) extracting character facial feature points from the face of the character;
(f2) extracting user facial feature points from the face of the user;
(f3) comparing the character facial feature points with the user facial feature points;
(f4) transforming the user facial feature points to match the character facial feature points based on the comparison; and
(f5) compositing the transformed user facial feature points onto the character facial feature points to generate the composite image.
8. The method according to claim 7, wherein in step (f3) the degree of match between the character facial feature points and the user facial feature points is compared, and in step (f4) the shape and position of the user facial feature points are modified when the degree of match is less than 80%.
9. A computer-readable recording medium having recorded thereon a program for executing the method according to any one of claims 5 to 8.
KR1020120132397A 2012-11-21 2012-11-21 System for providing character video and method thereof KR20140065762A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020120132397A KR20140065762A (en) 2012-11-21 2012-11-21 System for providing character video and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020120132397A KR20140065762A (en) 2012-11-21 2012-11-21 System for providing character video and method thereof

Publications (1)

Publication Number Publication Date
KR20140065762A (en) 2014-05-30

Family

ID=50892533

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020120132397A KR20140065762A (en) 2012-11-21 2012-11-21 System for providing character video and method thereof

Country Status (1)

Country Link
KR (1) KR20140065762A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160012362A (en) * 2014-07-23 2016-02-03 서울특별시 Security system and the control method thereof
KR20200025062A (en) * 2018-08-29 2020-03-10 주식회사 케이티 Apparatus, method and user device for prividing customized character
KR102399255B1 (en) * 2021-12-22 2022-05-18 주식회사 위딧 System and method for producing webtoon using artificial intelligence
KR20230080543A (en) * 2021-11-30 2023-06-07 (주) 키글 System for creating face avatar
WO2024085513A1 (en) * 2022-10-18 2024-04-25 삼성전자 주식회사 Display device and method for operating same

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160012362A (en) * 2014-07-23 2016-02-03 서울특별시 Security system and the control method thereof
KR20200025062A (en) * 2018-08-29 2020-03-10 주식회사 케이티 Apparatus, method and user device for prividing customized character
KR20230080543A (en) * 2021-11-30 2023-06-07 (주) 키글 System for creating face avatar
KR102399255B1 (en) * 2021-12-22 2022-05-18 주식회사 위딧 System and method for producing webtoon using artificial intelligence
WO2024085513A1 (en) * 2022-10-18 2024-04-25 삼성전자 주식회사 Display device and method for operating same

Similar Documents

Publication Publication Date Title
CN107154069B (en) Data processing method and system based on virtual roles
KR101445263B1 (en) System and method for providing personalized content
Prince Digital visual effects in cinema: The seduction of reality
US20090153552A1 (en) Systems and methods for generating individualized 3d head models
CN106648071A (en) Social implementation system for virtual reality
Ryokai et al. StoryFaces: pretend-play with ebooks to support social-emotional storytelling
KR20140065762A (en) System for providing character video and method thereof
Gress [digital] Visual Effects and Compositing
Van der Laan et al. Creating aesthetic, institutional and symbolic boundaries in fashion photo shoots
Cleland Image avatars: Self-other encounters in a mediated world
Hu Forming the Spectacle of Body: Analysis of the User-Platform Relationship through Body Performance Videos on TikTok
Reinhuber et al. Layered images: the desire to see more than the obvious
Wikayanto et al. Aesthetic Morphology of Animation
Doroski Thoughts of spirits in madness: Virtual production animation and digital technologies for the expansion of independent storytelling
Ng et al. A pedagogy of craft: Teaching culture analysis with machinima
KR102553432B1 (en) System for creating face avatar
Gan The newly developed form of Ganime and its relation to selective animation for adults in Japan
WO2023130715A1 (en) Data processing method and apparatus, electronic device, computer-readable storage medium, and computer program product
KR20120076456A (en) Avata media production method and device using a recognition of sensitivity
Franco Alonso Differences between Japanese and Western anatomical animation techniques applied to videogames
O’Meara Anna Biller
McLean All for Beauty: Makeup and Hairdressing in Hollywood's Studio Era
Arts AR Cinema: Visual Storytelling and Embodied Experiences with Augmented Reality Filters and Backgrounds
Li et al. The Analysis of Two-Dimensional Animation Lens
KR101243832B1 (en) Avata media service method and device using a recognition of sensitivity

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination