KR101743763B1 - Method for providng smart learning education based on sensitivity avatar emoticon, and smart learning education device for the same
- Publication number
- KR101743763B1 (application KR1020150092072A)
- Authority
- KR
- South Korea
- Prior art keywords
- face
- avatar
- animation
- template
- information
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
Abstract
The present invention relates to a smart learning method based on emotional avatar emoticons, and to a smart learning terminal device implementing the method. The smart learning terminal device 100 comprises a camera 110, a touch screen 120, a microphone 130, a speaker 140, a transceiver 150, a controller 160, and a storage unit 170, the controller 160 having an emotional avatar emoticon generation module 161, a hybrid feature point detection algorithm module 162, and a smart learning module 163. The emotional avatar emoticon generation module 161 comprises feature point detection means 161a, character generation means 161b, template resource image generation means 161c, 3D face application means 161d, and voice parsing means 161e.

The feature point detection means 161a uses template character information pre-stored in the storage unit 170: group icon information generated in advance for each part of the emotional avatar face character (for the eye region, group icons such as eye wrinkles, eye whites, and the eye frame; for the mouth, group icons such as the lip line, lip gloss, and lip wrinkles). This template character information is the standard data for expressing a user character from a 2D frontal face photograph; stored as group information for each face component, it is used to generate the animation containing the emotional avatar face character, which forms the face area of the final animated emoticon. The feature point detection means 161a extracts the face region from the 2D frontal face photograph image and automatically locates its face components, the eyes, nose, and mouth (including the ears and jaw). From the template character information it extracts the standard data matched with the eyes, nose, mouth, ears, and jaw using the per-component group information, and generates a similarity transformation matrix, selected from the group of each part, using reference points chosen on the outlines of those parts together with the pre-stored standard data.

To create the animated 'emotional avatar emoticon', the character generation means 161b automatically extracts the face area of the 2D frontal face photograph image and normalizes the user's face database by matching the template character information pre-stored in the storage unit 170 (standard data for changes such as a lip change, an eye changed into a wink, or lips changed into a pouting expression) according to the similarity transformation matrix generated by the feature point detection means 161a, generating 'emotional avatar face character normalization information' and storing it in the storage unit 170. This normalization information, obtained by extracting and combining one group icon for each part of the emotional avatar face character via the similarity transformation matrix, is created for use in animation effects, shadow effects, and gamma correction and shading for eyebrow replacement.

The template resource image generation means 161c parses the face area extracted by the character generation means 161b against template animation content information, content that can be expressed in animation form, to create the 2D emotional avatar emoticon. It parses the face components of the parsed face area and judges whether they suit the self-image animation implemented according to the template animation content information: the percentage of fitness, the matching between the animation's standard face-part information and the eyes, nose, mouth, ears, and jaw of the parsed face area, is compared with a predetermined threshold percentage, the face being judged suitable at or above the threshold and unsuitable below it. If suitable, the animated 2D emotional avatar emoticon is completed by replacing the animation's face components with the face components constituting the face area of the 2D frontal face photograph image.

If, conversely, the face is not suitable for the self-image animation, the 'emotional avatar face character normalization information' is used as the face component template in the self-image animation implemented according to the template animation content information stored in the storage unit 170, which also holds the template animation content information corresponding to the animation template information for facial expression animation. Face region cropping is performed on the face component template information constituting the normalization information of the similarity transformation matrix, the face component for the emotional avatar implementing the animation is changed and stored in the storage unit 170, and the user's skin color and state are extracted to implement the face component corresponding to the selected template. In the first partial process for applying the template to the 2D emotional avatar emoticon, the skin corresponding to the selected face component template is created: depending on the part to which the animation applies, the facial skin is removed and regenerated automatically to reflect the user's skin properties under the animation effect, after the skin color attribute has been extracted. In the second partial process, the shapes of the eyebrows, eyes, lips, and jaw are changed according to the animation effect; the user's own eyebrows, eyes, lips, and jaw are changed automatically, and the template face component is adjusted to the user's facial attributes. The size and color attributes of the face area to be applied as the animation's face character icon are extracted from the 2D frontal face photograph image, and the selected face component template is changed to the matching color and size, completing the generation of the 2D emotional avatar emoticon.

The 3D face application means 161d performs '3D face modeling based on 2D' on the face area of the 2D emotional avatar emoticon generated by the template resource image generation means 161c, automatically generating top, bottom, left, and right views to create a 3D emotional avatar emoticon displayed in animation form. The top, bottom, left, and right sides of the user's 2D frontal face photograph image are distorted to produce a rotation effect (2D facial morphing for the 3D face animation effect). The face area of the 2D emotional avatar emoticon is decoded and the decoded image stored in the storage unit 170; polygons, the smallest units used to express a three-dimensional shape in 3D graphics, are generated and converted into a polygon set; texture mapping attaches the decoded image to the polygon set, producing 3D face area data, which is stored in the storage unit 170; and the resulting 3D emotional avatar emoticon is scaled down to one hundredth (0.01 times), stored in the storage unit 170, and output to the touch screen 120. As a preprocessing step for the 3D face modeling based on 2D, outline detection is performed on the image: binarization or special processing is applied, binarization lowering the 256-step color values to values of 0 and 1 to sharpen the edge information, while the special processing changes the color image to gray levels, executes outline detection, extracts the outline, and improves the user's selection of the outline's reference points.

The voice parsing means 161e extracts from the storage unit 170 the emoticon template contents for the 2D emotional avatar emoticon generated by the template resource image generation means 161c and for the 3D emotional avatar emoticon generated by the 3D face application means 161d, using the template character information stored in the storage unit 170. It outputs the 2D or 3D emotional avatar emoticon to the touch screen 120 in animation form based on the template animation content information, receives the voice signal input to the microphone 130, converts the emoticon into a 2D or 3D emotional avatar emoticon including voice, with a voice expression and voice emotion for each emoticon, and stores it in the storage unit 170, completing the generation of emotional avatar emoticons including voice.
As a result, the learner's own emotional avatar is generated immediately, and the parsed portion of the learner's own voice drives the content (for example, menu navigation): the learner's voice is inserted into the content on the spot and linked to the lip-sync of the emotional avatar corresponding to the learner's own avatar or a standard avatar. This provides a realistic, indirect experience that is highly effective for educating children, whose learning is experience-centered.
In addition, the invention improves the learner's immersion in smart learning by using emotional avatar emoticons whose eye, nose, and mouth shapes and colors are generated automatically according to various expressions.
It also performs string parsing, generates voice-bearing emotional avatar emoticons as lip-sync animations, and offers real-time combination of emotional animation.
Description
The present invention relates to a method of providing smart learning based on emotional avatar emoticons, and to a smart learning terminal device for implementing the method. More specifically, the learner's own emotional avatar is generated immediately on the smart learning terminal device, the learner's own voice is parsed and inserted directly into the contents, and the recorded voice is linked to the lip-sync of the emotional avatar corresponding to the learner's own avatar or a standard avatar, providing a realistic indirect experience that is highly effective for children's education, which has an experience-centered character.
Since the launch of mobile terminals with computing capability, such as smartphones and smart pads, smart devices have developed rapidly and the related markets have expanded quickly.
With the development of smart device hardware and network communication technology, the e-learning service, a learning service delivered over a communication network, was introduced. Recently, demand has grown to receive e-learning services, once limited to personal computers, on various smart devices under the names mobile learning and smart learning.
[Related Technical Literature]
1. Method for managing smart learning teaching desk, and system and smart learning teaching desk apparatus (Patent Application No. 10-2013-0086517) (APPARATUS AND SYSTEM FOR SMART LEARNING TEACHING DESK, AND METHOD OF MANAGING SMART LEARNING TEACHING DESK)
2. Smart learning platform system and method for providing education services in a smart learning platform system (Patent Application No. 10-2012-0122443)
SUMMARY OF THE INVENTION The present invention has been made to address the above-mentioned limitations. It is an object of the present invention to provide a method of providing smart learning based on emotional avatar emoticons, and a smart learning terminal device implementing it, in which the learner's own emotional avatar is generated immediately, the learner's own voice is parsed and inserted into the contents on the spot, and the voice is recorded in association with the lip-sync of the emotional avatar corresponding to the learner's own avatar or a standard avatar, thereby providing a realistic indirect experience that is highly effective for educating children, whose learning is experience-centered.
It is another object of the present invention to provide a method of providing smart learning based on emotional avatar emoticons, and a smart learning terminal device implementing it, that improve the learner's immersion in smart learning by using emotional avatar emoticons whose eye, nose, and mouth shapes and colors are generated automatically according to the user's various expressions.
It is a further object of the present invention to provide a method of providing smart learning based on emotional avatar emoticons, and a smart learning terminal device implementing it, that perform character string parsing, generate voice-bearing emotional avatar emoticons as lip-sync animations, and offer real-time combination of emotional animation.
However, the objects of the present invention are not limited to those mentioned above; other objects not mentioned will be clearly understood by those skilled in the art from the following description.
In order to achieve the above objects, a smart learning terminal device based on emotional avatar emoticons according to an embodiment of the present invention includes a camera 110, a touch screen 120, a microphone 130, a speaker 140, a transceiver 150, a controller 160, and a storage unit 170, the controller 160 having an emotional avatar emoticon generation module 161, a hybrid feature point detection algorithm module 162, and a smart learning module 163.
The method for providing smart learning based on emotional avatar emoticons according to an embodiment of the present invention, and the smart learning terminal device implementing it, generate and parse the learner's own emotional avatar immediately, insert the learner's own voice directly into the content, and record the voice in association with the lip-sync of the emotional avatar corresponding to the learner's own avatar or a standard avatar, providing a realistic, indirect experience that is highly effective in educating children with experience-centered characteristics.
In addition, the method and terminal device according to another embodiment of the present invention improve the learner's immersion in smart learning by using emotional avatar emoticons whose shapes and colors are generated automatically from the photograph according to various expressions.
Further, the method and terminal device according to another embodiment of the present invention perform character string parsing, generate voice-bearing emotional avatar emoticons as lip-sync animations, and provide real-time combination of emotional animation.
FIG. 1 and FIG. 2 illustrate a smart learning method based on emotional avatar emoticons according to an embodiment of the present invention.
FIG. 3 is a block diagram showing the configuration of a smart learning terminal device 100 according to an embodiment of the present invention.
FIG. 4 is a block diagram showing the configuration of the emotional avatar emoticon generation module 161 of FIG. 3.
FIGS. 5 to 7 are views for explaining user interface screens implemented on the smart learning terminal device 100.
FIG. 8 is a view for explaining the operation principle of the hybrid feature point detection algorithm module 162.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, preferred embodiments of the present invention are described in detail with reference to the accompanying drawings. Detailed descriptions of known functions and configurations are omitted where they would obscure the subject matter of the present invention.
In this specification, when one element 'transmits' data or a signal to another element, it may transmit the data or signal directly to the other element, or through at least one intermediate element.
FIG. 1 and FIG. 2 illustrate a smart learning method based on emotional avatar emoticons according to an embodiment of the present invention.
FIGS. 5 to 7 illustrate user interface screens implemented on a smart learning terminal device (hereinafter, smart learning terminal device) 100 that implements the smart learning method based on emotional avatar emoticons according to an embodiment of the present invention.
Referring to FIGS. 1 and 2, the smart learning terminal device 100 begins the procedure (S10).
After step S10, the smart learning terminal device 100 performs step S20.
After step S20, the smart learning terminal device 100 performs step S30.
After step S30, the smart learning terminal device 100 performs step S40.
After step S40, the smart learning terminal device 100 receives the learner's selection signal for an emotional avatar emoticon (S50).
After step S50, the smart learning terminal device 100 sets the emotional avatar emoticons matching the selection signal of step S50 as animation characters forming one series group, stores them in the storage unit 170, and displays the first smart learning problem on the touch screen 120 (S60).
After step S60, the smart learning terminal device 100 performs step S70.
After step S70, the smart learning terminal device 100 performs step S80.
After step S80, the smart learning terminal device 100 determines whether all of the smart learning problems have been displayed (S90).
On the other hand, if the determination in step S90 shows that not all of the smart learning problems have been displayed, the smart learning terminal device 100 displays the next smart learning problem and returns to the determination of step S90 (S100).
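The patent gives no code for this loop, but the S60 to S100 sequence amounts to iterating over a problem set until every problem has been shown. The Python sketch below illustrates that flow; the problem set and the helper functions are hypothetical stand-ins, not part of the patent.

```python
"""Sketch of the FIG. 1-2 problem loop (S60-S100); the helpers and the
problem set are assumed stand-ins, since the patent does not define them."""

problems = ["2 + 3 = ?", "5 - 1 = ?"]   # stand-in smart learning problems
answers = ["5", "4"]

def display_problem(problem):            # S60/S100: show one problem
    print("Problem:", problem)

def get_answer():                        # learner response via the terminal
    return input("Answer: ")

for i, problem in enumerate(problems):
    display_problem(problem)
    correct = get_answer().strip() == answers[i]
    print("Avatar reacts:", "smile" if correct else "frown")
    # S90: the loop continues until every problem has been displayed.
print("All problems displayed; results output (S90/S100).")
```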
FIG. 3 is a block diagram showing the configuration of the smart learning terminal device 100 according to an embodiment of the present invention. Referring to FIG. 3, the smart learning terminal device 100 includes a camera 110, a touch screen 120, a microphone 130, a speaker 140, a transceiver 150, a controller 160, and a storage unit 170, and the controller 160 includes an emotional avatar emoticon generation module 161, a hybrid feature point detection algorithm module 162, and a smart learning module 163.
Hereinafter, the configuration and operation of the emotional avatar emoticon generation module 161 are described in detail with reference to FIG. 4.
Referring to FIG. 4, the emotional avatar emoticon generation module 161 includes feature point detection means 161a, character generation means 161b, template resource image generation means 161c, 3D face application means 161d, and voice parsing means 161e.
The feature point detection means 161a obtains a similarity transformation matrix with the user's face based on the template character information previously stored in the storage unit 170.
More specifically, in order to obtain the similarity transformation matrix using the template character information, which is the standard data for representing the user character based on a 2D frontal face photograph, the feature point detection means 161a analyzes the 2D frontal face photograph image, extracts the face region, and automatically locates the positions of the face components of that region: the eyes, nose, and mouth (including the ears and jaw).
The template character information for character creation is stored in advance as the standard data used to generate animations containing the emotional avatar face character, the face area of the final animated product, and is organized as group information for each face component. The template character information is group icon information for each part of the emotional avatar face character, generated in advance per part. For example, eye-region group icon information carries group icons such as the eye line, eye wrinkles, and pupil border, while mouth-region information carries group icons such as the lip line, lip gloss, and lip wrinkles.
That is, the feature point detection means 161a extracts from the template character information the standard data matched with the eyes, nose, mouth, ears, and jaw using the group information for each face component, stores that standard data, and then generates a similarity transformation matrix, selected from the group of each part, using reference points chosen on the outlines of the eyes, nose, mouth, ears, and jaw in the 2D frontal face photograph image together with the pre-stored standard data.
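The patent does not specify how the similarity transformation matrix is computed from the matched reference points. A standard choice is a least-squares similarity fit (Umeyama's method), sketched below in Python with NumPy; the landmark coordinates in the example are illustrative only.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping src landmarks onto dst landmarks (Umeyama's method).
    src, dst: (N, 2) arrays of matched reference points, e.g. points
    sampled on the outlines of the eyes, nose, mouth, ears, and jaw."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))      # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean
    M = np.eye(3)                            # 3x3 homogeneous similarity matrix
    M[:2, :2] = scale * R
    M[:2, 2] = t
    return M

# Example: map assumed template eye/nose/mouth points onto detected photo points.
template_pts = np.array([[30, 40], [70, 40], [50, 60], [50, 80]], float)
photo_pts = np.array([[120, 150], [200, 148], [160, 190], [162, 230]], float)
M = similarity_transform(template_pts, photo_pts)
```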
The character generation means 161b performs automatic extraction of the face area from the 2D frontal face photograph image in order to generate an animated 'emotional avatar emoticon' using the emotional avatar face character embodying the user's face area.
The character generation means 161b normalizes the user's face database by matching the template character information previously stored in the storage unit 170 according to the similarity transformation matrix generated by the feature point detection means 161a, thereby generating 'emotional avatar face character normalization information' and storing it in the storage unit 170. Here, the template character information means the standard data for expressing changes such as a change of the lips, a change of the eyes into a winking expression, or a change of the lips into a pouting expression.
That is, the character generation means 161b generates the 'emotional avatar face character normalization information' by extracting one of the group icons of each part of the emotional avatar face character using the similarity transformation matrix generated by the feature point detection means 161a and combining them.
In addition, the character generation means 161b creates the 'emotional avatar face character normalization information' for use in animation effects, shadow effects, and gamma correction and shading for eyebrow replacement.
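As a rough illustration of this normalization and shading step, the sketch below warps a template part image into the user's face coordinates with the similarity matrix and applies a simple gamma correction. It assumes OpenCV and is one plausible reading of the step, not the patent's exact procedure.

```python
import numpy as np
import cv2  # pip install opencv-python

def normalize_template(template_img, M, out_size):
    """Warp a template face-part image into the user's face coordinates
    using the 3x3 similarity matrix M from the feature point step.
    out_size is (width, height) of the target face image."""
    return cv2.warpAffine(template_img, M[:2, :], out_size)

def gamma_correct(img, gamma=0.8):
    """Simple gamma correction, e.g. to darken or lighten a swapped
    eyebrow region so it blends with the surrounding skin (a sketch,
    not the patent's specified shading method)."""
    table = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
    return cv2.LUT(img, table)
```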
The template resource image generation means 161c parses the face area of the 2D frontal face photograph image extracted by the character generation means 161b against the template animation content information. That is, the process of creating the 2D emotional avatar emoticon starts by parsing the face area of the 2D frontal face photograph image with template animation content information that can be expressed in animation form.
The template resource image generation means 161c then parses the face components within the parsed face area and judges whether they are suitable for the self-image animation implemented according to the template animation content information.
Here, the template resource image generation means 161c uses the following judgment criterion: it analyzes the percentage of fitness, that is, the matching between the standard face-part information of the animation implemented according to the template animation content information and the eyes, nose, mouth, ears, and jaw of the parsed face area, judging the face suitable if the percentage is at or above a predetermined threshold percentage, and unsuitable if it is below the threshold.
If it is judged suitable for the self-image animation, the template resource image generation means 161c completes the animated 2D emotional avatar emoticon by changing the animation's face components into the face components constituting the face area of the 2D frontal face photograph image.
Conversely, when the determination result is not suitable for the self-image animation, the template resource image generation means 161c uses the 'emotional avatar face character normalization information' as the face component template in the self-image animation implemented according to the template animation content information stored in the storage unit 170.
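The patent states only that a 'percentage of fitness' is compared with a threshold. One plausible realization, sketched below, scores the fraction of detected landmarks lying within a pixel tolerance of the animation's standard face-part positions; the tolerance, threshold, and coordinates are assumptions.

```python
import numpy as np

def fitness_percentage(detected_pts, standard_pts, tol=10.0):
    """Percentage of detected eye/nose/mouth/ear/jaw points landing within
    tol pixels of the standard face-part positions of the template
    animation (an assumed matching rule; the patent only specifies that
    a percentage is compared with a threshold)."""
    dists = np.linalg.norm(detected_pts - standard_pts, axis=1)
    return 100.0 * np.mean(dists <= tol)

THRESHOLD = 70.0  # assumed threshold percentage
use_own_face = fitness_percentage(
    np.array([[100, 90], [140, 92], [120, 130]], float),   # detected
    np.array([[102, 88], [138, 95], [121, 128]], float)    # standard
) >= THRESHOLD
```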
More specifically, the storage unit 170 stores in advance the 'emotional avatar face character normalization information', the set of face component template information generated for face normalization, together with the template animation content information corresponding to the animation template information for facial expression animation.
To apply a selected face component template, the template resource image generation means 161c performs face region cropping on at least one piece of the face component template information constituting the 'emotional avatar face character normalization information', which is the normalization information of the similarity transformation matrix, changes the face component for the emotional avatar implementing the animation, and stores the result in the storage unit 170.
Accordingly, as the first partial process for applying the template to the 2D emotional avatar emoticon, the template resource image generation means 161c extracts the user's skin color and state for implementing the face component corresponding to the selected face component template, and creates the skin matching that template.
Then, depending on the part to which the animation is applied, the template resource image generation means 161c removes the facial skin and regenerates it automatically, reflecting the user's skin properties according to the animation effect after extracting the skin color attribute of each part of the user's face.
More specifically, in the second partial process for applying the template to the 2D emotional avatar emoticon, when the face components are changed, the shapes of the eyebrows, eyes, lips, and jaw of the face are changed according to the animation effect; the user's own eyebrows, eyes, lips, and jaw are changed automatically, and the template face component generated in the first partial process is automatically adjusted to the color and shape attributes of the user's face elements and applied to the user's face.
In addition, the template resource image generation means 161c extracts, from the 2D frontal face photograph image, the size and color attributes of the face area to be applied as the face character icon of the animation object.
Next, the template resource image generation means 161c changes the selected face component template so that it can be applied as the animation target icon.
The template resource image generation means 161c completes the generation of the 2D emotional avatar emoticon by changing the selected face component template to the color and size matching the size and color attributes of the extracted face area, so that it can be applied as the animation target icon.
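The skin color extraction and recoloring steps could look like the following sketch, which shifts a face component template toward the user's mean skin color measured under a face mask. This is an assumed interpretation using OpenCV, not the patent's specified method.

```python
import numpy as np
import cv2

def tint_template_to_skin(template, face_img, face_mask):
    """Recolor a face-component template (HxWx3 uint8) toward the user's
    mean skin color, one plausible reading of the 'extract skin color
    attribute and reflect it in the template' step. face_mask is a
    single-channel uint8 mask selecting skin pixels in face_img."""
    skin_mean = np.array(cv2.mean(face_img, mask=face_mask)[:3])
    tmpl_mean = template.reshape(-1, 3).mean(axis=0)
    shift = skin_mean - tmpl_mean            # per-channel color offset
    tinted = template.astype(np.float32) + shift
    return np.clip(tinted, 0, 255).astype(np.uint8)
```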
The 3D face application means 161d performs '3D face modeling based on 2D' on the face area of the 2D emotional avatar emoticon generated by the template resource image generation means 161c, in order to automatically generate top, bottom, left, and right views and create a 3D emotional avatar emoticon displayed in animation form.
More specifically, the 3D face application means 161d performs 2D facial morphing as a 3D face animation effect: the top, bottom, left, and right sides of the user's 2D frontal face photograph image are distorted about the central axis according to the animation specification to produce a rotation effect.
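A simple way to fake such a rotation from a single frontal photo is a perspective warp about the vertical central axis, as in this OpenCV sketch; the yaw parameter and corner offsets are illustrative assumptions.

```python
import numpy as np
import cv2

def pseudo_rotate(face, yaw=0.15):
    """Fake a left/right head turn from one frontal photo by perspective-
    distorting the image about its vertical central axis (a sketch of
    '3D morphing from a 2D frontal photo'; parameters are illustrative)."""
    h, w = face.shape[:2]
    shift = yaw * w / 2
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Squeeze one side toward the viewer, keep the other fixed.
    dst = np.float32([[shift, shift * 0.3], [w, 0],
                      [w, h], [shift, h - shift * 0.3]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(face, H, (w, h))
```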
The 3D face modeling based on 2D proceeds as follows. The 3D face application means 161d decodes the face area of the 2D emotional avatar emoticon created from the 2D frontal face photograph image, generates a decoded image, and stores it in the storage unit 170. It then generates polygons, the smallest units used to express a three-dimensional shape in 3D graphics, creates a plurality of polygons, converts them into a polygon set, and stores the polygon set in the storage unit 170.
Accordingly, the 3D face application means 161d performs texture mapping, attaching the decoded image stored in the storage unit 170 onto the generated polygon set, to produce 3D face area data, stores it in the storage unit 170, scales the 3D emotional avatar emoticon to which the 3D face area data is applied down to one hundredth (0.01 times), stores the result in the storage unit 170, and outputs it to the touch screen 120, completing the 3D face modeling based on 2D.
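The polygon set, texture mapping, and 1/100 scale-down can be pictured with the toy mesh builder below: it lifts a depth map into triangles, attaches per-vertex UV coordinates into the decoded image, and scales the vertices by 0.01. The data layout is an illustrative assumption; the patent names no 3D API.

```python
import numpy as np

def build_textured_face(depth_map, texture):
    """Toy version of the polygon/texture-mapping step: turn an HxW depth
    map into a triangle mesh with per-vertex UVs into the decoded photo,
    then scale the model down by 0.01 as the patent describes."""
    h, w = depth_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    verts = np.stack([xs, ys, depth_map], axis=-1).reshape(-1, 3).astype(np.float32)
    uvs = np.stack([xs / (w - 1), ys / (h - 1)], axis=-1).reshape(-1, 2)
    tris = []
    for y in range(h - 1):
        for x in range(w - 1):
            i = y * w + x
            tris += [[i, i + 1, i + w], [i + 1, i + w + 1, i + w]]  # two triangles per grid cell
    verts *= 0.01                     # scale down to 1/100, per the patent
    return verts, np.array(tris), uvs, texture
```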
Meanwhile, as a preprocessing image process for the 3D face modeling based on 2D, the 3D face application means 161d performs outline detection on the image information of the face area of the 2D emotional avatar emoticon created from the 2D frontal face photograph image. To detect the outline, binarization or special processing is applied to the photographic image. Binarization lowers the many (256-step) color values to values of 0 and 1, sharpening the edge information of the image; in the special processing, the color image is changed to gray levels before outline detection is executed, the outline is extracted, and the user's selection of the reference points on the outline is improved.
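This preprocessing step maps naturally onto a standard OpenCV pipeline of gray-level conversion, binarization, and contour extraction, as sketched below; the patent names no library, so these calls are an assumed concrete rendering.

```python
import cv2

def outline_points(face_img):
    """Preprocessing sketch: gray-level conversion, binarization (256
    levels down to 0/1 via Otsu's threshold), then contour extraction,
    matching the patent's description of edge sharpening before outline
    detection. Returns the dominant outline of the face region."""
    gray = cv2.cvtColor(face_img, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)
```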
The voice parsing means 161e extracts from the storage unit 170 the emoticon template contents for the 2D emotional avatar emoticon generated by the template resource image generation means 161c and the 3D emotional avatar emoticon generated by the 3D face application means 161d, using the template character information pre-stored in the storage unit 170.
The voice parsing means 161e then outputs the 2D or 3D emotional avatar emoticon to the touch screen 120 in animation form based on the template animation content information, receives the voice signal input to the microphone 130, converts the emoticon into a 2D or 3D emotional avatar emoticon including voice, with a voice expression and voice emotion for each emoticon, and stores it in the storage unit 170, completing the generation of emotional avatar emoticons including voice.
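The patent does not describe its lip-sync algorithm. A common minimal approach, sketched below, converts the recorded voice signal into a per-frame mouth-openness curve from the amplitude envelope; the sample rate, frame rate, and synthetic test signal are assumptions.

```python
import numpy as np

def mouth_openness(waveform, sample_rate, fps=24):
    """Map a recorded voice signal (1-D array) to per-frame mouth
    openness in [0, 1] for lip-sync animation. This amplitude-envelope
    approach is an assumed stand-in; the patent does not specify it."""
    samples_per_frame = sample_rate // fps
    n_frames = len(waveform) // samples_per_frame
    frames = waveform[:n_frames * samples_per_frame].reshape(n_frames, -1)
    rms = np.sqrt((frames.astype(np.float64) ** 2).mean(axis=1))
    return rms / (rms.max() + 1e-9)

# Example with a synthetic 1-second signal at 16 kHz:
voice = np.sin(np.linspace(0, 2000, 16000)) * np.linspace(0, 1, 16000)
openness = mouth_openness(voice, 16000)
```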
The hybrid feature point detection algorithm module 162 detects the facial feature points used by the emotional avatar emoticon generation module 161.
In other words, the hybrid feature point detection algorithm module 162 combines an outline detection algorithm with a center point detection algorithm, as illustrated in FIG. 8.
The outline detection algorithm extracts, by image recognition, the entire outline b1 including the chin line; the eye line b2 including the pupil recognition line b21, the eye wrinkle recognition line b22 covering the inner eye corner and the double eyelid, and the eye expression change recognition line b23; the eyebrow and nose connection line b3 with the nose volume change recognition line b31; the wrinkle recognition line b4; the fine-line recognition line b5; the nose and mouth connection line b6; the mouth shape change recognition line b7; and the lip recognition line b8.
The center point detection algorithm extracts, by the image recognition method, the eyebrow center point c1, the eye center point c2, the nose center points c3 and c4, and the lips center point c5.
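The center points c1 to c5 can be computed as centroids of landmark groups, as in the sketch below; the particular landmark grouping and coordinates are assumed for illustration.

```python
import numpy as np

def center_points(landmarks):
    """Sketch of the center-point half of the hybrid detector: reduce
    each landmark group to its centroid (c1 eyebrows, c2 eyes, c3/c4
    nose, c5 lips). The grouping is assumed, not taken from the patent."""
    return {name: np.asarray(pts, float).mean(axis=0)
            for name, pts in landmarks.items()}

centers = center_points({
    "c1_eyebrow": [[90, 60], [130, 58]],
    "c2_eye": [[95, 80], [125, 80]],
    "c5_lips": [[100, 150], [120, 150]],
})
```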
The hybrid feature point detection algorithm module 162 applies the two algorithms together: the recognition lines b1 to b8 extracted by the outline detection algorithm and the center points c1 to c5 extracted by the center point detection algorithm are combined to locate the facial feature points, and the resulting feature points are provided to the feature point detection means 161a for generating the similarity transformation matrix.
The smart learning module 163 performs the smart learning procedure of FIGS. 1 and 2 (steps S10 to S100) using the emotional avatar emoticons generated by the emotional avatar emoticon generation module 161. That is, in response to a request from the smart learning module 163, the smart learning terminal device 100 outputs the smart learning problems to the touch screen 120, receives the learner's responses, and outputs, through the speaker 140, content in which the learner's parsed voice is linked to the lip-sync of the learner's own emotional avatar or of a standard avatar. The transceiver 150 transmits and receives the data required for the smart learning service over a communication network.
As described above, preferred embodiments of the present invention have been disclosed in the specification and drawings. Although specific terms have been used, they are used only in a general sense to describe the technical content of the invention easily and to aid understanding of the invention, and are not intended to limit its scope. It will be apparent to those skilled in the art that modifications based on the technical idea of the present invention are possible in addition to the embodiments disclosed herein.
100: Smart learning learning terminal device
110: camera
120: Touch screen
130: microphone
140: Speaker
150: Transceiver
160: Controller
161: emotional avatar emoticon generation module
161a: feature point detection means
161b: character generating means
161c: template resource image generation means
161d: 3D face applying means
161e: voice parsing means
162: Hybrid feature point detection algorithm module
163: Smart Learning Learning Module
170: Storage unit
Claims (5)
1. A smart learning terminal device based on emotional avatar emoticons, wherein the emotional avatar emoticon generation module 161 comprises feature point detection means 161a, character generation means 161b, template resource image generation means 161c, 3D face application means 161d, and voice parsing means 161e,
The feature point detection means (161a)
analyzes the 2D frontal face photograph image in order to extract a similarity transformation matrix, using the template character information pre-stored in the storage unit 170 (the template character information being group icon information generated in advance for each part of the emotional avatar face character: for the eye region, group icons such as the eye line, eye white, and pupil border; for the mouth, group icons such as the lip line, lip gloss, and lip wrinkles) as the standard data for expressing a user character based on the 2D frontal face photograph (this template character information for character creation being the standard data, stored in advance as group information for each face component, for generating the animation containing the emotional avatar face character included, with its facial expression, as the face area of the final animated emoticon), extracts the face region, and automatically locates the positions of the eyes, nose, and mouth (including the ears and jaw), which are the face components of the face region,
extracts from the template character information the standard data matched with the eyes, nose, mouth, ears, and jaw using the group information for each face component, stores the standard data, and generates a similarity transformation matrix selected from the group of each part using reference points selected on the outlines of the eyes, nose, mouth, ears, and jaw of each part of the 2D frontal face photograph image together with the pre-stored standard data,
The character generating means 161b,
performs automatic extraction of the face area from the 2D frontal face photograph image in order to generate the animated 'emotional avatar emoticon' using the emotional avatar face character embodying the user's face region, normalizes the user's face database by matching the template character information pre-stored in the storage unit 170 (standard data including the change of the lips, the change of the eyes into a winking expression, and the change of the lips into a pouting expression) according to the similarity transformation matrix generated by the feature point detection means 161a, generates 'emotional avatar face character normalization information', and stores it in the storage unit 170,
and, when generating the 'emotional avatar face character normalization information' by extracting and combining one of the group icons of each part of the emotional avatar face character using the similarity transformation matrix generated by the feature point detection means 161a, creates the normalization information for use in animation effects and shadow effects and performs gamma correction and shading for eyebrow replacement;
The template resource image generation means 161c,
parses the face area of the 2D frontal face photograph image extracted by the character generation means 161b with the template animation content information that can be expressed in animation form and, in the process of generating the 2D emotional avatar emoticon, parses the face components in the parsed face area and judges whether they are suitable for the self-image animation implemented according to the template animation content information, analyzing the percentage of fitness, the matching information between the standard face-part information in the animation implemented according to the template animation content information and the eyes, nose, mouth, ears, and jaw in the extracted face area, judging the face suitable if the percentage is equal to or greater than a predetermined threshold percentage and unsuitable if it is less; if the face is suitable for the self-image animation, completes the animated 2D emotional avatar emoticon by changing the animation's face components into the face components constituting the face area of the 2D frontal face photograph image; and conversely, when the determination result is not suitable for the self-image animation, uses the 'emotional avatar face character normalization information' generated by the character generation means 161b as the face component template in the self-image animation implemented according to the template animation content information stored in the storage unit 170,
wherein the storage unit 170 stores in advance the 'emotional avatar face character normalization information', the set of face component template information generated for face normalization, together with the template animation content information corresponding to the animation template information for facial expression animation; the template resource image generation means 161c performs face region cropping on at least one piece of the face component template information constituting the 'emotional avatar face character normalization information', the normalization information of the similarity transformation matrix, changes the face component for the emotional avatar implementing the animation, stores it in the storage unit 170, then extracts the user's skin color and state for implementing the face component corresponding to the selected face component template, creates the skin corresponding to the selected face component template as the first partial process for applying to the 2D emotional avatar emoticon, and, depending on the part to which the animation is applied, removes the facial skin and regenerates it automatically, reflecting the user's skin properties according to the animation effect after extracting the skin color attribute of each part of the user's face,
wherein, in the second partial process for applying to the 2D emotional avatar emoticon, when the face components are changed, the shapes of the eyebrows, eyes, lips, and jaw of the face are changed according to the animation effect, the user's own eyebrows, eyes, lips, and jaw being changed automatically, and the template face component generated by the first partial process being automatically adjusted to the color and shape attributes of the user's face elements and applied to the user's face,
extracts, from the 2D frontal face photograph image, the size and color attributes of the face area to be applied as the face character icon of the animation object,
and completes the generation of the 2D emotional avatar emoticon by changing the selected face component template to the color and size matching the size and color attributes of the extracted face area so that it can be applied as the animation target icon;
The 3D face applying means 161d,
performs '3D face modeling based on 2D' on the face area of the 2D emotional avatar emoticon generated by the template resource image generation means 161c in order to automatically generate the top, bottom, left, and right views of the face region and create a 3D emotional avatar emoticon displayed in animation form, wherein the top, bottom, left, and right sides of the user's 2D frontal face photograph image are distorted about the central axis according to the animation specification to perform the 3D face animation effect; decodes the face area of the 2D emotional avatar emoticon created from the 2D frontal face photograph image and stores the decoded image in the storage unit 170; generates a plurality of polygons, the smallest units used for expressing a three-dimensional shape in three-dimensional graphics, converts them into a polygon set, and stores the polygon set in the storage unit 170; performs texture mapping to attach the decoded image stored in the storage unit 170 onto the generated polygon set, producing 3D face area data; and finally scales the 3D emotional avatar emoticon to which the 3D face area data is applied down to one hundredth (0.01 times), stores it in the storage unit 170, and outputs it to the touch screen 120,
and performs, as a preprocessing image process for the 3D face modeling based on 2D, outline detection on the image information of the face area of the 2D emotional avatar emoticon created from the 2D frontal face photograph image, applying binarization or special processing to the photographic image, binarization lowering the 256-step color values to values of 0 and 1 and thereby clarifying the edge information of the image, the special processing changing the color image to gray levels, executing outline detection, extracting the outline, and improving the user's selection of the reference points of the outline, using the pre-stored template character information;
The voice parsing means 161e,
extracts from the storage unit 170 the emoticon template contents for the 2D emotional avatar emoticon generated by the template resource image generation means 161c and for the 3D emotional avatar emoticon generated by the 3D face application means 161d, outputs the 2D or 3D emotional avatar emoticon to the touch screen 120 in animation form based on the template animation content information, then receives the voice signal input to the microphone 130, converts the emoticon into a 2D emotional avatar emoticon including voice, or a 3D emotional avatar emoticon including voice, with a voice expression and voice emotion for each emoticon, and stores it in the storage unit 170, completing the generation of emotional avatar emoticons including voice,
whereby a smart learning terminal device based on emotional avatar emoticons is provided.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150092072A KR101743763B1 (en) | 2015-06-29 | 2015-06-29 | Method for providng smart learning education based on sensitivity avatar emoticon, and smart learning education device for the same |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150092072A KR101743763B1 (en) | 2015-06-29 | 2015-06-29 | Method for providng smart learning education based on sensitivity avatar emoticon, and smart learning education device for the same |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20170002100A KR20170002100A (en) | 2017-01-06 |
KR101743763B1 (en) | 2017-06-05
Family
ID=57832510
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150092072A KR101743763B1 (en) | 2015-06-29 | 2015-06-29 | Method for providng smart learning education based on sensitivity avatar emoticon, and smart learning education device for the same |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101743763B1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020102459A1 (en) * | 2018-11-13 | 2020-05-22 | Cloudmode Corp. | Systems and methods for evaluating affective response in a user via human generated output data |
KR102669801B1 (en) | 2023-12-29 | 2024-05-28 | 주식회사 티맥스알지 | Method and apparatus for generating and mapping avatar textures |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101979285B1 (en) * | 2018-01-29 | 2019-05-15 | 최예은 | Education system for programming learning and creativity improvement |
KR102661019B1 (en) | 2018-02-23 | 2024-04-26 | 삼성전자주식회사 | Electronic device providing image including 3d avatar in which motion of face is reflected by using 3d avatar corresponding to face and method for operating thefeof |
KR102605595B1 (en) | 2018-04-24 | 2023-11-23 | 현대자동차주식회사 | Apparatus, vehicle comprising the same, and control method of the vehicle |
KR102185469B1 (en) * | 2018-12-03 | 2020-12-02 | 정진해 | Companion Animal Emotion Bots Device using Artificial Intelligence and Communion Method |
KR102648993B1 (en) | 2018-12-21 | 2024-03-20 | 삼성전자주식회사 | Electronic device for providing avatar based on emotion state of user and method thereof |
CN112084814B (en) * | 2019-06-12 | 2024-02-23 | 广东小天才科技有限公司 | Learning assisting method and intelligent device |
KR20210012724A (en) * | 2019-07-26 | 2021-02-03 | 삼성전자주식회사 | Electronic device for providing avatar and operating method thereof |
KR102318111B1 (en) * | 2020-11-17 | 2021-10-27 | 주식회사 일루니 | Method and apparatus for generating story book which provides sticker reflecting user's face to character |
KR102637373B1 (en) * | 2021-01-26 | 2024-02-19 | 주식회사 플랫팜 | Apparatus and method for generating emoticon |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013027893A1 (en) * | 2011-08-22 | 2013-02-28 | Kang Jun-Kyu | Apparatus and method for emotional content services on telecommunication devices, apparatus and method for emotion recognition therefor, and apparatus and method for generating and matching the emotional content using same |
- 2015-06-29: KR KR1020150092072A patent/KR101743763B1/en not_active Application Discontinuation
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013027893A1 (en) * | 2011-08-22 | 2013-02-28 | Kang Jun-Kyu | Apparatus and method for emotional content services on telecommunication devices, apparatus and method for emotion recognition therefor, and apparatus and method for generating and matching the emotional content using same |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020102459A1 (en) * | 2018-11-13 | 2020-05-22 | Cloudmode Corp. | Systems and methods for evaluating affective response in a user via human generated output data |
KR102669801B1 (en) | 2023-12-29 | 2024-05-28 | 주식회사 티맥스알지 | Method and apparatus for generating and mapping avatar textures |
Also Published As
Publication number | Publication date |
---|---|
KR20170002100A (en) | 2017-01-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101743763B1 (en) | Method for providng smart learning education based on sensitivity avatar emoticon, and smart learning education device for the same | |
US11688120B2 (en) | System and method for creating avatars or animated sequences using human body features extracted from a still image | |
US20220150285A1 (en) | Communication assistance system, communication assistance method, communication assistance program, and image control program | |
US20210174072A1 (en) | Microexpression-based image recognition method and apparatus, and related device | |
US7764828B2 (en) | Method, apparatus, and computer program for processing image | |
US11736756B2 (en) | Producing realistic body movement using body images | |
WO2018121777A1 (en) | Face detection method and apparatus, and electronic device | |
KR101743764B1 (en) | Method for providing ultra light-weight data animation type based on sensitivity avatar emoticon | |
US20150235416A1 (en) | Systems and methods for genterating a 3-d model of a virtual try-on product | |
WO2016111174A1 (en) | Effect generating device, effect generating method, and program | |
CN112379812A (en) | Simulation 3D digital human interaction method and device, electronic equipment and storage medium | |
CN110418095B (en) | Virtual scene processing method and device, electronic equipment and storage medium | |
CN115049016B (en) | Model driving method and device based on emotion recognition | |
CN108537162A (en) | The determination method and apparatus of human body attitude | |
US20220277586A1 (en) | Modeling method, device, and system for three-dimensional head model, and storage medium | |
CN114049290A (en) | Image processing method, device, equipment and storage medium | |
KR20160010810A (en) | Realistic character creation method and creating system capable of providing real voice | |
CN111597926A (en) | Image processing method and device, electronic device and storage medium | |
WO2021155666A1 (en) | Method and apparatus for generating image | |
KR100965622B1 (en) | Method and Apparatus for making sensitive character and animation | |
CN118799439A (en) | Digital human image fusion method, device, equipment and readable storage medium | |
CN112836545A (en) | 3D face information processing method and device and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
N231 | Notification of change of applicant |