KR101743763B1 - Method for providng smart learning education based on sensitivity avatar emoticon, and smart learning education device for the same - Google Patents

Method for providng smart learning education based on sensitivity avatar emoticon, and smart learning education device for the same

Info

Publication number
KR101743763B1
Authority
KR
South Korea
Prior art keywords
face
avatar
animation
template
information
Prior art date
Application number
KR1020150092072A
Other languages
Korean (ko)
Other versions
KR20170002100A (en)
Inventor
김영자
Original Assignee
(주)참빛솔루션
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)참빛솔루션 filed Critical (주)참빛솔루션
Priority to KR1020150092072A priority Critical patent/KR101743763B1/en
Publication of KR20170002100A publication Critical patent/KR20170002100A/en
Application granted granted Critical
Publication of KR101743763B1 publication Critical patent/KR101743763B1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/02Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Primary Health Care (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Educational Technology (AREA)
  • Strategic Management (AREA)
  • Educational Administration (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a smart learning education method based on emotional avatar emoticons and to a smart learning terminal device for implementing it. The smart learning terminal device 100 comprises a camera 110, a touch screen 120, a microphone 130, a speaker 140, a transceiver 150, a controller 160 and a storage unit 170; the controller 160 includes an emotional avatar emoticon generation module 161, a hybrid feature point detection algorithm module 162 and a smart learning module 163, and the emotional avatar emoticon generation module 161 includes feature point detection means 161a, character generation means 161b, template resource image generation means 161c, 3D face application means 161d and voice parsing means 161e.
The feature point detection means 161a analyzes the user's 2D frontal face photograph, extracts the face region, and automatically locates its face components (eyes, nose and mouth, and optionally ears and jaw). Using the template character information previously stored in the storage unit 170 (standard data organized as group icons for each part of the emotional avatar face character, for example under-eye wrinkle, eye white and pupil border icons for the eye part, and lip line, lip gloss and lip wrinkle icons for the mouth), it extracts the standard data matched to the eyes, nose, mouth, ears and jaw and generates a similarity transformation matrix from reference points selected on the outlines of those parts of the photograph and the pre-stored standard data.
The character generation means 161b automatically extracts the face region of the photograph to create an animated 'emotional avatar emoticon' using an emotional avatar face character that embodies the user's face. It normalizes the user's face data by matching the stored template character information (standard data for expressing changes of the lips, a change of the eyes into a wink expression, a change of the lips into a pointed expression, and the like) according to the similarity transformation matrix, and stores the resulting 'emotional avatar face character normalization information' in the storage unit 170. One group icon of each part is extracted and combined using the similarity transformation matrix, and the normalization information is created for use in animation effects on the face components, shadow effects, and the gamma correction and shading applied for eyebrow replacement.
The template resource image generation means 161c parses the extracted face region against template animation content information that can be expressed in animation form, as the process of creating a 2D emotional avatar emoticon. It judges whether the parsed face region is suitable for the self-image animation implemented according to the template animation content information by computing a fitness percentage, that is, the matching information of the eyes, nose, mouth, ears and jaw of the parsed region against the standard face region of that animation, treating the region as suitable at or above a predetermined threshold percentage and as unsuitable below it. If suitable, the 2D emotional avatar emoticon in animation form is completed by replacing the animation's face components with the face components constituting the face region of the photograph. If unsuitable, the 'emotional avatar face character normalization information' is selected as the face component template of the self-image animation: face region cropping is performed on at least one piece of the face component template information, the face component for the emotional avatar implementing the animation is changed and stored, and the user's skin color and state are extracted for the selected template. In a first partial process, skin corresponding to the selected face component template is created by extracting each skin color attribute of the user's face for the animated part and regenerating the face skin to reflect the user's skin attributes under the animation effect; in a second partial process, the shapes of the eyebrows, eyes, lips and jaw are changed according to the animation effect, and the template face components created in the first process are automatically adjusted to the color and shape attributes of the user's face components and applied to the user's face. The size and color attributes of the face region to be used as the face character icon of the animation are extracted from the photograph, and the selected face component template is changed in color and size to match them, completing the 2D emotional avatar emoticon.
The 3D face application means 161d performs 2D-based 3D face modeling to automatically generate up, down, left and right views of the face region of the 2D emotional avatar emoticon and produce a 3D emotional avatar emoticon displayed in animation form, with a rotation effect obtained by distorting the top, bottom, left and right of the photograph about a designated axis. The face region is decoded into a decoded image and stored; polygons, the smallest unit used to express a three-dimensional shape in 3D graphics, are generated and converted into a polygon set; texture mapping attaches the decoded image to the polygon set to produce 3D face area data; and the 3D emotional avatar emoticon to which that data is applied is scaled down to about one hundredth (0.01 times), stored in the storage unit 170 and output to the touch screen 120. As preprocessing, the face region image is binarized or specially processed to detect its outline: binarization reduces the 256 color levels to the values 0 and 1 and sharpens the edge information, while the special processing converts the color image to gray levels or runs outline detection, so that the extracted outline improves the user's selection of reference points, which are used together with the stored template character information.
The voice parsing means 161e extracts from the storage unit 170 the emoticon template contents for the 2D and 3D emotional avatar emoticons, outputs the 2D or 3D emotional avatar emoticon to the touch screen 120 in animation form based on the template animation content information, then receives the voice signal input to the microphone 130 and converts the emoticon into a 2D or 3D emotional avatar emoticon that includes the voice expression, storing it in the storage unit 170 to complete the generation of voice-inclusive emotional avatar emoticons.
As a result, the learner's own emotional avatar is generated and parsed on the spot, the learner's own voice is parsed and immediately inserted into the content (for example, to drive menus), and it is recorded in association with the lip-sync of the emotional avatar corresponding to the learner's own avatar or a standard avatar, providing a realistic, indirect experience that is highly effective for educating children, whose learning is experience-centered.
In addition, it improves immersion in the smart learning education by using emotional avatar emoticons whose eye, nose and mouth shapes and colors are generated automatically according to various expressions.
In addition, it performs character string parsing, generates voice-inclusive emotional avatar emoticons as lip-sync animations, and offers the advantage of combining emotional animations in real time.

Description

Technical Field [0001] The present invention relates to a method of providing smart learning education based on emotional avatar emoticons, and to a smart learning terminal device for implementing the method.

The present invention relates to a method of providing smart learning education based on emotional avatar emoticons and to a smart learning terminal device for realizing it. More specifically, the learner's own emotional avatar is generated and parsed on the spot, the learner's own voice is inserted immediately into the content, and it is recorded in association with the lip-sync of the emotional avatar corresponding to the learner's own avatar or a standard avatar, so that a realistic, indirect experience that is highly effective for children's experience-centered education is provided.

Since the launch of mobile terminals equipped with computing functions such as smart phones and smart pads, smart devices have rapidly developed and related markets are rapidly expanding.

The e-learning service, a learning service delivered over a communication network, was introduced with the development of smart-device hardware and network communication technology. In recent years, e-learning services that had been limited to personal computers have increasingly been demanded on various smart devices under the names of mobile learning and smart learning.

[Related Technical Literature]

1. Method for managing smart learning lectern and system and smart learning lecture apparatus (Patent Application No. 10-2013-0086517) (APPARAUS AND SYSTEM FOR SMART LEARING TEACHING DESK, AND MANAGEMENT METHOD OF MANAGING SMART LEARING TEACHING DESK)

2. Smart learning platform system and method for providing education services in a smart learning platform system (Patent Application No. 10-2012-0122443)

SUMMARY OF THE INVENTION The present invention has been made to solve the above-mentioned problems, and an object of the present invention is to provide a method of providing smart learning education based on emotional avatar emoticons, and a smart learning terminal device for implementing it, in which the learner's own emotional avatar is generated and parsed on the spot, the learner's own voice is parsed and immediately inserted into the content, and it is recorded in association with the lip-sync of the emotional avatar corresponding to the learner's own avatar or a standard avatar, thereby providing a realistic, indirect experience that is highly effective for educating children whose learning is experience-centered.

Another object of the present invention is to provide a method of providing smart learning education based on emotional avatar emoticons, and a smart learning terminal device for implementing it, that improve the learner's immersion in the smart learning education by using emotional avatar emoticons whose eye, nose and mouth shapes and colors are generated automatically from the user's own face according to various expressions.

The present invention also aims to provide a method of providing smart learning education based on emotional avatar emoticons, and a smart learning terminal device for implementing it, that perform character string parsing, generate voice-inclusive emotional avatar emoticons as lip-sync animations, and allow emotional animations to be combined in real time.

However, the objects of the present invention are not limited to the above-mentioned objects, and other objects not mentioned will be clearly understood by those skilled in the art from the following description.

In order to achieve the above objects, a smart learning terminal device based on emotional avatar emoticons according to an embodiment of the present invention is a smart learning terminal device 100 that includes a camera 110, a touch screen 120, a microphone 130, a speaker 140, a transceiver 150, a controller 160 and a storage unit 170, the controller 160 having an emotional avatar emoticon generation module 161, a hybrid feature point detection algorithm module 162 and a smart learning module 163, and the emotional avatar emoticon generation module 161 including feature point detection means 161a, character generation means 161b, template resource image generation means 161c, 3D face application means 161d and voice parsing means 161e.
The feature point detection means 161a uses the template character information previously stored in the storage unit 170. The template character information is group icon information generated in advance for each part of the emotional avatar face character: for the eye part it has group icons such as the under-eye wrinkle, eye white and pupil border, and for the mouth it has group icons such as the lip line, lip gloss and lip wrinkle. It is the standard data for expressing the user character based on the 2D frontal face photograph, stored in advance as group information for each face component so that an animation including the emotional avatar face character (the face region of the emotional avatar emoticon that is the final product implemented in animation form) can be generated. To compute the similarity transformation matrix with this standard data, the feature point detection means 161a analyzes the 2D frontal face photograph image, extracts the face region, and automatically locates the positions of the eyes, nose and mouth (and, optionally, the ears, jaw and the like) that are the face components of the face region. It then extracts from the template character information the standard data matched to the eyes, nose, mouth, ears and jaw using the group information for each face component, stores the extracted standard data, and generates a similarity transformation matrix, selected from the group of each part, using reference points chosen on the outlines of the eyes, nose, mouth, ears and jaw of the 2D frontal face photograph image to be characterized and the pre-stored standard data.
The character generation means 161b automatically extracts the face region from the 2D frontal face photograph image in order to create an animated 'emotional avatar emoticon' using an emotional avatar face character that embodies the user's face region. It normalizes the user's face database by matching the template character information previously stored in the storage unit 170 (standard data for expressing changes of the lips, a change of the eyes into a wink expression, a change of the lips into a pointed expression, and the like) according to the similarity transformation matrix generated by the feature point detection means 161a, and generates and stores 'emotional avatar face character normalization information' in the storage unit 170. One group icon of each part of the emotional avatar face character is extracted and combined using the similarity transformation matrix, and the 'emotional avatar face character normalization information' is created for use in animation effects on each face component, shadow effects, and the gamma correction and shading applied for eyebrow replacement.
The template resource image generation means 161c parses the face region of the 2D frontal face photograph image extracted by the character generation means 161b against template animation content information that can be expressed in animation form, as the process of creating a 2D emotional avatar emoticon. With the face components parsed in the parsed face region, it judges whether the region is suitable for the self-image animation implemented according to the template animation content information by computing a fitness percentage, that is, the matching information between the eyes, nose, mouth, ears and jaw of the parsed face region and the standard face region of that animation; the region is judged suitable if the percentage is at or above a predetermined threshold percentage and unsuitable if it is below it. If the region is suitable for the self-image animation, the face components of the animation are replaced with the face components constituting the face region of the 2D frontal face photograph image, completing the 2D emotional avatar emoticon in animation form. If it is not suitable, the 'emotional avatar face character normalization information' generated by the character generation means 161b is selected as the face component template in the self-image animation implemented according to the template animation content information previously stored in the storage unit 170. With the 'emotional avatar face character normalization information' (a set of face component template information generated for face normalization) stored in the storage unit 170, and the template animation content information corresponding to the animation template information for facial expression animation also stored in advance, face region cropping is performed on at least one piece of the face component template information constituting the normalization information of the similarity transformation matrix, the face component for the emotional avatar implementing the animation is changed and stored in the storage unit 170, and the user's skin color and state are extracted for implementing the face component corresponding to the selected face component template. As a first partial process for application to the 2D emotional avatar emoticon, skin corresponding to the selected face component template is created: each skin color attribute of the user's face is extracted for the part where the animation is applied, and the face skin is removed and regenerated automatically to reflect the user's skin attributes according to the animation effect. As a second partial process for application to the 2D emotional avatar emoticon, the face components are changed: the shapes of the eyebrows, eyes, lips and jaw of the face are altered according to the animation effect, and the template face components created by the first partial process are automatically adjusted to the color and shape attributes of the user's face components and applied to the user's face. The size and color attributes of the face region to be applied as the face character icon of the animation are extracted from the 2D frontal face photograph image, and the selected face component template is changed in color and size to match them so that it can be applied as the animation target icon, completing the creation of the 2D emotional avatar emoticon.
The 3D face application means 161d performs 2D-based 3D face modeling (3D face modeling based on 2D) to automatically generate up, down, left and right views of the face region of the 2D emotional avatar emoticon generated by the template resource image generation means 161c, producing a 3D emotional avatar emoticon displayed in animation form; a rotation effect is obtained by distorting the top, bottom, left and right of the user's 2D frontal face photograph image about the axis designated for the animation. In this modeling, the face region of the 2D emotional avatar emoticon is decoded into a decoded image and stored in the storage unit 170; polygons, the smallest unit used to express a three-dimensional shape in 3D graphics, are generated and converted into a polygon set, which is stored in the storage unit 170; texture mapping attaches the stored decoded image to the generated polygon set, producing 3D face area data, which is the texture-mapped data; and finally the 3D emotional avatar emoticon to which the 3D face area data is applied is scaled down to about one hundredth (0.01 times), stored in the storage unit 170 and output to the touch screen 120. As a preprocessing image process for the 2D-based 3D face modeling, the image information of the face region of the 2D emotional avatar emoticon is binarized or specially processed to detect the outline of the image: binarization reduces the 256 color levels to the values 0 and 1 and sharpens the edge information of the image, while the special processing converts the color image to gray levels or performs outline detection; when the outline is extracted, the user's selection of reference points on the outline is improved, and the reference points selected on the outline are used together with the standard data of the template character information previously stored in the storage unit 170.
The voice parsing means 161e extracts from the storage unit 170 the emoticon template contents for the 2D emotional avatar emoticon generated by the template resource image generation means 161c and the 3D emotional avatar emoticon generated by the 3D face application means 161d, outputs the 2D emotional avatar emoticon or the 3D emotional avatar emoticon to the touch screen 120 in animation form based on the template animation content information, then receives the voice signal input to the microphone 130 and converts the emoticon into a 2D emotional avatar emoticon including the voice expression or a 3D emotional avatar emoticon including the voice expression, storing it in the storage unit 170 to complete the generation of voice-inclusive emotional avatar emoticons.

delete

delete

delete

delete

The method for providing smart learning education based on emotional avatar emoticons according to the embodiment of the present invention, and the smart learning terminal device implementing it, generate and parse the learner's own emotional avatar on the spot, insert the learner's own voice immediately into the content, and record it in association with the lip-sync of the emotional avatar corresponding to the learner's own avatar or a standard avatar, providing a realistic, indirect experience that is highly effective for educating children whose learning is experience-centered.

In addition, the method of providing smart learning education based on emotional avatar emoticons according to another embodiment of the present invention, and the smart learning terminal device implementing it, improve the learner's immersion in the smart learning education by using emotional avatar emoticons whose shapes and colors are generated automatically from the user's photograph according to various expressions.

In addition, the method of providing smart learning education based on emotional avatar emoticons according to another embodiment of the present invention, and the smart learning terminal device implementing it, perform character string parsing, generate voice-inclusive emotional avatar emoticons as lip-sync animations, and provide the advantage of real-time combination of emotional animation.

FIG. 1 and FIG. 2 illustrate a smart learning education method based on emotional avatar emoticons according to an embodiment of the present invention.
FIG. 3 is a block diagram showing the configuration of a smart learning terminal device 100 according to an embodiment of the present invention.
FIG. 4 is a block diagram showing the configuration of the emotional avatar emoticon generation module 161 in the smart learning terminal device 100 of FIG. 3.
FIGS. 5 to 7 are views for explaining user interface screens implemented in the smart learning terminal device 100 that implements the smart learning education method based on emotional avatar emoticons according to the embodiment of the present invention.
FIG. 8 is a view for explaining the operating principle of the hybrid feature point detection algorithm module 162 of the smart learning terminal device 100 according to the embodiment of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, a detailed description of preferred embodiments of the present invention will be given with reference to the accompanying drawings. In the following description of the present invention, detailed description of known functions and configurations incorporated herein will be omitted when it may make the subject matter of the present invention rather unclear.

In this specification, when an element 'transmits' data or a signal to another element, the element may transmit the data or signal to the other element directly, or may transmit it to the other element through at least one further element.

FIG. 1 and FIG. 2 illustrate a smart learning education method based on emotional avatar emoticons according to an embodiment of the present invention.

FIGS. 5 to 7 illustrate user interface screens implemented in a smart learning terminal device (hereinafter, smart learning terminal device) 100 that implements the smart learning education method based on emotional avatar emoticons according to an embodiment of the present invention.

Referring to FIGS. 1 and 2, the smart learning terminal device 100 receives, through the touch screen 120, a selection signal for one of the frontal face photographs of at least one user previously stored in the storage unit 170, or receives the user's frontal face photograph image in the 2D state through the camera 110, and stores it separately in the storage unit 170 (S10).

After step S10, the smart learning terminal device 100 generates an emotional avatar emoticon and stores it in the storage unit 170 (S20).

After step S20, the smart learning terminal device 100 generates emotional avatar emoticons in animation form for at least two emotional expressions and stores them in the storage unit 170 (S30).

After step S30, the smart learning terminal device 100 receives a smart learning start request through the touch screen 120 (S40).

After step S40, the smart learning terminal device 100 displays the smart learning initial user interface screen shown in FIG. 5 for user selection on the touch screen 120, and receives a selection signal for the emotional avatar emoticon matching the user (S50). As shown in FIG. 5A, when the user touches the lever area a1 on the touch screen 120, the lever rotates and a user name stored in the storage unit 170 is displayed in area a2; when the user touches a displayed name instead of turning the lever, the corresponding user name can be selected.

After step S50, the smart learning terminal device 100 sets the emotional avatar emoticon matching the selection signal of step S50 as the animation character, forms one group as a series of the smart learning problems previously stored in the storage unit 170, and displays the first problem on the touch screen 120 (S60).

After step S60, the smart learning terminal device 100 receives the answer to the smart learning problem displayed in step S60 through the touch screen 120 (S70).

After step S70, the smart learning terminal device 100 determines the state information of the answer input in step S70 and displays the emotional avatar emoticon in animation form for the emotional expression matched to that state information (S80). That is, as shown in FIG. 5B, the emotional avatar emoticon can be driven from its basic state into an animation expressing joy. FIGS. 6 and 7 are reference diagrams showing animation effects of emotional avatar emoticons for various emotional expressions.

After step S80, the smart learning terminal device 100 determines whether all of the smart learning problems previously stored in the storage unit 170 as one group in the series have been displayed (S90), and if they have, the smart learning is completed.

On the other hand, if all of the smart learning problems have not been displayed as a result of the determination in step S90, the smart learning terminal device 100 returns to step S60, displays the next smart learning problem on the touch screen 120, and repeats steps S60 to S90 until all of the problems have been displayed.
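For illustration only, the S60 to S90 loop above can be condensed into the following Python sketch; the function names (show_problem, read_answer, play_emoticon) and the emotion labels are hypothetical placeholders for the terminal device's modules, not identifiers taken from the patent.

    # Condensed sketch of the S60-S90 loop: show each stored problem, read the
    # answer from the touch screen, judge it, and play the emotional avatar
    # emoticon animation matched to that result.
    def run_smart_learning(problems, show_problem, read_answer, play_emoticon):
        for problem in problems:                     # one series / group of problems
            show_problem(problem)                    # S60: display on the touch screen
            answer = read_answer()                   # S70: answer via the touch screen
            correct = (answer == problem["answer"])  # S80: state information of the answer
            play_emoticon("joy" if correct else "bad")   # matched emotion expression
        # S90: all problems in the series have been displayed, so the session ends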

FIG. 3 is a block diagram showing the configuration of the smart learning terminal device 100 according to an embodiment of the present invention, and FIG. 4 is a block diagram showing the configuration of the emotional avatar emoticon generation module 161 in the smart learning terminal device 100 of FIG. 3. Referring to FIG. 3, the smart learning terminal device 100 includes a camera 110, a touch screen 120, a microphone 130, a speaker 140, a transceiver 150, a controller 160 and a storage unit 170, and the controller 160 includes an emotional avatar emoticon generation module 161, a hybrid feature point detection algorithm module 162 and a smart learning module 163.

Hereinafter, the smart learning terminal device 100 will be described in detail with reference to the configuration of the controller 160.

Referring to FIG. 4, the emotional avatar emoticon generation module 161 includes feature point detection means 161a, character generation means 161b, template resource image generation means 161c, 3D face application means 161d and voice parsing means 161e.

The feature point detection means 161a obtains a similarity transformation matrix between the template character information previously stored in the storage unit 170 and the user's face.

More specifically, in order to obtain the similarity transformation matrix using the template character information, which is the standard data for representing the user character, the feature point detection means 161a analyzes the 2D frontal face photograph image, extracts the face region, and automatically locates the positions of the eyes, nose and mouth (and, optionally, the ears, jaw and the like), which are the face components of the face region.
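As an illustration of this face region extraction step (the patent does not name a particular detector), a minimal sketch using OpenCV Haar cascades might look as follows; the cascade files and parameters are standard OpenCV defaults, not values from the patent.

    # Hypothetical sketch of the face-region extraction in 161a using OpenCV.
    import cv2

    def locate_face_components(photo_path: str):
        image = cv2.imread(photo_path)               # 2D frontal face photograph
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

        face_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        eye_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_eye.xml")

        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        results = []
        for (x, y, w, h) in faces:                   # extracted face region
            roi = gray[y:y + h, x:x + w]
            eyes = eye_cascade.detectMultiScale(roi) # eye positions inside the face
            results.append({"face": (x, y, w, h),
                            "eyes": [(x + ex, y + ey, ew, eh)
                                     for (ex, ey, ew, eh) in eyes]})
        return results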

Template character information for creating such a character is stored in advance as standard data, as group information for each face component, in order to generate an animation including the emotional avatar face character contained in the face region of the emotional avatar emoticon that is the final product implemented in animation form. Here, the template character information is group icon information for each part of the emotional avatar face character, generated in advance around each part. For example, the eye part group icon information may have group icons such as the under-eye wrinkle, eye white and pupil border, and the mouth part group icon information may have group icons such as the lip line, lip gloss and lip wrinkle.

That is, the feature point detection means 161a extracts from the template character information the standard data matched to the eyes, nose, mouth, ears and jaw using the group information for each face component, stores the standard data, and then generates a similarity transformation matrix, selected from the group of each part, using reference points selected on the outlines of the eyes, nose, mouth, ears and jaw of the 2D frontal face photograph image and the pre-stored standard data.
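A hedged sketch of obtaining such a similarity transformation matrix (uniform scale, rotation and translation) from corresponding reference points is shown below; the point coordinates are placeholders, and the use of OpenCV's estimateAffinePartial2D is one possible implementation, not the one claimed by the patent.

    # Illustrative only: map the template's standard reference points onto the
    # reference points picked on the outlines of the user's face parts.
    import numpy as np
    import cv2

    template_pts = np.array([[30, 40], [70, 40], [50, 60], [50, 80]], dtype=np.float32)
    user_pts     = np.array([[118, 152], [198, 150], [160, 190], [161, 228]], dtype=np.float32)

    # Similarity transform = uniform scale + rotation + translation (2x3 matrix).
    M, _inliers = cv2.estimateAffinePartial2D(template_pts, user_pts)

    # A template group icon could then be warped onto the user's face geometry:
    # icon_on_face = cv2.warpAffine(group_icon, M, (face_w, face_h))
    print(M)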

The character generation means 161b automatically extracts the face region from the 2D frontal face photograph image in order to generate an animated 'emotional avatar emoticon' using the emotional avatar face character that embodies the user's face region.

The character generation means 161b normalizes the user's face database by matching the template character information previously stored in the storage unit 170 according to the similarity transformation matrix generated by the feature point detection means 161a, generates 'emotional avatar face character normalization information', and stores it in the storage unit 170. Here, the template character information means the standard data for expressing changes of the lips, a change of the eyes into a wink expression, a change of the lips into a pointed expression, and the like.

That is, the character generation means 161b extracts one of the group icons of each part of the emotional avatar face character using the similarity transformation matrix generated by the feature point detection means 161a and combines them into the 'emotional avatar face character normalization information'. The emotional avatar face character normalization information is generated for use in animation effects on each of the face components.

In addition, the character generation means 161b may generate 'emotional avatar face character normalization information' that carries a shadow effect and the gamma correction and shading used for eyebrow replacement.
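A minimal sketch of a gamma correction step of the kind mentioned here for eyebrow replacement follows; the gamma value and the lookup-table approach are assumptions for illustration only.

    # Brighten or darken a pasted eyebrow patch with a gamma curve.
    import numpy as np
    import cv2

    def apply_gamma(region_bgr: np.ndarray, gamma: float = 1.4) -> np.ndarray:
        inv = 1.0 / gamma
        table = np.array([(i / 255.0) ** inv * 255 for i in range(256)]).astype("uint8")
        return cv2.LUT(region_bgr, table)   # per-pixel gamma correction via lookup table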

The template resource image generation means 161c parses the face region of the 2D frontal face photograph image extracted by the character generation means 161b against the template animation content information. That is, the process of creating a 2D emotional avatar emoticon is performed by parsing the face region of the 2D frontal face photograph image with template animation content information that can be expressed in animation form.

With the face components parsed in the parsed face region, the template resource image generation means 161c determines whether the region is suitable for the self-image animation implemented according to the template animation content information. That is, it computes a fitness percentage, which is the matching information between the eyes, nose, mouth, ears and jaw of the parsed face region and the standard face region information of the animation implemented according to the template animation content information.

Here, the template resource image generation means 161c judges the region suitable when the fitness percentage is greater than or equal to a predetermined threshold percentage, and conversely judges it unsuitable when it is below the threshold.
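A hedged sketch of this suitability test is given below; the patent only states that a fitness percentage is compared against a threshold, so the distance-based score and the threshold value are assumptions.

    # Fitness percentage from how well the parsed eyes/nose/mouth/ear/jaw points
    # match the standard face region of the template animation.
    import numpy as np

    def fitness_percentage(parsed_pts: np.ndarray, standard_pts: np.ndarray,
                           face_diag: float) -> float:
        errors = np.linalg.norm(parsed_pts - standard_pts, axis=1) / face_diag
        return float(100.0 * np.clip(1.0 - errors, 0.0, 1.0).mean())

    THRESHOLD = 80.0   # "predetermined threshold percentage" (value is illustrative)

    def suitable_for_self_image_animation(parsed_pts, standard_pts, face_diag) -> bool:
        return fitness_percentage(parsed_pts, standard_pts, face_diag) >= THRESHOLD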

If the region is judged suitable for the self-image animation, the template resource image generation means 161c completes the creation of the 2D emotional avatar emoticon in animation form by replacing the face components of the animation with the face components constituting the face region of the 2D frontal face photograph image.

Conversely, when the determination result is that the region is not suitable for the self-image animation, the template resource image generation means 161c selects the 'emotional avatar face character normalization information' generated by the character generation means 161b as the face component template in the self-image animation implemented according to the template animation content information previously stored in the storage unit 170.

More specifically, when the face components of the user's photograph cannot be turned into a suitable animation by changing the elements of the 2D frontal face photograph image to match the animation information, the template resource image generation means 161c expresses the 2D emotional avatar emoticon in animation form by utilizing the 'emotional avatar face character normalization information' corresponding to that resource.

To this end, the template resource image generation means 161c stores in the storage unit 170 the 'emotional avatar face character normalization information', which is a set of face component template information generated for face normalization, and the template animation content information corresponding to the facial expression animation template information is also stored in advance.

Accordingly, the template resource image generation means 161c performs face region cropping on at least one piece of the face component template information constituting the 'emotional avatar face character normalization information', which is the normalization information of the similarity transformation matrix, changes the face component for the emotional avatar that implements the animation, and stores it in the storage unit 170.

Then, the template resource image generation means 161c extracts the user's skin color and state for implementing the face component corresponding to the selected face component template.

More specifically, the template resource image generation means 161c creates the skin corresponding to the selected face component template as the first partial process for application to the 2D emotional avatar emoticon. That is, each skin color attribute of the user's face is extracted for the part where the animation is applied, and the face skin is removed and regenerated automatically to reflect the user's skin attributes according to the animation effect.
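As an illustration of this first partial process, the sketch below extracts per-part skin color attributes and generates a flat skin patch in the user's tone; the averaging approach is an assumption, not the method specified by the patent.

    # Extract skin color attributes for one animation part and synthesize a patch.
    import numpy as np

    def skin_attributes(face_bgr: np.ndarray, part_mask: np.ndarray) -> dict:
        pixels = face_bgr[part_mask > 0]                # pixels of the animated part
        return {"mean_color": pixels.mean(axis=0),      # average B, G, R
                "std_color": pixels.std(axis=0)}        # rough brightness spread

    def generate_skin_patch(shape, attrs) -> np.ndarray:
        patch = np.full(shape, attrs["mean_color"], dtype=np.float32)
        return patch.astype(np.uint8)                   # flat patch in the user's tone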

In addition, the template resource image generation means 161c performs the face component change as the second partial process for application to the 2D emotional avatar emoticon. That is, the shapes of the eyebrows, eyes, lips and jaw of the face are changed according to the animation effect, and the template face components created by the first partial process are automatically adjusted to the color and shape attributes of the user's face components and applied to the user's face.

Next, the template resource image generation means 161c extracts, from the 2D frontal face photograph image, the size and color attributes of the face region to be applied as the face character icon of the animation object.

The template resource image generation means 161c then completes the generation of the 2D emotional avatar emoticon by changing the selected face component template in color and size to match the size and color attributes of the extracted face region, so that it can be applied as the animation target icon.
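A minimal sketch of fitting the selected face component template to the extracted size and color attributes might look as follows; the resize-and-recolor scheme is an assumption for illustration.

    # Resize the template icon to the extracted region size and shift it toward
    # the extracted mean color before pasting it as the animation target icon.
    import cv2
    import numpy as np

    def fit_component_template(template_bgr: np.ndarray, target_size: tuple,
                               target_mean_color: np.ndarray) -> np.ndarray:
        resized = cv2.resize(template_bgr, target_size, interpolation=cv2.INTER_AREA)
        shift = target_mean_color - resized.reshape(-1, 3).mean(axis=0)
        recolored = np.clip(resized.astype(np.float32) + shift, 0, 255)
        return recolored.astype(np.uint8)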

The 3D face application means 161d performs 2D-based 3D face modeling (3D face modeling based on 2D) for automatically generating up, down, left and right views of the face region of the 2D emotional avatar emoticon generated as the 2D frontal face photograph image by the template resource image generation means 161c, thereby generating a 3D emotional avatar emoticon expressed in animation form.

More specifically, the 3D face application means 161d applies a 3D face animation effect that produces a rotation effect by distorting the top, bottom, left and right of the user's 2D frontal face photograph image about the axis designated for the animation, and can express a realistic emoticon animation with very small data, at about one hundredth (0.01 times) of an existing 3D character. To this end, the 3D face application means 161d performs the 2D-based 3D face modeling (3D face modeling based on 2D).

The 3D face modeling based on 2D proceeds as follows. The 3D face application means 161d decodes the face region of the 2D emotional avatar emoticon generated as the 2D frontal face photograph image to generate a decoded image and stores it in the storage unit 170. Then, the 3D face application means 161d generates polygons, the smallest unit used to represent a three-dimensional shape in 3D graphics, converts a plurality of polygons into a polygon set, and stores the set in the storage unit 170. Next, the 3D face application means 161d performs texture mapping, attaching the decoded image stored in the storage unit 170 onto the generated polygon set to produce 3D face area data, which is the texture-mapped data. Finally, the 3D face application means 161d scales the 3D emotional avatar emoticon to which the 3D face area data is applied down to about one hundredth (0.01 times), stores it in the storage unit 170, and outputs it to the touch screen 120.
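Purely as an illustration of this pipeline (decoded texture, polygon set, texture mapping, 0.01x scale-down), the sketch below builds a planar textured mesh; the grid layout and resolution are assumptions, not details from the patent.

    # Turn the decoded 2D face image into a texture on a set of polygons and
    # scale the result down to one hundredth.
    import numpy as np

    def build_textured_face_mesh(decoded_image: np.ndarray, grid: int = 16):
        h, w = decoded_image.shape[:2]
        xs, ys = np.meshgrid(np.linspace(0, w, grid), np.linspace(0, h, grid))
        vertices = np.stack([xs, ys, np.zeros_like(xs)], axis=-1).reshape(-1, 3)

        polygons = []                                  # two triangles per grid cell
        for r in range(grid - 1):
            for c in range(grid - 1):
                i = r * grid + c
                polygons.append((i, i + 1, i + grid))
                polygons.append((i + 1, i + grid + 1, i + grid))

        uv = np.stack([xs / w, ys / h], axis=-1).reshape(-1, 2)   # texture mapping coords
        vertices *= 0.01                               # scale down to one hundredth
        return vertices, polygons, uv, decoded_image   # textured 3D face area data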

Accordingly, the 3D face application means 161d has the advantage of automatically generating a user character that closely resembles the photograph by analyzing a single frontal photograph, and of realizing emotional animation.

Meanwhile, as a preprocessing image process for the 3D face modeling based on 2D, the 3D face application means 161d takes the image information of the face region of the 2D emotional avatar emoticon generated as the 2D frontal face photograph image and performs binarization or special processing on it in order to detect the outline of the image. Binarization is the process of reducing the many color levels (usually 256) to the values 0 and 1, and it makes the edge information of the image clearer. The special processing converts the color image to gray levels or performs outline detection; once the special processing extracts the outline, the user can easily select reference points on it. In the present invention, the preprocessing enables more accurate outline extraction by using the reference points selected on the outline together with the standard data of the template character information previously stored in the storage unit 170.
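A short sketch of the binarization and special (gray-level/outline) preprocessing described above, using OpenCV, could look like this; the fixed threshold of 128 is illustrative.

    # Binarize (256 gray levels -> 0/1) to sharpen edge information, then run
    # outline detection on the binarized face region.
    import cv2

    def preprocess_for_outline(face_bgr):
        gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)           # gray-level conversion
        _, binary = cv2.threshold(gray, 128, 1, cv2.THRESH_BINARY)  # 0/1 binarization
        contours, _ = cv2.findContours((binary * 255).astype('uint8'),
                                       cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return binary, contours   # contours supply candidate outline reference points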

The voice parsing means 161e extracts from the storage unit 170 the emoticon template contents for the 2D emotional avatar emoticon generated by the template resource image generation means 161c and the 3D emotional avatar emoticon generated by the 3D face application means 161d.

The voice parsing means 161e outputs the 2D emotional avatar emoticon or the 3D emotional avatar emoticon to the touch screen 120 in animation form based on the template animation content information, then receives the voice signal input to the microphone 130 and stores, in the storage unit 170, a 2D emotional avatar emoticon including the voice expression or a 3D emotional avatar emoticon including the voice expression for the emoticon to which the voice signal was given, completing the generation of voice-inclusive emotional avatar emoticons.
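A hedged sketch of driving the emoticon's lip-sync from the received voice signal is shown below; the frame rate, the RMS loudness measure and the normalization constant are assumptions, since the patent does not specify how the voice is mapped to the animation.

    # Chunk the microphone signal per animation frame and let its loudness drive
    # the mouth-open amount of the lip-sync.
    import numpy as np

    def lipsync_keyframes(voice_pcm: np.ndarray, sample_rate: int, fps: int = 24):
        samples_per_frame = sample_rate // fps
        frames = []
        for start in range(0, len(voice_pcm) - samples_per_frame, samples_per_frame):
            chunk = voice_pcm[start:start + samples_per_frame].astype(np.float32)
            loudness = np.sqrt(np.mean(chunk ** 2))          # RMS energy of the chunk
            frames.append(min(1.0, loudness / 3000.0))       # mouth-open ratio 0..1
        return frames   # one mouth-open value per animation frame of the emoticon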

According to a request of the smart learning module 163, the hybrid feature point detection algorithm module 162 generates emotional avatar emoticons in animation form that perform at least two emotional expressions (good, bad, and so on), using the emotional-expression template character information for the face components in the 'emotional avatar face character normalization information' generated by the emotional avatar emoticon generation module 161 described above, and stores them in the storage unit 170.

In other words, to generate these, the hybrid feature point detection algorithm module 162 uses a detection algorithm for the user's face region that combines outline detection (FIG. 8A) and center point detection (FIG. 8B).

The outline detection algorithm extracts, by an image recognition method, the entire outline b1 including the chin line; the eye line b2 including the pupil recognition line b21, the eye wrinkle recognition line b22 covering the inner eye corner and double eyelid, and the eye expression change recognition line b23; the eyebrow and nose connection line b3; the nose volume change recognition line b31; the nasolabial wrinkle recognition line b4; the fine recognition line b5; the nose and nose bridge connection line b6; the nose shape change recognition line b7; and the lip recognition line b8.

The center point detection algorithm extracts, by the image recognition method, the eyebrow center point c1, the eye center point c2, the nose upper and lower center point c3, the nose center point c4, and the mouth center point c5.

The hybrid feature point detection algorithm module 162 identifies the eyebrow area from the eyebrow and nose connection line b3 as three-dimensional information classified into height, length and width, then divides the eyebrow area into dots at a preset distance interval, extracts the eyebrow center point c1 indicating an edge where the direction changes by more than a preset angle between the dots, and stores it in the storage unit 170 as 'eyebrow change information'.
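The dot-and-edge rule described above can be sketched as follows; the preset distance interval and preset angle are illustrative values, not values from the patent.

    # Sample a recognition line into dots at a preset spacing and keep a point as
    # an "edge" (a candidate center point such as c1) where the direction between
    # consecutive dots changes by more than a preset angle.
    import numpy as np

    def edge_points(line_pts: np.ndarray, spacing: int = 5, min_angle_deg: float = 20.0):
        dots = line_pts[::spacing]                       # dots at preset distance interval
        edges = []
        for i in range(1, len(dots) - 1):
            v1 = dots[i] - dots[i - 1]
            v2 = dots[i + 1] - dots[i]
            cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
            angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
            if angle > min_angle_deg:                    # direction changes sharply
                edges.append(dots[i])
        return np.array(edges)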

The hybrid feature point detection algorithm module 162 identifies the eye area, from the eye line b2 including the pupil recognition line b21, the eye wrinkle recognition line b22 covering the inner eye corner and double eyelid, and the eye expression change recognition line b23, as three-dimensional information divided into height, length and width. Then the eye area delimited by the pupil recognition line b21 is divided into dots at a preset distance interval, the eye center point c2 indicating an edge where the direction changes by more than a preset angle between the dots is extracted, and it is stored in the storage unit 170 as 'eye change information'.

The hybrid feature point detection algorithm module 162 likewise identifies the nose area, from the nose volume change recognition line b31, the fine recognition line b5 and the nose and nose bridge connection line b6, as three-dimensional information, extracts the nose upper and lower center point c3 indicating an edge that changes by more than a preset angle between the dots dividing the top and bottom of the nose area at a preset distance interval, and the nose center point c4 indicating an edge that changes by more than a preset angle between the dots dividing, at a preset distance interval, the straight segments where the nose and the nose bridge meet, and stores them in the storage unit 170 as 'nose change information'.

Further, the hybrid feature point detection algorithm module 162 identifies the mouth area, from the nasolabial wrinkle recognition line b4, the nose shape change recognition line b7 and the lip recognition line b8, as three-dimensional information (height, length and width), extracts the mouth center point c5 indicating an edge of the mouth area delimited by the lip recognition line b8 where the direction changes by more than a preset angle between the dots dividing a preset distance interval, and stores it in the storage unit 170 as 'mouth change information'.

Meanwhile, the hybrid feature point detection algorithm module 162 can identify the ear area and the jaw area as three-dimensional information in the same manner, using the ears and the entire outline b1 together with the jaw center point (not shown), extract their center lines, and store them in the storage unit 170 as 'ear change information' and 'jaw change information'.

Then, for each emotional expression (e.g. good, bad, and so on), the hybrid feature point detection algorithm module 162 applies the center change amount information in the eyebrow change information, eye change information, nose change information, mouth change information, ear change information and jaw change information of the 'emotional avatar face character normalization information' to the face components, and generates an emotional avatar emoticon in animation form that performs each emotional expression.
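A minimal sketch of turning such per-part center change amounts into the frames of an emotion-expression animation follows; the offsets and frame count are placeholders, not values from the patent.

    # Interpolate each center point from its neutral position toward its target
    # offset for one emotion expression over the frames of the clip.
    import numpy as np

    def emotion_frames(neutral_centers: dict, emotion_offsets: dict, n_frames: int = 12):
        frames = []
        for t in np.linspace(0.0, 1.0, n_frames):
            frame = {part: np.asarray(pos) + t * np.asarray(emotion_offsets.get(part, (0, 0)))
                     for part, pos in neutral_centers.items()}
            frames.append(frame)
        return frames

    # e.g. "joy": lift the eyebrow center c1 and mouth center c5 slightly upward
    joy = emotion_frames({"c1": (40, 30), "c5": (50, 80)}, {"c1": (0, -3), "c5": (0, -4)})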

The smart learning module 163 receives, through the touch screen 120, a selection signal for one of the frontal face photographs of at least one user previously stored in the storage unit 170, or receives the user's frontal face photograph image in the 2D state through the camera 110, and stores the separated image in the storage unit 170.

The smart learning module 163 transmits an emotional avatar emoticon generation request, together with the 2D frontal face photograph image stored in the storage unit 170, to the emotional avatar emoticon generation module 161, receives in return the 2D emotional avatar emoticon, the 3D emotional avatar emoticon and the voice-inclusive emotional avatar emoticon, and stores them in the storage unit 170.

The smart learning module 163 requests the hybrid feature point detection algorithm module 162 to generate emotional avatar emoticons in animation form for at least two emotional expressions, receives the emotional avatar emoticons in animation form for the two or more emotional expressions (at least one of the 2D, 3D and voice-inclusive types), and stores them in the storage unit 170.

That is, in response to the request from the smart learning module 163, the hybrid feature point detection algorithm module 162 generates emotional avatar emoticons in animation form for at least two emotional expressions (e.g. good, bad, etc.) using the emotional-expression template character information for the face components in the 'emotional avatar face character normalization information', and stores them in the storage unit 170 to provide them to the smart learning module 163.

When the smart learning module 163 receives a smart learning start request from the touch screen 120 in response to the user's selection (touch) of the smart learning application icon based on emotional avatar emoticons on the touch screen 120, the smart learning application data based on emotional avatar emoticons stored in the storage unit 170 is loaded from the storage unit 170 into the system memory (not shown).

The smart learning module 163 displays the smart learning initial user interface screen for user selection on the touch screen 120, receives through the touch screen 120 a selection signal for the emotional avatar emoticon matching the user, sets the matching emotional avatar emoticon as the animation character to form one group as a series, and displays the first problem of the smart learning problems stored in the storage unit 170 on the touch screen 120.

The smart learning module 163 receives the answer to the displayed smart learning question through the touch screen 120, determines state information about the inputted answer, extracts from the storage unit 170 the emotional avatar emoticon in the animation form for the emotion expression matched with that state information, and displays it on the touch screen 120.

The smart learning module 163 determines, after the answer to each question is input, whether all of the smart learning problems previously stored in the storage unit 170 and forming the group as a series have been displayed. When answers to all of the questions have been input and the emotional avatar emoticon has been expressed through the touch screen 120 in the animation form for emotion expression as response information for each answer, the smart learning is thereby carried out.
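Taken together, the question-and-answer flow described above reduces to a short loop: display a problem, read the answer, determine its state, and play the matching emotion avatar animation until every stored problem has been answered. The sketch below is illustrative only; in particular, the mapping of answer state to a 'good' or 'bad' expression is an assumption, since the text speaks only of state information matched to an emotion expression.

# Illustrative question set; the real problems are stored in the storage unit 170.
questions = [
    {"prompt": "3 + 4 = ?", "answer": "7"},
    {"prompt": "Capital of France?", "answer": "Paris"},
]

def show_emoticon(emotion):
    # Stand-in for extracting the animated emotion avatar emoticon from
    # storage and rendering it on the touch screen.
    print(f"[playing '{emotion}' avatar animation]")

def run_smart_learning(questions):
    for q in questions:
        user_answer = input(q["prompt"] + " ").strip()
        state = "correct" if user_answer == q["answer"] else "incorrect"
        show_emoticon("good" if state == "correct" else "bad")
    print("All questions answered - smart learning session complete.")

if __name__ == "__main__":
    run_smart_learning(questions)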

As described above, preferred embodiments of the present invention have been disclosed in the specification and drawings. Although specific terms have been used, they are used only in a general sense to easily describe the technical contents of the present invention and to facilitate understanding of the invention, and are not intended to limit the scope of the present invention. It will be apparent to those skilled in the art that other modifications based on the technical idea of the present invention can be implemented in addition to the embodiments disclosed herein.

100: Smart learning terminal device
110: camera
120: Touch screen
130: microphone
140: Speaker
150: Transceiver
160: Control unit
161: emotional avatar emoticon generation module
161a: feature point detection means
161b: character generating means
161c: template resource image generation means
161d: 3D face applying means
161e: voice parsing means
162: Hybrid feature point detection algorithm module
163: Smart learning module
170: Storage unit

Claims (5)

A smart learning terminal device 100 based on emotional avatar emoticons, the device comprising a camera 110, a touch screen 120, a microphone 130, a speaker 140, a transceiver 150, a control unit 160, and a storage unit 170, the control unit 160 having an emotional avatar emoticon generation module 161, a hybrid feature point detection algorithm module 162, and a smart learning module 163,
wherein the emotional avatar emoticon generation module 161 includes feature point detection means 161a, character generating means 161b, template resource image generation means 161c, 3D face applying means 161d, and voice parsing means 161e,
The feature point detection means 161a,
uses the template character information stored in the storage unit 170, which is standard data for expressing a user character based on a 2D front face photograph (for the template character information, group icon information for each part of the emotional avatar face character is generated in advance around each part; for example, the eye part group icon information has group icons such as an eye bottom line, an eye white line, and a pupil border, and the mouth part has group icons such as a lip line, a lip gloss, and lip wrinkles), and, in order to generate an animation in which a facial expression is added to the emotional avatar emoticon from the emotional avatar face character included as the face region, extracts the 2D face image data and automatically grasps the positions of the eyes, nose, and mouth (as well as the ears, the jaw, and the like), which are the face components of the face region, so as to extract a similarity transformation matrix using the previously stored standard data as group information for each face component,
extracts standard data matching the eyes, nose, mouth, ears, and jaw from the template character information using the group information of each face component and stores the standard data, and then generates the similarity transformation matrix selected from the group of each part, using reference points selected from the outlines of the eyes, nose, mouth, ears, and jaw of each part of the 2D front face photograph image together with the previously stored standard data,
The character generating means 161b,
in order to generate the animated 'emotional avatar emoticon' using the emotional avatar face character embodying the face region of the user, automatically extracts the face area from the 2D front face photograph image, normalizes the user's face data against the template character information (template character information previously stored in the storage unit 170, including a change of the lips, a change of the eyes into a wink expression, and a change of the lips into a sloping expression) in accordance with the similarity transformation matrix generated by the feature point detection means 161a, generates 'emotion avatar face character normalization information', and stores the 'emotion avatar face character normalization information' in the storage unit 170,
and, when generating the 'emotion avatar face character normalization information' obtained by extracting one of the group icons of each part of the emotional avatar face character using the similarity transformation matrix generated by the feature point detection means 161a, generates a shadow effect and produces 'emotion avatar face character normalization information' to which gamma correction and shadow application for eyebrow replacement are applied,
The template resource image generation means 161c,
parses, in the process of generating the 2D emotion avatar emoticon, the face region of the 2D front face photograph image extracted by the character generating means 161b together with the template animation content information that can be expressed in animation form; when the face components are parsed within the parsed face region, judges whether they are suitable for the self-image animation implemented according to the template animation content information by analyzing a fitness percentage, which is the matching information between the standard face part information of the animation implemented according to the template animation content information and the eyes, nose, mouth, ears, and jaw in the extracted face area; judges the face components suitable when the percentage is equal to or greater than a predetermined threshold percentage and not suitable when it is less than the predetermined threshold percentage; and, when suitable for the self-image animation, generates the 2D emotion avatar emoticon by changing the face components of the 2D front face photograph image according to the animation, using the emotion avatar face character normalization information generated by the character generating means 161b and the template animation content information stored in the storage unit 170, and otherwise uses a face component template of the self-image animation implemented according to the template animation content information,
wherein the storage unit 170 stores the emotion avatar face character normalization information, which is a set of face component template information generated for face normalization, and the template animation content information corresponding to the animation template information for facial expression animation; as a first partial process, face region cropping is performed on at least one piece of the face component template information constituting the 'emotion avatar face character normalization information', which is the normalization information based on the similarity transformation matrix, the face component for the emotion avatar implementing the animation is changed and stored in the storage unit 170, the skin color and state of the user for implementing the face component corresponding to the selected face component template are then extracted, and, when the skin corresponding to the selected face component template is generated, each skin color property of the user's face is extracted so that the user's skin properties are reflected according to the animation effect and the animated facial skin for each part is generated automatically,
and, as a second partial process applied to the 2D emotion avatar emoticon, when the face components are changed, changes the shapes of the eyebrows, eyes, lips, and jaw according to the animation effect, with the lips and the jaw changed automatically, and automatically adjusts the template face component generated by the first partial process to the color and shape attributes of the user's face elements and applies it to the user's face,
extracts the size and color attributes of the face area to be applied as a face character icon of the animation object from the 2D front face photograph image, and
completes creation of the 2D emotion avatar emoticon by changing the selected face component template to the color and size corresponding to the size and color properties of the extracted face area to be applied as the animation target icon,
The 3D face applying means 161d,
performs 2D based 3D face modeling (3D morphing) on the 2D emotion avatar emoticon generated by the template resource image generation means 161c so that upper, lower, left, and right views of the face region are generated automatically, in order to create a 3D emotion avatar emoticon displayed in an animation format; in the process of the 2D based 3D face modeling for expressing 3D face animation effects, in which the upper, lower, left, and right sides of the user's 2D front face image are distorted about the central axis according to the animation specification, decodes the face area of the 2D emotion avatar emoticon created from the 2D front face photograph image and stores the decoded image in the storage unit 170, generates a plurality of polygons and converts them into a polygon set, stores the created polygon set in the storage unit 170, applies the decoded image stored in the storage unit 170 onto the generated polygon set to obtain three-dimensional face region data, and finally scales down the 3D emotion avatar emoticon to which the 3D face region data is applied to a level of one hundredth (0.01), stores it in the storage unit 170, and outputs it to the touch screen 120,
and, as a preprocessing image process for the 2D based 3D face modeling, performs outline detection of the image from the image information of the face region of the 2D emotion avatar emoticon generated from the 2D front face photograph image; to perform the detection, binarization or special processing is applied to the photograph image, where binarization is a process of lowering color values having several steps (256 steps) to values of 0 and 1 so that the edge information of the image is clarified, and in the special processing the color image is changed to a gray level or the outline is detected, the outline is extracted, and the selection of reference points on the outline is improved using the pre-stored template character information,
The voice parsing means 161e,
extracts from the storage unit 170 the emoticon template contents for the 2D emotion avatar emoticon generated by the template resource image generation means 161c and for the 3D emotion avatar emoticon generated by the 3D face applying means 161d, outputs the 2D emotion avatar emoticon or the 3D emotion avatar emoticon to the touch screen 120 in the form of an animation based on the template animation content information, then receives the voice signal input to the microphone 130 and converts the 2D emotion avatar emoticon and the 3D emotion avatar emoticon into a 2D emotion avatar emoticon including voice and a 3D emotion avatar emoticon including voice, respectively, and stores them in the storage unit 170, thereby completing generation of the emotional avatar emoticons including voice.
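For illustration only, the similarity transformation and normalization steps recited above can be sketched with off-the-shelf tools. The fragment below, which assumes OpenCV and made-up reference point coordinates, fits a 2D similarity transform (rotation, uniform scale, translation) with cv2.estimateAffinePartial2D, warps the cropped face toward the template geometry, and applies a simple gamma-correction lookup table; it is not the patented implementation.

import cv2
import numpy as np

# Illustrative reference points: a few outline points of the eyes and mouth in
# the user's 2D frontal photo and the corresponding template ("standard data")
# points. All coordinates are made up for this example.
user_pts = np.array([[120, 150], [200, 148], [160, 230], [140, 260], [180, 262]],
                    dtype=np.float32)
template_pts = np.array([[100, 120], [180, 120], [140, 200], [120, 230], [160, 230]],
                        dtype=np.float32)

# estimateAffinePartial2D restricts the fit to rotation, uniform scale and
# translation, i.e. a 2x3 similarity transformation matrix.
similarity_matrix, inliers = cv2.estimateAffinePartial2D(user_pts, template_pts)
print("similarity transformation matrix:\n", similarity_matrix)

# Warp the cropped face toward the template geometry (a stand-in for producing
# normalization information) and apply gamma correction via a lookup table.
face = cv2.imread("face_region.png")            # placeholder file name
if face is not None and similarity_matrix is not None:
    normalized = cv2.warpAffine(face, similarity_matrix,
                                (face.shape[1], face.shape[0]))
    gamma = 0.8
    lut = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)],
                   dtype=np.uint8)
    normalized = cv2.LUT(normalized, lut)
    cv2.imwrite("normalized_face.png", normalized)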
Claims 2 to 5 have been deleted.
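Likewise, the binarization and outline-detection preprocessing recited in claim 1 for the 2D based 3D face modeling can be approximated as follows; this is only one possible reading, using OpenCV's Otsu thresholding and Canny edge detector with illustrative parameter values.

import cv2

def preprocess_for_outline(face_image_path):
    # Outline-detection preprocessing: reduce the 256-level image to two
    # values so edges stand out (binarization), and detect the outline on
    # the gray-level image ("special processing").
    image = cv2.imread(face_image_path)
    if image is None:
        raise FileNotFoundError(face_image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(gray, threshold1=50, threshold2=150)
    return binary, edges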
KR1020150092072A 2015-06-29 2015-06-29 Method for providng smart learning education based on sensitivity avatar emoticon, and smart learning education device for the same KR101743763B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150092072A KR101743763B1 (en) 2015-06-29 2015-06-29 Method for providng smart learning education based on sensitivity avatar emoticon, and smart learning education device for the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150092072A KR101743763B1 (en) 2015-06-29 2015-06-29 Method for providng smart learning education based on sensitivity avatar emoticon, and smart learning education device for the same

Publications (2)

Publication Number Publication Date
KR20170002100A KR20170002100A (en) 2017-01-06
KR101743763B1 true KR101743763B1 (en) 2017-06-05

Family

ID=57832510

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150092072A KR101743763B1 (en) 2015-06-29 2015-06-29 Method for providng smart learning education based on sensitivity avatar emoticon, and smart learning education device for the same

Country Status (1)

Country Link
KR (1) KR101743763B1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020102459A1 (en) * 2018-11-13 2020-05-22 Cloudmode Corp. Systems and methods for evaluating affective response in a user via human generated output data
KR102669801B1 (en) 2023-12-29 2024-05-28 주식회사 티맥스알지 Method and apparatus for generating and mapping avatar textures

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101979285B1 (en) * 2018-01-29 2019-05-15 최예은 Education system for programming learning and creativity improvement
KR102661019B1 (en) 2018-02-23 2024-04-26 삼성전자주식회사 Electronic device providing image including 3d avatar in which motion of face is reflected by using 3d avatar corresponding to face and method for operating thefeof
KR102605595B1 (en) 2018-04-24 2023-11-23 현대자동차주식회사 Apparatus, vehicle comprising the same, and control method of the vehicle
KR102185469B1 (en) * 2018-12-03 2020-12-02 정진해 Companion Animal Emotion Bots Device using Artificial Intelligence and Communion Method
KR102648993B1 (en) 2018-12-21 2024-03-20 삼성전자주식회사 Electronic device for providing avatar based on emotion state of user and method thereof
CN112084814B (en) * 2019-06-12 2024-02-23 广东小天才科技有限公司 Learning assisting method and intelligent device
KR20210012724A (en) * 2019-07-26 2021-02-03 삼성전자주식회사 Electronic device for providing avatar and operating method thereof
KR102318111B1 (en) * 2020-11-17 2021-10-27 주식회사 일루니 Method and apparatus for generating story book which provides sticker reflecting user's face to character
KR102637373B1 (en) * 2021-01-26 2024-02-19 주식회사 플랫팜 Apparatus and method for generating emoticon


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013027893A1 (en) * 2011-08-22 2013-02-28 Kang Jun-Kyu Apparatus and method for emotional content services on telecommunication devices, apparatus and method for emotion recognition therefor, and apparatus and method for generating and matching the emotional content using same


Also Published As

Publication number Publication date
KR20170002100A (en) 2017-01-06

Similar Documents

Publication Publication Date Title
KR101743763B1 (en) Method for providng smart learning education based on sensitivity avatar emoticon, and smart learning education device for the same
US11688120B2 (en) System and method for creating avatars or animated sequences using human body features extracted from a still image
US20220150285A1 (en) Communication assistance system, communication assistance method, communication assistance program, and image control program
US20210174072A1 (en) Microexpression-based image recognition method and apparatus, and related device
US7764828B2 (en) Method, apparatus, and computer program for processing image
US11736756B2 (en) Producing realistic body movement using body images
WO2018121777A1 (en) Face detection method and apparatus, and electronic device
KR101743764B1 (en) Method for providing ultra light-weight data animation type based on sensitivity avatar emoticon
US20150235416A1 (en) Systems and methods for genterating a 3-d model of a virtual try-on product
WO2016111174A1 (en) Effect generating device, effect generating method, and program
CN112379812A (en) Simulation 3D digital human interaction method and device, electronic equipment and storage medium
CN110418095B (en) Virtual scene processing method and device, electronic equipment and storage medium
CN115049016B (en) Model driving method and device based on emotion recognition
CN108537162A (en) The determination method and apparatus of human body attitude
US20220277586A1 (en) Modeling method, device, and system for three-dimensional head model, and storage medium
CN114049290A (en) Image processing method, device, equipment and storage medium
KR20160010810A (en) Realistic character creation method and creating system capable of providing real voice
CN111597926A (en) Image processing method and device, electronic device and storage medium
WO2021155666A1 (en) Method and apparatus for generating image
KR100965622B1 (en) Method and Apparatus for making sensitive character and animation
CN118799439A (en) Digital human image fusion method, device, equipment and readable storage medium
CN112836545A (en) 3D face information processing method and device and terminal

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
N231 Notification of change of applicant