KR101672691B1 - Method and apparatus for generating emoticon in social network service platform - Google Patents

Method and apparatus for generating emoticon in social network service platform

Info

Publication number
KR101672691B1
Authority
KR
South Korea
Prior art keywords
emoticon
subject
moving
effect
user
Prior art date
Application number
KR1020150104658A
Other languages
Korean (ko)
Inventor
정진욱
김재철
Original Assignee
주식회사 시어스랩
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 시어스랩 filed Critical 주식회사 시어스랩
Priority to KR1020150104658A priority Critical patent/KR101672691B1/en
Application granted granted Critical
Publication of KR101672691B1 publication Critical patent/KR101672691B1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/01Social networking

Landscapes

  • Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Disclosed are a method and apparatus for generating an emoticon on a social network service platform. A method for generating an emoticon according to an embodiment of the present invention comprises the steps of: displaying a subject by activating a camera when an emoticon generation function is selected; applying a selected moving effect or sound effect to the displayed subject when the user selects one of the available moving effects or sound effects; when a moving effect has been selected, capturing the subject together with the applied moving effect in response to a photographing instruction and generating an emoticon in which only the applied moving effect moves while the subject remains frozen in the captured state; and, when a sound effect has been selected, capturing the subject in response to a photographing instruction and generating an emoticon in which the applied sound effect plays while the subject remains frozen in the captured state.

Description

METHOD AND APPARATUS FOR GENERATING EMOTICON IN SOCIAL NETWORK SERVICE PLATFORM

FIELD OF THE INVENTION [0001]

The present invention relates to the generation of emoticons, and more particularly to a method and apparatus for generating, in conjunction with a messenger service or a social network service (SNS), an emoticon that includes a moving effect or a sound effect for use in the messenger or the SNS.

The cinemagraph, a concept reminiscent of the moving photographs in J.K. Rowling's Harry Potter series, was first introduced in 2011 by the New York photographer Jamie Beck and the graphic artist Kevin Burg. A cinemagraph is an intermediate step between photography and video, in which only a part of the picture plays indefinitely.

Because a cinemagraph plays part of a picture indefinitely so that only that part appears to move, producing one requires a plurality of photographs of the subject, for example pictures in which the relevant part of the subject is still and pictures in which it moves, and these must then be edited into a moving picture.

In other words, a cinemagraph creates a moving picture by making only a specific object included in the subject move, without applying any added effect.

However, such a cinemagraph is complicated to produce, since a plurality of photographs of the subject must be edited so that only a specific object moves. This is difficult for an ordinary person without expert knowledge, and creating a moving emoticon this way is just as difficult as creating the moving picture itself.

Accordingly, there is a need for a method that can easily generate moving emoticons and the like from photographs.

Embodiments of the present invention provide an emoticon generation method and apparatus capable of generating an emoticon in real time by applying a moving effect or a sound effect to a subject in conjunction with a messenger or an SNS, and of providing the generated emoticon to the messenger or the SNS.

In particular, embodiments of the present invention execute an emoticon generation application interlocked with a messenger or an SNS when an emoticon generation function for generating an emoticon in real time is selected in the messenger or the SNS, and provide a moving-emoticon generation method and apparatus capable of generating an emoticon in real time in which only the effect moves, or only the sound plays, while the subject to which the effect is applied remains in a captured state.

A method of generating an emoticon according to an embodiment of the present invention includes: displaying a subject by executing a camera when an emoticon generation function is selected; applying the selected moving effect or sound effect to the displayed subject when the user selects one of the moving effects or sound effects; when a moving effect is selected, capturing the subject and the applied moving effect in response to a photographing command and generating an emoticon in which only the applied effect moves while the subject remains in a captured state; and, when a sound effect is selected, capturing the subject in response to the photographing command and generating an emoticon in which the applied sound effect plays while the subject remains in a captured state.

The step of displaying the subject may display the subject by executing the camera through a predetermined emoticon generation application when the emoticon generation function is selected in a messenger service or a social network service (SNS).

Further, the method of generating an emoticon according to an exemplary embodiment of the present invention may further include providing the generated moving emoticon to the messenger service or the social network service from which the emoticon generation function was selected.

The step of applying to the subject may provide moving effects or sound effects corresponding to a predetermined emotional theme at the time the emoticon generation function is selected, and apply the moving effect or sound effect the user selects from among the provided effects to the subject.

The step of applying to the subject may determine an application position of the selected moving effect on the subject based on a specific object included in the subject, and apply the selected moving effect at the determined position.

Further, a method of generating an emoticon according to an exemplary embodiment of the present invention may include: detecting a specific object in real time from the subject photographed by the camera when the emoticon generation function is selected; extracting feature points of the detected specific object; forming a plurality of meshes using the extracted feature points; and performing preprocessing correction of the specific object by adjusting the formed meshes, wherein the step of displaying the subject may display the subject including the specific object to which the preprocessing correction has been applied.

The preprocessing correction step may, when the specific object is a person, identify at least one of the eyes, nose, jaw line, facial contour, mouth, and eye line, and perform preprocessing correction of the specific object by adjusting the meshes corresponding to the identified features.

According to another aspect of the present invention, there is provided a method for generating an emoticon comprising: displaying a picture selected from a plurality of already stored pictures when the emoticon generation function is selected; applying the selected moving effect or sound effect to the selected picture when the user selects one of the moving effects or sound effects; and generating an emoticon in which only the applied moving effect moves on the selected picture, or in which the applied sound effect plays.

The step of displaying one of the photographs may provide the plurality of pictures through a predetermined emoticon generation application when the emoticon generation function is selected in a messenger service or a social network service (SNS), and display the picture the user selects from among them.

An apparatus for generating an emoticon according to an embodiment of the present invention comprises: a display unit for displaying a subject by executing a camera when an emoticon generation function is selected; an application unit for applying the selected moving effect or sound effect to the displayed subject when the user selects one of the moving effects or sound effects; and a generation unit which, when a moving effect is selected, captures the subject and the applied moving effect in response to a photographing command and generates an emoticon in which only the applied moving effect moves while the subject remains in a captured state, and which, when a sound effect is selected, captures the subject in response to the photographing command and generates an emoticon in which the applied sound effect plays while the subject remains in a captured state.

When the emoticon generation function is selected in a messenger service or a social network service (SNS), the display unit displays the subject by executing the camera through a predetermined emoticon generation application.

Furthermore, the emoticon generation apparatus according to an embodiment of the present invention may further include a provision unit for providing the generated emoticons to the messenger service or the social network service from which the emoticon generation function was selected.

The application unit may provide moving effects or sound effects corresponding to a predetermined emotional theme at the time the emoticon generation function is selected, and apply the moving effect or sound effect the user selects from among the provided effects to the subject.

The application unit may determine an application position of the selected moving effect on the subject based on the specific object included in the subject, and apply the selected moving effect at the determined position.

Further, an emoticon generation apparatus according to an exemplary embodiment of the present invention includes: an object detection unit that detects a specific object in real time from the subject photographed by the camera when the emoticon generation function is selected; a feature point extraction unit for extracting feature points of the detected specific object; a mesh forming unit that forms a plurality of meshes using the extracted feature points; and a preprocessing correction unit for performing preprocessing correction of the specific object by adjusting the formed meshes, wherein the display unit can display the subject including the specific object to which the preprocessing correction has been applied.

The preprocessing correction unit may, when the specific object is a person, identify at least one of the eyes, nose, jaw line, facial contour, mouth, and eye line, and perform preprocessing correction of the specific object by adjusting the meshes corresponding to the identified features.

Embodiments of the present invention allow a user without expert knowledge to generate, in real time, an emoticon that includes a moving effect or a sound effect, by applying the effect to a subject in conjunction with a messenger or an SNS, and to use the generated emoticon there in real time.

Embodiments of the present invention apply various kinds of moving effects or sound effects to a subject by means of an emoticon generation application linked to a messenger or an SNS, thereby generating emoticons having a variety of effects or sounds.

The embodiments of the present invention can be applied to any camera-equipped device, for example a smartphone, on which an application linked to a messenger or an SNS can be installed, and can thus provide the amusement of creating emoticons having various effects or sounds.

FIG. 1 shows an example for explaining the present invention.
FIG. 2 is a flowchart illustrating an emoticon generation method according to an exemplary embodiment of the present invention.
FIG. 3 shows an operational flow diagram of an embodiment of step S230 shown in FIG. 2.
Figures 4-7 illustrate examples for illustrating the method according to the present invention.
FIG. 8 shows a configuration of an emoticon generating apparatus according to an embodiment of the present invention.
FIG. 9 is a flowchart illustrating an emoticon generation method according to another embodiment of the present invention.

Hereinafter, embodiments according to the present invention will be described in detail with reference to the accompanying drawings. The present invention, however, is not restricted by these embodiments. The same reference numerals shown in the drawings denote the same members.

The present invention generates emoticons in real time by photographing with an emoticon generation application linked to a messenger or an SNS: a moving effect or a sound effect is applied to a subject, an emoticon is generated in which only the moving effect moves, or the sound effect plays, while the subject remains in a captured state, and the generated emoticon is provided to the messenger or the SNS.

FIG. 1 shows an example for explaining the present invention.

Referring to FIG. 1, the present invention can be applied to a device 100 equipped with a camera and a messenger function, for example a smartphone or a smart watch. When an emoticon generation application interlocked with a messenger such as KakaoTalk or Line, or with an SNS 200 such as Facebook, receives a signal related to moving-emoticon generation from the messenger or the SNS, the application automatically executes the camera; when the camera photographs a subject, a moving effect or a sound effect selected by the user can be applied to the subject to generate a moving emoticon or an emoticon to which a sound effect is applied.

That is, when one of the provided moving effects or sound effects is selected, the emoticon generation application applies it to the subject photographed and displayed by the camera. If the user selects a moving effect, the application captures the subject and the moving effect in accordance with the shooting command and generates an emoticon in which only the applied moving effect moves while the captured subject stays still; if the user selects a sound effect, the application captures the subject in accordance with the shooting command and generates an emoticon in which the applied sound effect plays while the subject remains captured.

Here, the signal related to emoticon generation may include emotion information describing the emotional theme of the moving emoticon to be generated, such as sadness, joy, absurdity, smile, laughter, or astonishment. The application may then present only the moving effects or sound effects corresponding to this emotion information, so that the user selects a moving effect or a sound effect matching the emotional theme.

At this time, the subject may include various objects such as a person, a building, or an automobile, and the position at which the moving effect selected by the user is applied can be determined from the selected effect information together with the object information included in the subject being photographed.

Hereinafter, for convenience of explanation, it is assumed that the present invention generates emoticons using moving effects, rather than sound effects, on a smartphone equipped with a camera. Of course, it is apparent to those skilled in the art that the present invention is not limited to smartphones and can be applied to any device on which it can be mounted.

FIG. 2 is a flowchart illustrating an emoticon generation method according to an exemplary embodiment of the present invention.

Referring to FIG. 2, in the emoticon generation method according to an exemplary embodiment of the present invention, a user accesses a messenger service or an SNS using a user terminal, for example a smartphone. When the user clicks the emoticon generation function in the messenger or the SNS, for example during a conversation with another person or on an SNS page, an emoticon generation signal is received by the smartphone, so that the emoticon generation application is executed automatically (S210).

Here, the emoticon generation application can generate a moving emoticon by photographing a subject with the camera and applying one of the provided moving effects.

At this time, the emoticon generation signal in step S210 may include emotional theme information for the emoticon to be generated, for example emotion information such as sadness, joy, absurdity, smile, laughter, or astonishment, or other theme information related to the emoticon.

The camera of the user terminal is then executed by the emoticon generation application, and a subject including an object such as a person, photographed by the executed camera, is displayed on the screen (S220, S230).

Various filters may be applied to the subject displayed in step S230 according to the user's selection, and the camera's various photographing functions may likewise be used.

Once the subject is displayed on the screen in step S230, a moving effect or moving sticker to be applied to the displayed subject is selected based on user input (S240).

The moving effects or moving stickers applied to a subject are provided by the application implementing the method of the present invention and include various effects, such as a moving rabbit-ear effect, a moving cloud effect, a moving heart effect, and an upwardly floating heart-balloon effect.

At this time, the moving effects or stickers offered to the user may be all the effects or stickers provided in the emoticon generation application; however, when emotion theme information is included in the emoticon generation signal, only the effects or stickers corresponding to that theme information may be offered. That is, the user can select a desired effect or sticker from among those corresponding to the emotion theme information.
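
In code, the theme-based filtering described above could look like the following minimal sketch. The effect catalog, its keys, and the shape of the generation signal are assumptions for illustration; the patent does not specify any data structure.

```python
# Hypothetical effect catalog: names, themes, and kinds are illustrative.
EFFECT_CATALOG = {
    "rabbit_ears":   {"themes": {"joy", "smile"}, "kind": "moving"},
    "rain_cloud":    {"themes": {"sadness"},      "kind": "moving"},
    "heart_balloon": {"themes": {"joy"},          "kind": "moving"},
    "laugh_track":   {"themes": {"laughter"},     "kind": "sound"},
}

def effects_for_signal(signal):
    """Offer only the effects matching the signal's emotion theme, if any."""
    theme = signal.get("emotion_theme")
    if theme is None:
        return list(EFFECT_CATALOG)          # no theme: offer everything
    return [name for name, meta in EFFECT_CATALOG.items()
            if theme in meta["themes"]]

print(effects_for_signal({"emotion_theme": "joy"}))
# -> ['rabbit_ears', 'heart_balloon']
```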

In step S240, when a moving effect to apply is selected by user input, the selected moving effect is applied to the subject displayed on the screen, and it is determined whether a shooting command has been received from the user (S250, S260).

Step S250 may determine the application position of the moving effect selected by the user on the subject, based on the object included in the subject photographed by the camera, and apply the selected moving effect at the determined position. For example, if the moving effect selected by the user is a moving rabbit-ears effect to be placed on a person's head, the position of the person's head is acquired from the subject being photographed, and the rabbit ears are applied at the acquired head position.
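
The patent does not disclose which detector its application uses; the sketch below illustrates the head-anchoring logic of step S250 with OpenCV's stock Haar cascade as a stand-in, and the sticker sizing and placement heuristics are assumptions.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def apply_sticker(frame, sticker_rgba):
    """Alpha-blend a sticker just above the largest detected face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return frame                         # effect not applied (cf. S250)
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    ears = cv2.resize(sticker_rgba, (w, h // 2))    # span the head width
    top = max(0, y - ears.shape[0])                 # sit just above the head
    roi = frame[top:top + ears.shape[0], x:x + ears.shape[1]]
    ears = ears[:roi.shape[0], :roi.shape[1]]
    alpha = ears[:, :, 3:4] / 255.0                 # sticker's alpha channel
    roi[:] = (1 - alpha) * roi + alpha * ears[:, :, :3]
    return frame
```

The sticker would be loaded with its alpha channel intact, for example via `cv2.imread("ears.png", cv2.IMREAD_UNCHANGED)`.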

In step S250, when the subject displayed on the screen moves, for example because the user holding the camera moves, the position of the applied effect may be changed to follow that motion. Of course, when the effect selected by the user cannot be applied to the subject displayed on the screen, the effect is not applied, and the user may be notified that it could not be applied.

As a result of the determination in step S260, when a photographing command is received by user input, a captured image is generated by capturing the subject displayed on the screen together with the moving effect applied to it, and a moving emoticon is generated in which only the moving effect moves while the captured image remains still (S270, S280).

At this time, the captured image generated in step S270 is an image in which both the subject and the moving effect are captured; the captured image may be displayed on the screen and shared through at least one predetermined application, for example messenger services such as Line and KakaoTalk, Band, or an SNS.

In step S280, when the user selects a moving-emoticon creation button formed in a partial area of the captured image displayed in step S270, a moving emoticon is generated in which only the moving effect applied in the captured image moves.

The moving emoticon generated in step S280 may be stored in the user terminal in response to user input. When stored, the emoticon may be saved as a file, for example a GIF (Graphics Interchange Format) file.
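
The patent says only that the result may be saved as a GIF. The Pillow sketch below shows the idea of step S280: the captured frame stays frozen while only the overlay frames change. The file names, anchor position, and two-frame ear animation are assumptions.

```python
from PIL import Image

# "captured.png" (the frozen capture) and the ear frames are hypothetical
# file names; the anchor is the head position found at capture time.
base = Image.open("captured.png").convert("RGBA")
ear_frames = [Image.open(f"ears_{i}.png").convert("RGBA") for i in range(2)]
anchor = (120, 30)

frames = []
for ears in ear_frames:                     # standing ears, then bent ears
    frame = base.copy()
    frame.paste(ears, anchor, mask=ears)    # alpha channel as paste mask
    frames.append(frame)

# loop=0 repeats forever, so only the applied effect moves in the emoticon
frames[0].save("emoticon.gif", save_all=True,
               append_images=frames[1:], duration=200, loop=0)
```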

At this time, the moving emoticon generated in step S280 may also be shared through at least one predetermined application, for example a messenger service such as Line or KakaoTalk, Band, or an SNS.

When the moving emoticon is generated in step S280, the generated moving emoticon is provided to the messenger or the SNS that requested the emoticon generation, so that a moving emoticon featuring, for example, the user's own face can be used there in real time (S290).

Also, the moving-emoticon generation method according to the embodiment of the present invention may preprocess the subject being photographed in real time to correct the subject, and generate the moving emoticon using the corrected subject.

This will be described with reference to FIG.

FIG. 3 shows an operational flow diagram of an embodiment of step S230 shown in FIG. 2.

Referring to FIG. 3, the step S230 of displaying a subject on the screen detects a predetermined specific object, for example a human face, from the photographed subject when the subject is photographed by the camera (S310).

The specific object detected in step S310 is not limited to a human face; it may also include an animal face or a man-made structure, for example the face of a statue. The type of specific object to be detected may be predetermined by the service provider, or set by the user through a user settings item as necessary.

At this time, step S310 can detect the human face by recognizing the contour of the face within the subject.

When a specific object is detected in step S310, the feature points of the detected specific object are extracted, and a plurality of meshes are formed using the extracted feature points (S320, S330).

The extracted feature points are points corresponding to characteristics of the specific object; for a face they can be extracted from the entire facial region, including the eyes, nose, mouth, eye line, lips, chin line, and outline, and they can be obtained by tracking the photographed face in real time.
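
The patent does not name a landmark detector. As one possible stand-in for step S320, the sketch below uses MediaPipe's face mesh, whose video mode also provides the real-time tracking mentioned above.

```python
import cv2
import mediapipe as mp

mp_face = mp.solutions.face_mesh.FaceMesh(
    static_image_mode=False,       # video mode: tracks the face frame to frame
    max_num_faces=1, refine_landmarks=True)

def extract_feature_points(frame_bgr):
    """Return (x, y) pixel coordinates of facial landmarks, or [] if no face."""
    h, w = frame_bgr.shape[:2]
    results = mp_face.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return []
    landmarks = results.multi_face_landmarks[0].landmark
    return [(int(p.x * w), int(p.y * h)) for p in landmarks]
```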

Step S330 may form a plurality of meshes over the specific object, for example the facial region of a person, and may use any method that forms meshes from feature points.
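
As one concrete instance of "any method that forms meshes from feature points", the sketch below builds a Delaunay triangulation over the landmark coordinates; it assumes the hypothetical `extract_feature_points` helper from the earlier example.

```python
import numpy as np
from scipy.spatial import Delaunay

def build_mesh(points):
    """Triangulate the feature points; each row indexes three of them."""
    tri = Delaunay(np.asarray(points, dtype=np.float64))
    return tri.simplices           # shape: (n_triangles, 3)

# e.g. triangles = build_mesh(extract_feature_points(frame))
```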

If a plurality of meshes for the specific object are formed in step S330, the formed meshes are adjusted to perform preprocessing correction of the specific object (S340).

Here, when the specific object is a person, step S340 identifies at least one of the eyes, nose, jaw line, facial contour, mouth, and eye line, and performs preprocessing correction of the specific object by adjusting the meshes corresponding to the identified features. For example, by 3D-rendering the eyes, nose, jaw line, and facial contour, corrections such as slimming the jaw, raising the nose, or enlarging the eyes can be performed, thereby preprocessing and correcting the human face.

In step S340, the degree of correction and the region to be corrected may be preset by the service provider, although the present invention is not limited thereto and they may also be set by the user. Step S340 can thus preprocess and correct the human face by adjusting the meshes of the face based on the preset degree of correction and the region to be corrected.

That is, in step S340, the specific object may be preprocessed and corrected by adjusting the meshes of the identified features based on correction information preset by the user, for example the regions to be corrected and the degree of correction for each region, or based on a predetermined degree of correction for each facial part.

One way of preprocessing a specific object in step S340 is 1) to modify the shapes of the meshes; another is 2) to adjust the color, material, and brightness of the texture applied to the meshes. At this time, the meshes to be adjusted may be the meshes of at least one predetermined region among the regions constituting the specific object.
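
As an illustration of variant 1), mesh-shape modification, the sketch below approximates the eye-enlargement correction with a local radial warp around an eye center. A production implementation would move the mesh vertices themselves, so treat this purely as a stand-in; the image path and eye positions are hypothetical.

```python
import cv2
import numpy as np

def enlarge_region(img, center, radius, strength=0.25):
    """Magnify pixels within `radius` of `center` by sampling inward."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    dx, dy = xs - center[0], ys - center[1]
    dist = np.sqrt(dx * dx + dy * dy)
    # Inside the radius, sample closer to the center -> content is magnified.
    factor = np.where(dist < radius,
                      1.0 - strength * (1.0 - dist / radius), 1.0)
    map_x = (center[0] + dx * factor).astype(np.float32)
    map_y = (center[1] + dy * factor).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# Eye centers would come from the identified landmarks (values are made up).
img = cv2.imread("face.jpg")
img = enlarge_region(img, center=(200, 180), radius=40)
img = enlarge_region(img, center=(280, 180), radius=40)
```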

If the specific object in the subject is preprocessed and corrected in step S340, the subject including the corrected specific object is displayed on the screen (S350).

As described above, by preprocessing the subject before it is photographed, the present invention allows a person to be photographed with a face that appears, for example, better than their actual face.

A method of generating an emoticon according to an embodiment of the present invention will now be described in detail with reference to FIGS. 4 to 7.

Figures 4-7 illustrate examples for illustrating the method according to the present invention.

Referring to FIGS. 4 to 7, in the moving-emoticon generation method according to an exemplary embodiment of the present invention, when a user wishes to generate a moving emoticon in real time while using a messenger or an SNS on a user terminal, for example a smartphone, and clicks the emoticon generation function, an emoticon generation signal is received by the smartphone. The emoticon generation application is then executed automatically, the camera of the user terminal is executed by the application, and the photographed subject is displayed in a partial area 310 of the screen, as in the example shown in FIG. 4.

At this time, the subject displayed in the partial area 310 of the screen may be displayed after undergoing the preprocessing correction of the procedure shown in FIG. 3.

As shown in FIG. 4, when the user selects the effect selection button 320 for choosing a moving effect, the various applicable moving effects or stickers 330 are displayed in a partial area of the screen, as in the example shown in FIG. 5.

Here, the various applicable moving effects or stickers 330 may be limited to the effects or stickers corresponding to the emotion theme information, if such information is included in the emoticon generation signal.

For example, when the moving rabbit ears 340 shown in FIG. 5 are selected, the application acquires the position of the target object in the subject, in this example the person's head, and applies the selected rabbit ears 350 at the acquired head position.

The rabbit ears 350 applied to the subject move repeatedly between a standing form, as shown on the left side of FIG. 5, and a bent form, as shown on the right side of FIG. 5. Of course, the movement of the rabbit ears is not limited to alternating between standing and bent forms; they can also be made to move from side to side.

As described above, when the rabbit-ear effect 340 is selected by the user in FIG. 5, the moving rabbit ears 350 are applied at the head position of the person being photographed by the camera and displayed on the screen. At this time, when the person moves on the screen, the head position is acquired again in real time and the selected rabbit ears are applied at the newly acquired position.

As shown in FIG. 5, when a shooting command is received by user input while the moving effect is applied to the subject, the image displayed in the partial area 310 of the screen at the moment the shooting command is received is captured, generating a captured image.

At this time, since the generated captured image is a still of the screen at the moment the shooting command was received, the moving rabbit ears applied to the subject are also frozen in the captured state.

As shown in FIG. 6, when a captured image is generated, the captured image is displayed on the screen together with a button 370 for generating a moving emoticon, formed in a partial area of the captured image, for example a GIF button, and a button 360 for storing or sharing the image.

When the button 360 is pressed by the user, the captured image can be stored or shared as a picture file of a certain format, for example a JPG file.

On the other hand, if the user selects the GIF button 370 for generating a moving emoticon in FIG. 6, the GIF button 370 is activated and a moving emoticon is generated in which only the moving rabbit ears move while the captured subject stays still, the ears standing and bending repeatedly as shown on the left and right sides of FIG. 7.

When a moving emoticon is generated, a button 360 for providing the moving emoticon to the messenger or the SNS that requested the emoticon generation is displayed in a certain area of the screen. When the user presses this button 360, the moving emoticon is provided to the requesting messenger or SNS, where it can be used in real time. Of course, the generated moving emoticon may also be stored in the user terminal.

As described above, the emoticon generation method according to an embodiment of the present invention generates, in real time, a moving emoticon in which a moving effect is applied to a subject, and can provide the generated emoticon to a messenger, an SNS, or the like.

In addition, since the method according to the embodiment of the present invention can generate animated emoticons by applying a variety of moving effects, any ordinary user without expert knowledge can make moving emoticons.

In addition, the emoticon generation method according to the present invention can generate a moving emoticon not only by applying a moving effect while a subject is being photographed, but also by applying a moving effect to an already stored photograph of a subject. That is, according to another embodiment of the present invention, as shown in FIG. 9, when a user who wishes to generate a moving emoticon in real time in a messenger or an SNS, using the emoticon generation application installed on the smartphone, clicks the emoticon generation function, an emoticon generation signal is received by the smartphone, the emoticon generation application is executed automatically, and the application provides the user with the previously stored pictures (S910, S920).

At this time, the emoticon generation signal in step S910 may include emotional theme information for the emoticon to be generated, for example emotion information such as sadness, joy, absurdity, smile, laughter, or astonishment, or other theme information related to the emoticon.

The user then selects one of the photographs provided in step S920, which is displayed on the screen; the user next selects the effect to apply from among the plurality of moving effects, so that the moving effect is applied to the selected photograph; a moving emoticon in which only the applied effect moves within the photograph is generated; and the generated moving emoticon can be provided to the messenger or the SNS (S930 to S970).

Embodiments of the present invention may also register emoticons created or stored by the emoticon generation application in the messenger or the SNS through a function such as "favorites", so that they can be used instantly depending on the situation. That is, the user can immediately use not only the emoticons provided by the messenger or the SNS itself but also the emoticons generated and stored by the emoticon generation application of the present invention.

FIG. 8 illustrates a configuration of an emoticon generation apparatus according to an embodiment of the present invention, namely an apparatus for performing the emoticon generation method described with reference to FIGS. 2 to 7.

Here, the emoticon generation apparatus may be included in any device equipped with a camera.

As shown in FIG. 8, an emoticon generation apparatus 800 according to an embodiment of the present invention includes a receiving unit 810, a providing unit 820, an object detection unit 830, a feature point extraction unit 840, a mesh forming unit 850, a preprocessing correction unit 860, a display unit 870, an application unit 880, a generation unit 890, and a storage unit 900. Here, the components indicated by dotted lines perform the preprocessing correction and may be omitted from the apparatus depending on the situation.

The receiving unit 810 receives a signal related to emoticon generation from a messenger, an SNS, or the like; the emoticon generation signal may include theme information, such as the emotional theme of the emoticon to be generated.

The providing unit 820 provides the generated moving emoticon to the entity that requested the emoticon generation, such as the messenger or the SNS.

The object detection unit 830 detects a specific object in real time from a subject photographed by the camera when the emoticon generation signal is received and the camera is automatically executed.

Here, the object detection unit 830 can detect not only a human face but also an animal face or a man-made structure, for example the face of a statue, from the subject. The type of specific object detected by the object detection unit 830 may be predetermined by the service provider, or set by the user through a user settings item as needed.

At this time, the object detection unit 830 can detect the human face by recognizing the contour of the face within the subject.

The feature point extracting unit 840 extracts the feature points of the detected specific object.

In this case, the feature point extraction unit 840 may extract feature points from the entire facial region, including the eyes, nose, mouth, eye line, lips, chin line, and outline, and can obtain the feature points by tracking the photographed face in real time.

The mesh forming unit 850 forms a plurality of meshes using the extracted feature points.

Here, the mesh forming unit 850 can form a plurality of meshes over a specific object, for example the facial region of a person, and can use any method that forms meshes from feature points.

The preprocessing correction unit 860 preprocesses and corrects the specific object by adjusting the formed meshes.

Here, when the specific object is a person, the preprocessing correction unit 860 identifies at least one of the eyes, nose, jaw line, facial contour, mouth, and eye line, and preprocesses and corrects the object by adjusting the meshes corresponding to the identified features, in particular those of the eyes, nose, jaw line, and facial contour. For example, by 3D rendering, corrections such as slimming the jaw, raising the nose, or enlarging the eyes can be performed, thereby preprocessing and correcting the human face.

The preprocessing correction unit 860 can perform preprocessing correction of a specific object by adjusting the meshes according to a degree of correction and a correction region set by the provider of the apparatus or by the user.

At this time, the preprocessing correction unit 860 can preprocess and correct the specific object by adjusting the meshes of the identified features based on correction information preset by the user, for example the regions to be corrected and the degree of correction for each region, or based on a predetermined degree of correction for each facial part.

Specifically, the preprocessing correction unit 860 may perform preprocessing correction of a specific object by modifying the shapes of the meshes, or by adjusting the color, material, and brightness of the texture applied to the meshes.

In this case, the preprocessing correction unit 860 may adjust the color, material, and brightness of the texture applied to the meshes based on the face color, brightness, and texture of the subject being photographed together with a preset degree of correction, or may adjust them based on the preset degree of correction alone, irrespective of the subject's texture.

In addition, the preprocessing correction unit 860 may adjust the color, material, and brightness of the texture applied to the meshes of the identified facial region in consideration of a facial mood or a celebrity face set by the user.
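
Variant 2), texture adjustment, might look like the sketch below: a brightness- and contrast-adjusted copy of the frame is blended back in only over the face region, approximated here by the convex hull of the landmarks rather than the individual mesh triangles. The parameters are illustrative, not values from the patent.

```python
import cv2
import numpy as np

def brighten_face(img, landmarks, alpha=1.1, beta=15):
    """Blend a brightness/contrast-adjusted copy over the face region only."""
    mask = np.zeros(img.shape[:2], np.uint8)
    hull = cv2.convexHull(np.asarray(landmarks, np.int32))
    cv2.fillConvexPoly(mask, hull, 255)              # face region as a hull
    adjusted = cv2.convertScaleAbs(img, alpha=alpha, beta=beta)
    blend = cv2.merge([mask] * 3).astype(np.float32) / 255.0
    return (img * (1 - blend) + adjusted * blend).astype(np.uint8)
```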

The display unit 870 is a means for displaying all data related to the present invention, such as the subject photographed by the camera or the corrected subject, the captured image, the moving emoticon generated using the captured image, and the user interface.

In this case, the display unit 870 is a means for displaying data, for example the touch screen of a smartphone; when the emoticon generation signal is received from the messenger or the SNS and the camera is executed by the emoticon generation application, it can display the subject photographed by the camera.

The application unit 880 applies the selected moving effect to the photographed subject when the user selects one of the various moving effects applicable to the subject photographed and displayed by the camera.

At this time, the application unit 880 can determine the application position of the moving effect selected by the user on the subject, based on the object included in the subject photographed by the camera, and apply the selected moving effect at the determined position. For example, when the moving effect selected by the user is moving rabbit ears to be applied to a person's head, the application unit 880 acquires the position of the person's head in the subject being photographed and applies the rabbit ears at the acquired head position.

At this time, the application unit 880 can also move the position of the applied effect to follow the motion of the subject displayed on the screen, for example motion caused by the movement of the user photographing the subject.

In this case, if emoticon emotion-theme information is included in the emoticon generation signal received by the receiving unit 810, the application unit 880 provides the moving effects corresponding to the emotion-theme information and applies the effect selected from among them to the subject.

The generation unit 890 captures the subject and the moving effect applied to it in response to the photographing command input by the user, and generates a moving emoticon in which only the applied effect moves while the subject remains in a captured state.

At this time, the generation unit 890 generates a captured image by capturing the subject and the applied effect at the moment the capturing command is received, provides the generated captured image to the display unit 870 so that it is displayed, and, when the user selects the button formed in a partial area of the displayed captured image, can generate the moving emoticon from the captured image.

The storage unit 900 stores data related to the present invention, including the emoticon generation application, the captured images generated and stored by the application, the moving emoticons, the photographed pictures, and various effect data.

The system or apparatus described above may be implemented as hardware components, software components, and/or a combination of hardware and software components. For example, the systems, devices, and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. The processing device may execute an operating system (OS) and one or more software applications running on the operating system, and may also access, store, manipulate, process, and generate data in response to the execution of the software. For ease of understanding, the processing device is sometimes described as a single device, but those skilled in the art will recognize that it may comprise a plurality of processing elements and/or a plurality of types of processing elements. For example, the processing device may comprise a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.

The software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired, or command the processing device independently or collectively. The software and/or data may be embodied permanently or temporarily in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or transmitted signal wave, so as to be interpreted by the processing device or to provide instructions or data to it. The software may be distributed over networked computer systems and stored or executed in a distributed manner. The software and data may be stored on one or more computer-readable recording media.

The method according to the embodiments may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be specially designed and configured for the embodiments, or may be known and available to those skilled in the art of computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine code, such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. For example, appropriate results may be achieved even if the described techniques are performed in a different order than described, and/or if components of the described systems, structures, devices, or circuits are combined in a different form or replaced by other components or equivalents.

Therefore, other implementations, other embodiments, and equivalents to the claims are also within the scope of the following claims.

Claims (16)

Executing, in an object detection unit, a camera when the emoticon generation function is selected, and detecting a specific object in real time from a subject photographed by the camera;
Extracting, in a feature point extraction unit, feature points of the detected specific object;
Forming, in a mesh forming unit, a plurality of meshes using the extracted feature points;
Preprocessing and correcting the specific object, in a preprocessing correction unit, by adjusting the formed meshes based on a preset degree of correction and a region to be corrected;
Displaying, in a display unit, the subject including the specific object to which the preprocessing correction has been applied;
Applying, in an application unit, the selected moving effect or sound effect to the displayed subject when the user selects any one of the moving effects or sound effects; and
Capturing, in a generation unit, the subject and the applied moving effect in accordance with a photographing command input by the user when a moving effect is selected, and generating an emoticon in which only the applied moving effect moves while the subject remains in a captured state, or capturing the subject in accordance with the photographing command when a sound effect is selected by the user, and generating an emoticon in which the applied sound effect is performed while the subject remains in a captured state,
A method of generating an emoticon.
The method according to claim 1,
Wherein the step of displaying the subject
Displays the subject by executing the camera through a predetermined emoticon generation application when the emoticon generation function is selected in a messenger service or a social network service (SNS).
3. The method of claim 2, further comprising:
Providing the generated emoticon to the messenger service or the social network service from which the emoticon generation function was selected.
The method according to claim 1,
Wherein the step of applying to the subject
Provides moving effects or sound effects corresponding to a predetermined emotional theme at the time the emoticon generation function is selected, and applies the moving effect or sound effect selected by the user from among the provided effects to the subject.
The method according to claim 1,
Wherein the step of applying to the subject
Determines an application position of the selected moving effect on the subject based on a specific object included in the subject, and applies the selected moving effect at the determined position.
delete
The method according to claim 1,
Wherein the preprocessing correction step
Identifies at least one of the eyes, nose, jaw line, facial contour, mouth, and eye line when the specific object is a person, and performs preprocessing correction of the specific object by adjusting the meshes corresponding to the identified features.
delete
delete
An object detection unit that executes a camera when the emoticon generation function is selected and detects a specific object in real time from a subject photographed by the camera;
A feature point extraction unit for extracting feature points of the detected specific object;
A mesh forming unit that forms a plurality of meshes using the extracted feature points;
A preprocessing correction unit that preprocesses and corrects the specific object by adjusting the formed meshes based on a preset degree of correction and a region to be corrected;
A display unit for displaying the subject including the specific object to which the preprocessing correction has been applied;
An application unit for applying the selected moving effect or sound effect to the displayed subject when the user selects any one of the moving effects or sound effects; and
A generation unit that captures the subject and the applied moving effect in accordance with a photographing command input by the user when a moving effect is selected and generates an emoticon in which only the applied moving effect moves while the subject remains in a captured state, and that captures the subject in accordance with the photographing command when a sound effect is selected by the user and generates an emoticon in which the applied sound effect is performed while the subject remains in a captured state,
An emoticon generating device.
11. The apparatus of claim 10,
Wherein the display unit
Displays the subject by executing the camera through a predetermined emoticon generation application when the emoticon generation function is selected in the messenger service or the social network service (SNS).
12. The apparatus of claim 11, further comprising:
A providing unit for providing the generated emoticon to the messenger service or the social network service from which the emoticon generation function was selected.
13. The apparatus of claim 10,
Wherein the application unit
Provides moving effects or sound effects corresponding to a predetermined emotional theme at the time the emoticon generation function is selected, and applies the moving effect or sound effect selected by the user from among the provided effects to the subject.
14. The apparatus of claim 10,
Wherein the application unit
Determines an application position of the selected moving effect on the subject based on a specific object included in the subject, and applies the selected moving effect at the determined position.
delete
16. The apparatus of claim 10,
Wherein the preprocessing correction unit
Identifies at least one of the eyes, nose, jaw line, facial contour, mouth, and eye line when the specific object is a person, and performs preprocessing correction of the specific object by adjusting the meshes corresponding to the identified features.
KR1020150104658A 2015-07-23 2015-07-23 Method and apparatus for generating emoticon in social network service platform KR101672691B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150104658A KR101672691B1 (en) 2015-07-23 2015-07-23 Method and apparatus for generating emoticon in social network service platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150104658A KR101672691B1 (en) 2015-07-23 2015-07-23 Method and apparatus for generating emoticon in social network service platform

Publications (1)

Publication Number Publication Date
KR101672691B1 true KR101672691B1 (en) 2016-11-07

Family

ID=57529540

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150104658A KR101672691B1 (en) 2015-07-23 2015-07-23 Method and apparatus for generating emoticon in social network service platform

Country Status (1)

Country Link
KR (1) KR101672691B1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018106069A1 (en) * 2016-12-08 2018-06-14 스타십벤딩머신 주식회사 Method and apparatus for producing content
KR20180073330A (en) * 2016-12-22 2018-07-02 주식회사 시어스랩 Method and apparatus for creating user-created sticker, system for sharing user-created sticker
KR101943898B1 (en) * 2017-08-01 2019-01-30 주식회사 카카오 Method for providing service using sticker, and user device
KR20190062005A (en) * 2017-11-28 2019-06-05 강동우 Method for making emoticon during chatting
KR20190106971A (en) * 2017-02-10 2019-09-18 주식회사 시어스랩 Live streaming image generating method and apparatus, live streaming service providing method and apparatus, live streaming system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100062207A (en) * 2008-12-01 2010-06-10 삼성전자주식회사 Method and apparatus for providing animation effect on video telephony call
KR20130082898A (en) * 2011-12-22 2013-07-22 김선미 Method for using user-defined emoticon in community service
KR20140049340A (en) * 2012-10-17 2014-04-25 에스케이플래닛 주식회사 Apparatus and methods of making user emoticon

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100062207A (en) * 2008-12-01 2010-06-10 삼성전자주식회사 Method and apparatus for providing animation effect on video telephony call
KR20130082898A (en) * 2011-12-22 2013-07-22 김선미 Method for using user-defined emoticon in community service
KR20140049340A (en) * 2012-10-17 2014-04-25 에스케이플래닛 주식회사 Apparatus and methods of making user emoticon

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018106069A1 (en) * 2016-12-08 2018-06-14 스타십벤딩머신 주식회사 Method and apparatus for producing content
KR20180065656A (en) * 2016-12-08 2018-06-18 스타십벤딩머신 주식회사 Apparatus and method for creating contents
KR101873897B1 (en) * 2016-12-08 2018-08-02 스타십벤딩머신 주식회사 Apparatus and method for creating contents
KR20180073330A (en) * 2016-12-22 2018-07-02 주식회사 시어스랩 Method and apparatus for creating user-created sticker, system for sharing user-created sticker
KR101944112B1 (en) * 2016-12-22 2019-04-17 주식회사 시어스랩 Method and apparatus for creating user-created sticker, system for sharing user-created sticker
KR20190106971A (en) * 2017-02-10 2019-09-18 주식회사 시어스랩 Live streaming image generating method and apparatus, live streaming service providing method and apparatus, live streaming system
KR102053128B1 (en) 2017-02-10 2019-12-06 주식회사 시어스랩 Live streaming image generating method and apparatus, live streaming service providing method and apparatus, live streaming system
KR101943898B1 (en) * 2017-08-01 2019-01-30 주식회사 카카오 Method for providing service using sticker, and user device
KR20190062005A (en) * 2017-11-28 2019-06-05 강동우 Method for making emoticon during chatting
KR102063728B1 (en) * 2017-11-28 2020-01-08 강동우 Method for making emoticon during chatting

Similar Documents

Publication Publication Date Title
US10559062B2 (en) Method for automatic facial impression transformation, recording medium and device for performing the method
EP3457683B1 (en) Dynamic generation of image of a scene based on removal of undesired object present in the scene
CN108377334B (en) Short video shooting method and device and electronic terminal
KR101672691B1 (en) Method and apparatus for generating emoticon in social network service platform
KR101655078B1 (en) Method and apparatus for generating moving photograph
US11176355B2 (en) Facial image processing method and apparatus, electronic device and computer readable storage medium
CN113287118A (en) System and method for face reproduction
JPWO2018047687A1 (en) Three-dimensional model generation device and three-dimensional model generation method
JP2022064987A (en) Constitution and realization of interaction between digital medium and observer
KR101831516B1 (en) Method and apparatus for generating image using multi-stiker
CN109997171B (en) Display device and recording medium storing program
KR101711684B1 (en) 3d avatars output device and method
CN113973190A (en) Video virtual background image processing method and device and computer equipment
US20160180572A1 (en) Image creation apparatus, image creation method, and computer-readable storage medium
JP2018113616A (en) Information processing unit, information processing method, and program
JP5949030B2 (en) Image generating apparatus, image generating method, and program
CN111787354B (en) Video generation method and device
CN113709545A (en) Video processing method and device, computer equipment and storage medium
US11087514B2 (en) Image object pose synchronization
KR20210056944A (en) Method for editing image
US20230209182A1 (en) Automatic photography composition recommendation
KR20160128900A (en) Method and apparatus for generating moving photograph based on moving effect
US11770604B2 (en) Information processing device, information processing method, and information processing program for head-related transfer functions in photography
KR101774913B1 (en) Method and apparatus for displaying images using pre-processing
JP6889191B2 (en) Game programs and game equipment

Legal Events

Date Code Title Description
A201 Request for examination
FPAY Annual fee payment

Payment date: 20190807

Year of fee payment: 4