KR101672691B1 - Method and apparatus for generating emoticon in social network service platform - Google Patents
- Publication number
- KR101672691B1 (application KR1020150104658A)
- Authority
- KR
- South Korea
- Prior art keywords
- emoticon
- subject
- moving
- effect
- user
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/01—Social networking
Abstract
Description
The present invention relates to the generation of emoticons, and more particularly to a method and apparatus for generating, in conjunction with a messenger service or a social network service (SNS), an emoticon that includes a moving effect or a sound effect for use in the messenger or the SNS.
The cinemagraph, a concept reminiscent of the moving photographs in J. K. Rowling's Harry Potter series, was first introduced in 2011 by New York photographer Jamie Beck and graphic artist Kevin Burg. A cinemagraph is an intermediate form between photograph and video, and its distinguishing feature is that only a part of the picture is played back indefinitely.
Because a cinemagraph animates only a part of a still picture, producing one requires a plurality of photographs of the subject, for example pictures in which part of the subject is stationary and pictures in which that part moves, which must then be edited into a moving picture.
In other words, a cinemagraph produces a moving picture by animating only a specific object included in the subject, without any added effect.
However, because a cinemagraph uses a plurality of photographs of a subject and moves and edits only a specific object, it is complicated to produce; it is difficult for an ordinary person without expert knowledge to make one, and it is correspondingly difficult to create a moving emoticon in this way.
Accordingly, there is a need for a method that can easily generate moving emoticons and the like from photographs.
Embodiments of the present invention provide an emoticon generation method and apparatus capable of generating an emoticon in real time by applying a moving effect or a sound effect to a subject in conjunction with a messenger or an SNS, and providing the generated emoticon to the messenger or the SNS.
In particular, embodiments of the present invention execute an emoticon generation application interlocked with a messenger or an SNS when an emoticon generation function for generating an emoticon in real time is selected in the messenger or the SNS, and thereby provide a moving emoticon generation method and apparatus capable of generating and providing, in real time, an emoticon in which only the applied effect moves, or only the sound is performed, while the subject to which the effect is applied is kept in a captured state.
A method of generating an emoticon according to an embodiment of the present invention includes: displaying a subject by executing a camera when an emoticon generation function is selected; applying a moving effect or a sound effect selected by the user from among the provided effects to the displayed subject; and, when a moving effect is selected by the user, capturing the subject and the applied moving effect according to a photographing command and generating an emoticon in which only the applied effect moves while the subject is kept in a captured state, or, when a sound effect is selected, capturing the subject according to the photographing command and generating an emoticon in which the applied sound effect is performed while the subject is kept in a captured state.
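The two-branch flow of this claim can be sketched in a few lines of Python. The data shapes and names below are illustrative assumptions for exposition, not part of the patented apparatus:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Emoticon:
    subject_frame: tuple          # the captured (frozen) subject layer
    moving_effect: Optional[str]  # effect that keeps animating, if any
    sound_effect: Optional[str]   # sound performed on playback, if any

def generate_emoticon(subject_frame, effect=None, sound=None):
    """Capture the subject once; afterwards only the effect moves or the sound plays."""
    if effect is None and sound is None:
        raise ValueError("select a moving effect or a sound effect first")
    return Emoticon(subject_frame=subject_frame,
                    moving_effect=effect,
                    sound_effect=sound)
```

For example, `generate_emoticon(frame, effect="rabbit_ears")` yields an emoticon whose subject layer never changes while the ears keep animating.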
The step of displaying the subject may display the subject by executing the camera through a predetermined emoticon generation application when the emoticon generation function is selected in a messenger service or a social network service (SNS).
Further, the method of generating an emoticon according to an exemplary embodiment of the present invention may further include providing the generated moving emoticon to the messenger service or the social network service selected by the emoticon generating function.
Applying the effect to the subject may include providing moving effects or sound effects corresponding to a predetermined emotional theme at the time the emoticon generation function is selected, and applying the moving effect or sound effect the user selects from among those provided to the subject.
The applying to the subject may determine an application position of the selected moving effect on the subject based on a specific object included in the subject, and apply the selected moving effect at the determined position.
Further, a method of generating an emoticon according to an exemplary embodiment of the present invention includes: detecting a specific object in real time from the subject photographed by the camera when the emoticon generation function is selected; extracting feature points of the detected specific object; forming a plurality of meshes using the extracted feature points; and performing preprocessing correction of the specific object by adjusting the plurality of meshes, wherein the step of displaying the subject may display the subject including the specific object on which the preprocessing correction has been performed.
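The four preprocessing steps (detect, extract feature points, form meshes, adjust) compose naturally into a pipeline. The sketch below stubs out the detector and landmark extractor, since the patent does not fix particular algorithms; all function bodies are assumptions:

```python
from typing import List, Tuple

Point = Tuple[float, float]
Triangle = Tuple[Point, Point, Point]

def detect_face(frame) -> bool:
    # placeholder detector: assume a face is present when the frame is non-empty
    return bool(frame)

def extract_feature_points(frame) -> List[Point]:
    # placeholder landmarks (eye corners, nose tip, mouth corners)
    return [(30, 40), (70, 40), (50, 60), (35, 80), (65, 80)]

def form_meshes(points: List[Point]) -> List[Triangle]:
    # join consecutive landmark triples into triangles; a real implementation
    # would use a proper triangulation such as Delaunay
    return [(points[i], points[i + 1], points[i + 2])
            for i in range(len(points) - 2)]

def preprocess(frame):
    """Detect the face, build its mesh, and hand both to the correction step."""
    if not detect_face(frame):
        return frame, []  # nothing to correct
    meshes = form_meshes(extract_feature_points(frame))
    return frame, meshes
```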
The preprocessing correction step may identify at least one of the eyes, nose, jaw line, facial contour, mouth, and eye line when the specific object is a person, and perform preprocessing correction of the specific object by adjusting the meshes corresponding to the identified features.
According to another aspect of the present invention, a method for generating an emoticon includes: displaying a picture selected from among a plurality of already stored pictures when the emoticon generation function is selected; applying a moving effect or a sound effect selected by the user to the selected picture; and generating an emoticon in which only the applied moving effect moves, or in which the applied sound effect is performed, on the selected picture.
The step of displaying one of the photographs may provide the plurality of pictures through a predetermined emoticon generation application when the emoticon generation function is selected in a messenger service or a social network service (SNS), and display the picture the user selects from among them.
According to an embodiment of the present invention, an apparatus for generating an emoticon includes: a display unit for displaying a subject by executing a camera when an emoticon generation function is selected; an application unit for applying a moving effect or a sound effect selected by the user to the displayed subject; and a generation unit that, when a moving effect is selected by the user, captures the subject and the applied moving effect according to a photographing command and generates an emoticon in which only the applied moving effect moves while the subject is kept in a captured state, and that, when a sound effect is selected, captures the subject according to the photographing command and generates an emoticon in which the applied sound effect is performed while the subject is kept in a captured state.
When the emoticon generation function is selected in the messenger service or the social network service (SNS), the display unit displays the subject by executing the camera by a predetermined emoticon generation application.
Furthermore, the emoticon generation apparatus according to an embodiment of the present invention may further include a provision unit for providing the generated emoticons to the messenger service or the social network service selected by the emoticon generation function.
The application unit may provide moving effects or sound effects corresponding to a predetermined emotional theme at the time the emoticon generation function is selected, and apply the moving effect or sound effect the user selects from among those provided to the subject.
The application unit may determine an application position of the selected moving effect in the subject based on the specific object included in the subject and apply the selected moving effect to the determined application position.
Further, an emoticon generation apparatus according to an exemplary embodiment of the present invention includes: an object detection unit that detects a specific object in real time from the subject photographed by the camera when the emoticon generation function is selected; a feature point extraction unit that extracts feature points of the detected specific object; a mesh forming unit that forms a plurality of meshes using the extracted feature points; and a preprocessing correction unit that performs preprocessing correction of the specific object by adjusting the formed meshes, wherein the display unit can display the subject including the specific object on which the preprocessing correction has been performed.
The preprocessing correction unit may identify at least one of an eye, a nose, a jaw line, a facial contour, a mouth, and an eye line when the specific object is a person, and perform preprocessing correction of the specific object by adjusting the meshes corresponding to the identified features.
Embodiments of the present invention allow a user without expert knowledge to generate, in real time, an emoticon that includes a moving effect or a sound effect, by applying the effect to a subject in conjunction with a messenger or an SNS.
Embodiments of the present invention apply various kinds of moving effects or sound effects to a subject by using an emoticon generation application linked to a messenger or an SNS, and can thus generate emoticons having a variety of effects or sounds.
Embodiments of the present invention can be applied to any device equipped with a camera, for example a smartphone, on which an application linked to a messenger or an SNS can be installed, and can thereby provide the amusement of creating emoticons having various effects or sounds.
FIG. 1 shows an example for explaining the present invention.
FIG. 2 is a flowchart illustrating an emoticon generation method according to an exemplary embodiment of the present invention.
FIG. 3 is an operational flowchart of an embodiment of step S230 shown in FIG. 2.
FIGS. 4 to 7 illustrate examples for explaining the method according to the present invention.
FIG. 8 shows a configuration of an emoticon generating apparatus according to an embodiment of the present invention.
FIG. 9 is a flowchart illustrating an emoticon generation method according to another embodiment of the present invention.
Hereinafter, embodiments according to the present invention will be described in detail with reference to the accompanying drawings. However, the present invention is neither limited to nor restricted by these embodiments. The same reference numerals shown in the drawings denote the same members.
The present invention generates an emoticon in real time through photographing, using an emoticon generation application linked to a messenger or an SNS: it applies a moving effect or a sound effect to a subject, generates an emoticon in which only the moving effect moves, or in which the sound effect is performed, while the subject is kept in a captured state, and provides the generated emoticon to the messenger or the SNS.
FIG. 1 shows an example for explaining the present invention.
Referring to FIG. 1, the present invention can be applied to a user terminal, for example a smartphone, in which an emoticon generation application linked to a messenger or an SNS is installed.
That is, when any one of the provided moving effects or sound effects is selected, the emoticon generation application applies the selected effect to the subject photographed and displayed by the camera. When a moving effect is selected by the user, the application captures the subject and the moving effect according to a shooting command and generates an emoticon in which only the applied moving effect moves while the subject is kept in a captured state; when a sound effect is selected, it captures the subject according to the shooting command and generates an emoticon in which the applied sound effect is performed while the subject is kept in a captured state.
Here, the signal related to emoticon generation may include emotion information for the emotional theme of the emoticon to be generated, such as sadness, joy, absurdity, smile, laughter, or astonishment, and the application may provide only the moving effects or sound effects corresponding to that emotion information, so that the user selects a moving effect or a sound effect matching the emotion information.
At this time, the subject may include various objects such as a person, a building, or an automobile, and the position at which the moving effect selected by the user is applied can be determined based on the selected effect information and the object information included in the subject being photographed.
Hereinafter, for convenience of explanation, it is assumed that the present invention generates emoticons using the moving effects, among the moving effects and sound effects, in a smartphone equipped with a camera. Of course, it is apparent to those skilled in the art that the present invention is not limited to smartphones and can be applied to any device on which it can be mounted.
FIG. 2 is a flowchart illustrating an emoticon generation method according to an exemplary embodiment of the present invention.
Referring to FIG. 2, in the method of generating an emoticon according to an exemplary embodiment of the present invention, a user accesses a messenger service or an SNS using a user terminal, for example a smartphone. When the emoticon generation function is clicked in the messenger or the SNS, for example during a conversation with another person or on an SNS page, an emoticon generation signal is received by the smartphone, so that the emoticon generation application can be executed automatically (S210).
Here, the emoticon generation application can generate a moving emoticon by photographing a subject with the camera and applying the provided moving effects.
At this time, the emoticon generation signal in step S210 may include emotion theme information of the emoticon to be generated, for example, emotion information such as sadness, joy, suspense, smile, laughter, astonishment, or other theme information related to the emoticon.
The camera of the user terminal is executed by the emoticon generation application, and the subject photographed by the executed camera, including an object such as a person, is displayed on the screen (S220, S230).
Various filters may be applied to the subject displayed at step S230 according to the user's selection, or various functions of the camera for photographing the subject may be applied.
If the subject is displayed on the screen in step S230, a moving effect or moving sticker to be applied to the displayed subject is selected based on the user input (S240).
A moving effect or moving sticker applied to a subject is provided by the application implementing the method of the present invention, and may include various effects such as a moving rabbit-ear effect, a moving cloud effect, a moving heart effect, and an upwardly floating heart-balloon effect.
At this time, all of the moving effects or moving stickers provided in the emoticon generation application may be offered to the user; however, when emotion theme information is included in the emoticon generation signal, only the effects or stickers corresponding to that theme information may be provided. That is, the user can select a desired effect or sticker from among the effects or stickers corresponding to the emotion theme information.
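Filtering the effect catalogue by the emotion theme carried in the generation signal amounts to a simple lookup. The catalogue contents below are invented for illustration:

```python
# hypothetical catalogue mapping each moving effect to its emotional theme
EFFECTS = {
    "rabbit_ears": "joy",
    "heart_balloon": "joy",
    "rain_cloud": "sadness",
    "sweat_drop": "astonishment",
}

def effects_for_signal(theme=None):
    """Return all effects, or only those matching the theme in the signal."""
    if theme is None:
        return sorted(EFFECTS)
    return sorted(name for name, t in EFFECTS.items() if t == theme)
```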
In step S240, when a moving effect to be applied is selected by user input, the selected moving effect is applied to the subject displayed on the screen, and it is determined whether a shooting command according to user input has been received (S250, S260).
Step S250 may determine the application position of the moving effect selected by the user based on the object included in the subject photographed by the camera, and apply the selected moving effect at the determined position. For example, if the moving effect selected by the user is the effect of moving rabbit ears on a person's head, the position of the person's head is acquired from the subject being photographed, and the rabbit ears are applied at the acquired head position.
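The anchoring step can be sketched as a lookup from effect to body part to detected position. The anchor table below is an assumption for illustration; returning `None` corresponds to the case where the effect cannot be applied to the displayed subject:

```python
# assumed table of which detected body part each effect attaches to
EFFECT_ANCHORS = {"rabbit_ears": "head", "heart_balloon": "above_head"}

def anchor_for_effect(effect, detected_objects):
    """detected_objects maps part names to (x, y) positions in the frame."""
    part = EFFECT_ANCHORS.get(effect)
    if part is None or part not in detected_objects:
        return None  # effect cannot be applied to this subject
    return detected_objects[part]
```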
In step S250, when motion occurs in the subject displayed on the screen, for example because the user photographing the subject moves, the position at which the effect is applied may be changed to follow that motion. Of course, when the object to which the selected effect applies is not present in the subject displayed on the screen, the effect may not be applied, and the user may be informed that the effect cannot be applied.
As a result of the determination in step S260, when a photographing command is received by user input, a captured image is generated by capturing the subject displayed on the screen together with the moving effect applied to the subject, and a moving emoticon in which only the moving effect moves while the captured image is kept in a captured state is generated (S270, S280).
At this time, the captured image generated in step S270 means an image in which both the subject and the moving effect are captured; the captured image may be displayed on the screen and may be shared through at least one predetermined application, for example a messenger service such as Line or KakaoTalk, or an SNS such as Band.
In step S280, when a moving-emoticon creation button formed in a partial area of the captured image displayed in step S270 is selected by the user, a moving emoticon in which only the moving effect applied to the captured image moves is generated.
The moving emoticon generated in step S280 may be stored in the user terminal by user input; when stored, it may be saved as a file, for example a GIF (Graphics Interchange Format) file.
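Because only the effect layer changes from frame to frame, the animation can be built by compositing the same captured subject under a bobbing effect sprite; the resulting frame sequence would then be written out as an animated GIF, for example with an imaging library such as Pillow. The frame representation below is a simplified assumption:

```python
def build_frames(subject, effect_sprite, anchor, n_frames=8, bounce=3):
    """Compose frames in which the subject layer never changes while the
    effect sprite bobs up and down around its anchor position."""
    frames = []
    ax, ay = anchor
    for i in range(n_frames):
        dy = bounce if i % 2 else -bounce   # simple two-phase bob
        frames.append({"subject": subject,   # same captured layer each frame
                       "effect": effect_sprite,
                       "effect_pos": (ax, ay + dy)})
    return frames
```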
At this time, the moving emoticon generated in step S280 may be shared through at least one predetermined application, for example a messenger service such as Line or KakaoTalk, an SNS such as Band, or the like.
When the moving emoticon is generated in step S280, the generated moving emoticon is provided to the messenger or the SNS that requested the emoticon generation, so that a moving emoticon over the user's own face, for example, can be used immediately (S290).
Also, the moving emoticon generation method according to the embodiment of the present invention may preprocess and correct the subject to be photographed in real time, and generate the moving emoticon using the corrected subject.
This will be described with reference to FIG.
FIG. 3 shows an operational flow diagram of an embodiment of step S230 shown in FIG.
Referring to FIG. 3, the step S230 of displaying the subject on the screen detects a predetermined specific object, for example a human face, from the photographed subject when the subject is photographed by the camera (S310).
The specific object detected in step S310 is not limited to a human face and may include an animal face or a man-made structure, for example the face of a statue; the type of specific object to be detected may be determined by the service provider, and may be set by the user through a user setting item as necessary.
At this time, step S310 can detect the human face by recognizing the contour of the human face in the subject.
When a specific object is detected in step S310, the feature points of the detected specific object are extracted, and a plurality of meshes are formed using the extracted feature points (S320, S330).
The feature points to be extracted are points corresponding to the characteristics of the specific object; they can be extracted from the entire face region, including the eyes, nose, mouth, eye line, lips, jaw line, and facial contour, and can be extracted by tracking the photographed face in real time.
Step S330 may form a plurality of meshes for the specific object, for example the face portion of a person, and may use any method of forming meshes from feature points.
If a plurality of meshes for the specific object are formed in step S330, the formed meshes are adjusted to perform preprocessing correction on the specific object (S340).
Here, when the specific object is a person, step S340 identifies at least one of the eyes, nose, jaw line, facial contour, mouth, and eye line, and performs preprocessing correction of the specific object by adjusting the meshes corresponding to the identified features, for example by performing 3D rendering on the eyes, nose, jaw line, and facial contour to apply corrections such as slimming the jaw, raising the nose, or enlarging the eyes.
In step S340, the degree of correction and the region to be corrected may be preset by the service provider, but the present invention is not limited thereto, and they may also be set by the user. Thus, step S340 can perform preprocessing correction of the human face by adjusting the meshes for the face based on the preset degree of correction and correction region.
That is, in step S340, the specific object may be preprocessed by adjusting the meshes for the identified features based on correction information preset by the user, for example a correction region and a degree of correction for each region, or based on a per-region correction level preset by the service provider.
One method of preprocessing the specific object in step S340 is to modify the shape of the plurality of meshes; another is to adjust the color, material, or brightness of the meshes. At this time, the meshes to be preprocessed may be those for at least one predetermined region among the regions constituting the specific object.
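A minimal sketch of the mesh-shape adjustment, assuming vertex coordinates as (x, y) pairs: scaling a region's vertices about their centroid enlarges it (factor > 1, e.g. the eyes) or shrinks it (factor < 1, e.g. the jaw line):

```python
def scale_region(vertices, factor):
    """Scale mesh vertices about their centroid to enlarge or slim a region."""
    cx = sum(x for x, _ in vertices) / len(vertices)
    cy = sum(y for _, y in vertices) / len(vertices)
    return [(cx + (x - cx) * factor, cy + (y - cy) * factor)
            for x, y in vertices]
```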
If the specific object of the subject has been preprocessed and corrected in step S340, the subject including the corrected specific object is displayed on the screen (S350).
As described above, by preprocessing the subject before it is photographed, the present invention allows a person to be photographed with, for example, a face that looks better than their real face.
A method of generating an emoticon according to an embodiment of the present invention will now be described in detail with reference to FIGS. 4 to 7.
FIGS. 4 to 7 illustrate examples for explaining the method according to the present invention.
Referring to FIGS. 4 to 7, in the method of generating a moving emoticon according to an exemplary embodiment of the present invention, when the user clicks the emoticon generation function to generate a moving emoticon in real time while using a messenger or an SNS on a user terminal, for example a smartphone, the emoticon generation signal is received by the smartphone, the emoticon generation application is automatically executed, the camera of the user terminal is executed by the application, and the photographed subject is displayed on the screen.
At this time, various filters or camera functions may be applied to the displayed subject according to the user's selection.
As shown in FIG. 4, when an effect to be applied is selected by user input, the selected moving effect is applied to the subject displayed on the screen.
Here, various moving effects or moving stickers may be provided, such as a moving rabbit-ear effect, a moving cloud effect, or a moving heart effect.
For example, when the moving rabbit-ear effect is selected, the position of the person's head is acquired from the subject being photographed, and the moving rabbit ears are applied at the acquired head position.
The position at which the effect is applied may be changed to follow any motion of the subject displayed on the screen.
As described above, when the moving effect is applied to the subject, the subject can be photographed with the effect applied.
As shown in FIG. 5, when a shooting command is received by user input while the moving effect is applied to the subject, the image displayed on the screen is captured to generate a captured image.
At this time, since the generated captured image is an image captured on the screen at the time the shooting command was received, the moving rabbit ears applied to the subject stop moving and are likewise in a captured state.
As shown in FIG. 6, when the captured image is generated, a moving-emoticon creation button may be displayed in a partial area of the captured image, and when this button is selected by the user, a moving emoticon in which only the applied moving effect moves is generated.
On the other hand, if the user selects another function, for example sharing or storing, the captured image itself may be shared or stored.
When the moving emoticon is generated, as shown in FIG. 7, the generated moving emoticon is provided to the messenger or the SNS that requested the emoticon generation, so that a moving emoticon over the user's own face, for example, can be used immediately.
As described above, the emoticon generation method according to an embodiment of the present invention generates, in real time, a moving emoticon that includes a moving effect applied to a subject, and can provide the generated emoticon to a messenger or an SNS.
In addition, since the method according to the embodiment of the present invention can generate animated emoticons by applying various moving effects, any ordinary user without expert knowledge can make moving emoticons.
Furthermore, the emoticon generation method according to the present invention can generate a moving emoticon not only by applying a moving effect when a subject is photographed, but also by applying a moving effect to an already stored photograph of a subject. That is, according to another embodiment of the present invention, as shown in FIG. 9, when the user clicks the emoticon generation function in a messenger or an SNS to generate a moving emoticon in real time through the emoticon generation application installed in the smartphone, the emoticon generation signal is received by the smartphone, the emoticon generation application is automatically executed, and the application provides the user with the pictures it has stored in advance (S910, S920).
At this time, the emoticon generation signal in step S910 may include emotional theme information of the emoticon to be generated, for example, emotion information such as sadness, joy, suspense, smile, laughter, astonishment or other theme information related to the emoticon.
Then, one of the photographs already stored is selected by the user in step S920 and displayed on the screen; the user applies a moving effect to the selected photograph by selecting one of the plurality of moving effects; a moving emoticon in which only the applied effect moves within the photograph is generated; and the generated moving emoticon can be provided to the messenger or the SNS (S930 to S970).
Embodiments of the present invention may also use emoticons created or stored by an emoticon generation application in the messenger or the SNS as a function such as "favorites" to use instantly depending on the situation. That is, the user can immediately use the emoticons generated / stored by the emoticon generation application of the present invention as well as emoticons provided by the messenger or the SNS itself.
FIG. 8 illustrates a configuration of an emoticon generation apparatus according to an embodiment of the present invention, and illustrates an apparatus for performing the emoticon generation method described in FIGS. 2 to 7.
Here, the emoticon generation apparatus may be included in any device equipped with a camera.
Referring to FIG. 8, the emoticon generation apparatus according to an embodiment of the present invention includes a receiving unit, a providing unit, a display unit, an application unit, and a generation unit, and may further include, for preprocessing correction, an object detection unit, a feature point extraction unit, a mesh forming unit, and a preprocessing correction unit.
The receiving unit receives the emoticon generation signal generated when the emoticon generation function is selected in the messenger or the SNS. The received signal may include emotion theme information of the emoticon to be generated, for example emotion information such as sadness, joy, smile, laughter, or astonishment.
The display unit displays the subject by executing the camera through the emoticon generation application when the emoticon generation function is selected.
The object detection unit detects a specific object, for example a human face, in real time from the subject photographed by the camera. The feature point extraction unit extracts feature points of the detected specific object, the mesh forming unit forms a plurality of meshes using the extracted feature points, and the preprocessing correction unit performs preprocessing correction of the specific object by adjusting the formed meshes, so that the display unit can display the subject including the corrected specific object.
The application unit provides the moving effects or sound effects; when emotion theme information is included in the received emoticon generation signal, it may provide only the effects corresponding to that theme information. The application unit applies the moving effect or sound effect selected by the user to the displayed subject, and determines the application position of a selected moving effect based on the specific object included in the subject.
When a moving effect is selected by the user, the generation unit captures the subject and the applied moving effect according to a photographing command and generates an emoticon in which only the applied moving effect moves while the subject is kept in a captured state; when a sound effect is selected, it captures the subject according to the photographing command and generates an emoticon in which the applied sound effect is performed while the subject is kept in a captured state. The generated moving emoticon may be stored as a file, for example a GIF file.
The providing unit provides the generated emoticon to the messenger service or the social network service selected by the emoticon generation function.
The system or apparatus described above may be implemented as a hardware component, a software component, and / or a combination of hardware components and software components. For example, the systems, devices, and components described in the embodiments may be implemented in various forms such as, for example, a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable array ), A programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. The processing device may execute an operating system (OS) and one or more software applications running on the operating system. The processing device may also access, store, manipulate, process, and generate data in response to execution of the software. For ease of understanding, the processing apparatus may be described as being used singly, but those skilled in the art will recognize that the processing apparatus may have a plurality of processing elements and / As shown in FIG. For example, the processing unit may comprise a plurality of processors or one processor and one controller. Other processing configurations are also possible, such as a parallel processor.
The software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or may command the processing device independently or collectively. The software and/or data may be embodied, permanently or temporarily, in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or in a transmitted signal wave, so as to be interpreted by the processing device or to provide instructions or data to the processing device. The software may be distributed over networked computer systems and stored or executed in a distributed manner. The software and data may be stored on one or more computer-readable recording media.
The method according to the embodiments may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be specially designed and configured for the embodiments, or may be known and available to those skilled in the art of computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include not only machine code, such as that produced by a compiler, but also high-level language code that can be executed by a computer using an interpreter. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to the disclosed embodiments. For example, appropriate results may be achieved even if the described techniques are performed in an order different from that described, and/or if components of the described systems, structures, devices, and circuits are combined or coupled in a form different from that described, or are replaced or substituted by other components or their equivalents.
Therefore, other implementations, other embodiments, and equivalents to the claims are also within the scope of the following claims.
Claims (16)
A method of generating an emoticon, the method comprising:
Extracting, by a feature point extracting unit, feature points of the detected specific object;
Forming, by a mesh forming unit, a plurality of meshes using the extracted feature points;
Preprocessing, by a preprocessing correction unit, the specific object by adjusting the plurality of formed meshes on the basis of a preset degree of correction and a region to be corrected;
Displaying, on a display unit, the subject including the specific object subjected to the preprocessing correction;
Applying, by an application unit, the selected moving effect or sound effect to the displayed subject when any one of the provided moving effects or sound effects is selected by the user; and
Generating, by a generating unit, when a moving effect is selected by the user, an emoticon in which the subject and the applied moving effect are captured in accordance with a photographing command input by the user and only the applied moving effect moves while the subject remains in a captured state, and, when a sound effect is selected by the user, an emoticon in which the subject is captured in accordance with the photographing command and the applied sound effect is performed while the subject remains in the captured state.
The step of displaying the subject
When the emoticon generation function is selected in a messenger service or a social network service (SNS), executing the camera through a predetermined emoticon generation application, thereby displaying the subject.
Further comprising: providing the generated emoticon to the messenger service or the social network service in which the emoticon generation function was selected.
The step of applying to the subject
Providing moving effects or sound effects corresponding to a predetermined emotional theme when the emoticon generation function is selected, and applying the moving effect or sound effect selected by the user from among the provided moving effects or sound effects to the subject.
The step of applying to the subject
Determining an application position of the selected moving effect on the subject based on a specific object included in the subject, and applying the selected moving effect to the determined application position.
The preprocessing correcting step
Identifying at least one of an eye, a nose, a jaw line, a facial contour, a mouth, and an eye line when the specific object is a person, and performing the preprocessing correction by adjusting, among the plurality of meshes, the meshes corresponding to the identified at least one.
An apparatus for generating an emoticon, the apparatus comprising:
A feature point extracting unit for extracting feature points of the detected specific object;
A mesh forming unit that forms a plurality of meshes using the extracted feature points;
A preprocessing correction unit that preprocesses the specific object by adjusting the plurality of formed meshes on the basis of a preset degree of correction and a region to be corrected;
A display unit for displaying the subject including the specific object corrected by the preprocessing;
An application unit that applies the selected moving effect or sound effect to the displayed subject when any one of the provided moving effects or sound effects is selected by the user; and
A generating unit that, when a moving effect is selected by the user, captures the subject and the applied moving effect in accordance with a photographing command input by the user and generates an emoticon in which only the applied moving effect moves while the subject remains in a captured state, and that, when a sound effect is selected by the user, captures the subject in accordance with the photographing command and generates an emoticon in which the applied sound effect is performed while the subject remains in the captured state.
The display unit
Wherein, when the emoticon generation function is selected in the messenger service or the social network service (SNS), the camera is executed by a predetermined emoticon generation application, thereby displaying the subject.
Further comprising a providing unit that provides the generated emoticon to the messenger service or the social network service in which the emoticon generation function was selected.
The application unit
Provides moving effects or sound effects corresponding to a predetermined emotional theme when the emoticon generation function is selected, and applies the moving effect or sound effect selected by the user from among the provided moving effects or sound effects to the subject.
The application unit
Determines an application position of the selected moving effect on the subject based on a specific object included in the subject, and applies the selected moving effect to the determined application position.
The preprocessing correction unit
Identifies at least one of an eye, a nose, a jaw line, a facial contour, a mouth, and an eye line when the specific object is a person, and performs the preprocessing correction by adjusting, among the plurality of meshes, the meshes corresponding to the identified at least one.
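The claims above determine the application position of a selected moving effect from a specific object detected in the subject (e.g., the description's rabbit-ears effect placed on the head). One minimal way to sketch such a placement rule is below; the bounding-box heuristic and the effect names (`rabbit_ears`, `speech_bubble`) are illustrative assumptions, not the patented method.

```python
# Illustrative placement heuristic; effect names and box math are
# assumptions for exposition, not the patented method.
def effect_anchor(face_box, effect_name):
    """face_box = (left, top, right, bottom) of the detected face."""
    left, top, right, bottom = face_box
    cx = (left + right) / 2
    if effect_name == "rabbit_ears":    # ears sit above the head
        return (cx, top)
    if effect_name == "speech_bubble":  # bubble sits beside the face
        return (right, (top + bottom) / 2)
    return (cx, (top + bottom) / 2)     # default: face centre
```

For example, `effect_anchor((0, 10, 100, 110), "rabbit_ears")` anchors the ears at the horizontal centre of the face box, at its top edge.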
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150104658A KR101672691B1 (en) | 2015-07-23 | 2015-07-23 | Method and apparatus for generating emoticon in social network service platform |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150104658A KR101672691B1 (en) | 2015-07-23 | 2015-07-23 | Method and apparatus for generating emoticon in social network service platform |
Publications (1)
Publication Number | Publication Date |
---|---|
KR101672691B1 (en) | 2016-11-07 |
Family
ID=57529540
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150104658A KR101672691B1 (en) | 2015-07-23 | 2015-07-23 | Method and apparatus for generating emoticon in social network service platform |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101672691B1 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20100062207A (en) * | 2008-12-01 | 2010-06-10 | 삼성전자주식회사 | Method and apparatus for providing animation effect on video telephony call |
KR20130082898A (en) * | 2011-12-22 | 2013-07-22 | 김선미 | Method for using user-defined emoticon in community service |
KR20140049340A (en) * | 2012-10-17 | 2014-04-25 | 에스케이플래닛 주식회사 | Apparatus and methods of making user emoticon |
- 2015-07-23: Application KR1020150104658A filed in KR; granted as KR101672691B1 (status: active, Search and Examination)
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018106069A1 (en) * | 2016-12-08 | 2018-06-14 | 스타십벤딩머신 주식회사 | Method and apparatus for producing content |
KR20180065656A (en) * | 2016-12-08 | 2018-06-18 | 스타십벤딩머신 주식회사 | Apparatus and method for creating contents |
KR101873897B1 (en) * | 2016-12-08 | 2018-08-02 | 스타십벤딩머신 주식회사 | Apparatus and method for creating contents |
KR20180073330A (en) * | 2016-12-22 | 2018-07-02 | 주식회사 시어스랩 | Method and apparatus for creating user-created sticker, system for sharing user-created sticker |
KR101944112B1 (en) * | 2016-12-22 | 2019-04-17 | 주식회사 시어스랩 | Method and apparatus for creating user-created sticker, system for sharing user-created sticker |
KR20190106971A (en) * | 2017-02-10 | 2019-09-18 | 주식회사 시어스랩 | Live streaming image generating method and apparatus, live streaming service providing method and apparatus, live streaming system |
KR102053128B1 (en) | 2017-02-10 | 2019-12-06 | 주식회사 시어스랩 | Live streaming image generating method and apparatus, live streaming service providing method and apparatus, live streaming system |
KR101943898B1 (en) * | 2017-08-01 | 2019-01-30 | 주식회사 카카오 | Method for providing service using sticker, and user device |
KR20190062005A (en) * | 2017-11-28 | 2019-06-05 | 강동우 | Method for making emoticon during chatting |
KR102063728B1 (en) * | 2017-11-28 | 2020-01-08 | 강동우 | Method for making emoticon during chatting |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10559062B2 (en) | Method for automatic facial impression transformation, recording medium and device for performing the method | |
EP3457683B1 (en) | Dynamic generation of image of a scene based on removal of undesired object present in the scene | |
CN108377334B (en) | Short video shooting method and device and electronic terminal | |
KR101672691B1 (en) | Method and apparatus for generating emoticon in social network service platform | |
KR101655078B1 (en) | Method and apparatus for generating moving photograph | |
US11176355B2 (en) | Facial image processing method and apparatus, electronic device and computer readable storage medium | |
CN113287118A (en) | System and method for face reproduction | |
JPWO2018047687A1 (en) | Three-dimensional model generation device and three-dimensional model generation method | |
JP2022064987A (en) | Constitution and realization of interaction between digital medium and observer | |
KR101831516B1 (en) | Method and apparatus for generating image using multi-stiker | |
CN109997171B (en) | Display device and recording medium storing program | |
KR101711684B1 (en) | 3d avatars output device and method | |
CN113973190A (en) | Video virtual background image processing method and device and computer equipment | |
US20160180572A1 (en) | Image creation apparatus, image creation method, and computer-readable storage medium | |
JP2018113616A (en) | Information processing unit, information processing method, and program | |
JP5949030B2 (en) | Image generating apparatus, image generating method, and program | |
CN111787354B (en) | Video generation method and device | |
CN113709545A (en) | Video processing method and device, computer equipment and storage medium | |
US11087514B2 (en) | Image object pose synchronization | |
KR20210056944A (en) | Method for editing image | |
US20230209182A1 (en) | Automatic photography composition recommendation | |
KR20160128900A (en) | Method and apparatus for generating moving photograph based on moving effect | |
US11770604B2 (en) | Information processing device, information processing method, and information processing program for head-related transfer functions in photography | |
KR101774913B1 (en) | Method and apparatus for displaying images using pre-processing | |
JP6889191B2 (en) | Game programs and game equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | A201 | Request for examination | |
| | FPAY | Annual fee payment | Payment date: 20190807; Year of fee payment: 4 |