WO2020259152A1 - Sticker generation method and apparatus, medium, and electronic device - Google Patents
Sticker generation method and apparatus, medium, and electronic device
- Publication number
- WO2020259152A1 (PCT/CN2020/091805; CN2020091805W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sticker
- target object
- area
- anchor point
- display
- Prior art date
Classifications
- G—PHYSICS; G06—COMPUTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
- G06T7/11—Region-based segmentation
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/70—Determining position or orientation of objects or cameras
- G06V40/161—Human faces: Detection; Localisation; Normalisation
- G06V40/168—Human faces: Feature extraction; Face representation
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20101—Interactive definition of point of interest, landmark or seed
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- the present invention relates to the field of computer technology, in particular to a method, device, medium and electronic equipment for generating stickers.
- the purpose of the present invention is to provide a sticker generating method, device, medium and electronic equipment, which can solve at least one of the above-mentioned technical problems.
- the embodiment of the present disclosure specifically provides a sticker generation method, which includes:
- acquiring a background image, the background image including a target object; displaying the display area of the sticker and the anchor point of the sticker in the background image; receiving the import instruction of the sticker; importing the resource of the sticker according to the import instruction and displaying it in the display area of the sticker; dynamically selecting a tracking area according to the position of the anchor point of the sticker, wherein the tracking area is an image area in the target object; and generating the sticker according to the display area, the tracking area, and the resources of the sticker.
- the present invention provides a sticker generating device, including:
- a background obtaining unit configured to obtain a background image, the background image including a target object
- a sticker display unit for displaying the display area of the sticker and the anchor point of the sticker in the background image
- An import receiving unit for receiving an import instruction of the sticker
- the sticker import unit is configured to import the resource of the sticker according to the import instruction and display it in the display area of the sticker;
- An area selection unit for dynamically selecting a tracking area according to the position of the anchor point of the sticker, wherein the tracking area is an image area in the target object;
- the sticker generating unit is configured to generate the sticker according to the display area, the tracking area, and the resources of the sticker.
- the present invention provides an electronic device, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the sticker generation method described in any one of the above.
- Fig. 1 shows a flowchart of a sticker generating method according to an embodiment of the present invention
- Figure 2 shows a schematic diagram of a canvas according to an embodiment of the present invention
- FIG. 3 shows a schematic diagram of 2D drawing board elements according to an embodiment of the present invention
- Fig. 4 shows a schematic diagram of segmentation of a triangulation algorithm (Delaunay) according to an embodiment of the present invention
- Figs. 5a-5b show diagrams of the positional relationship between the anchor point and the triangular areas formed by key points according to an embodiment of the present invention
- FIG. 6 shows a schematic diagram of a situation in which an anchor point is located outside the range of a target object according to an embodiment of the present invention
- Fig. 7 shows a structural diagram of a sticker generating device according to an embodiment of the present invention.
- Fig. 8 shows a schematic diagram of a connection structure of an electronic device according to an embodiment of the present invention.
- Fig. 1 is a flowchart of an embodiment of a sticker generation method provided by an embodiment of the disclosure.
- the sticker generation method provided in this embodiment is used to set sticker special effects on a head portrait in a two-dimensional image.
- the sticker generation method is executed by an image processing device.
- the image processing device can be implemented as software, or as a combination of software and hardware, and can be integrated in a device in the image processing system, such as an image processing server or an image processing terminal device.
- the method includes the following steps:
- Step S101 Acquire a background image, where the background image includes a target object.
- the image in this embodiment can be a static picture, a dynamic picture, a video stream, or real-time video shot by a camera.
- the image includes the target object that needs to be used as a sticker effect.
- the target object can be the face or head of a person or an animal.
- the image processing device obtains the background image to be processed by receiving an image acquisition instruction from the client and opening and calling a stored image or a camera device.
- the background image can be a stored picture or video, or a picture or video taken in real time, used as the background image on which the sticker is set.
- Step S102 Display the display area of the sticker and the anchor point of the sticker in the background image.
- in this example, a human face is selected as the target object in the background image.
- the effect of a sticker is essentially to paste pictures on the face, with these pictures changing according to the position of the face.
- the position of the sticker is dynamically adjusted according to the result of face detection, achieving a simple real-time dynamic sticker effect.
- the position of the sticker is determined by the anchor point of the sticker: the position of the anchor point on the canvas determines the position of the sticker on the canvas, and thereby the position of the sticker relative to the background image.
- the center point of the rectangular area where the sticker is located or an end point of the rectangular area is defined as the anchor point of the sticker.
- the anchor point coordinates O(x0, y0) are set in the image; the anchor point can be set to be draggable, and its position can be preset according to the type of sticker.
- This embodiment describes how to set the relative position of an anchor point coordinate and the target object, and update it in real time. If multiple stickers are set, multiple anchor points need to be set.
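- As a minimal illustration of this anchor convention, the following Python sketch (names and values are illustrative, not from the patent) derives the sticker's display rectangle from an anchor placed at the rectangle's center:

```python
# Illustrative sketch: derive the sticker's display rectangle from its anchor
# O(x0, y0), using the "anchor = center of the rectangle" convention.
def sticker_rect(anchor, width, height):
    x0, y0 = anchor
    return (x0 - width / 2, y0 - height / 2,   # top-left corner
            x0 + width / 2, y0 + height / 2)   # bottom-right corner

print(sticker_rect((120.0, 80.0), 64, 64))  # -> (88.0, 48.0, 152.0, 112.0)
```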
- the anchor point is updated in real time as the background updates.
- Step S103 Receive the import instruction of the sticker.
- the canvas is an editable image area containing image elements, as shown in Figure 2.
- the 2D panel includes a canvas, rulers, guide lines, a lock, etc.
- the position of the sticker on the canvas is set by the user; the sticker can be moved, dragged, and zoomed freely within the canvas.
- the sticker setting instruction sent by the client is received, and the initial position of the anchor point of the sticker is set.
- the initial position is set by the user, that is, the initial coordinate of the anchor point O(x0, y0); the anchor point can be dragged, and its position can be preset according to the type of sticker.
- Step S104 Import the resource of the sticker according to the import instruction and display it in the display area of the sticker.
- the stickers can be various, such as cat ears, cat whiskers, rabbit ears, horns, hats, decorations, graffiti, etc.
- the types of the stickers include single pictures and sequence-frame animations.
- the display mode of the sticker is selected from one of forward display, reverse display, display by angle, horizontal mirror display, and vertical mirror display; the selection can be single or multiple. Multiple images can be mirrored horizontally or vertically for symmetric arrangements, or rotated by an arbitrary angle so that the sticker is displayed at a certain angle. If the background image is a video, the sticker can also be set in time sequence. This setting allows a certain degree of freedom and personalization, as in the sketch below.
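- A minimal sketch of these display modes using Pillow; the asset name "cat_ear.png" and the rotation angle are assumptions for illustration:

```python
from PIL import Image, ImageOps

sticker = Image.open("cat_ear.png").convert("RGBA")  # assumed asset name

# Horizontal mirror display: a left/right symmetric pair (e.g. two cat ears).
right_ear = sticker
left_ear = ImageOps.mirror(sticker)        # flip left-right

# Vertical mirror display and display at a certain angle.
flipped = ImageOps.flip(sticker)           # flip top-bottom
rotated = sticker.rotate(30, expand=True)  # rotate by an arbitrary angle
```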
- the sticker will be displayed in the target object of the background image according to the setting instructions.
- in one example, the cat ear sticker is mirrored and displayed symmetrically on both sides of the mouth.
- the cat ear sticker corresponds to two rectangular sticker display areas, and the sticker areas are located within the range of the target object.
- in another example, the cat ear stickers are mirrored and displayed symmetrically on both sides of the top of the head; the cat ear stickers correspond to two rectangular sticker display areas, and the sticker areas are outside the range of the target object.
- the position and size of the sticker are set according to the needs of the user. For example, the sticker can be set on the top of the head, on the mouth, or in other positions, and the size can also be adjusted.
- the position and size of the sticker can be adjusted by dragging.
- the sticker can be dragged to any point within the canvas, and can be dragged within the 2D panel. While being dragged, the sticker snaps to guide lines, including boundary snapping and central-axis snapping.
- Sticker dragging includes two modes: arbitrary drag and restricted drag.
- the restricted drag includes dragging in the X, Y, XY, and YX directions.
- the setting operation of the sticker is implemented on a 2D panel, where the 2D panel can be scaled arbitrarily within a specified range, and the pixel values of internal elements remain unchanged.
- the 2D panel also includes a lock, located at the leftmost end of the horizontal ruler. After the lock is engaged, guide lines cannot be dragged, and new guide lines cannot be pulled out from the ruler.
- guide lines are of two types, horizontal and vertical. A guide line appears when dragged out from the ruler and disappears when dragged back into the ruler; its width follows the zoom of the 2D panel, and while being dragged its position relative to the upper left corner of the canvas is displayed in real time.
- Step S105 Dynamically select a tracking area according to the anchor point position of the sticker, wherein the tracking area is an image area in the target object.
- the tracking area is used to indicate the area in the background image that the sticker tracks.
- the human face in the background image needs to be segmented first; the region segmentation is performed using the identified key points on the face, of which there are at least three, and usually tens to hundreds.
- the person's face is taken as the target object for the following analysis.
- the face key point information of the face can reflect the position and direction of the face.
- the key point information includes internal feature key point information and edge feature key point information. The face key point detection algorithm in this embodiment integrates the structural information described by the face edge line into the key point detection, which greatly improves the detection accuracy of the algorithm under extreme conditions such as large-profile faces, exaggerated expressions, occlusion, and blur.
- the face key point algorithm provided by the laboratory obtains 106 key point coordinates of the face.
- the algorithm captures the common image feature information of faces. The face key point detection dataset Wider Facial Landmarks in-the-Wild (WFLW) contains face image data annotated with key points and face attributes, covering pose, expression, illumination, makeup, occlusion, and blur, and aims to help the academic community evaluate the robustness of key point algorithms under various conditions in a more targeted way.
- the number of key points is not less than 3; if there are too few key points, the target object has too few feature representations in the image, so its characteristics cannot be accurately reflected and its position cannot be accurately located.
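- For concreteness, a sketch of obtaining dense face key points in Python; MediaPipe Face Mesh (468 points) is used here purely as a stand-in for the 106-point detector described in the text, and the file name is an assumption:

```python
import cv2
import mediapipe as mp

image = cv2.imread("face.jpg")                      # assumed input image
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                     max_num_faces=1) as face_mesh:
    results = face_mesh.process(rgb)

h, w = image.shape[:2]
# Convert normalized landmarks to pixel coordinates (x, y).
landmarks = [(lm.x * w, lm.y * h)
             for lm in results.multi_face_landmarks[0].landmark]
print(f"{len(landmarks)} key points detected")      # 468 for Face Mesh
```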
- the region segmentation of the human face (target object) in the background image includes: using the triangulation algorithm of the key point set to segment the human face (target object) into N triangular regions, where N is a natural number.
- Figure 4 shows the segmentation diagram of the triangulation algorithm (Delaunay). As shown in the figure, the discrete key points are connected into Delaunay triangles according to the triangulation algorithm, dividing the target object into N triangular regions, where N is a natural number.
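- A minimal sketch of this segmentation step with SciPy's Delaunay implementation; the random points stand in for real detected key points:

```python
import numpy as np
from scipy.spatial import Delaunay

# Stand-in for the 106 detected face key points described above.
rng = np.random.default_rng(0)
landmarks = rng.uniform(0, 256, size=(106, 2))

tri = Delaunay(landmarks)      # Delaunay triangulation of the key point set
triangles = tri.simplices      # (N, 3) array: key point indices per triangle
print(f"target object segmented into {len(triangles)} triangular regions")
```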
- all faces in the planar subdivision are triangular, and the collection of all triangular faces is the convex hull of the scattered point set V.
- the most commonly used triangulation in practice is the Delaunay triangulation, which is a special triangulation. We start with the Delaunay edge:
- Delaunay edge: suppose e is an edge in E with endpoints a and b. e is called a Delaunay edge if there exists a circle passing through a and b whose interior contains no other point of the point set V (points of V may lie on the circle itself). This is called the empty-circle property.
- Delaunay triangulation: if a triangulation T of the point set V contains only Delaunay edges, the triangulation is called a Delaunay triangulation.
- let T be any triangulation of V; then T is a Delaunay triangulation of V if and only if the interior of the circumcircle of each triangle in T contains no point of V.
- algorithms for constructing the Delaunay triangulation include the edge-flip algorithm, the point-by-point insertion algorithm, the divide-and-conquer algorithm, the Bowyer-Watson algorithm, etc.
- This embodiment adopts a point-by-point insertion algorithm.
- the property mainly used is that the circumcircle of each Delaunay triangle contains no other point of the point set.
- the point-by-point insertion algorithm inserts key points one at a time and locally re-triangulates after each insertion, as sketched below.
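- A minimal Python sketch of the point-by-point insertion idea (in its Bowyer-Watson form); the super-triangle sizing and all names are illustrative choices, not the patent's pseudo code:

```python
def in_circumcircle(a, b, c, p):
    # True if p lies strictly inside the circumcircle of CCW triangle (a, b, c).
    ax, ay = a[0] - p[0], a[1] - p[1]
    bx, by = b[0] - p[0], b[1] - p[1]
    cx, cy = c[0] - p[0], c[1] - p[1]
    return ((ax * ax + ay * ay) * (bx * cy - cx * by)
            - (bx * bx + by * by) * (ax * cy - cx * ay)
            + (cx * cx + cy * cy) * (ax * by - bx * ay)) > 0

def delaunay_insertion(points):
    # Super-triangle large enough to contain every input point (CCW order).
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    d = max(max(xs) - min(xs), max(ys) - min(ys)) * 10 + 1.0
    cx, cy = (max(xs) + min(xs)) / 2, (max(ys) + min(ys)) / 2
    super_tri = ((cx - d, cy - d), (cx + d, cy - d), (cx, cy + d))
    tris = [super_tri]
    for p in points:
        # Triangles whose circumcircle contains p violate the Delaunay property.
        bad = [t for t in tris if in_circumcircle(*t, p)]
        # Boundary of the cavity: edges belonging to exactly one bad triangle.
        boundary = []
        for t in bad:
            for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
                if (e[1], e[0]) in boundary:
                    boundary.remove((e[1], e[0]))
                else:
                    boundary.append(e)
        # Re-triangulate the cavity by connecting p to every boundary edge.
        tris = [t for t in tris if t not in bad]
        tris += [(e[0], e[1], p) for e in boundary]
    # Discard triangles that still touch a super-triangle vertex.
    return [t for t in tris if not any(v in super_tri for v in t)]
```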
- a complex of multiple triangular regions composed of the key points is thus obtained.
- the human face is divided into 159 triangles (not shown) by the Delaunay triangulation.
- the tracking area is a triangular area composed of three key points in the face (target object).
- when the anchor point of the sticker is inside the area of the face (target object), the tracking area is the triangular area containing the anchor point; when the anchor point of the sticker is outside the area of the face (target object), the tracking area is formed by the coordinates of three key points within the range of the face (target object) that are relatively fixed and best reflect the characteristics of the target object; by default, the two eyes and the tip of the nose are selected as the three key points.
- the tracking area is updated in real time according to the position of the anchor point of the sticker on the canvas.
- the display of the tracking area includes two modes: with prompt points shown and with prompt points hidden.
- Figures 5a-5b show the positional relationship between the anchor point and the triangular areas formed by key points. As shown in Figures 5a-5b, the triangle in which the anchor point is located is determined from the coordinates of the sticker anchor point. Specifically:
- the tracking area is determined as follows: when the anchor point O is located inside a triangle ABC, the tracking area is determined as the triangular area ABC containing the anchor point.
- this determination is updated in real time as the anchor position changes, as sketched below.
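- A sketch of this lookup using the standard same-side sign test; `triangles` and `landmarks` follow the shapes used in the Delaunay sketch above:

```python
def edge_sign(p, a, b):
    # Cross-product sign: which side of directed edge a->b the point p lies on.
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def triangle_containing(anchor, triangles, landmarks):
    for tri in triangles:
        a, b, c = (landmarks[i] for i in tri)
        s1 = edge_sign(anchor, a, b)
        s2 = edge_sign(anchor, b, c)
        s3 = edge_sign(anchor, c, a)
        has_neg = s1 < 0 or s2 < 0 or s3 < 0
        has_pos = s1 > 0 or s2 > 0 or s3 > 0
        if not (has_neg and has_pos):   # anchor on the same side of all edges
            return tuple(tri)
    return None                         # anchor is outside the target object
```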
- Fig. 6 shows a schematic diagram of a situation where the anchor point is outside the range of the target object.
- when the position of the anchor point of the sticker is outside the range of the target object (such as a human face), three key point coordinates within the range of the target object that are relatively fixed and best reflect the characteristics of the target object are selected by default. In principle, any three key points with obvious characteristics in the target object can be selected.
- when the target object is the face of a person and the anchor point is outside the range of the face, the two eyes and the tip of the nose are selected by default as the three key points.
- these three key points have distinctive features, the triangle they form has good face-positioning characteristics, and the resulting anchor point position is more accurate, as sketched below.
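- Putting the two cases together, a sketch of the tracking-area selection; the landmark indices for the pupils and the nose tip are hypothetical and depend on the layout of the 106-point model actually used:

```python
LEFT_PUPIL, RIGHT_PUPIL, NOSE_TIP = 74, 77, 46   # hypothetical indices

def select_tracking_area(anchor, triangles, landmarks):
    tri = triangle_containing(anchor, triangles, landmarks)
    if tri is not None:                  # anchor inside the target object
        return tri
    # Fallback: a fixed, feature-rich triangle (two pupils and the nose tip).
    return (LEFT_PUPIL, RIGHT_PUPIL, NOSE_TIP)
```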
- Step S106 Generate the sticker according to the display area, the tracking area, and the resources of the sticker.
- once the setting method of the sticker is determined and the position and size information are set, the sticker is updated in real time as the face (target object) moves. The tracking area always changes with the movement of the sticker's anchor point, which determines the relative position of the sticker and the face (target object). Therefore, when the user takes a photo and adds a sticker, the sticker follows the position of the face in the video stream and remains at a specific position relative to the face. For example, if the sticker is a pair of cat ears, i.e. two stickers forming a horizontal mirror image, a cat ear sticker is automatically generated according to the setting method, the adjusted sticker size, and the relative position with respect to the human face. The sticker can also be adjusted manually to meet personalized requirements.
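- One way to realize this "sticker follows the face" behavior (a sketch under the assumption that key points are re-detected every frame, not the patent's literal implementation) is to freeze the anchor's barycentric coordinates in the tracking triangle once, then re-evaluate them against the re-detected triangle each frame:

```python
import numpy as np

def barycentric(p, a, b, c):
    # Solve [a b c; 1 1 1] @ w = [p; 1] for the barycentric weights w.
    m = np.array([[a[0], b[0], c[0]],
                  [a[1], b[1], c[1]],
                  [1.0,  1.0,  1.0]])
    return np.linalg.solve(m, np.array([p[0], p[1], 1.0]))

def anchor_this_frame(weights, a, b, c):
    # Re-apply the frozen weights to the triangle's current key points.
    return weights @ np.array([a, b, c], dtype=float)

# At setup time: weights = barycentric(anchor, *tracking_triangle_points)
# Per frame:     new_anchor = anchor_this_frame(weights, *new_triangle_points)
```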
- the present invention provides a sticker generating device 700, including:
- the background obtaining unit 701 is configured to obtain a background image, and the background image includes a target object.
- the image in this embodiment can be a static picture, a dynamic picture, a video stream, or real-time video shot by a camera.
- the image includes the target object that needs to be used as a sticker effect.
- the target object can be the face or head of a person or an animal.
- the image processing device obtains the background image to be processed by receiving an image acquisition instruction from the client and opening and calling a stored image or a camera device.
- the background image can be a stored picture or video, or a picture or video taken in real time, used as the background image on which the sticker is set.
- the sticker display unit 702 is configured to display the display area of the sticker and the anchor point of the sticker in the background image.
- in this example, a human face is selected as the target object in the background image.
- the effect of a sticker is essentially to paste pictures on the face, with these pictures changing according to the position of the face.
- the position of the sticker is dynamically adjusted according to the result of face detection, achieving a simple real-time dynamic sticker effect.
- the position of the sticker is determined by the anchor point of the sticker: the position of the anchor point on the canvas determines the position of the sticker on the canvas, and thereby the position of the sticker relative to the background image.
- the center point of the rectangular area where the sticker is located or an end point of the rectangular area is defined as the anchor point of the sticker.
- the anchor point coordinates O(x0, y0) are set in the image; the anchor point can be set to be draggable, and its position can be preset according to the type of sticker.
- the anchor point can be displayed in the 2D drawing board.
- the anchor point can be displayed as a common black dot, a circle-with-center-point glyph "⊙", or another graphic form, or not displayed at all.
- the position of the anchor point is used to locate the position of the sticker.
- the instruction receiving unit 703 is configured to receive the import instruction of the sticker.
- the position of the sticker on the canvas is set by the user; the sticker can be moved, dragged, and zoomed freely within the canvas.
- the sticker setting instruction sent by the client is received, and the initial position of the sticker anchor point is set.
- the initial position is set by the user, that is, the initial coordinate of the anchor point O(x0, y0); the anchor point can be dragged, and its position can be preset according to the type of sticker.
- the sticker import unit 704 is configured to import the resource of the sticker according to the import instruction and display it in the display area of the sticker. The sticker is imported into the background image and displayed in the target object of the background image according to the setting instructions.
- in one example, the cat ear sticker is mirrored and displayed symmetrically on both sides of the mouth.
- the cat ear sticker corresponds to two rectangular sticker display areas, and the sticker areas are located within the range of the target object.
- in another example, the cat ear stickers are mirrored and displayed symmetrically on both sides of the top of the head; the cat ear stickers correspond to two rectangular sticker display areas, and the sticker areas are outside the range of the target object.
- the stickers can be various, such as cat ears, cat whiskers, rabbit ears, horns, hats, decorations, graffiti, etc.
- the types of the stickers include single pictures and sequence-frame animations.
- the display mode of the sticker is selected from one of forward display, reverse display, display by angle, horizontal mirror display, and vertical mirror display; the selection can be single or multiple. Multiple images can be mirrored horizontally or vertically for symmetric arrangements, or rotated by an arbitrary angle so that the sticker is displayed at a certain angle. If the background image is a video, the sticker can also be set in time sequence. This setting allows a certain degree of freedom and personalization.
- the area selection unit 705 is configured to dynamically select a tracking area according to the position of the anchor point of the sticker, wherein the tracking area is an image area in the target object.
- the tracking area is used to indicate the area in the background image that the sticker tracks.
- the human face in the background image needs to be segmented first; the region segmentation is performed using the identified key points on the face, of which there are at least three, and usually tens to hundreds.
- the person's face is taken as the target object for the following analysis.
- the face key point information of the face can reflect the position and direction of the face.
- the key point information includes internal feature key point information and edge feature key point information.
- the face key point detection algorithm in this embodiment integrates the structural information described by the face edge line into the key point detection, which greatly improves the detection accuracy of the algorithm under extreme conditions such as large-profile faces, exaggerated expressions, occlusion, and blur.
- the face key point detection algorithm is used to obtain 106 key point coordinates of the face.
- the number of key points is not less than 3; if there are too few key points, the target object has too few feature representations in the image, so its characteristics cannot be accurately reflected and its position cannot be accurately located.
- the region segmentation of the human face (target object) in the background image includes: using the triangulation algorithm of the key point set to segment the human face (target object) into N triangular regions, where N is a natural number.
- the tracking area is a triangular area composed of three key points in the face (target object).
- when the anchor point of the sticker is inside the area of the face (target object), the tracking area is the triangular area containing the anchor point; when the anchor point of the sticker is outside the area of the face (target object), the tracking area is formed by the coordinates of three key points within the range of the face (target object) that are relatively fixed and best reflect the characteristics of the target object; by default, the two eyes and the tip of the nose are selected as the three key points.
- the tracking area is updated in real time according to the position of the anchor point of the sticker on the canvas.
- the sticker generating unit 706 is configured to generate the sticker according to the display area, the tracking area, and the resources of the sticker.
- once the setting method of the sticker is determined and the position and size information are set, the sticker is updated in real time as the face (target object) moves.
- the tracking area always changes with the movement of the sticker's anchor point, which determines the relative position of the sticker and the face (target object). Therefore, when the user takes a photo and adds a sticker, the sticker follows the position of the face in the video stream and remains at a specific position relative to the face. For example, if the sticker is a pair of cat ears, i.e. two stickers forming a horizontal mirror image, a cat ear sticker is automatically generated according to the setting method, the adjusted sticker size, and the relative position with respect to the human face. The sticker can also be adjusted manually to meet personalized requirements.
- the device further includes:
- the anchor point obtaining unit is used to obtain the anchor point of the sticker, and the anchor point is used to locate the position of the sticker.
- the position of the anchor point of the sticker is set at the center point of the display area of the sticker or at a vertex of the display area; the anchor point coordinate O(x0, y0) is set in the image, the anchor point can be set to be draggable, and its position can be preset according to the type of sticker.
- This embodiment describes how to set the relative position of an anchor point coordinate and the target object, and update it in real time. If multiple stickers are set, multiple anchor points need to be set.
- the device further includes:
- the key point acquiring unit is configured to acquire the target object on the background image and the key points of the target object, and the key points are used to define the tracking area.
- the face key point detection algorithm in this embodiment integrates the structural information described by the face edge line into the key point detection. Through the face key point detection algorithm, the 106 key point coordinates of the face are obtained.
- the algorithm captures the general image feature information of faces. The face key point detection dataset Wider Facial Landmarks in-the-Wild (WFLW) contains face image data annotated with key points and face attributes, covering pose, expression, illumination, makeup, occlusion, and blur, and is designed to help the academic community evaluate the robustness of key point algorithms under various conditions in a more targeted way.
- the number of key points is not less than 3, but if there are too few key points, there will be too few feature representations of the target object in the image, which cannot accurately reflect the characteristics of the target object, and the position of the target object cannot be accurately located.
- the area segmentation unit is configured to segment the target object by using the key points.
- the region segmentation of the human face (target object) in the background image includes: using the triangulation algorithm of the key point set to segment the human face (target object) into N triangular regions, where N is a natural number.
- Figure 4 shows a schematic diagram of the segmentation of the triangulation algorithm (Delaunay).
- the device further includes:
- the area selection unit is configured to dynamically select the corresponding image area in the target object in real time according to the anchor point of the sticker.
- the tracking area is determined as follows: when the anchor point O is located inside a triangle ABC, the tracking area is determined as the triangular area ABC containing the anchor point.
- this determination is updated in real time as the anchor position changes.
- when the position of the anchor point of the sticker is outside the range of the target object (such as a human face), three key point coordinates within the range of the target object that are relatively fixed and best reflect the characteristics of the target object are selected by default. In principle, any three key points with obvious characteristics in the target object can be selected.
- when the target object is the face of a person and the anchor point is outside the range of the face, the two eyes and the tip of the nose are selected by default as the three key points.
- these three key points have distinctive features, the triangle they form has good face-positioning characteristics, and the resulting anchor point position is more accurate.
- This embodiment provides an electronic device used in a method for generating a sticker.
- the electronic device includes: at least one processor; and a memory communicatively connected with the at least one processor; wherein,
- the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can:
- acquire a background image, the background image including a target object; display the display area of the sticker and the anchor point of the sticker in the background image; receive the import instruction of the sticker; import the resource of the sticker according to the import instruction and display it in the display area of the sticker; dynamically select a tracking area according to the position of the anchor point of the sticker, wherein the tracking area is an image area in the target object; and generate the sticker according to the display area, the tracking area, and the resources of the sticker.
- FIG. 8 shows a schematic structural diagram of an electronic device 800 suitable for implementing embodiments of the present disclosure.
- Electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, car navigation terminals), and fixed terminals such as digital TVs and desktop computers.
- the electronic device shown in FIG. 8 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
- the electronic device 800 may include a processing device (such as a central processing unit or a graphics processor) 801, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage device 808 into a random access memory (RAM) 803.
- the RAM 803 also stores various programs and data required for the operation of the electronic device 800.
- the processing device 801, the ROM 802, and the RAM 803 are connected to each other through a bus 804.
- An input/output (I/O) interface 805 is also connected to the bus 804.
- the following devices can be connected to the I/O interface 805: input devices 806 such as a touch screen, touch panel, keyboard, mouse, image sensor, microphone, accelerometer, and gyroscope; output devices 807 such as a liquid crystal display (LCD), speakers, and vibrators; storage devices 808 such as a magnetic tape or hard disk; and a communication device 809.
- the communication device 809 may allow the electronic device 800 to perform wireless or wired communication with other devices to exchange data.
- although FIG. 8 shows an electronic device 800 having various devices, it should be understood that it is not required to implement or include all the illustrated devices; more or fewer devices may alternatively be implemented or provided.
- the process described above with reference to the flowchart can be implemented as a computer software program.
- the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
- the computer program may be downloaded and installed from the network through the communication device 809, or installed from the storage device 808, or installed from the ROM 802.
- when the computer program is executed by the processing device 801, the above-mentioned functions defined in the method of the embodiments of the present disclosure are executed.
- the aforementioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
- the computer-readable storage medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
- the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
- the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
- the aforementioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, they cause the electronic device to:
- acquire a background image, the background image including a target object; display the display area of the sticker and the anchor point of the sticker in the background image; receive the import instruction of the sticker; import the resource of the sticker according to the import instruction and display it in the display area of the sticker; dynamically select a tracking area according to the anchor point position of the sticker, wherein the tracking area is an image area in the target object; and generate the sticker according to the display area, the tracking area, and the resources of the sticker.
- the computer program code used to perform the operations of the present disclosure may be written in one or more programming languages or a combination thereof.
- the above-mentioned programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- each block in the flowchart or block diagram can represent a module, program segment, or part of code, which contains one or more executable instructions for realizing the specified logical function.
- the functions marked in the blocks may also occur in a different order from the order marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved.
- each block in the block diagram and/or flowchart, and combinations of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
- the units involved in the embodiments described in the present disclosure may be implemented in software or in hardware. The name of a unit does not constitute a limitation on the unit itself under certain circumstances.
Claims (20)
- A sticker generation method, characterized by comprising: acquiring a background image, the background image including a target object; displaying a display area of a sticker and an anchor point of the sticker in the background image; receiving an import instruction for the sticker; importing a resource of the sticker according to the import instruction and displaying it in the display area of the sticker; dynamically selecting a tracking area according to the position of the anchor point of the sticker, wherein the tracking area is an image area in the target object; and generating the sticker according to the display area, the tracking area, and the resource of the sticker.
- The method according to claim 1, characterized in that the position of the anchor point of the sticker is set at the center point of the display area of the sticker or at a vertex of the display area.
- The method according to claim 1, characterized by further comprising: acquiring key points of the target object on the background image; and performing region segmentation on the target object using the key points.
- The method according to claim 3, characterized in that the target object has at least three key points.
- The method according to claim 4, characterized in that performing region segmentation on the target object comprises: segmenting the target object into N triangular regions using a triangulation algorithm on the point set, where N is a natural number.
- The method according to claim 5, characterized in that the tracking area is an image area in the target object, specifically a triangular area formed by three key points in the target object that corresponds to the anchor point.
- The method according to claim 6, characterized in that when the anchor point of the sticker is inside the area of the target object, the tracking area is the triangular area containing the anchor point.
- The method according to claim 6, characterized in that when the anchor point of the sticker is outside the area of the target object, the tracking area is a triangular area formed by the coordinates of three fixed key points within the range of the target object.
- The method according to claim 8, characterized in that when the target object is the face of a person and the anchor point is outside the range of the face, the three fixed key points are selected as the pupils of the two eyes and the tip of the nose, and the tracking area is the triangular area formed by the pupils of the two eyes and the tip of the nose.
- The method according to claim 1, characterized in that the resource of the sticker includes a type of the sticker and a display mode of the sticker.
- The method according to claim 10, characterized in that the type of the sticker includes a single picture and a sequence-frame animation, and the display mode of the sticker is selected from one of forward display, reverse display, display by angle, horizontal mirror display, and vertical mirror display.
- The method according to claim 1, characterized in that the method further comprises: updating the tracking area in real time according to the position of the anchor point of the sticker on the canvas.
- The method according to claim 1, characterized by further comprising: receiving position adjustment and scaling instructions for the sticker, so that the sticker can be freely dragged and scaled within the canvas.
- A sticker generation apparatus, characterized by comprising: a background acquisition unit configured to acquire a background image, the background image including a target object; a sticker display unit configured to display a display area of a sticker and an anchor point of the sticker in the background image; an instruction receiving unit configured to receive an import instruction for the sticker; a sticker import unit configured to import a resource of the sticker according to the import instruction and display it in the display area of the sticker; an area selection unit configured to dynamically select a tracking area according to the position of the anchor point of the sticker, wherein the tracking area is an image area in the target object; and a sticker generation unit configured to generate the sticker according to the display area, the tracking area, and the resource of the sticker.
- The apparatus according to claim 14, characterized by further comprising: an anchor point acquisition unit configured to acquire the anchor point of the sticker, the anchor point being used to locate the position of the sticker.
- The apparatus according to claim 14, characterized by further comprising: a key point acquisition unit configured to acquire the target object on the background image and key points of the target object, the key points being used to define the tracking area.
- The apparatus according to claim 16, characterized by further comprising: a region segmentation unit configured to perform region segmentation on the target object using the key points, segmenting the target object into N triangular regions, where N is a natural number.
- The apparatus according to claim 14, characterized by further comprising: an area selection unit configured to dynamically select the corresponding image area in the target object in real time according to the anchor point of the sticker.
- A computer-readable storage medium on which a computer program is stored, characterized in that when the program is executed by a processor, the method according to any one of claims 1 to 13 is implemented.
- An electronic device, characterized by comprising: one or more processors; and a storage device configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1 to 13.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2020301254A AU2020301254B2 (en) | 2019-06-25 | 2020-05-22 | Sticker generating method and apparatus, and medium and electronic device |
CA3143817A CA3143817A1 (en) | 2019-06-25 | 2020-05-22 | Sticker generating method and apparatus, and medium and electronic device |
US17/560,140 US11494961B2 (en) | 2019-06-25 | 2021-12-22 | Sticker generating method and apparatus, and medium and electronic device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910556164.5 | 2019-06-25 | ||
CN201910556164.5A CN112132859A (zh) | 2019-06-25 | Sticker generation method and apparatus, medium, and electronic device |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/560,140 Continuation US11494961B2 (en) | 2019-06-25 | 2021-12-22 | Sticker generating method and apparatus, and medium and electronic device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020259152A1 (zh) | 2020-12-30 |
Family
ID=73849446
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/091805 WO2020259152A1 (zh) | 2019-06-25 | 2020-05-22 | 贴纸生成方法、装置、介质和电子设备 |
Country Status (5)
Country | Link |
---|---|
US (1) | US11494961B2 (zh) |
CN (1) | CN112132859A (zh) |
AU (1) | AU2020301254B2 (zh) |
CA (1) | CA3143817A1 (zh) |
WO (1) | WO2020259152A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112929683A (zh) * | 2021-01-21 | 2021-06-08 | Guangzhou Huya Technology Co., Ltd. | Video processing method and apparatus, electronic device and storage medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113315924A (zh) * | 2020-02-27 | 2021-08-27 | Beijing Bytedance Network Technology Co., Ltd. | Image special effect processing method and apparatus |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107343225A (zh) * | 2016-08-19 | 2017-11-10 | Beijing SenseTime Technology Development Co., Ltd. | Method, apparatus and terminal device for displaying a business object in a video image |
US20180253824A1 (en) * | 2016-01-21 | 2018-09-06 | Tencent Technology (Shenzhen) Company Limited | Picture processing method and apparatus, and storage medium |
CN108846878A (zh) * | 2018-06-07 | 2018-11-20 | Qiku Internet Network Technology (Shenzhen) Co., Ltd. | Face sticker generation method and apparatus, readable storage medium and mobile terminal |
CN109191544A (zh) * | 2018-08-21 | 2019-01-11 | Beijing Panda Mutual Entertainment Technology Co., Ltd. | Sticker gift display method and apparatus, electronic device and storage medium |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7391888B2 (en) * | 2003-05-30 | 2008-06-24 | Microsoft Corporation | Head pose assessment methods and systems |
CN102436668A (zh) * | 2011-09-05 | 2012-05-02 | Shanghai University | Automatic makeup method for Peking Opera facial masks |
CN105957123A (zh) * | 2016-04-19 | 2016-09-21 | LeEco Holdings (Beijing) Co., Ltd. | Picture editing method, apparatus and terminal device |
US20180234708A1 (en) * | 2017-02-10 | 2018-08-16 | Seerslab, Inc. | Live streaming image generating method and apparatus, live streaming service providing method and apparatus, and live streaming system |
CN109391792B (zh) * | 2017-08-03 | 2021-10-29 | Tencent Technology (Shenzhen) Co., Ltd. | Video communication method, apparatus, terminal and computer-readable storage medium |
CN108986016B (zh) * | 2018-06-28 | 2021-04-20 | Beijing Microlive Vision Technology Co., Ltd. | Image beautification method and apparatus, and electronic device |
CN108958610A (zh) * | 2018-07-27 | 2018-12-07 | Beijing Microlive Vision Technology Co., Ltd. | Face-based special effect generation method and apparatus, and electronic device |
CN109147007B (zh) * | 2018-08-01 | 2023-09-01 | OPPO (Chongqing) Intelligent Technology Co., Ltd. | Sticker loading method and apparatus, terminal and computer-readable storage medium |
-
2019
- 2019-06-25 CN CN201910556164.5A patent/CN112132859A/zh active Pending
-
2020
- 2020-05-22 WO PCT/CN2020/091805 patent/WO2020259152A1/zh active Application Filing
- 2020-05-22 CA CA3143817A patent/CA3143817A1/en active Pending
- 2020-05-22 AU AU2020301254A patent/AU2020301254B2/en active Active
-
2021
- 2021-12-22 US US17/560,140 patent/US11494961B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
AU2020301254B2 (en) | 2023-05-25 |
CN112132859A (zh) | 2020-12-25 |
US20220139016A1 (en) | 2022-05-05 |
US11494961B2 (en) | 2022-11-08 |
AU2020301254A1 (en) | 2022-02-17 |
CA3143817A1 (en) | 2020-12-30 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 20833088; Country of ref document: EP; Kind code of ref document: A1 |
 | ENP | Entry into the national phase | Ref document number: 3143817; Country of ref document: CA |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | ENP | Entry into the national phase | Ref document number: 2020301254; Country of ref document: AU; Date of ref document: 20200522; Kind code of ref document: A |
 | 122 | EP: PCT application non-entry in European phase | Ref document number: 20833088; Country of ref document: EP; Kind code of ref document: A1 |