WO2020108291A1 - Face beautification method, device, computer equipment and storage medium - Google Patents

Face beautification method, device, computer equipment and storage medium

Info

Publication number
WO2020108291A1
WO2020108291A1 (PCT/CN2019/117475, CN2019117475W)
Authority
WO
WIPO (PCT)
Prior art keywords
face
adjustment
area
parameter
parameters
Prior art date
Application number
PCT/CN2019/117475
Other languages
English (en)
French (fr)
Inventor
汪倩怡
邢雪源
Original Assignee
Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Publication of WO2020108291A1
Priority to US17/194,880 (granted as US11410284B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map
    • G06T 3/10 Selection of transformation methods according to the characteristics of the input images
    • G06T 3/20 Linear translation of whole images or parts thereof, e.g. panning
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Occluding parts, e.g. glasses; Geometrical relationships
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/24 Indexing scheme involving graphical user interfaces [GUIs]
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20021 Dividing image into blocks, subimages or windows
    • G06T 2207/20072 Graph-based image processing
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Definitions

  • Embodiments of the present application relate to the field of image processing, and in particular, to a face beautification method, device, computer equipment, and storage medium.
  • A beauty application is an application used to beautify faces in pictures.
  • In some embodiments, the user can use the beauty application to perform real-time beautification during shooting, and can also use it to beautify photos that have already been captured.
  • In the related art, when using a beauty application, the user adjusts the face shape and facial features of a face in an image by manually pushing and pulling. For example, the user can push and pull the facial contour of the face in the image to achieve a face-thinning function, and can push and pull the facial features to locally enlarge or reduce the facial feature regions.
  • a face beautification method, device, computer equipment, and storage medium are provided.
  • In one aspect, an embodiment of the present application provides a face beautification method, executed by a computer device, the method including:
  • acquiring a target face contained in a target image;
  • generating adjustment parameters corresponding to the target face according to the target face and a reference face, the adjustment parameters including face shape adjustment parameters and facial features adjustment parameters;
  • generating a displacement vector according to the face shape adjustment parameters and the facial features adjustment parameters, the displacement vector representing the size change, position change, and angle change of the face shape and facial features of the target face during adjustment;
  • adjusting the face shape and facial features of the target face according to the displacement vector; and
  • displaying the adjusted target face.
  • an embodiment of the present application provides a face beautification device, and the device includes:
  • an acquisition module, configured to acquire a target face contained in a target image;
  • a first generation module, configured to generate adjustment parameters corresponding to the target face according to the target face and a reference face, the adjustment parameters including face shape adjustment parameters and facial features adjustment parameters;
  • a second generation module, configured to generate a displacement vector according to the face shape adjustment parameters and the facial features adjustment parameters, the displacement vector representing the size change, position change, and angle change of the face shape and facial features of the target face during adjustment;
  • an adjustment module, configured to adjust the face shape and facial features of the target face according to the displacement vector; and
  • a display module, configured to display the adjusted target face.
  • In one aspect, an embodiment of the present application provides a computer device, which includes a processor and a memory, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to execute the steps of the face beautification method.
  • In one aspect, an embodiment of the present application provides a non-volatile computer-readable storage medium storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to execute the steps of the face beautification method.
  • The present application also provides a computer program product containing instructions that, when run on a computer, cause the computer to perform the face beautification method described in the above aspects.
  • FIG. 1 shows a schematic diagram of an implementation environment provided by an embodiment of the present application
  • FIG. 2 shows an implementation schematic diagram of a face beautification process in an image retouching scenario according to an embodiment of the present application
  • FIG. 3 shows an implementation schematic diagram of a face beautification process in a shooting scene according to an embodiment of the present application
  • FIG. 4 shows a flowchart of a face beautification method provided by an embodiment of the present application
  • FIG. 5 shows a schematic diagram of the principle of a face beautification method provided by an embodiment of the present application
  • FIG. 6 shows a flowchart of a face beautification method provided by another embodiment of the present application.
  • FIG. 7 shows a schematic diagram of an interface for displaying and selecting a face template according to an embodiment of the present application
  • FIG. 8 shows a schematic diagram of key points on the face of a target person according to an embodiment of the present application.
  • FIG. 9 shows an implementation schematic diagram of the process of adjusting grid vertices and redrawing pixel points according to an embodiment of the present application.
  • FIG. 10 shows a flowchart of a face beautification method provided by another embodiment of the present application.
  • FIG. 11 shows a schematic diagram of the process of adjusting the vertices of a grid and redrawing pixels according to an embodiment of the present application
  • FIG. 12 shows a schematic diagram of the process of adjusting the vertices of a grid and redrawing pixels in another embodiment of the present application
  • FIG. 13 shows a schematic diagram of an image rendering pipeline rendering process according to an embodiment of the present application.
  • FIG. 14 shows a flowchart of a face beautification method provided by another embodiment of the present application.
  • FIG. 15 shows a block diagram of a face beautification device provided by an embodiment of the present application.
  • FIG. 16 shows a schematic structural diagram when a computer device provided by an embodiment of the present application is specifically implemented as a terminal.
  • Face key points: points used to locate feature points on a face.
  • In some embodiments, the feature points may be feature points of the facial features (eyebrows, eyes, mouth, nose, ears) and key points of the face contour.
  • In some embodiments, the face key points are output by a face key point detection model after a face image is input into the model.
  • Depending on the required precision, common face key point configurations contain 5, 68, 83, or 90 points.
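As an illustration of how such a key point set maps to face parts, the sketch below slices the widely used 68-point layout into parts. The index ranges are conventions of that particular layout (as popularized by dlib-style landmark models), not something mandated by this application.

```python
# 68-point facial landmark layout sliced into face parts (dlib-style
# convention; the ranges are properties of that layout, used here only
# as an example of a "face key point" configuration).
PARTS_68 = {
    "jaw": range(0, 17),            # face contour key points
    "right_eyebrow": range(17, 22),
    "left_eyebrow": range(22, 27),
    "nose": range(27, 36),
    "right_eye": range(36, 42),
    "left_eye": range(42, 48),
    "mouth": range(48, 68),
}
```

Given a detector's 68 (x, y) outputs, indexing with these ranges yields the per-part key points from which contour and feature parameters can be computed.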
  • Shaders: programs in the programmable rendering pipeline, divided by role into vertex shaders (Vertex Shader) and fragment shaders (Fragment Shader), also called pixel shaders. The vertex shader processes vertex data, that is, it determines the positions of a graphic's vertices; the fragment shader processes pixel data, that is, it renders and shades each pixel in the graphic. When rendering a graphic, the vertex shader is usually used to position the graphic's vertices first, and the fragment shader is then used to render the pixels inside each vertex triangle (formed by connecting three vertices).
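A minimal CPU-side sketch of what the fragment stage does with a vertex triangle: fill the pixels inside the triangle, interpolating per-vertex attributes with barycentric coordinates. This is an illustration of the pipeline concept only; real shaders run on the GPU, and all names here are hypothetical.

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of point p w.r.t. triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    w2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    return w1, w2, 1.0 - w1 - w2

def shade_triangle(a, b, c, colors):
    """'Fragment stage': emit {pixel: interpolated color} for pixels
    whose center lies inside the triangle (a, b, c)."""
    xs = [int(v[0]) for v in (a, b, c)]
    ys = [int(v[1]) for v in (a, b, c)]
    out = {}
    for x in range(min(xs), max(xs) + 1):
        for y in range(min(ys), max(ys) + 1):
            w1, w2, w3 = barycentric((x + 0.5, y + 0.5), a, b, c)
            if w1 >= 0 and w2 >= 0 and w3 >= 0:  # inside the triangle
                out[(x, y)] = tuple(
                    w1 * colors[0][i] + w2 * colors[1][i] + w3 * colors[2][i]
                    for i in range(3)
                )
    return out
```

The vertex stage's job in this analogy is only to supply the (possibly displaced) positions `a`, `b`, `c`; the fragment stage then redraws every covered pixel.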
  • Face template: a template used to describe facial features, containing face shape data and facial features data.
  • In some embodiments, the face shape data includes at least one of the following: overall face shape, forehead height, forehead width, chin height, and chin width; the facial features data includes at least one of the following: eye size, eye spacing, eye vertical position, eye horizontal position, nose size, nose height, nose vertical position, nose wing size, nose tip size, mouth size, mouth horizontal position, lip thickness, eyebrow vertical position, and eyebrow spacing.
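The data a face template carries could be sketched as a small record type. The field names and default proportions below are assumptions for illustration, not the application's actual schema.

```python
# Hypothetical face template record; fields follow the definition above.
# Values are expressed as proportions of face width/height (an assumption).
from dataclasses import dataclass

@dataclass
class FaceTemplate:
    # face shape data
    face_shape: str = "oval"
    forehead_height: float = 0.30
    chin_height: float = 0.20
    # facial features data (a subset, for illustration)
    eye_size: float = 0.12
    eye_spacing: float = 0.25
    nose_height: float = 0.33
    mouth_size: float = 0.30

standard_face = FaceTemplate()                 # "standard face" template
custom_face = FaceTemplate(eye_size=0.15)      # a preset-character variant
```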
  • FIG. 1 shows a schematic diagram of an implementation environment provided by an embodiment of the present application.
  • the implementation environment includes a terminal 120 and a server 140.
  • the terminal 120 is an electronic device installed with a beauty application, and the electronic device may be a smart phone, a tablet computer, a personal computer, or the like.
  • the terminal 120 is a smartphone as an example.
  • In some embodiments, the beauty application may be a picture processing application with a beauty function, used to beautify faces in photographed or downloaded pictures; or a camera application with a beauty function, used to beautify faces contained in the image currently collected by the terminal 120; or a live-streaming application with a beauty function, used to beautify faces collected in the image and then push the local video stream data to other live-viewing clients through a live-streaming server; or a short video application with a beauty function, used to beautify faces during the shooting of short videos, with the captured short videos posted to a short video platform for other users to watch.
  • the embodiments of the present application do not limit the specific types of beauty application programs.
  • the terminal 120 and the server 140 are connected through a wired or wireless network.
  • In some embodiments, the server 140 is a single server, a server cluster composed of several servers, or a cloud computing center.
  • the server 140 is a background server of the beauty application in the terminal 120.
  • the face template library 141 of the server 140 stores several face templates.
  • When the beauty application needs to update a local face template or obtain a new face template, the terminal 120 sends a face template acquisition request to the server 140 and receives the face template data fed back by the server 140.
  • The beauty application can subsequently perform face beautification according to the face template.
  • In some embodiments, the face template may be a template made according to a celebrity's face.
  • In some embodiments, when the face beautification method provided by the embodiments of the present application is applied to the terminal 120, the terminal 120 locally beautifies the face in the image through the beauty application and displays the beautified image; when the method is applied to the server 140, the terminal 120 uploads the image to the server 140, the server 140 beautifies the face in the image, and the beautified image is sent back to the terminal 120 for display by the terminal 120.
  • the aforementioned wireless network or wired network uses standard communication technologies and/or protocols.
  • In some embodiments, the network is usually the Internet, but it may also be any network, including but not limited to a local area network (LAN), metropolitan area network (MAN), wide area network (WAN), mobile network, wired or wireless network, or private network, or any combination thereof, including virtual private networks.
  • In some embodiments, technologies and/or formats including HyperText Markup Language (HTML) and Extensible Markup Language (XML) are used to represent data exchanged over the network.
  • SSL: Secure Sockets Layer
  • TLS: Transport Layer Security
  • VPN: Virtual Private Network
  • IPsec: Internet Protocol Security
  • the following embodiments take the face beautification method applied to the terminal 120 in FIG. 1 as an example for description.
  • the face beautification method provided in the embodiment of the present application can be used to beautify the face shape and facial features, and applicable application scenarios include a retouching scene and a shooting scene.
  • the following describes face beautification methods in different application scenarios.
  • When the face beautification method is applied to a retouching scene, it can be implemented as a picture processing application, installed and run in a terminal.
  • the image processing application provides a "one-key shaping" function entrance, and provides several reference face templates for users to choose.
  • the terminal starts the image processing application.
  • Before retouching, the user first selects the picture 22 to be beautified from the album. After the picture is selected, the picture processing application provides several beauty function entrances for the user to choose from.
  • the image processing application When the user clicks on the "one-key shaping" function entry 23, the image processing application further displays a number of reference face templates for the user to choose.
  • The picture processing application then beautifies the face in the picture according to the face parameters corresponding to the reference face template 24 and the face parameters of the face in the picture to be beautified, so that the beautified facial contour and facial features proportions tend toward the reference face template, and beautification prompt information 25 is displayed during the beautification process.
  • the picture processing application displays the beautified picture 26 and provides a saving portal for the user to save.
  • the face beautification method can be implemented as a camera application program, and installed and run in the terminal.
  • the camera application provides a "one-key shaping" function entrance, and provides several reference face templates for users to choose.
  • the terminal starts the camera application.
  • Before shooting, the user first selects the "one-key shaping" function entry 32 among the several beauty function entrances provided by the camera application, and then selects the "standard" reference face template 33 from the several reference face templates displayed.
  • The camera application beautifies the face in the viewfinder frame according to the face parameters corresponding to the reference face template 33 and the face parameters of the face in the frame, and displays the beautified face image 34 in real time in the viewfinder screen. The user can then complete the shooting by clicking the shooting control 34.
  • the above-mentioned face beautification method can also be used in other scenes involving beautification of faces in images, and the embodiments of the present application do not limit specific application scenarios.
  • FIG. 4 shows a flowchart of a face beautification method provided by an embodiment of the present application.
  • This embodiment is exemplified by the method applied to the terminal 120 in FIG. 1.
  • the method may include the following steps:
  • Step 401 Acquire a target face included in the target image.
  • the target image is a picture, or the target image is an image displayed in the viewfinder screen.
  • the target image when applied to the picture processing application installed in the terminal, the target image is the imported picture; when applied to the camera application installed in the terminal, the target image is the real-time image displayed in the viewfinder screen.
  • In some embodiments, after acquiring the target image, the terminal performs face detection on the target image; if a face is detected in the target image, the terminal extracts it as the target face and performs step 402; if no face is detected in the target image, a prompt message is displayed to remind the user that no face can be detected.
  • The target face contained in the target image may be a frontal face, or may be a face at any other angle (i.e., a side face).
  • Step 402 Generate adjustment parameters corresponding to the target face according to the target face and the reference face.
  • the adjustment parameters include face shape adjustment parameters and facial features adjustment parameters.
  • The adjustment parameters are used to indicate how the target face is to be adjusted based on the reference face.
  • In some embodiments, the reference face is a face template selected by the terminal from a number of face templates as having high similarity to the target face, or a face template manually selected by the user from a number of face templates.
  • In some embodiments, the reference face is a virtual face constructed according to a standard facial contour and standard facial features proportions, or the face of a public figure.
  • In some embodiments, the terminal generates the adjustment parameters corresponding to the target face according to the face parameters of the target face and the face parameters of the reference face; the adjustment parameters include face shape adjustment parameters for adjusting the face shape of the target face, and/or facial features adjustment parameters for adjusting the facial features of the target face.
  • In some embodiments, the face shape adjustment parameters may include at least one of a face width scaling ratio, a V-face angle adjustment amount, a chin height adjustment amount, and a forehead height adjustment amount; the facial features adjustment parameters may include at least one of a facial features scaling ratio, a facial features position offset, and a facial features angle adjustment amount.
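A minimal sketch of step 402 is computing ratios/deltas between the reference and target parameters. The specific rule used here (reference/target for scaling ratios, reference−target for angle and height adjustments) and all parameter names are assumptions for illustration; the application does not fix a formula.

```python
# Hypothetical derivation of adjustment parameters from target and
# reference face parameters (parameter names and rules are assumptions).
def make_adjustment_params(target, reference):
    return {
        # face shape adjustment parameters
        "face_width_scale": reference["face_width_height_ratio"]
                            / target["face_width_height_ratio"],
        "v_face_angle_delta": reference["v_face_angle"] - target["v_face_angle"],
        "chin_height_delta": reference["chin_height"] - target["chin_height"],
        # facial features adjustment parameters
        "eye_scale": reference["eye_size"] / target["eye_size"],
        "eye_angle_delta": reference["eye_angle"] - target["eye_angle"],
    }

target = {"face_width_height_ratio": 0.80, "v_face_angle": 110.0,
          "chin_height": 0.22, "eye_size": 0.10, "eye_angle": 4.0}
reference = {"face_width_height_ratio": 0.75, "v_face_angle": 100.0,
             "chin_height": 0.20, "eye_size": 0.12, "eye_angle": 0.0}
params = make_adjustment_params(target, reference)
```

With these made-up numbers, a wider-than-reference face yields a width scale below 1, and a V-face angle 10° larger than the reference yields a −10° adjustment amount.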
  • A displacement vector is generated according to the face shape adjustment parameters and the facial features adjustment parameters, where the displacement vector represents the size change, position change, and angle change of the face shape and facial features of the target face during adjustment.
  • In the embodiment of the present application, the target face is not adjusted directly according to the adjustment parameters; instead, displacement vectors for the face shape and facial features of the target face are first generated based on the adjustment parameters, and the size, position, and angle of the face shape and facial features are then adjusted according to the displacement vectors.
  • the displacement vector is generated by the vertex shader according to the adjustment parameters.
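One plausible form of the displacement-vector generation described above, sketched on the CPU for clarity: turn a scaling parameter into a per-vertex (dx, dy) by scaling each grid vertex of a feature region about the region's center. The exact math is an assumption; the vertex shader performs the equivalent per-vertex work on the GPU.

```python
# Hypothetical per-vertex displacement for a scale adjustment about a center.
def displacement_vectors(vertices, center, scale):
    """(dx, dy) per vertex so that applying it scales the region about
    `center` by `scale`."""
    cx, cy = center
    return [((x - cx) * (scale - 1.0), (y - cy) * (scale - 1.0))
            for x, y in vertices]

def apply_displacement(vertices, vectors):
    return [(x + dx, y + dy) for (x, y), (dx, dy) in zip(vertices, vectors)]

# Example: enlarge an eye region's grid vertices by 20% (made-up coordinates).
eye_vertices = [(10.0, 10.0), (14.0, 10.0), (12.0, 12.0)]
center = (12.0, 10.0)
vecs = displacement_vectors(eye_vertices, center, 1.2)
moved = apply_displacement(eye_vertices, vecs)
```

Position and angle changes can be encoded the same way, as additive offsets and rotations folded into each vertex's displacement.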
  • Step 404 Adjust the face shape and facial features of the target face according to the displacement vector.
  • In some embodiments, the terminal adjusts the face shape and facial features of the target face simultaneously according to the displacement vector, or adjusts them in a predetermined adjustment order according to the displacement vector.
  • the terminal adjusts the target face through the vertex shader and the fragment shader according to the displacement vector.
  • In some embodiments, the adjustment methods used when adjusting the face shape include size adjustment and angle adjustment, where size adjustment includes adjusting the face width-to-height ratio, chin height, and forehead height of the target face, and angle adjustment includes adjusting the V-face angle of the target face.
  • In some embodiments, the adjustment methods used when adjusting the facial features include size adjustment, angle adjustment, and position adjustment, where size adjustment includes adjusting the facial features proportions, angle adjustment includes adjusting the tilt angle of the facial features, and position adjustment includes adjusting the position of the facial features on the face.
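The three adjustment types named above can be sketched together for one feature region: scale about the region center, rotate by an angle, then translate. The order of operations is an assumption for illustration.

```python
# Hypothetical combined size/angle/position adjustment of a feature region.
import math

def adjust_feature(points, center, scale=1.0, angle_deg=0.0, offset=(0.0, 0.0)):
    cx, cy = center
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    out = []
    for x, y in points:
        # size adjustment: scale about the center
        dx, dy = (x - cx) * scale, (y - cy) * scale
        # angle adjustment: rotate about the center
        rx, ry = dx * cos_a - dy * sin_a, dx * sin_a + dy * cos_a
        # position adjustment: translate
        out.append((cx + rx + offset[0], cy + ry + offset[1]))
    return out
```

For example, `scale=1.2` enlarges an eye, `angle_deg` tilts an eyebrow, and `offset` moves a mouth along the face height direction.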
  • The terminal generates the adjustment parameters corresponding to the target face by analyzing the difference between the target face and the reference face, and adjusts the target face toward the reference face based on the adjustment parameters, ensuring that the face shape and facial features proportions of the adjusted target face meet the standard, which improves the face beautification effect.
  • In some embodiments, during adjustment, the dynamic change process of the contour and facial features of the target face can be displayed on the display interface.
  • Step 405 Display the adjusted target face.
  • the terminal After completing the adjustment, the terminal displays the adjusted target face.
  • In some embodiments, the terminal simultaneously displays the target face before and after adjustment, so that the user can see the difference in face shape and facial features before and after adjustment.
  • In summary, in this embodiment, after obtaining the target face in the target image, the terminal generates the adjustment parameters corresponding to the target face according to a reference face whose face shape and facial features meet preset criteria.
  • In some embodiments, after acquiring the target image 501, the terminal performs face detection on the target image 501 to identify the target face 502; at the same time, the terminal displays a number of face templates 503 for the user to select from, and determines the reference face 504 based on the face template selected by the user.
  • According to the face parameters of the target face 502 and the face parameters of the reference face 504, the terminal generates adjustment parameters 507 for face beautification.
  • the terminal divides the target human face 502 to obtain several adjustment regions 508.
  • For each adjustment region 508, the terminal adjusts the positions of the vertices in the adjustment region 508 through the vertex shader 509 according to the corresponding adjustment parameters 507, redraws the pixels in the adjustment region 508 through the fragment shader 510, and finally obtains the adjusted target face 511.
  • the following uses an exemplary embodiment to describe the process of face beautification.
  • FIG. 6 shows a flowchart of a face beautification method provided by another embodiment of the present application.
  • This embodiment is exemplified by the method applied to the terminal 120 in FIG. 1.
  • the method may include the following steps:
  • Step 601 Acquire a target face contained in the target image.
  • Step 602 Display at least one face template.
  • In some embodiments, the face template includes a standard face template and/or a preset character template.
  • The standard face template corresponds to the face parameters of a standard face, and the preset character template corresponds to the face parameters of a preset character.
  • the terminal displays at least one face template for the user to select, and the face template may be a template built into the beauty application or a template downloaded from the server.
  • the beauty application running on the terminal provides a “one-key shaping” portal 71.
  • the portal 71 displays at least one face template option 72 for the user to select.
  • each face template corresponds to its own face parameters, and the face parameters include face parameters and facial features parameters.
  • When the face template is a standard face template, the face parameters corresponding to the face template include the face shape parameters and facial features parameters of a standard face (meeting standards such as the "three sections and five eyes" facial proportions); when the face template is a preset character template, the face parameters corresponding to the face template include the face shape parameters and facial features parameters of a preset character (such as a public figure or celebrity).
  • For the preset character template, in some embodiments, the server pre-locates the face key points in a face image of the preset character, then calculates the face shape parameters and facial features parameters of the preset character based on the located key points, thereby generating a preset character template containing these parameters.
  • the user may also upload a personal photo through the terminal, and the server generates a corresponding personal face template based on the personal photo and feeds it back to the terminal, which is not limited in this embodiment of the present application.
  • Step 603 When receiving the selection signal for the face template, determine the reference face according to the selected face template.
  • the terminal when receiving the selection signal for the face template 71 of “standard face”, the terminal determines the “standard face” as the reference face for beautifying the face.
  • Step 604 Identify the key points of the face of the target face.
  • the terminal performs face detection on the target image to identify the key points of the face on the target face.
  • In some embodiments, the terminal can perform face key point recognition using methods based on an Active Shape Model (ASM), an Active Appearance Model (AAM), Cascaded Pose Regression (CPR), or a deep learning model.
  • In some embodiments, the terminal can detect each face part based on the face key points. For example, each face part in the target face, such as the left eye, right eye, nose, left eyebrow, right eyebrow, chin, and mouth, can be detected to obtain the region where each part is located. For example, the region where the left eyebrow is located can be determined according to the 8 face key points of the left eyebrow, and the region where the nose is located can be determined according to the 14 face key points of the nose.
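Deriving "the region where a face part is located" from its key points can be as simple as a padded bounding box, sketched below. The eyebrow coordinates and the padding value are made up for illustration.

```python
# Hypothetical region extraction: axis-aligned bounding box of a part's
# key points, padded by a small margin.
def part_region(keypoints, pad=2.0):
    xs = [x for x, _ in keypoints]
    ys = [y for _, y in keypoints]
    return (min(xs) - pad, min(ys) - pad, max(xs) + pad, max(ys) + pad)

# 8 made-up left-eyebrow key points (as in the example above).
left_eyebrow = [(30, 40), (34, 38), (38, 37), (42, 37),
                (46, 38), (50, 39), (54, 41), (58, 43)]
region = part_region(left_eyebrow)  # (x_min, y_min, x_max, y_max)
```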
  • In some embodiments, the terminal recognizes 83 or 90 face key points of the target face. Schematically, the distribution of the face key points of the target face is shown in FIG. 8.
  • In some embodiments, when only a specified face in the image needs to be beautified, after locating the face key points, the terminal compares the obtained face key points with the key points of the specified face to calculate the face similarity between each face in the image and the specified face, and determines the face with the highest similarity as the face to beautify. The terminal subsequently beautifies only the specified face in the image, and not other faces.
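One simple way to score such key-point similarity, sketched below: normalize both key-point sets by their bounding-box size and convert the mean point-to-point distance into a similarity in (0, 1]. This particular metric is an assumption, not the application's method.

```python
# Hypothetical key-point similarity (scale-invariant via normalization).
import math

def normalize(points):
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    w = (max(xs) - min(xs)) or 1.0
    h = (max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in points]

def face_similarity(kp_a, kp_b):
    a, b = normalize(kp_a), normalize(kp_b)
    d = sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)
    return 1.0 / (1.0 + d)  # identical key points -> 1.0
```

The face whose key points give the highest score against the specified face would then be selected for beautification.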
  • Step 605 Determine the initial face parameters of the target face according to the key points of the face.
  • the initial face parameters include the initial face shape parameters and the initial facial features parameters.
  • In some embodiments, the terminal obtains the key point type corresponding to each face key point, determines the initial face shape parameters according to the face contour key points, and determines the initial facial features parameters according to the facial features key points.
  • In some embodiments, the face contour key points include at least one of cheek key points, chin key points, and forehead key points, and the facial features key points include at least one of eye key points, eyebrow key points, nose key points, and mouth key points.
  • In some embodiments, the initial face shape parameters determined by the terminal according to the face contour key points include at least one of an initial face width-to-height ratio, an initial chin height, and an initial V-face angle; the initial facial features parameters determined according to the facial features key points include at least one of initial facial features proportions, initial facial features positions, and initial facial features angles.
  • the above-mentioned initial face parameters may be expressed by a ratio (such as a quarter of the width of the face), or may also be expressed by a numerical value (such as 100px).
  • the terminal may directly determine the initial face parameters according to the identified face key points. When the initial face parameters and the reference face parameters need to be compared in the same coordinate space, the terminal crops the face area from the target image based on the face width (using a size larger than the face width, for example 3 times the face width), converts the face area and the reference face into the same coordinate space, and then determines the initial face parameters of the target face, so as to avoid adjustment deviations caused by different coordinate spaces.
  • initial face width-to-height ratio = face width / face height
  • the face width is the distance between the left face contour key point and the right face contour key point, and the face height is the distance between the upper face contour key point and the lower face contour key point.
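A minimal sketch of this ratio computation, assuming hypothetical key-point coordinates (the actual 83/90-point layout of the patent is not reproduced here):

```python
# Illustrative only: key-point positions are invented, not the patent's layout.
def face_aspect_ratio(left_cheek, right_cheek, top_contour, bottom_chin):
    """Initial face width-to-height ratio = face width / face height."""
    face_width = ((right_cheek[0] - left_cheek[0]) ** 2 +
                  (right_cheek[1] - left_cheek[1]) ** 2) ** 0.5
    face_height = ((bottom_chin[0] - top_contour[0]) ** 2 +
                   (bottom_chin[1] - top_contour[1]) ** 2) ** 0.5
    return face_width / face_height

# Example: a face 300px wide and 400px tall gives a ratio of 0.75.
ratio = face_aspect_ratio((100, 300), (400, 300), (250, 100), (250, 500))
```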
  • the initial V-face angle is the angle formed by the cheek tangent and the chin tangent.
  • the terminal determines the cheek tangent 81 according to the key point of the cheek, and determines the chin tangent 82 according to the key point of the chin, thereby determining the initial V-face angle according to the angle formed by the cheek tangent 81 and the chin tangent 82.
  • the initial facial feature position includes an initial lateral position and an initial longitudinal position, where the lateral position is the location of the feature in the face width direction and the longitudinal position is its location in the face height direction; both can be expressed as proportions (for example, at a quarter of the face height, or at half of the face width).
  • the initial facial features angle includes an initial eyebrow angle and an initial eye angle, where the initial eyebrow angle is the angle between the eyebrow line and the horizontal direction, and the initial eye angle is the angle between the eye line and the horizontal direction. As shown in FIG. 8, the initial eye angle of the left eye is the angle between the left eye line 83 and the horizontal direction.
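The angle between a feature line and the horizontal can be sketched as below; the point names and coordinates are assumptions for illustration, not the patent's definitions:

```python
import math

# Illustrative sketch: the angle between a feature line (e.g. the line
# through the two eye corners) and the horizontal direction.
def feature_angle_deg(inner_corner, outer_corner):
    dx = outer_corner[0] - inner_corner[0]
    dy = outer_corner[1] - inner_corner[1]
    return math.degrees(math.atan2(dy, dx))

# An eye line rising 10px over a 100px run spans roughly 5.7 degrees.
angle = feature_angle_deg((0, 0), (100, 10))
```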
  • Step 606 Generate an adjustment parameter according to the initial face parameter and the reference face parameter corresponding to the reference face.
  • the reference face parameter includes a reference face type parameter and a reference facial feature parameter.
  • since the target face subsequently needs to be adjusted based on the reference face, the terminal generates adjustment parameters characterizing the difference between the two according to the initial face parameters and the reference face parameters corresponding to the reference face, where the parameter types of the reference face parameters are the same as those of the initial face parameters.
  • the terminal may adopt the following manner when generating the adjustment parameter:
  • face width scaling ratio = reference face width-to-height ratio / initial face width-to-height ratio
  • the terminal subsequently keeps the face height unchanged and adjusts the face width of the target face according to the face width scaling ratio.
  • the terminal calculates that the face width scaling ratio is 1.03, that is, the face width needs to be enlarged by 1.03 times.
  • in this way, the width-to-height ratio of the target face approaches that of the reference face, avoiding an overly wide or overly narrow face after beautification.
  • the terminal determines the V-face angle adjustment amount in the adjustment parameters according to the initial V-face angle and the reference V-face angle, where the V-face angle is the angle formed by the cheek tangent and the chin tangent.
  • V-face angle adjustment amount = reference V-face angle - initial V-face angle
  • the terminal subsequently adjusts the cheek or chin of the target face according to the V-face angle adjustment amount, so that the V-face angle of the beautified face approaches that of the reference face.
  • chin height adjustment amount = reference chin height - initial chin height
  • the terminal subsequently adjusts the chin height of the target face according to the chin height adjustment amount, so that the chin height of the beautified face approaches that of the reference face.
  • alternatively, chin height adjustment amount = reference chin height / initial chin height
  • the subsequent terminal calculates the chin height offset according to the face height and the chin height adjustment amount.
  • the facial feature ratios include the ratio of facial feature height to face height and the ratio of facial feature width to face width.
  • facial feature scaling ratio = initial facial feature ratio / reference facial feature ratio, where, when the initial facial feature ratio is the initial feature height ratio, the reference facial feature ratio is the reference feature height ratio, and when the initial facial feature ratio is the initial feature width ratio, the reference facial feature ratio is the reference feature width ratio.
  • the terminal calculates that the eye width scaling factor is 1.1, that is, the eye width of the target face needs to be enlarged to 1.1 times.
  • in this way, the facial feature width and height ratios of the target face approach those of the reference face, achieving the effect of optimizing the facial feature proportions.
  • facial feature position offset = reference facial feature position - initial facial feature position, where the offset includes a longitudinal offset and a lateral offset; the terminal subsequently adjusts the longitudinal and lateral positions of the facial features on the face according to the offset, starting from the initial facial feature positions.
  • the facial feature angle adjustment amount in the adjustment parameters is determined according to the initial facial feature angle and the reference facial feature angle.
  • the terminal subsequently adjusts the facial feature angles of the target face according to the facial feature angle adjustment amount, so that the beautified facial feature angles approach those of the reference face.
  • the terminal calculates that the eye angle adjustment is 1°.
  • this embodiment only takes the generation of the above adjustment parameters as an example for description, and other parameters for adjusting the face shape and facial features can be used as adjustment parameters, which is not limited in this application.
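The adjustment parameters described in step 606 can be collected in one pass; the sketch below follows the formulas above, but the concrete parameter names and numeric values are invented for illustration:

```python
# A minimal sketch of step 606's parameter generation. The dictionary
# layout and example values are assumptions, not the patent's data model.
def build_adjustment_params(init, ref):
    return {
        # face width scaling ratio = reference aspect ratio / initial aspect ratio
        "width_scale": ref["aspect_ratio"] / init["aspect_ratio"],
        # V-face angle adjustment = reference angle - initial angle
        "v_angle_delta": ref["v_angle"] - init["v_angle"],
        # chin height adjustment = reference chin height - initial chin height
        "chin_delta": ref["chin_height"] - init["chin_height"],
        # feature position offset = reference position - initial position
        "eye_offset": (ref["eye_pos"][0] - init["eye_pos"][0],
                       ref["eye_pos"][1] - init["eye_pos"][1]),
    }

init = {"aspect_ratio": 0.75, "v_angle": 118.0, "chin_height": 0.20,
        "eye_pos": (0.25, 0.40)}
ref = {"aspect_ratio": 0.7725, "v_angle": 115.0, "chin_height": 0.22,
       "eye_pos": (0.25, 0.42)}
params = build_adjustment_params(init, ref)
```

With these example values the width scaling ratio comes out to 1.03, matching the enlargement example in the text above.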
  • after the adjustment parameters corresponding to the face shape and facial features are generated through the above steps, the terminal further adjusts the target face through the following steps 607 to 612.
  • Step 607 Divide the target image into a grid of a predetermined size.
  • the terminal adjusts the target face in units of grid cells. Therefore, before adjusting the target face, the terminal first divides the target image into a rectangular grid of a predetermined size; the smaller the grid cells, the finer the face adjustment and, accordingly, the better the adjustment effect.
  • the terminal divides the target image into 50 ⁇ 66 grids, that is, the width of each grid is 1/50 of the width of the target image, and the height of each grid is 1/66 of the height of the target image.
  • the terminal divides the target image 91 into several grids.
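The 50×66 grid division of step 607 can be sketched by enumerating the vertex lattice; representing the grid by its vertices in normalized coordinates is an assumption for illustration:

```python
# Illustrative sketch of step 607: a cols x rows grid of cells has
# (cols + 1) x (rows + 1) vertices, listed here in normalized [0, 1]
# image coordinates, row by row.
def grid_vertices(cols=50, rows=66):
    return [(x / cols, y / rows)
            for y in range(rows + 1)
            for x in range(cols + 1)]

verts = grid_vertices()  # 51 * 67 = 3417 vertices for a 50x66 grid
```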
  • Step 608 Divide the adjustment area according to the key point types of the key points of the face on the target face.
  • the key point types include the key points of the face contour and the key points of the facial features.
  • the adjustment areas include the contour adjustment area and the facial features adjustment area.
  • the terminal first divides the contour adjustment areas according to the face contour key points among the face key points, and divides the facial feature adjustment areas according to the facial feature key points. That is, each divided contour adjustment area contains face contour key points, and each divided facial feature adjustment area contains facial feature key points.
  • this step may include the following steps.
  • Step 608A Divide the face areas according to the key point types of the face key points; the face areas include contour areas and facial feature areas.
  • the terminal divides the contour areas according to the face contour key points, and divides the facial feature areas according to the facial feature key points.
  • the contour areas include the chin area, the cheek area, and the forehead area, and the facial feature areas include the eye area, the nose area, the eyebrow area, and the mouth area.
  • each divided face area is the smallest region containing the corresponding key points.
  • the eye area is the smallest area that contains all key points of the eye.
  • the divided contour area and facial features area are elliptical areas.
  • the terminal divides the oval eye area 92 according to the eye key points in the key points in the human face.
  • Step 608B Determine the adjustment areas corresponding to each face area.
  • the area of the adjustment area is larger than the area of the face area, and the face area is located inside the adjustment area.
  • the terminal determines the corresponding adjustment area on the basis of the face area.
  • the terminal stretches the boundary of the face area outward (by a predetermined stretching amount) to obtain the corresponding adjustment area.
  • the terminal stretches the boundary area of the eye area 92 to obtain an oval eye adjustment area 93.
  • the terminal may also use other methods to determine the adjustment area corresponding to the face area, which is not limited in this embodiment of the present application.
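Step 608B's stretching of an elliptical face area into its enclosing adjustment area can be sketched as follows; the 1.3 stretch factor and the center-plus-semi-axes representation are assumptions for illustration:

```python
# Illustrative sketch of step 608B: an elliptical face area (center plus
# semi-axes) is stretched outward to obtain the adjustment area that
# contains it. The 1.3 factor is an invented example value.
def expand_ellipse(center, semi_w, semi_h, stretch=1.3):
    return center, semi_w * stretch, semi_h * stretch

def inside_ellipse(pt, center, semi_w, semi_h):
    nx = (pt[0] - center[0]) / semi_w
    ny = (pt[1] - center[1]) / semi_h
    return nx * nx + ny * ny <= 1.0

eye_area = ((120.0, 80.0), 30.0, 15.0)   # e.g. eye area 92
adj_area = expand_ellipse(*eye_area)     # e.g. eye adjustment area 93
```

By construction, every point of the face area lies inside the larger adjustment area, matching the requirement that the face area is located inside the adjustment area.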
  • Step 609 For each adjustment region, according to the adjustment parameters corresponding to the adjustment region, a vertex shader is used to calculate the displacement vector of the mesh vertices in the adjustment region during each adjustment.
  • the process of adjusting the face to be optimized is the process of adjusting the coordinates of the grid vertices in each adjustment area.
  • the adjustment of the mesh vertices may need to be performed multiple times; for example, the mesh vertices in the eye adjustment area, the nose adjustment area, and the mouth adjustment area need to be adjusted separately. Therefore, in one possible implementation, each time the terminal completes a mesh vertex adjustment, it performs pixel rendering based on the adjusted mesh vertices to obtain an intermediate image, and uses the intermediate image as the input for the next mesh vertex adjustment.
  • in that implementation, the adjustment order of the adjustment areas affects the final adjustment effect. For example, if there is an intersection between the eye adjustment area and the eyebrow adjustment area in the target image and mesh vertices lie in the intersection, adjusting the eye adjustment area first may move those vertices out of the eyebrow adjustment area, so that when the eyebrow adjustment area is adjusted later, the mesh vertices originally in the intersection can no longer be adjusted.
  • for each adjustment, the vertex shader calculates the displacement vector of each mesh vertex in the adjustment area (that is, the offset information of the mesh vertices) and stores the displacement vector in the output of the fragment shader, as the input for the next mesh vertex coordinate adjustment.
  • in this way, each time the terminal adjusts the mesh vertices through the vertex shader, it does not directly render a full-resolution intermediate image, which reduces the amount of rendering per adjustment and improves rendering efficiency; moreover, since the vertex shader does not directly move the mesh vertices according to the displacement vector, the adjustment order of the adjustment areas does not affect the final adjustment effect.
  • each time the vertices of the 50×66 grid are adjusted through the vertex shader, the vertex shader stores the displacement vectors of the mesh vertices via the fragment shader and uses the original mesh vertex coordinates as the input for the next adjustment.
  • Step 610 Generate a displacement vector map according to the displacement vectors calculated during each adjustment, where the displacement vector map indicates the position changes of the mesh vertices over the multiple adjustments.
  • after the last mesh vertex coordinate adjustment is completed, the terminal accumulates the displacement vectors obtained from the previous mesh vertex coordinate adjustments to generate the displacement vector map of the mesh vertices.
  • the terminal accumulates the displacement vectors obtained from the first through nth mesh vertex adjustments to generate the displacement vector map of the vertices in the 50×66 grid.
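The per-pass accumulation described above can be sketched as a summation over adjustment passes; the dictionary-based data layout is an assumption for illustration:

```python
# Illustrative sketch of step 610: displacement vectors from each
# adjustment pass are summed per mesh vertex into one displacement map.
def accumulate_displacements(passes):
    """passes: list of {vertex_index: (dx, dy)} dicts, one per adjustment."""
    disp_map = {}
    for vectors in passes:
        for idx, (dx, dy) in vectors.items():
            ox, oy = disp_map.get(idx, (0.0, 0.0))
            disp_map[idx] = (ox + dx, oy + dy)
    return disp_map

# Vertex 7 is moved left by the eye pass and down by the eyebrow pass.
disp = accumulate_displacements([{7: (-10.0, 0.0)},
                                 {7: (0.0, 4.0), 8: (2.0, 0.0)}])
```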
  • Step 611 Adjust the coordinates of the mesh vertices through the vertex shader according to the original coordinates of the mesh vertices in the adjustment area and the displacement distance and direction indicated by the displacement vector map.
  • the terminal adjusts the coordinates of the vertices of each mesh through a vertex shader.
  • the subsequent terminal redraws the pixel points in the adjusted grid by the fragment shader to complete the beautification of the facial features.
  • the terminal obtains the displacement vector corresponding to each mesh vertex from the displacement vector map, thereby determining the displacement direction and distance for that vertex, and then adjusts the vertex coordinates according to that direction and distance, starting from the vertex's original coordinates.
  • the terminal first determines the mesh vertices (open circles in FIG. 9) in the eye adjustment area 93, and then determines from the displacement vector map that the mesh vertices in the left part of the eye adjustment area 93 are displaced leftward by 10px and the mesh vertices in the right part are displaced rightward by 10px; accordingly, the abscissas of the mesh vertices in the left part are reduced by 10px and the abscissas of those in the right part are increased by 10px.
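The 10px eye-widening example above amounts to adding each vertex's accumulated displacement to its original coordinates; the coordinates below are invented for illustration:

```python
# Illustrative sketch of step 611: final vertex coordinates are the
# original coordinates plus the displacement from the displacement map.
def apply_displacement(original_coords, disp_map):
    return {idx: (x + disp_map.get(idx, (0.0, 0.0))[0],
                  y + disp_map.get(idx, (0.0, 0.0))[1])
            for idx, (x, y) in original_coords.items()}

coords = {0: (100.0, 80.0), 1: (140.0, 80.0)}       # left / right eye vertices
disp_map = {0: (-10.0, 0.0), 1: (10.0, 0.0)}        # left moves left, right moves right
adjusted = apply_displacement(coords, disp_map)     # widens the eye region by 20px
```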
  • Step 612 Draw each pixel in the adjusted grid through the fragment shader.
  • after adjusting the mesh vertices in each adjustment area through the above step 611, the terminal further redraws each pixel in the adjusted grid through the fragment shader.
  • when the image rendering pipeline is applied to a deformation scene, the intuitive effect is simply moving pixel positions.
  • the method can be: first divide the image into multiple mesh areas, then pass the loaded vertex array of the mesh areas as input to the vertex shader, and modify the mesh vertex positions in the vertex shader according to the desired deformation.
  • the vertex shader is responsible for determining the position of the vertex.
  • the coordinates of the mesh area vertices can be adjusted according to the deformation rules (that is, the deformation parameters of each deformation unit), and the fragment shader is responsible for shading each pixel: the pixels inside each mesh area are interpolated according to the vertex coordinates, and finally the modified mesh vertex coordinates are converted into the screen coordinate system (that is, projected onto the screen for display).
  • since the shape of the mesh changes after the mesh vertices are adjusted, the fragment shader deforms the original pixels in each mesh cell through an interpolation algorithm to achieve the effect of face shape and facial feature deformation.
  • the interpolation process is automatically completed by Open Graphics Library (OpenGL), and can use nearest neighbor interpolation, bilinear interpolation, pixel area relationship resampling, bicubic interpolation or other interpolation methods.
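OpenGL performs this interpolation automatically, but the bilinear case can be written out by hand; sampling a 2×2 patch at fractional coordinates is an invented setup for illustration, not the patent's exact pipeline:

```python
# Illustrative sketch of bilinear interpolation, one of the interpolation
# methods mentioned above. img is a small grayscale patch; (x, y) is a
# fractional sample position inside it.
def bilinear(img, x, y):
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x0 + 1] * fx
    bot = img[y0 + 1][x0] * (1 - fx) + img[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bot * fy

img = [[0.0, 100.0],
       [100.0, 200.0]]
center = bilinear(img, 0.5, 0.5)  # 100.0, the average of the four corners
```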
  • Step 613 Display the adjusted target face.
  • for the implementation of this step, reference may be made to step 405 above, which is not repeated here.
  • the terminal provides multiple face templates for the user to select, and uses the face template selected by the user as the reference face to adjust the target face so that the adjusted face conforms to the user's aesthetic preferences.
  • the terminal determines the initial face parameters of the target face on the basis of the recognized face key points, so as to subsequently generate adjustment parameters from the initial face parameters and the reference face parameters, providing quantitative indicators for the subsequent face adjustment and improving the accuracy of face beautification.
  • the terminal divides the target image into several grid cells and divides different adjustment areas, so that the vertex shader and fragment shader can adjust and redraw the mesh vertices and pixels in the adjustment areas, improving the beautification precision and making the beautification effect more natural.
  • the terminal adjusts the mesh vertex coordinates using a displacement vector map, which reduces the amount of rendering and improves rendering efficiency while avoiding any impact of the adjustment order on the final adjustment effect.
  • step 609 may further include the following steps.
  • Step 609A Calculate, through the vertex shader, the displacement vectors of the mesh vertices in the face area according to the adjustment parameters corresponding to the adjustment area and the first adjustment factor.
  • different sub-regions in the adjustment region correspond to different adjustment factors, and the adjustment factor has a positive correlation with the adjustment amplitude of the grid vertex coordinates.
  • the eye area 92 corresponds to the first adjustment factor
  • the area other than the eye area 92 corresponds to the second adjustment factor
  • the first One adjustment factor is greater than the second adjustment factor
  • for example, the first adjustment factor corresponding to the face area within the adjustment area is 1, and the second adjustment factor corresponding to the part of the adjustment area outside the face area is a fixed value less than 1, such as 0.5; alternatively, the second adjustment factor transitions gradually from 1 to 0.
  • the terminal calculates the actual adjustment parameters according to the adjustment parameters and the first adjustment factor, so as to adjust the coordinates of the mesh vertices in the face area according to the actual adjustment parameters.
  • actual adjustment parameter = adjustment parameter × first adjustment factor.
  • x2 = (x1 - center_x) * a * mask1 * adjustValue
  • y2 = (y1 - center_y) * a * mask1 * adjustValue
  • a represents a constant coefficient, a can be set to 1.3, or its value can be flexibly set according to actual needs;
  • mask1 is the first adjustment factor;
  • adjustValue represents the facial adjustment parameters, mask1*adjustValue is the actual adjustment parameters;
  • Step 609B According to the adjustment parameters corresponding to the adjustment area and the second adjustment factor, calculate the displacement vector of the mesh vertices outside the face area in the adjustment area through the vertex shader.
  • the vertex shader For the adjustment area outside the face area, the vertex shader adjusts the mesh vertices outside the face area according to the second adjustment factor and the corresponding adjustment parameters.
  • the adjustment range of the coordinates of the mesh vertices in the face area is greater than the adjustment range of the coordinates of the mesh vertices outside the face area.
  • the terminal calculates the actual adjustment parameter according to the adjustment parameter and the second adjustment factor, so as to adjust the coordinates of the mesh vertex outside the face area according to the actual adjustment parameter.
  • actual adjustment parameter = adjustment parameter × second adjustment factor.
  • x2 = (x1 - center_x) * a * mask2 * adjustValue
  • y2 = (y1 - center_y) * a * mask2 * adjustValue
  • a represents a constant coefficient, a can be set to 1.3, or its value can be flexibly set according to actual needs;
  • mask2 is the second adjustment factor;
  • adjustValue represents the face adjustment parameter, and mask2*adjustValue is the actual adjustment parameter;
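The mask-weighted displacement of steps 609A/609B can be sketched directly from the formulas above; a = 1.3 comes from the text, while the vertex coordinates, center, and adjustValue are invented example values:

```python
# Illustrative sketch of the mask-weighted vertex displacement:
#   x2 = (x1 - center_x) * a * mask * adjustValue
#   y2 = (y1 - center_y) * a * mask * adjustValue
# mask is the first adjustment factor (1) inside the face area and a
# smaller second factor (here 0.5) outside it.
def displacement(x1, y1, center, mask, adjust_value, a=1.3):
    dx = (x1 - center[0]) * a * mask * adjust_value
    dy = (y1 - center[1]) * a * mask * adjust_value
    return dx, dy

inner = displacement(110.0, 80.0, (100.0, 80.0), mask=1.0, adjust_value=0.2)
outer = displacement(110.0, 80.0, (100.0, 80.0), mask=0.5, adjust_value=0.2)
# The inner vertex is displaced twice as far as the outer one, so the
# adjustment fades out toward the boundary of the adjustment area.
```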
  • in this way, the adjustment effect at the boundary of the adjustment area is more natural, thereby improving the face beautification effect.
  • FIG. 15 shows a block diagram of a face beautification device provided by an embodiment of the present application.
  • the device may be the terminal 120 in the implementation environment shown in FIG. 1 or may be provided on the terminal 120.
  • the device may include:
  • the obtaining module 1501 is used to obtain the target face contained in the target image
  • the first generation module 1502 is configured to generate adjustment parameters corresponding to the target face according to the target face and the reference face, and the adjustment parameters include face shape adjustment parameters and facial features adjustment parameters;
  • the second generation module 1503 is configured to generate a displacement vector according to the face shape adjustment parameter and the facial features adjustment parameter, and the displacement vector is used to represent the size change, position change, and angle of the face shape and facial features of the target face during the adjustment process Changes;
  • An adjustment module 1504 configured to adjust the face shape and facial features of the target human face according to the displacement vector
  • the display module 1505 is configured to display the adjusted target face.
  • the first generating module 1502 includes:
  • a recognition unit for recognizing key points of the face of the target face
  • a determining unit configured to determine initial face parameters of the target face according to the key points of the face, and the initial face parameters include initial face type parameters and initial facial features parameters;
  • the first generating unit is configured to generate the adjustment parameter according to the initial face parameter and the reference face parameter corresponding to the reference face, and the reference face parameter includes a reference face type parameter and a reference facial feature parameter.
  • the first generating unit is used to:
  • the scaling ratio of the face width in the adjustment parameter is determined according to the initial face width-height ratio in the initial face shape parameter and the reference face width-height ratio in the reference face shape parameter.
  • the V face angle adjustment amount in the adjustment parameters is determined according to the initial V face angle in the initial face shape parameter and the reference V face angle in the reference face shape parameter, where the V face angle is defined by the cheek tangent and chin tangent Formed angle
  • the first generating unit is used to:
  • determine the facial feature scaling ratio in the adjustment parameters according to the initial facial feature ratio and the reference facial feature ratio, where the facial feature ratios include the ratio of facial feature height to face height and the ratio of facial feature width to face width;
  • the adjustment amount of the facial feature angle in the adjustment parameter is determined according to the initial facial feature angle in the initial facial feature parameter and the reference facial feature angle in the reference facial feature parameter.
  • the device further includes:
  • a first dividing module configured to divide the target image into a grid of a predetermined size
  • the second dividing module is used to divide the adjustment area according to the key point types of the face key points on the target face, the key point types include face contour key points and face facial features key points, and the adjustment area includes Contour adjustment area and facial features adjustment area;
  • the second generation module 1503 includes:
  • a calculation unit for each adjustment region, according to the adjustment parameters corresponding to the adjustment region, the vertex shader is used to calculate the displacement vector of the vertices of the mesh in the adjustment region for each adjustment;
  • the second generating unit is configured to generate a displacement vector map according to the displacement vector calculated during each adjustment, and the displacement vector map is used to indicate a change in the position of the mesh vertex during multiple adjustments.
  • the second dividing module is used to:
  • the face area includes a contour area and a facial features area
  • determine the adjustment areas corresponding to the respective face areas, where the area of each adjustment area is larger than that of the corresponding face area, and the face area is located inside the adjustment area.
  • the adjustment module 1504 includes:
  • the first adjusting unit is configured to adjust the coordinates of the mesh vertices through the vertex shader according to the original coordinates of the mesh vertices in the adjustment area and the displacement distance and displacement direction indicated by the displacement vector diagram;
  • the second adjustment unit is used to draw each pixel in the adjusted grid through the fragment shader.
  • the calculation unit is used to:
  • calculate, through the vertex shader, the displacement vectors of the mesh vertices in the face area according to the adjustment parameters and the first adjustment factor, and the displacement vectors of the mesh vertices outside the face area in the adjustment area according to the adjustment parameters and the second adjustment factor, where the first adjustment factor is greater than the second adjustment factor, and the adjustment amplitude of the mesh vertex coordinates in the face area is greater than that of the mesh vertex coordinates outside the face area.
  • the device further includes:
  • a template display module is used to display at least one face template, where the face template includes a standard face template and/or a preset character template; the standard face template corresponds to the face parameters of a standard face, and the preset character template corresponds to the face parameters of a preset character;
  • the reference face determination module is configured to determine the reference face according to the selected face template when receiving the selection signal for the face template.
  • in the embodiments of the present application, after the target face in the target image is obtained, adjustment parameters corresponding to the target face are generated according to a reference face whose face shape and facial features meet preset criteria; the size, angle, and/or position of the face shape and facial features of the target face are adjusted, and the adjusted target face is then displayed. The embodiments of the present application take the differences between faces into account and formulate an adjustment strategy matching the characteristics of the target face based on the difference between the target face and the reference face, realizing targeted formulation of the beautification strategy and thereby improving the face beautification effect.
  • FIG. 16 shows a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • the terminal can be implemented as the terminal 120 in the implementation environment shown in FIG. 1 to implement the face beautification method provided in the foregoing embodiment. Specifically:
  • the terminal includes: a processor 1601 and a memory 1602.
  • the processor 1601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
  • the processor 1601 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
  • the processor 1601 may also include a main processor and a coprocessor.
  • the main processor is a processor for processing data in the wake-up state, also known as a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state.
  • the processor 1601 may be integrated with a GPU (Graphics Processing Unit, image processor), and the GPU is used to render and draw content that needs to be displayed on the display screen.
  • the processor 1601 may also include an AI (Artificial Intelligence) processor, which is used to process computing operations related to machine learning.
  • the memory 1602 may include one or more computer-readable storage media, which may be tangible and non-transitory.
  • the memory 1602 may also include high-speed random access memory, and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
  • the non-transitory computer-readable storage medium in the memory 1602 is used to store at least one instruction for execution by the processor 1601 to implement the face beautification method provided in the present application.
  • the computer device may optionally further include: a peripheral device interface 1603 and at least one peripheral device.
  • the peripheral device includes at least one of a radio frequency circuit 1604, a touch display screen 1605, a camera 1606, an audio circuit 1607, a positioning component 1608, and a power supply 1609.
  • the peripheral device interface 1603 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1601 and the memory 1602.
  • in some embodiments, the processor 1601, the memory 1602, and the peripheral device interface 1603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1601, the memory 1602, and the peripheral device interface 1603 can be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the radio frequency circuit 1604 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 1604 communicates with a communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 1604 converts the electrical signal into an electromagnetic signal for transmission, or converts the received electromagnetic signal into an electrical signal.
  • the radio frequency circuit 1604 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on.
  • the radio frequency circuit 1604 can communicate with other computer devices through at least one wireless communication protocol.
  • the wireless communication protocol includes but is not limited to: World Wide Web, Metropolitan Area Network, Intranet, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks.
  • the radio frequency circuit 1604 may further include circuits related to NFC (Near Field Communication), which is not limited in this application.
  • the touch display screen 1605 is used to display a UI (User Interface).
  • the UI may include graphics, text, icons, video, and any combination thereof.
  • the touch display 1605 also has the ability to collect touch signals on or above the surface of the touch display 1605.
  • the touch signal can be input to the processor 1601 as a control signal for processing.
  • the touch display screen 1605 is used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
  • in some embodiments, there may be one touch display screen 1605, provided on the front panel of the computer device; in other embodiments, there may be at least two touch display screens 1605, respectively provided on different surfaces of the computer device or in a folded design.
  • in still other embodiments, the touch display screen 1605 may be a flexible display screen provided on a curved or folding surface of the computer device. The touch display screen 1605 may even be set as a non-rectangular irregular shape, that is, a special-shaped screen.
  • the touch display screen 1605 may be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
  • the camera assembly 1606 is used to collect images or videos.
  • the camera assembly 1606 includes a front camera and a rear camera.
  • the front camera is used for video calls or selfies.
  • the rear camera is used for taking photos or videos.
  • the camera assembly 1606 may also include a flash.
  • the flash can be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
  • the audio circuit 1607 is used to provide an audio interface between the user and the terminal.
  • the audio circuit 1607 may include a microphone and a speaker.
  • the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals and input them to the processor 1601 for processing, or input them to the radio frequency circuit 1604 to implement voice communication.
  • the microphone can also be an array microphone or an omnidirectional acquisition microphone.
  • the speaker is used to convert the electrical signal from the processor 1601 or the radio frequency circuit 1604 into sound waves.
  • the speaker can be a traditional thin-film speaker or a piezoelectric ceramic speaker.
  • when the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans, but also into sound waves inaudible to humans for purposes such as distance measurement.
  • the audio circuit 1607 may further include a headphone jack.
  • the positioning component 1608 is used to locate the current geographic location of the terminal to implement navigation or LBS (Location Based Service, location-based service).
  • the positioning component 1608 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
  • the power supply 1609 is used to supply power to various components in the terminal.
  • the power supply 1609 may be alternating current, direct current, disposable batteries, or rechargeable batteries.
  • the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery.
  • the wired rechargeable battery is a battery charged through a wired line
  • the wireless rechargeable battery is a battery charged through a wireless coil.
  • the rechargeable battery can also be used to support fast charging technology.
  • the terminal further includes one or more sensors 1610.
  • the one or more sensors 1610 include, but are not limited to: an acceleration sensor 1611, a gyro sensor 1612, a pressure sensor 1613, a fingerprint sensor 1614, an optical sensor 1615, and a proximity sensor 1616.
  • FIG. 16 does not constitute a limitation on the terminal, and may include more or fewer components than those illustrated, or combine certain components, or adopt different component arrangements.
  • Embodiments of the present application also provide a computer-readable storage medium in which computer-readable instructions are stored, the computer-readable instructions being executed by a processor to implement the face beautification method provided by the foregoing embodiments.
  • the present application also provides a computer program product containing instructions, which when executed on a computer, causes the computer to execute the face beautification method described in the above embodiments.
  • a person of ordinary skill in the art may understand that all or part of the steps in the method of the above embodiments may be completed by hardware, or may be completed by a program instructing related hardware.
  • the program may be stored in a computer-readable storage medium.
  • the storage medium mentioned above may be a read-only memory, a magnetic disk or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

A face beautification method, comprising: acquiring a target face contained in a target image; generating, according to the target face and a reference face, adjustment parameters corresponding to the target face, the adjustment parameters comprising face shape adjustment parameters and facial feature adjustment parameters; generating displacement vectors according to the face shape adjustment parameters and the facial feature adjustment parameters, the displacement vectors being used to represent size changes, position changes, and angle changes of the face shape and facial features of the target face during adjustment; adjusting the face shape and facial features of the target face according to the displacement vectors; and displaying the adjusted target face.

Description

Face beautification method and apparatus, computer device, and storage medium
This application claims priority to Chinese Patent Application No. 201811453182.2, entitled "Face beautification method and apparatus, terminal, and storage medium", filed with the China National Intellectual Property Administration on November 30, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of this application relate to the field of image processing, and in particular, to a face beautification method and apparatus, a computer device, and a storage medium.
Background
A beauty application is an application that beautifies faces in an image. A user may use a beauty application to beautify faces in real time during shooting, or to beautify photos that have already been taken.
In the related art, when using a beauty application, a user may adjust the face shape and facial features of a face in an image through manual push-pull operations and the like. For example, the user may slim the face by pushing and pulling the facial contour in the image, or locally enlarge or shrink a facial feature region by pushing and pulling the facial features in the image.
However, when a face is adjusted by manual push-pull, the user cannot precisely control the adjustment scale, and therefore needs to repeat the push-pull adjustment many times, resulting in low face beautification efficiency and poor beautification effects.
Summary
According to various embodiments provided in this application, a face beautification method and apparatus, a computer device, and a storage medium are provided.
In one aspect, an embodiment of this application provides a face beautification method, performed by a computer device, the method including:
acquiring a target face contained in a target image;
generating, according to the target face and a reference face, adjustment parameters corresponding to the target face, the adjustment parameters including face shape adjustment parameters and facial feature adjustment parameters;
generating displacement vectors according to the face shape adjustment parameters and the facial feature adjustment parameters, the displacement vectors being used to represent size changes, position changes, and angle changes of the face shape and facial features of the target face during adjustment;
adjusting the face shape and facial features of the target face according to the displacement vectors; and
displaying the adjusted target face.
In another aspect, an embodiment of this application provides a face beautification apparatus, the apparatus including:
an acquisition module, configured to acquire a target face contained in a target image;
a first generation module, configured to generate, according to the target face and a reference face, adjustment parameters corresponding to the target face, the adjustment parameters including face shape adjustment parameters and facial feature adjustment parameters;
a second generation module, configured to generate displacement vectors according to the face shape adjustment parameters and the facial feature adjustment parameters, the displacement vectors being used to represent size changes, position changes, and angle changes of the face shape and facial features of the target face during adjustment;
an adjustment module, configured to adjust the face shape and facial features of the target face according to the displacement vectors; and
a display module, configured to display the adjusted target face.
In another aspect, a computer device is provided, including a processor and a memory, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the face beautification method.
In another aspect, a non-volatile computer-readable storage medium is provided, storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the face beautification method.
In another aspect, this application further provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the face beautification method described in the above aspects.
Details of one or more embodiments of this application are set forth in the accompanying drawings and the description below. Other features, objectives, and advantages of this application will become apparent from the specification, the accompanying drawings, and the claims.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show only some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a schematic diagram of an implementation environment according to an embodiment of this application;
FIG. 2 is a schematic diagram of a face beautification process in a photo-retouching scenario according to an embodiment of this application;
FIG. 3 is a schematic diagram of a face beautification process in a shooting scenario according to an embodiment of this application;
FIG. 4 is a flowchart of a face beautification method according to an embodiment of this application;
FIG. 5 is a schematic principle diagram of a face beautification method according to an embodiment of this application;
FIG. 6 is a flowchart of a face beautification method according to another embodiment of this application;
FIG. 7 is a schematic interface diagram of a process of displaying and selecting face templates according to an embodiment of this application;
FIG. 8 is a schematic diagram of facial keypoints on a target face according to an embodiment of this application;
FIG. 9 is a schematic diagram of a process of adjusting mesh vertices and redrawing pixels according to an embodiment of this application;
FIG. 10 is a flowchart of a face beautification method according to another embodiment of this application;
FIG. 11 is a schematic principle diagram of a process of adjusting mesh vertices and redrawing pixels according to an embodiment of this application;
FIG. 12 is a schematic principle diagram of a process of adjusting mesh vertices and redrawing pixels according to another embodiment of this application;
FIG. 13 is a schematic diagram of a rendering process of an image rendering pipeline according to an embodiment of this application;
FIG. 14 is a flowchart of a face beautification method according to another embodiment of this application;
FIG. 15 is a block diagram of a face beautification apparatus according to an embodiment of this application;
FIG. 16 is a schematic structural diagram of a computer device implemented as a terminal according to an embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, the following further describes implementations of this application in detail with reference to the accompanying drawings.
For ease of understanding, terms involved in the embodiments of this application are explained below.
Facial keypoints: points used to locate feature points on a face, where the feature points may be facial feature points (eyebrows, eyes, mouth, nose, ears) and facial contour keypoints. Generally, the facial keypoints are output by a facial keypoint detection model after a face image is input into the model. By precision, facial keypoint sets may be divided into 5-point, 68-point, 83-point, and 90-point sets.
Shader: a program in a programmable rendering pipeline. By function, shaders include vertex shaders and fragment shaders (also called pixel shaders). A vertex shader processes vertex data, that is, determines the positions of graphic vertices; a fragment shader processes pixel data, that is, renders and colors each pixel in a graphic. During graphic rendering, the vertex shader usually renders the graphic vertices first, and the fragment shader then renders the pixels inside the vertex triangles (each formed by connecting three vertices).
Face template: a template describing facial characteristics, containing face shape data and facial feature data. Optionally, the face shape data includes at least one of the following: overall face shape, forehead height, forehead width, chin height, and chin width; the facial feature data includes at least one of the following: eye size, eye spacing, vertical eye position, horizontal eye position, nose size, nose height, vertical nose position, nostril-wing size, nose-tip size, mouth size, horizontal mouth position, lip thickness, vertical eyebrow position, and eyebrow spacing.
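The face template described above can be sketched as a small data structure. The following is an illustrative sketch only: the field names and the "standard" values are assumptions for demonstration, not values taken from the patent.

```python
from dataclasses import dataclass

# Hypothetical container for a face template; field names are illustrative.
# Ratios are expressed relative to face width or face height.
@dataclass
class FaceTemplate:
    face_aspect_ratio: float  # face width / face height
    v_face_angle: float       # angle between cheek and chin tangents, degrees
    chin_height: float        # chin height as a fraction of face height
    eye_width_ratio: float    # eye width as a fraction of face width
    eye_spacing_ratio: float  # pupil distance as a fraction of face width

# An assumed "standard" reference template, using the three-section/five-eye
# canon mentioned later in the description (eye width = 1/5 of face width).
STANDARD = FaceTemplate(
    face_aspect_ratio=0.618,
    v_face_angle=100.0,
    chin_height=0.2,
    eye_width_ratio=1 / 5,
    eye_spacing_ratio=1 / 5,
)

print(STANDARD.eye_width_ratio)  # 0.2
```

A preset-character template would be another `FaceTemplate` instance whose values are measured from that character's facial keypoints.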
Refer to FIG. 1, which is a schematic diagram of an implementation environment according to an embodiment of this application. The implementation environment includes a terminal 120 and a server 140.
The terminal 120 is an electronic device installed with a beauty application, and the electronic device may be a smartphone, a tablet computer, a personal computer, or the like. In FIG. 1, the terminal 120 being a smartphone is used as an example for description.
The beauty application may be an image processing application with a beauty function, used to beautify faces in taken photos or downloaded images; or a camera application with a beauty function, used to beautify faces contained in images currently captured by the terminal 120; or a live-streaming application with a beauty function, which beautifies faces in captured images and then pushes the local video stream data through a live-streaming server to other clients watching the stream; or a short-video application with a beauty function, which beautifies faces while shooting a short video and publishes the short video to a short-video platform for other users to watch. The embodiments of this application do not limit the specific type of the beauty application.
The terminal 120 is connected to the server 140 through a wired or wireless network.
The server 140 is one server, a server cluster composed of several servers, or a cloud computing center. In the embodiments of this application, the server 140 is the backend server of the beauty application in the terminal 120.
In a possible implementation, as shown in FIG. 1, a face template library 141 of the server 140 stores several face templates. When the beauty application needs to update local face templates or obtain new face templates, the terminal 120 sends a face template acquisition request to the server 140 and receives face template data fed back by the server 140. The beauty application can then perform face beautification according to the face templates, which may be templates made from celebrity faces.
In a possible application scenario, when the face beautification method provided in the embodiments of this application is applied to the terminal 120, the terminal 120 beautifies faces in images locally through the beauty application and displays the beautified images; when the method is applied to the server 140, the terminal 120 uploads images to the server 140, and after the server 140 beautifies the faces in the images, the server 140 sends the beautified images to the terminal 120 for display.
Optionally, the wireless or wired network uses standard communication technologies and/or protocols. The network is usually the Internet, but may be any network, including but not limited to any combination of a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), a mobile, wired, or wireless network, a private network, or a virtual private network. In some embodiments, technologies and/or formats such as Hypertext Markup Language (HTML) and Extensible Markup Language (XML) are used to represent data exchanged over the network. In addition, conventional encryption technologies such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN), and Internet Protocol Security (IPsec) may be used to encrypt all or some links. In other embodiments, custom and/or dedicated data communication technologies may also be used in place of or in addition to the above data communication technologies.
For ease of description, the following embodiments are described by using an example in which the face beautification method is applied to the terminal 120 in FIG. 1.
The face beautification method provided in the embodiments of this application can be used to beautify the face shape and facial features of a face, and its applicable scenarios include a photo-retouching scenario and a shooting scenario. The face beautification method in different application scenarios is described below.
Photo-retouching scenario
When the face beautification method is applied to a photo-retouching scenario, the method may be implemented as an image processing application installed and running in a terminal. The image processing application provides a "one-tap reshaping" function entry and several reference face templates for the user to choose from.
As shown in FIG. 2, after the user taps the application icon 21 corresponding to the image processing application, the terminal starts the image processing application. Before retouching, the user first selects a to-be-beautified image 22 from the album. After the image is selected, the image processing application provides several beauty function entries for the user to choose from. After the user taps the "one-tap reshaping" function entry 23, the image processing application further displays several reference face templates for the user to choose from. After the user selects the "standard" reference face template 24, the image processing application beautifies the face in the to-be-beautified image according to the face parameters corresponding to the reference face template 24 and the face parameters of the face in the to-be-beautified image, so that the contour and facial feature proportions of the beautified face approach those of the reference face template, and displays beautification prompt information 25 during the beautification. After the beautification is completed, the image processing application displays the beautified image 26 and provides a save entry for the user to save it.
Shooting scenario
When the face beautification method is applied to a shooting scenario, the method may be implemented as a camera application installed and running in a terminal. The camera application provides a "one-tap reshaping" function entry and several reference face templates for the user to choose from.
As shown in FIG. 3, after the user taps the application icon 31 corresponding to the camera application, the terminal starts the camera application. Before shooting, the user first selects the "one-tap reshaping" function entry 32 from the several beauty function entries provided by the application, and then further selects the "standard" reference face template 33 from the several reference face templates displayed. The camera application beautifies the face in the viewfinder frame according to the face parameters corresponding to the reference face template 33 and the face parameters of the face in the viewfinder frame, and displays the beautified face image 34 in the viewfinder frame in real time. The user can then complete shooting by tapping the shooting control 34.
Besides the above scenarios, the face beautification method may also be used in other scenarios involving the beautification of faces in images; the embodiments of this application do not limit the specific application scenario.
Refer to FIG. 4, which is a flowchart of a face beautification method according to an embodiment of this application. This embodiment is described by using an example in which the method is applied to the terminal 120 in FIG. 1. The method may include the following steps.
Step 401: Acquire a target face contained in a target image.
Optionally, the target image is a picture, or the target image is an image displayed in a viewfinder frame.
For example, when applied to an image processing application installed in the terminal, the target image is an imported picture; when applied to a camera application installed in the terminal, the target image is a real-time image displayed in the viewfinder frame.
Optionally, after acquiring the target image, the terminal performs face detection on the target image. If a face is detected in the target image, the terminal extracts the face in the target image as the target face and performs step 402; if no face is detected in the target image, the terminal displays prompt information indicating that no face can be detected.
Optionally, the target face contained in the target image may be a frontal face or a face at any angle (that is, a profile face).
Step 402: Generate, according to the target face and a reference face, adjustment parameters corresponding to the target face, the adjustment parameters including face shape adjustment parameters and facial feature adjustment parameters.
The adjustment parameters are used to instruct adjustment of the target face based on the reference face.
Optionally, the reference face is a face template selected by the terminal from several face templates as having a relatively high similarity with the target face, or the reference face is a face template manually selected by the user from several face templates.
The reference face is a virtual face constructed according to a standard facial contour and standard facial feature proportions, or the reference face is the face of a public figure.
In a possible implementation, the terminal generates the adjustment parameters corresponding to the target face according to the face parameters of the target face and the face parameters of the reference face. The adjustment parameters include face shape adjustment parameters for adjusting the face shape of the target face and/or facial feature adjustment parameters for adjusting the facial features of the target face.
Optionally, the face shape adjustment parameters may include at least one of a face width scaling ratio, a V-face angle adjustment amount, a chin height adjustment amount, and a forehead height adjustment amount; the facial feature adjustment parameters may include at least one of a facial feature scaling ratio, a facial feature position offset, and a facial feature angle adjustment amount.
Step 403: Generate displacement vectors according to the face shape adjustment parameters and the facial feature adjustment parameters, the displacement vectors being used to represent the size changes, position changes, and angle changes of the face shape and facial features of the target face during adjustment.
To achieve a one-tap reshaping effect and avoid mutual interference when reshaping different facial parts, which would degrade the beautification effect, in the embodiments of this application, after generating the adjustment parameters, the terminal does not directly adjust the target face according to the adjustment parameters. Instead, it first generates, based on the adjustment parameters, the displacement vectors of the face shape and facial features of the target face for each adjustment, and then adjusts the sizes, positions, and angles of the face shape and facial features according to the displacement vectors.
Optionally, the displacement vectors are generated by a vertex shader according to the adjustment parameters.
Step 404: Adjust the face shape and facial features of the target face according to the displacement vectors.
In a possible implementation, the terminal adjusts the face shape and facial features of the target face simultaneously according to the displacement vectors, or the terminal adjusts the face shape and facial features of the target face in a predetermined adjustment order according to the displacement vectors.
Optionally, the terminal adjusts the target face through a vertex shader and a fragment shader according to the displacement vectors.
Optionally, the adjustment manners for the face shape include size adjustment and angle adjustment, where the size adjustment includes adjusting the face aspect ratio, chin height, and forehead height of the target face, and the angle adjustment includes adjusting the V-face angle of the target face.
Optionally, the adjustment manners for the facial features include size adjustment, angle adjustment, and position adjustment, where the size adjustment includes adjusting the facial feature proportions, the angle adjustment includes adjusting the inclination angles of the facial features, and the position adjustment includes adjusting the positions of the facial features on the face.
Different from the related art, in which different faces are beautified according to a unified beauty strategy, in the embodiments of this application the terminal analyzes the differences between the target face and the reference face to generate the adjustment parameters corresponding to the target face, and adjusts the target face toward the reference face based on the adjustment parameters, ensuring that the face shape and facial feature proportions of the adjusted target face meet the standard and improving the beautification effect.
Optionally, to allow the user to observe the dynamic adjustment effect in real time, during the adjustment of the target face, the dynamic change process of the facial contour and facial features of the target face may be shown in the display interface.
Step 405: Display the adjusted target face.
After the adjustment is completed, the terminal displays the adjusted target face. Optionally, the terminal simultaneously displays the target face before and after adjustment, so that the user can learn the differences in face shape and facial features before and after adjustment. In summary, in the embodiments of this application, after the target face in the target image is acquired, adjustment parameters corresponding to the target face are generated according to a reference face whose face shape and facial features meet preset standards, and size, angle, and/or position adjustments are performed on the face shape and facial features of the target face according to the adjustment parameters, after which the adjusted target face is displayed. Considering the differences between faces, the embodiments of this application formulate an adjustment strategy that matches the characteristics of the target face according to the differences between the target face and the reference face, achieving targeted formulation of the beauty strategy and thereby improving the beautification effect.
In a possible implementation, after acquiring a target image 501, the terminal performs face detection on the target image 501 to identify the target face 502 therein; meanwhile, the terminal displays several face templates 503 for the user to choose from, and determines the reference face 504 according to the face template selected by the user.
Further, according to the initial face parameters 505 corresponding to the target face 502 and the reference face parameters 506 corresponding to the reference face 504, the terminal generates adjustment parameters 507 for face beautification. Before beautifying the face based on the adjustment parameters 507, the terminal divides the target face 502 to obtain several adjustment regions 508. For each adjustment region 508, the terminal adjusts the positions of the vertices in the adjustment region 508 through a vertex shader 509 according to the adjustment parameters 507 corresponding to the adjustment region 508, and redraws the pixels in the adjustment region 508 through a fragment shader 510, finally obtaining the adjusted target face 511. The face beautification process is described below using illustrative embodiments.
Refer to FIG. 6, which is a flowchart of a face beautification method according to another embodiment of this application. This embodiment is described by using an example in which the method is applied to the terminal 120 in FIG. 1. The method may include the following steps.
Step 601: Acquire a target face contained in a target image.
For the implementation of this step, refer to step 401; details are not repeated here.
Step 602: Display at least one face template, the face template including a standard face template and/or a preset character template, the standard face template corresponding to face parameters of a standard face, and the preset character template corresponding to face parameters of a preset character.
In a possible implementation, the terminal displays at least one face template for the user to choose from. The face template may be a template built into the beauty application or a template downloaded from the server.
Illustratively, as shown in FIG. 7, the beauty application running on the terminal provides a "one-tap reshaping" entry 71. After the user taps the entry 71, the terminal displays at least one face template option 72 for the user to choose from.
Optionally, each face template corresponds to its own face parameters, which include face shape parameters and facial feature parameters.
When the face template is a standard face template, the corresponding face parameters include the face shape parameters and facial feature parameters of a standard face (meeting standards such as "three sections and five eyes"); when the face template is a preset character template, the corresponding face parameters include the face shape parameters and facial feature parameters of a preset character (such as a public figure or celebrity).
Regarding the generation of the preset character template, in a possible implementation, the server performs facial keypoint localization on a facial image of the preset character in advance, and then calculates the face shape parameters and facial feature parameters of the preset character according to the located facial keypoints, thereby generating a preset character template containing the above parameters.
In other possible implementations, the user may also upload a personal photo through the terminal, and the server generates a corresponding personal face template according to the photo and feeds it back to the terminal; the embodiments of this application do not limit this.
Step 603: When a selection signal for a face template is received, determine the reference face according to the selected face template.
Illustratively, as shown in FIG. 7, when a selection signal for the "standard face" template 71 is received, the terminal determines the "standard face" as the reference face for face beautification.
Step 604: Identify facial keypoints of the target face.
To facilitate subsequent quantification of the differences in face shape and facial features between the target face and the reference face, after acquiring the target image, the terminal performs face detection on the target image to identify the facial keypoints on the target face.
The terminal may perform facial keypoint localization using methods based on an active shape model (ASM) and an active appearance model (AAM), based on cascaded pose regression (CPR), or based on deep learning models; this application does not limit the specific manner of face detection.
Optionally, after identifying the facial keypoints, the terminal may detect each facial part based on the facial keypoints. For example, facial parts such as the left eye, right eye, nose, left eyebrow, right eyebrow, chin, and mouth of the target face may be detected to obtain the region of each facial part. For example, the region of the left eyebrow may be determined from the eight facial keypoints of the left eyebrow, and the region of the nose may be determined from the 14 facial keypoints of the nose.
Optionally, to improve the subsequent beautification effect, the terminal identifies 83 or 90 facial keypoints on the target face. Illustratively, the distribution of facial keypoints on the target face is shown in FIG. 8.
In other possible implementations, when beautification of a specified face in the image is required, after facial keypoint localization, the terminal compares the located facial keypoints with the facial keypoints of the specified face to calculate the similarity between each face in the image and the specified face, and determines the face with the highest similarity as the face to be beautified. The terminal then beautifies only the specified face in the image, not the other faces.
Step 605: Determine initial face parameters of the target face according to the facial keypoints, the initial face parameters including initial face shape parameters and initial facial feature parameters.
In a possible implementation, the terminal obtains the keypoint type corresponding to each facial keypoint, determines the initial face shape parameters according to the facial contour keypoints, and determines the initial facial feature parameters according to the facial feature keypoints. The facial contour keypoints include at least one of cheek keypoints, chin keypoints, and forehead keypoints; the facial feature keypoints include at least one of eye keypoints, eyebrow keypoints, nose keypoints, and mouth keypoints.
Correspondingly, the initial face shape parameters determined from the facial contour keypoints include at least one of an initial face aspect ratio, an initial chin height, and an initial V-face angle; the initial facial feature parameters determined from the facial feature keypoints include at least one of initial facial feature proportions, initial facial feature positions, and initial facial feature angles. The initial face parameters may be expressed as ratios (for example, one quarter of the face width) or as values (for example, 100 px).
In a possible implementation, when both the initial face parameters and the reference face parameters are expressed as ratios, the terminal can determine the initial face parameters directly from the identified facial keypoints; when some of the initial face parameters and reference face parameters are expressed as values, after identifying the facial keypoints, the terminal needs to crop a face region from the target image based on the face width (with a size larger than the face width, for example, three times the face width) and transform the face region and the reference face into the same coordinate space before determining the initial face parameters of the target face, to avoid adjustment deviations caused by different coordinate spaces.
Optionally, initial face aspect ratio = face width / face height, where the face width is the distance between the left and right facial contour keypoints, and the face height is the distance between the upper and lower facial contour keypoints.
Optionally, the initial V-face angle is the angle formed by the cheek tangent and the chin tangent. As shown in FIG. 8, the terminal determines the cheek tangent 81 from the cheek keypoints and the chin tangent 82 from the chin keypoints, and determines the initial V-face angle from the angle formed by the cheek tangent 81 and the chin tangent 82.
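Given direction vectors of the cheek tangent and the chin tangent, the angle between them can be computed from their cross and dot products. This is a sketch under the assumption that the two tangents are available as 2D direction vectors; the function name and inputs are illustrative, not from the patent.

```python
import math

def v_face_angle(cheek_dir, chin_dir):
    """Angle in degrees between the cheek tangent and the chin tangent.

    cheek_dir / chin_dir are (dx, dy) direction vectors of the two
    tangent lines, e.g. fitted from cheek and chin keypoints.
    """
    cross = cheek_dir[0] * chin_dir[1] - cheek_dir[1] * chin_dir[0]
    dot = cheek_dir[0] * chin_dir[0] + cheek_dir[1] * chin_dir[1]
    # atan2 is numerically stabler than acos of a normalized dot product.
    return abs(math.degrees(math.atan2(cross, dot)))

# Example: a left cheek tangent sloping down-right and a right chin
# tangent sloping down-left (image coordinates, y pointing down).
print(round(v_face_angle((1.0, 2.0), (1.0, -2.0)), 1))  # 126.9
```

The V-face angle adjustment amount of step 606 would then be the reference face's angle minus this value.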
Optionally, the initial facial feature proportions include an initial facial feature height proportion and an initial facial feature width proportion, where initial facial feature height proportion = facial feature height / face height, and initial facial feature width proportion = facial feature width / face width. It should be noted that the initial facial feature proportions also include an initial eye spacing proportion, which is the ratio of the pupil distance to the face width.
Optionally, the initial facial feature positions include an initial horizontal position and an initial vertical position, where the initial horizontal position is the position of the facial feature along the face width direction and the initial vertical position is the position along the face height direction; both may be expressed as proportions (for example, at one quarter of the face height, at one half of the face width, and so on). Optionally, the initial facial feature angles include an initial eyebrow angle and an initial eye angle, where the initial eyebrow angle is the angle between the line connecting the eyebrow corners and the horizontal direction, and the initial eye angle is the angle between the line connecting the eye corners and the horizontal direction. As shown in FIG. 8, the initial eye angle of the left eye is the angle between the left-eye corner line 83 and the horizontal direction.
It should be noted that this embodiment uses the above initial face parameters only as an illustrative example; other parameters describing face shape and facial feature characteristics may also be used as initial face parameters, which is not limited in this application.
Step 606: Generate the adjustment parameters according to the initial face parameters and the reference face parameters corresponding to the reference face, the reference face parameters including reference face shape parameters and reference facial feature parameters.
Since the target face subsequently needs to be adjusted with the reference face as the benchmark, the terminal generates, based on the initial face parameters and the reference face parameters corresponding to the reference face, adjustment parameters characterizing the differences between the two, where the reference face parameters are of the same parameter types as the initial face parameters.
In a possible implementation, the terminal may generate the adjustment parameters in the following manners.
1. Determine the face width scaling ratio in the adjustment parameters according to the initial face aspect ratio in the initial face shape parameters and the reference face aspect ratio in the reference face shape parameters.
Optionally, face width scaling ratio = reference face aspect ratio / initial face aspect ratio. The terminal then adjusts the face width of the target face according to the face width scaling ratio while keeping the face height unchanged.
For example, when the reference face aspect ratio is 0.618 and the initial face aspect ratio is 0.6 (a relatively long and thin face), the terminal calculates a face width scaling ratio of 1.03, that is, the face width needs to be enlarged by a factor of 1.03.
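The worked example above is a one-line ratio; a minimal sketch (function name is illustrative):

```python
def face_width_scale(ref_aspect: float, init_aspect: float) -> float:
    """Face width scaling ratio = reference aspect ratio / initial aspect ratio.

    Both ratios are face width / face height. Keeping the face height
    fixed and multiplying the face width by this factor moves the
    target face's aspect ratio onto the reference face's.
    """
    return ref_aspect / init_aspect

# The example from the description: 0.618 / 0.6, i.e. enlarge by 1.03x.
print(round(face_width_scale(0.618, 0.6), 2))  # 1.03
```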
After the target face is adjusted based on the face width scaling ratio, the aspect ratio of the target face approaches that of the reference face, avoiding the problem of the beautified face appearing too fat or too thin.
2. Determine the V-face angle adjustment amount in the adjustment parameters according to the initial V-face angle in the initial face shape parameters and the reference V-face angle in the reference face shape parameters, where the V-face angle is the angle formed by the cheek tangent and the chin tangent.
Optionally, V-face angle adjustment amount = reference V-face angle − initial V-face angle. The terminal then adjusts the cheeks or chin of the target face according to the V-face angle adjustment amount, so that the V-face angle of the beautified face approaches that of the reference face.
3. Determine the chin height adjustment amount in the adjustment parameters according to the initial chin height in the initial face shape parameters and the reference chin height in the reference face shape parameters.
Optionally, when the chin height is expressed as a length, chin height adjustment amount = reference chin height − initial chin height. The terminal then adjusts the chin height of the target face according to the chin height adjustment amount, so that the chin height of the beautified face approaches that of the reference face.
In other possible implementations, when the chin height is expressed as a ratio, chin height adjustment amount = reference chin height / initial chin height, and the terminal then calculates the chin height offset according to the face height and the chin height adjustment amount.
4. Determine the facial feature scaling ratio in the adjustment parameters according to the initial facial feature proportion in the initial facial feature parameters and the reference facial feature proportion in the reference facial feature parameters, where the facial feature proportions include the ratio of facial feature height to face height and the ratio of facial feature width to face width.
Optionally, facial feature scaling ratio = reference facial feature proportion / initial facial feature proportion, where when the initial facial feature proportion is the initial height proportion, the reference facial feature proportion is the reference height proportion, and when the initial facial feature proportion is the initial width proportion, the reference facial feature proportion is the reference width proportion.
For example, when the target eye width proportion of the reference face meets the "three sections and five eyes" standard, with the eye width being 1/5 of the face width, and the initial eye width proportion is 1/5.5 (relatively narrow eyes), the terminal calculates an eye width scaling ratio of 1.1, that is, the eye width of the target face needs to be enlarged by a factor of 1.1.
After the height and width of the initial facial features are adjusted based on the facial feature scaling ratio, the proportions of the facial feature width and height relative to the face approach those of the reference face, achieving the effect of optimizing the facial feature proportions.
5. Determine the facial feature position offset in the adjustment parameters according to the initial facial feature position in the initial facial feature parameters and the reference facial feature position in the reference facial feature parameters.
Optionally, facial feature position offset = reference facial feature position − initial facial feature position, where the facial feature position offset includes a vertical offset and a horizontal offset. The terminal then adjusts, based on the initial facial feature positions, the vertical and horizontal positions of the facial features on the face according to the offset.
6. Determine the facial feature angle adjustment amount in the adjustment parameters according to the initial facial feature angle in the initial facial feature parameters and the reference facial feature angle in the reference facial feature parameters.
Optionally, facial feature angle adjustment amount = reference facial feature angle − initial facial feature angle, or facial feature angle adjustment amount = initial facial feature angle / reference facial feature angle. The terminal then adjusts the facial feature angles of the target face according to the adjustment amount, so that the facial feature angles of the beautified face approach those of the reference face.
For example, when the target eye angle is 10° and the initial eye angle is 9°, the terminal calculates an eye angle adjustment amount of 1°.
It should be noted that this embodiment is described using the generation of the above adjustment parameters as an example only; other parameters for adjusting face shape and facial feature characteristics may also be used as adjustment parameters, which is not limited in this application.
After the adjustment parameters corresponding to the face shape and facial features are generated through the above steps, the terminal further performs adjustment on the target face through steps 607 to 612 below.
Step 607: Divide the target image into a mesh of a predetermined size.
In the embodiments of this application, the terminal adjusts the target face in units of mesh cells. Therefore, before adjusting the target face, the terminal first divides the target image into a rectangular mesh of a predetermined size. The smaller the mesh cells, the finer the face adjustment and, correspondingly, the better the adjustment effect.
For example, the terminal divides the target image into 50 × 66 cells, that is, the width of each cell is 1/50 of the target image width and the height of each cell is 1/66 of the target image height.
Illustratively, as shown in FIG. 9, the terminal divides the target image 91 into several mesh cells.
Step 608: Divide adjustment regions according to the keypoint types of the facial keypoints on the target face, the keypoint types including facial contour keypoints and facial feature keypoints, and the adjustment regions including contour adjustment regions and facial feature adjustment regions.
Since the facial contour and each facial feature need to be adjusted separately, the terminal first divides out contour adjustment regions according to the facial contour keypoints among the facial keypoints, and divides out facial feature adjustment regions according to the facial feature keypoints. That is, the divided contour adjustment regions contain the facial contour keypoints, and the divided facial feature adjustment regions contain the facial feature keypoints.
In a possible implementation, as shown in FIG. 10, this step may include the following steps.
Step 608A: Divide face regions according to the keypoint types of the facial keypoints, the face regions including contour regions and facial feature regions.
The terminal divides out contour regions according to the facial contour keypoints and facial feature regions according to the facial feature keypoints, where the contour regions include a chin region, cheek regions, and a forehead region, and the facial feature regions include eye regions, a nose region, eyebrow regions, and a mouth region.
Optionally, each divided face region is the minimum-area region containing the corresponding facial keypoints. For example, an eye region is the minimum region containing all the eye keypoints.
Optionally, the divided contour regions and facial feature regions are all elliptical regions.
Illustratively, as shown in FIG. 9, taking the eyes as an example, the terminal divides out an elliptical eye region 92 according to the eye keypoints among the facial keypoints.
Step 608B: Determine the adjustment region corresponding to each face region, the area of the adjustment region being larger than that of the face region, with the face region located inside the adjustment region.
If only the divided contour regions and facial feature regions were adjusted, the adjustment effect would not be natural enough. Therefore, to achieve a more natural adjustment effect, the terminal determines a corresponding adjustment region based on each face region.
In a possible implementation, the terminal stretches the boundary of the face region outward in all directions (by a predetermined stretch amount) to obtain the corresponding adjustment region.
Illustratively, as shown in FIG. 9, the terminal stretches the boundary of the eye region 92 to obtain an elliptical eye adjustment region 93.
Of course, the terminal may also determine the adjustment region corresponding to a face region in other manners, which is not limited in the embodiments of this application.
Step 609: For each adjustment region, calculate, through the vertex shader according to the adjustment parameters corresponding to the adjustment region, the displacement vectors of the mesh vertices in the adjustment region at each adjustment.
After the mesh is divided and the adjustment regions are determined, the process of adjusting the face to be optimized is the process of adjusting the coordinates of the mesh vertices in each adjustment region.
Since the mesh vertices may need to be adjusted multiple times, for example, the mesh vertices in the eye adjustment region, the nose adjustment region, and the mouth adjustment region need to be adjusted separately, in a possible implementation, each time the terminal completes a mesh vertex adjustment, it renders the pixels based on the adjusted mesh vertices to obtain an intermediate image, and uses the intermediate image as the input for the next mesh vertex adjustment.
Illustratively, as shown in FIG. 11, for a 1080 × 1920 target image divided into a 50 × 66 mesh, each time the vertices of the 50 × 66 mesh are adjusted, the terminal draws the 1080 × 1920 pixels to obtain an intermediate image and uses the intermediate image as the input for the next adjustment. Finally, after n rounds of mesh vertex adjustment and pixel drawing, the terminal obtains the beautified target image.
However, when rendering in this manner, an intermediate image must be rendered after each mesh vertex adjustment, and the relatively high resolution of the intermediate image makes rendering slow, places high demands on the terminal's rendering performance, and easily causes frame stuttering.
In addition, when different adjustment regions intersect, the order in which the regions are adjusted will affect the final adjustment effect under the above method. For example, if the eye adjustment region and the eyebrow adjustment region in the target image intersect and there are mesh vertices in the intersection, adjusting the eye adjustment region first may move the mesh vertices in the intersection out of the eyebrow adjustment region, so that when the eyebrow adjustment region is subsequently adjusted, the mesh vertices originally in the intersection can no longer be adjusted.
To improve rendering efficiency and avoid the influence of the adjustment order on the adjustment effect, in a possible implementation, each time the mesh vertex coordinates are adjusted according to the adjustment parameters, the vertex shader calculates the displacement vector of each mesh vertex in the adjustment region (that is, the offset information of the mesh vertex) and stores the displacement vector in the output result of the fragment shader as the input for the next mesh vertex coordinate adjustment.
In other words, after each mesh vertex adjustment through the vertex shader, the terminal does not directly render a full-resolution intermediate image, which reduces the rendering load of each adjustment and thus improves rendering efficiency; moreover, since the vertex shader does not directly adjust the mesh vertices according to the displacement vectors, the adjustment order of the adjustment regions does not affect the final adjustment effect.
Illustratively, as shown in FIG. 12, for a 1080 × 1920 target image divided into a 50 × 66 mesh, each time the vertices of the 50 × 66 mesh are adjusted through the vertex shader, the vertex shader stores the displacement vectors of the mesh vertices in the fragment shader and uses the original mesh vertex coordinates as the input for the next adjustment.
Step 610: Generate a displacement vector map according to the displacement vectors calculated at each adjustment, the displacement vector map being used to represent the position changes of the mesh vertices over the multiple adjustments.
In a possible implementation, after the last mesh vertex coordinate adjustment is completed, the terminal accumulates the displacement vectors obtained in all previous mesh vertex coordinate adjustments to generate the displacement vector map of the mesh vertices.
Illustratively, as shown in FIG. 12, the terminal accumulates the displacement vectors obtained from the 1st to the nth mesh vertex adjustments to generate the displacement vector map of the vertices of the 50 × 66 mesh.
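The accumulation of per-pass displacement vectors into one displacement vector map reduces to a per-vertex sum. The following is an illustrative sketch only: a real implementation would store the map in a texture read back by the vertex shader, and the data layout here is an assumption.

```python
def accumulate_displacements(passes):
    """Sum per-pass vertex displacements into one displacement map.

    `passes` is a list of {vertex_index: (dx, dy)} dicts, one per
    adjustment pass (eyes, nose, mouth, ...). Because each pass is
    computed against the ORIGINAL vertex coordinates, the sums are
    independent of pass order.
    """
    total = {}
    for dmap in passes:
        for v, (dx, dy) in dmap.items():
            tx, ty = total.get(v, (0.0, 0.0))
            total[v] = (tx + dx, ty + dy)
    return total

# Vertex 7 lies in the eye/eyebrow intersection and is moved by both passes.
final = accumulate_displacements([{7: (-10.0, 0.0)}, {7: (0.0, 4.0)}])
print(final[7])  # (-10.0, 4.0)
```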
Step 611: Adjust the coordinates of the mesh vertices through the vertex shader according to the original coordinates of the mesh vertices in the adjustment region and the displacement distances and displacement directions indicated by the displacement vector map.
Further, based on the generated displacement vector map, the terminal adjusts the coordinates of each mesh vertex through the vertex shader. The terminal then redraws the pixels in the adjusted mesh through the fragment shader to complete the beautification of the facial features.
In a possible implementation, for any mesh vertex, the terminal obtains the displacement vector corresponding to the mesh vertex from the displacement vector map, thereby determining the displacement direction and displacement distance corresponding to the vertex, and adjusts the coordinates of the mesh vertex based on its original coordinates according to the displacement direction and displacement distance.
Illustratively, as shown in FIG. 9, for the determined eye adjustment region 93, the terminal first determines the mesh vertices in the eye adjustment region 93 (the hollow circles in FIG. 9), then determines from the displacement vector map that the mesh vertices in the left part of the eye adjustment region 93 are to be displaced leftward by 10 px and those in the right part rightward by 10 px, and accordingly subtracts 10 px from the x-coordinates of the vertices in the left part and adds 10 px to the x-coordinates of the vertices in the right part.
Step 612: Draw each pixel in the adjusted mesh through the fragment shader.
After the mesh vertices in each adjustment region are adjusted through step 611, the terminal further redraws each pixel in the adjusted mesh through the fragment shader.
As shown in FIG. 13, when the image rendering pipeline is applied to a deformation scenario, the simple and intuitive manifestation is moving pixel positions. This may be done as follows: first divide the image into multiple mesh regions, then pass the loaded vertex array of the mesh regions as input to the vertex shader, and in the vertex shader modify the vertex positions of the mesh regions according to the shape to be deformed. In the rendering pipeline, the vertex shader is responsible for determining vertex positions, and the vertex coordinates of the mesh regions can be adjusted in the vertex shader according to the deformation rules (that is, the deformation parameters of each deformation unit); the fragment shader is responsible for drawing each pixel, interpolating the pixels inside each mesh region according to the vertex coordinates; finally, the modified mesh vertex position coordinates are transformed into the screen coordinate system (that is, projected onto the screen for display).
In a possible implementation, since the shape of the mesh changes after the mesh vertex adjustment, the fragment shader deforms the original pixels in each mesh cell through an interpolation algorithm, thereby achieving the deformation of the face shape and facial features. The interpolation is completed automatically by the Open Graphics Library (OpenGL), and nearest-neighbor interpolation, bilinear interpolation, pixel-area-relation resampling, bicubic interpolation, or other interpolation methods may be used.
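Of the interpolation methods listed above, bilinear interpolation is the one OpenGL texture sampling typically performs. The following CPU-side sketch shows what that sampling computes for one fractional coordinate; it is illustrative only, and the grayscale row-major image layout is an assumption.

```python
def bilerp(img, x, y):
    """Bilinear sample of a grayscale image at fractional (x, y).

    `img` is a row-major list of rows; the four pixels surrounding
    (x, y) are blended by their distance to it, which is what the GPU
    does when a deformed mesh cell is rasterized with linear filtering.
    """
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

img = [[0, 100],
       [100, 200]]
print(bilerp(img, 0.5, 0.5))  # 100.0 (center of the four pixels)
```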
Illustratively, as shown in FIG. 9, after adjusting the coordinates of the mesh vertices in the eye adjustment region 93 through the vertex shader, the terminal further redraws the pixels in the eye adjustment region 93 through the fragment shader, finally obtaining the width-enlarged eye image 94.
Step 613: Display the adjusted target face.
For the implementation of this step, refer to step 405; details are not repeated here.
In this embodiment, the terminal provides multiple face templates for the user to choose from, and adjusts the target face with the user-selected face template as the reference face, so that the adjusted face conforms to the user's aesthetic preferences.
Meanwhile, the terminal determines the initial face parameters of the target face based on the identified facial keypoints, so that adjustment parameters can subsequently be generated based on the initial face parameters and the reference face parameters of the reference face, providing quantitative indicators for subsequent face adjustment and improving the accuracy of face beautification.
In addition, by dividing the target image into several mesh cells and dividing out different adjustment regions, the terminal adjusts and redraws the mesh vertices and pixels in the adjustment regions through the vertex shader and the fragment shader, which improves beautification precision while making the beautification effect more natural.
In addition, the terminal adjusts the coordinates of the mesh vertices using the displacement vector map of the mesh vertices, which reduces the rendering load and improves rendering efficiency while avoiding the influence of the adjustment order on the final adjustment effect.
To achieve a more natural beautification effect, on the basis of FIG. 6, as shown in FIG. 14, step 609 may further include the following steps.
Step 609A: Calculate, through the vertex shader according to the adjustment parameters corresponding to the adjustment region and a first adjustment factor, the displacement vectors of the mesh vertices in the face region.
In a possible implementation, different sub-regions of the adjustment region correspond to different adjustment factors, and the adjustment factor is positively correlated with the adjustment magnitude of the mesh vertex coordinates.
Illustratively, as shown in FIG. 9, in the eye adjustment region 93, the eye region 92 corresponds to the first adjustment factor, and the region outside the eye region 92 (but inside the eye adjustment region 93) corresponds to the second adjustment factor, with the first adjustment factor being larger than the second adjustment factor.
Optionally, the first adjustment factor corresponding to the face region inside the adjustment region is 1, and the second adjustment factor corresponding to the part of the adjustment region outside the face region is a fixed value smaller than 1, such as 0.5; alternatively, the second adjustment factor gradually transitions from 1 to 0.
In a possible implementation, the terminal calculates the actual adjustment parameter according to the adjustment parameter and the first adjustment factor, and adjusts the coordinates of the mesh vertices in the face region according to the actual adjustment parameter, where actual adjustment parameter = adjustment parameter × first adjustment factor.
When the mesh coordinate points are adjusted according to the actual adjustment parameter, for a mesh vertex (x1, y1) in the face region, after adjustment by the adjustment parameter, the coordinates of the vertex become (x2, y2), where:
x2 = (x1 − center_x) * a * mask1 * adjustValue;
y2 = (y1 − center_y) * a * mask1 * adjustValue;
where a is a constant coefficient, which may be set to 1.3 or flexibly set as actually required; mask1 is the first adjustment factor; adjustValue is the facial feature adjustment parameter, and mask1 * adjustValue is the actual adjustment parameter; center_x is the x value of the mesh center; and center_y is the y value of the mesh center.
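Taken literally, the formula above yields a quantity proportional to the vertex's offset from the region center; the sketch below treats (x2, y2) as that offset. This is an illustrative reading, not the patent's exact shader code, and the sample values of `mask` and `adjust_value` are assumptions.

```python
def adjusted_offset(x1, y1, center_x, center_y, mask, adjust_value, a=1.3):
    """Per-vertex offset per the formula above.

    (x2, y2) = (x1 - center_x) * a * mask * adjustValue, where mask is
    mask1 inside the face region and mask2 (< mask1) outside it, so
    vertices near the region edge move less and the deformation fades
    out smoothly.
    """
    x2 = (x1 - center_x) * a * mask * adjust_value
    y2 = (y1 - center_y) * a * mask * adjust_value
    return x2, y2

# Same vertex, inside the face region (mask1 = 1.0) vs. outside (mask2 = 0.5):
inner = adjusted_offset(110.0, 100.0, 100.0, 100.0, mask=1.0, adjust_value=0.1)
outer = adjusted_offset(110.0, 100.0, 100.0, 100.0, mask=0.5, adjust_value=0.1)
print(inner, outer)  # (1.3, 0.0) (0.65, 0.0)
```

The outer offset is exactly half the inner one, matching the 10% × 0.5 = 5% example given for step 609B below.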
Step 609B: Calculate, through the vertex shader according to the adjustment parameters corresponding to the adjustment region and a second adjustment factor, the displacement vectors of the mesh vertices outside the face region within the adjustment region.
For the part of the adjustment region outside the face region, the vertex shader adjusts the mesh vertices outside the face region according to the second adjustment factor and the corresponding adjustment parameters, where the adjustment magnitude of the coordinates of the mesh vertices inside the face region is larger than that of the mesh vertices outside the face region.
In a possible implementation, the terminal calculates the actual adjustment parameter according to the adjustment parameter and the second adjustment factor, and adjusts the coordinates of the mesh vertices outside the face region according to the actual adjustment parameter, where actual adjustment parameter = adjustment parameter × second adjustment factor.
For example, when the adjustment parameter indicates a 10% increase in eye width and the second adjustment factor is 0.5, the terminal determines the actual adjustment parameter for the mesh vertices outside the eye region to be 10% × 0.5 = 5%.
When the mesh coordinate points are adjusted according to the actual adjustment parameter, for a mesh vertex (x1, y1) outside the face region, after adjustment by the adjustment parameter, the coordinates of the vertex become (x2, y2), where:
x2 = (x1 − center_x) * a * mask2 * adjustValue;
y2 = (y1 − center_y) * a * mask2 * adjustValue;
where a is a constant coefficient, which may be set to 1.3 or flexibly set as actually required; mask2 is the second adjustment factor; adjustValue is the facial feature adjustment parameter, and mask2 * adjustValue is the actual adjustment parameter; center_x is the x value of the mesh center; and center_y is the y value of the mesh center.
In this embodiment, different adjustment factors are set for different sub-regions within the adjustment region, and the mesh vertices in the adjustment region are adjusted based on the adjustment factors, making the adjustment effect of the adjustment region more natural and thereby improving the face beautification effect.
The following are apparatus embodiments of this application, which can be used to perform the method embodiments of this application. For details not disclosed in the apparatus embodiments, refer to the method embodiments of this application.
Refer to FIG. 15, which is a block diagram of a face beautification apparatus according to an embodiment of this application. The apparatus may be the terminal 120 in the implementation environment shown in FIG. 1, or may be disposed on the terminal 120. The apparatus may include:
an acquisition module 1501, configured to acquire a target face contained in a target image;
a first generation module 1502, configured to generate, according to the target face and a reference face, adjustment parameters corresponding to the target face, the adjustment parameters including face shape adjustment parameters and facial feature adjustment parameters;
a second generation module 1503, configured to generate displacement vectors according to the face shape adjustment parameters and the facial feature adjustment parameters, the displacement vectors being used to represent the size changes, position changes, and angle changes of the face shape and facial features of the target face during adjustment;
an adjustment module 1504, configured to adjust the face shape and facial features of the target face according to the displacement vectors; and
a display module 1505, configured to display the adjusted target face.
Optionally, the first generation module 1502 includes:
an identification unit, configured to identify facial keypoints of the target face;
a determination unit, configured to determine initial face parameters of the target face according to the facial keypoints, the initial face parameters including initial face shape parameters and initial facial feature parameters; and
a first generation unit, configured to generate the adjustment parameters according to the initial face parameters and the reference face parameters corresponding to the reference face, the reference face parameters including reference face shape parameters and reference facial feature parameters.
Optionally, the first generation unit is configured to:
determine the face width scaling ratio in the adjustment parameters according to the initial face aspect ratio in the initial face shape parameters and the reference face aspect ratio in the reference face shape parameters;
and/or,
determine the V-face angle adjustment amount in the adjustment parameters according to the initial V-face angle in the initial face shape parameters and the reference V-face angle in the reference face shape parameters, where the V-face angle is the angle formed by the cheek tangent and the chin tangent;
and/or,
determine the chin height adjustment amount in the adjustment parameters according to the initial chin height in the initial face shape parameters and the reference chin height in the reference face shape parameters.
Optionally, the first generation unit is configured to:
determine the facial feature scaling ratio in the adjustment parameters according to the initial facial feature proportion in the initial facial feature parameters and the reference facial feature proportion in the reference facial feature parameters, where the facial feature proportions include the ratio of facial feature height to face height and the ratio of facial feature width to face width;
and/or,
determine the facial feature position offset in the adjustment parameters according to the initial facial feature position in the initial facial feature parameters and the reference facial feature position in the reference facial feature parameters;
and/or,
determine the facial feature angle adjustment amount in the adjustment parameters according to the initial facial feature angle in the initial facial feature parameters and the reference facial feature angle in the reference facial feature parameters.
Optionally, the apparatus further includes:
a first division module, configured to divide the target image into a mesh of a predetermined size; and
a second division module, configured to divide adjustment regions according to the keypoint types of the facial keypoints on the target face, the keypoint types including facial contour keypoints and facial feature keypoints, and the adjustment regions including contour adjustment regions and facial feature adjustment regions.
The second generation module 1503 includes:
a calculation unit, configured to calculate, for each adjustment region, through a vertex shader according to the adjustment parameters corresponding to the adjustment region, the displacement vectors of the mesh vertices in the adjustment region at each adjustment; and
a second generation unit, configured to generate a displacement vector map according to the displacement vectors calculated at each adjustment, the displacement vector map being used to represent the position changes of the mesh vertices over the multiple adjustments.
Optionally, the second division module is configured to:
divide face regions according to the keypoint types of the facial keypoints, the face regions including contour regions and facial feature regions; and
determine the adjustment region corresponding to each face region, the area of the adjustment region being larger than that of the face region, with the face region located inside the adjustment region.
Optionally, the adjustment module 1504 includes:
a first adjustment unit, configured to adjust the coordinates of the mesh vertices through the vertex shader according to the original coordinates of the mesh vertices in the adjustment region and the displacement distances and displacement directions indicated by the displacement vector map; and
a second adjustment unit, configured to draw each pixel in the adjusted mesh through a fragment shader.
Optionally, the calculation unit is configured to:
calculate, through the vertex shader according to the adjustment parameters corresponding to the adjustment region and a first adjustment factor, the displacement vectors of the mesh vertices in the face region; and
calculate, through the vertex shader according to the adjustment parameters corresponding to the adjustment region and a second adjustment factor, the displacement vectors of the mesh vertices outside the face region within the adjustment region, the first adjustment factor being larger than the second adjustment factor, and the adjustment magnitude of the coordinates of the mesh vertices inside the face region being larger than that of the mesh vertices outside the face region.
Optionally, the apparatus further includes:
a template display module, configured to display at least one face template, the face template including a standard face template and/or a preset character template, the standard face template corresponding to face parameters of a standard face, and the preset character template corresponding to face parameters of a preset character; and
a reference face determination module, configured to determine the reference face according to the selected face template when a selection signal for the face template is received.
In summary, in the embodiments of this application, after the target face in the target image is acquired, adjustment parameters corresponding to the target face are generated according to a reference face whose face shape and facial features meet preset standards, and size, angle, and/or position adjustments are performed on the face shape and facial features of the target face according to the adjustment parameters, after which the adjusted target face is displayed. Considering the differences between faces, the embodiments of this application formulate an adjustment strategy that matches the characteristics of the target face according to the differences between the target face and the reference face, achieving targeted formulation of the beauty strategy and thereby improving the beautification effect.
Refer to FIG. 16, which is a schematic structural diagram of a computer device according to an embodiment of this application. The terminal may be implemented as the terminal 120 in the implementation environment shown in FIG. 1 to carry out the face beautification method provided in the above embodiments. Specifically:
The terminal includes a processor 1601 and a memory 1602.
The processor 1601 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1601 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 1601 may also include a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1601 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1602 may include one or more computer-readable storage media, which may be tangible and non-transitory. The memory 1602 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1602 is used to store at least one instruction to be executed by the processor 1601 to implement the face beautification method provided in this application.
In some embodiments, the computer device may optionally further include a peripheral device interface 1603 and at least one peripheral device. Specifically, the peripheral device includes at least one of a radio frequency circuit 1604, a touch display screen 1605, a camera 1606, an audio circuit 1607, a positioning component 1608, and a power supply 1609.
The peripheral device interface 1603 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 1601 and the memory 1602. In some embodiments, the processor 1601, the memory 1602, and the peripheral device interface 1603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1601, the memory 1602, and the peripheral device interface 1603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1604 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1604 communicates with a communication network and other communication devices through electromagnetic signals, converting electrical signals into electromagnetic signals for transmission or converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1604 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 1604 can communicate with other computer devices through at least one wireless communication protocol, including but not limited to the World Wide Web, metropolitan area networks, intranets, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1604 may further include circuits related to NFC (Near Field Communication), which is not limited in this application.
The touch display screen 1605 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. The touch display screen 1605 also has the ability to collect touch signals on or above its surface; such a touch signal may be input to the processor 1601 as a control signal for processing. The touch display screen 1605 is used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one touch display screen 1605, provided on the front panel of the computer device; in other embodiments, there may be at least two touch display screens 1605, respectively provided on different surfaces of the computer device or in a folded design; in still other embodiments, the touch display screen 1605 may be a flexible display screen provided on a curved or folding surface of the computer device. The touch display screen 1605 may even be set as a non-rectangular irregular shape, that is, a special-shaped screen, and may be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
The camera assembly 1606 is used to capture images or videos. Optionally, the camera assembly 1606 includes a front camera and a rear camera. Generally, the front camera is used for video calls or selfies, and the rear camera is used for taking photos or videos. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, and a wide-angle camera, so that the main camera and the depth-of-field camera are fused to implement a background blur function, and the main camera and the wide-angle camera are fused to implement panoramic shooting and VR (Virtual Reality) shooting functions. In some embodiments, the camera assembly 1606 may further include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuit 1607 is used to provide an audio interface between the user and the terminal, and may include a microphone and a speaker. The microphone is used to collect sound waves of the user and the environment, convert the sound waves into electrical signals, and input them to the processor 1601 for processing or to the radio frequency circuit 1604 to implement voice communication. For stereo capture or noise reduction, there may be multiple microphones, respectively disposed at different parts of the terminal; the microphone may also be an array microphone or an omnidirectional capture microphone. The speaker is used to convert electrical signals from the processor 1601 or the radio frequency circuit 1604 into sound waves, and may be a traditional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans, but also into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1607 may further include a headphone jack.
The positioning component 1608 is used to locate the current geographic position of the terminal to implement navigation or LBS (Location Based Service). The positioning component 1608 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 1609 is used to supply power to the components in the terminal. The power supply 1609 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1609 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery (charged through a wired line) or a wireless rechargeable battery (charged through a wireless coil), and may also support fast charging technology.
In some embodiments, the terminal further includes one or more sensors 1610, including but not limited to an acceleration sensor 1611, a gyroscope sensor 1612, a pressure sensor 1613, a fingerprint sensor 1614, an optical sensor 1615, and a proximity sensor 1616.
A person skilled in the art can understand that the structure shown in FIG. 16 does not constitute a limitation on the terminal, and the terminal may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
An embodiment of this application further provides a computer-readable storage medium storing computer-readable instructions, the computer-readable instructions being executed by a processor to implement the face beautification method provided by the above embodiments.
This application further provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the face beautification method described in the above embodiments.
The sequence numbers of the above embodiments of this application are for description only and do not represent the superiority or inferiority of the embodiments.
A person of ordinary skill in the art can understand that all or part of the steps for implementing the methods of the above embodiments may be completed by hardware or by a program instructing related hardware, and the program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disc, or the like. The above descriptions are merely preferred embodiments of this application and are not intended to limit this application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall fall within the protection scope of this application.

Claims (20)

  1. A face beautification method, performed by a computer device, the method comprising:
    acquiring a target face contained in a target image;
    generating, according to the target face and a reference face, adjustment parameters corresponding to the target face, the adjustment parameters comprising face shape adjustment parameters and facial feature adjustment parameters;
    generating displacement vectors according to the face shape adjustment parameters and the facial feature adjustment parameters, the displacement vectors being used to represent size changes, position changes, and angle changes of the face shape and facial features of the target face during adjustment;
    adjusting the face shape and facial features of the target face according to the displacement vectors; and
    displaying the adjusted target face.
  2. The method according to claim 1, wherein the generating, according to the target face and a reference face, adjustment parameters corresponding to the target face comprises:
    identifying facial keypoints of the target face;
    determining initial face parameters of the target face according to the facial keypoints, the initial face parameters comprising initial face shape parameters and initial facial feature parameters; and
    generating the adjustment parameters according to the initial face parameters and reference face parameters corresponding to the reference face, the reference face parameters comprising reference face shape parameters and reference facial feature parameters.
  3. The method according to claim 2, wherein the generating the adjustment parameters according to the initial face parameters and reference face parameters corresponding to the reference face comprises:
    determining a face width scaling ratio in the adjustment parameters according to an initial face aspect ratio in the initial face shape parameters and a reference face aspect ratio in the reference face shape parameters.
  4. The method according to claim 2, wherein the generating the adjustment parameters according to the initial face parameters and reference face parameters corresponding to the reference face comprises:
    determining a V-face angle adjustment amount in the adjustment parameters according to an initial V-face angle in the initial face shape parameters and a reference V-face angle in the reference face shape parameters, wherein a V-face angle is an angle formed by a cheek tangent and a chin tangent.
  5. The method according to claim 2, wherein the generating the adjustment parameters according to the initial face parameters and reference face parameters corresponding to the reference face comprises:
    determining a chin height adjustment amount in the adjustment parameters according to an initial chin height in the initial face shape parameters and a reference chin height in the reference face shape parameters.
  6. The method according to claim 2, wherein the generating the adjustment parameters according to the initial face parameters and reference face parameters corresponding to the reference face comprises:
    determining a facial feature scaling ratio in the adjustment parameters according to an initial facial feature proportion in the initial facial feature parameters and a reference facial feature proportion in the reference facial feature parameters, wherein the facial feature proportions comprise a ratio of facial feature height to face height and a ratio of facial feature width to face width.
  7. The method according to claim 2, wherein the generating the adjustment parameters according to the initial face parameters and reference face parameters corresponding to the reference face comprises:
    determining a facial feature position offset in the adjustment parameters according to an initial facial feature position in the initial facial feature parameters and a reference facial feature position in the reference facial feature parameters.
  8. The method according to claim 2, wherein the generating the adjustment parameters according to the initial face parameters and reference face parameters corresponding to the reference face comprises:
    determining a facial feature angle adjustment amount in the adjustment parameters according to an initial facial feature angle in the initial facial feature parameters and a reference facial feature angle in the reference facial feature parameters.
  9. The method according to any one of claims 1 to 8, wherein before the generating displacement vectors according to the face shape adjustment parameters and the facial feature adjustment parameters, the method further comprises:
    dividing the target image into a mesh of a predetermined size; and
    dividing adjustment regions according to keypoint types of facial keypoints on the target face, the keypoint types comprising facial contour keypoints and facial feature keypoints, and the adjustment regions comprising contour adjustment regions and facial feature adjustment regions;
    and wherein the generating displacement vectors according to the face shape adjustment parameters and the facial feature adjustment parameters comprises:
    calculating, for each adjustment region, through a vertex shader according to the adjustment parameters corresponding to the adjustment region, the displacement vectors of mesh vertices in the adjustment region at each adjustment; and
    generating a displacement vector map according to the displacement vectors calculated at each adjustment, the displacement vector map being used to represent position changes of the mesh vertices over multiple adjustments.
  10. The method according to claim 9, wherein the dividing adjustment regions according to keypoint types of facial keypoints on the target face comprises:
    dividing face regions according to the keypoint types of the facial keypoints, the face regions comprising contour regions and facial feature regions; and
    determining the adjustment region corresponding to each face region, an area of the adjustment region being larger than an area of the face region, and the face region being located inside the adjustment region.
  11. The method according to claim 9, wherein the adjusting the face shape and facial features of the target face according to the displacement vectors comprises:
    adjusting coordinates of the mesh vertices through the vertex shader according to original coordinates of the mesh vertices in the adjustment region and displacement distances and displacement directions indicated by the displacement vector map; and
    drawing each pixel in the adjusted mesh through a fragment shader.
  12. The method according to claim 10, wherein the calculating, through a vertex shader according to the adjustment parameters corresponding to the adjustment region, the displacement vectors of mesh vertices in the adjustment region at each adjustment comprises:
    calculating, through the vertex shader according to the adjustment parameters corresponding to the adjustment region and a first adjustment factor, the displacement vectors of the mesh vertices in the face region; and
    calculating, through the vertex shader according to the adjustment parameters corresponding to the adjustment region and a second adjustment factor, the displacement vectors of the mesh vertices outside the face region within the adjustment region, the first adjustment factor being larger than the second adjustment factor, and an adjustment magnitude of the coordinates of the mesh vertices inside the face region being larger than that of the mesh vertices outside the face region.
  13. The method according to any one of claims 1 to 12, wherein after the acquiring a target face contained in a target image, the method further comprises:
    displaying at least one face template, the face template comprising a standard face template and/or a preset character template, the standard face template corresponding to face parameters of a standard face, and the preset character template corresponding to face parameters of a preset character; and
    determining, when a selection signal for a face template is received, the reference face according to the selected face template.
  14. A face beautification apparatus, comprising:
    an acquisition module, configured to acquire a target face contained in a target image;
    a first generation module, configured to generate, according to the target face and a reference face, adjustment parameters corresponding to the target face, the adjustment parameters comprising face shape adjustment parameters and facial feature adjustment parameters;
    a second generation module, configured to generate displacement vectors according to the face shape adjustment parameters and the facial feature adjustment parameters, the displacement vectors being used to represent size changes, position changes, and angle changes of the face shape and facial features of the target face during adjustment;
    an adjustment module, configured to adjust the face shape and facial features of the target face according to the displacement vectors; and
    a display module, configured to display the adjusted target face.
  15. The apparatus according to claim 14, wherein the first generation module comprises:
    an identification unit, configured to identify facial keypoints of the target face;
    a determination unit, configured to determine initial face parameters of the target face according to the facial keypoints, the initial face parameters comprising initial face shape parameters and initial facial feature parameters; and
    a first generation unit, configured to generate the adjustment parameters according to the initial face parameters and reference face parameters corresponding to the reference face, the reference face parameters comprising reference face shape parameters and reference facial feature parameters.
  16. The apparatus according to claim 14 or 15, further comprising:
    a first division module, configured to divide the target image into a mesh of a predetermined size; and
    a second division module, configured to divide adjustment regions according to keypoint types of facial keypoints on the target face, the keypoint types comprising facial contour keypoints and facial feature keypoints, and the adjustment regions comprising contour adjustment regions and facial feature adjustment regions;
    wherein the second generation module comprises: a calculation unit, configured to calculate, for each adjustment region, through a vertex shader according to the adjustment parameters corresponding to the adjustment region, the displacement vectors of mesh vertices in the adjustment region at each adjustment; and
    a second generation unit, configured to generate a displacement vector map according to the displacement vectors calculated at each adjustment, the displacement vector map being used to represent position changes of the mesh vertices over multiple adjustments.
  17. The apparatus according to claim 16, wherein the adjustment module comprises:
    a first adjustment unit, configured to adjust coordinates of the mesh vertices through the vertex shader according to original coordinates of the mesh vertices in the adjustment region and displacement distances and displacement directions indicated by the displacement vector map; and
    a second adjustment unit, configured to draw each pixel in the adjusted mesh through a fragment shader.
  18. The apparatus according to claim 17, wherein the calculation unit is further configured to:
    calculate, through the vertex shader according to the adjustment parameters corresponding to the adjustment region and a first adjustment factor, the displacement vectors of the mesh vertices in the face region; and
    calculate, through the vertex shader according to the adjustment parameters corresponding to the adjustment region and a second adjustment factor, the displacement vectors of the mesh vertices outside the face region within the adjustment region, the first adjustment factor being larger than the second adjustment factor, and an adjustment magnitude of the coordinates of the mesh vertices inside the face region being larger than that of the mesh vertices outside the face region.
  19. A computer device, comprising a processor and a memory, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the face beautification method according to any one of claims 1 to 13.
  20. A non-volatile computer-readable storage medium, storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the face beautification method according to any one of claims 1 to 13.
PCT/CN2019/117475 2018-11-30 2019-11-12 Face beautification method and apparatus, computer device, and storage medium WO2020108291A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/194,880 US11410284B2 (en) 2018-11-30 2021-03-08 Face beautification method and apparatus, computer device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811453182.2 2018-11-30
CN201811453182.2A CN109584151B (zh) Face beautification method and apparatus, terminal, and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/194,880 Continuation US11410284B2 (en) 2018-11-30 2021-03-08 Face beautification method and apparatus, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2020108291A1 (zh) 2020-06-04

Family

ID=65925733

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/117475 WO2020108291A1 (zh) 2018-11-30 2019-11-12 Face beautification method and apparatus, computer device, and storage medium

Country Status (3)

Country Link
US (1) US11410284B2 (zh)
CN (1) CN109584151B (zh)
WO (1) WO2020108291A1 (zh)


Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584151B (zh) 2018-11-30 2022-12-13 腾讯科技(深圳)有限公司 Face beautification method, apparatus, terminal, and storage medium
CN110097622B (zh) * 2019-04-23 2022-02-25 北京字节跳动网络技术有限公司 Method and apparatus for rendering an image, electronic device, and computer-readable storage medium
CN110136236B (zh) * 2019-05-17 2022-11-29 腾讯科技(深圳)有限公司 Personalized face display method, apparatus, and device for a three-dimensional character, and storage medium
CN110335207B (zh) * 2019-06-04 2022-01-21 重庆七腾科技有限公司 Intelligent image optimization method and system based on group-portrait selection
CN110378847A (zh) * 2019-06-28 2019-10-25 北京字节跳动网络技术有限公司 Face image processing method and apparatus, medium, and electronic device
CN111652795A (zh) * 2019-07-05 2020-09-11 广州虎牙科技有限公司 Face-shape adjustment and live-streaming method and apparatus, electronic device, and storage medium
CN112528707A (zh) * 2019-09-18 2021-03-19 广州虎牙科技有限公司 Image processing method, apparatus, and device, and storage medium
CN110992276A (zh) * 2019-11-18 2020-04-10 北京字节跳动网络技术有限公司 Image processing method, apparatus, medium, and electronic device
US11922540B2 (en) 2020-02-14 2024-03-05 Perfect Mobile Corp. Systems and methods for segment-based virtual application of facial effects to facial regions displayed in video frames
US11404086B2 (en) * 2020-02-14 2022-08-02 Perfect Mobile Corp. Systems and methods for segment-based virtual application of makeup effects to facial regions displayed in video frames
CN111275650B (zh) 2020-02-25 2023-10-17 抖音视界有限公司 Beautification processing method and apparatus
CN111399730A (zh) * 2020-03-12 2020-07-10 北京字节跳动网络技术有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN111415397B (zh) * 2020-03-20 2024-03-08 广州虎牙科技有限公司 Face reconstruction and live-streaming method, apparatus, and device, and storage medium
CN111476871B (zh) * 2020-04-02 2023-10-03 百度在线网络技术(北京)有限公司 Method and apparatus for generating video
CN111563855B (zh) * 2020-04-29 2023-08-01 百度在线网络技术(北京)有限公司 Image processing method and apparatus
CN113596314B (zh) * 2020-04-30 2022-11-11 北京达佳互联信息技术有限公司 Image processing method and apparatus, and electronic device
CN111966852B (zh) * 2020-06-28 2024-04-09 北京百度网讯科技有限公司 Face-based virtual cosmetic surgery method and apparatus
CN111797754B (zh) * 2020-06-30 2024-07-19 上海掌门科技有限公司 Image detection method and apparatus, electronic device, and medium
CN114095646B (zh) * 2020-08-24 2022-08-26 北京达佳互联信息技术有限公司 Image processing method and apparatus, electronic device, and storage medium
CN112200716B (zh) * 2020-10-15 2022-03-22 广州博冠信息科技有限公司 Graphics processing method and apparatus, electronic device, and non-volatile storage medium
CN112102374B (zh) * 2020-11-23 2021-03-12 北京蜜莱坞网络科技有限公司 Image processing method and apparatus, electronic device, and medium
CN112149647B (zh) * 2020-11-24 2021-02-26 北京蜜莱坞网络科技有限公司 Image processing method, apparatus, and device, and storage medium
CN112508777A (zh) * 2020-12-18 2021-03-16 咪咕文化科技有限公司 Beautification method, electronic device, and storage medium
CN113419695A (zh) * 2021-06-11 2021-09-21 北京达佳互联信息技术有限公司 Method, apparatus, and electronic device for displaying adjustment items of a target object
CN113658035B (zh) * 2021-08-17 2023-08-08 北京百度网讯科技有限公司 Face transformation method, apparatus, device, storage medium, and product
CN113793252B (zh) * 2021-08-26 2023-07-18 展讯通信(天津)有限公司 Image processing method and apparatus, chip, and module device thereof
CN113657357B (zh) * 2021-10-20 2022-02-25 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN114092357A (zh) * 2021-11-26 2022-02-25 维沃移动通信有限公司 Image processing method, image processing apparatus, and electronic device
CN114119935B (zh) * 2021-11-29 2023-10-03 北京百度网讯科技有限公司 Image processing method and apparatus
CN115423827B (zh) * 2022-11-03 2023-03-24 北京百度网讯科技有限公司 Image processing method and apparatus, electronic device, and storage medium
CN116883230A (zh) * 2023-07-25 2023-10-13 厦门像甜科技有限公司 Face deformation processing method, apparatus, and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574299A (zh) * 2014-12-25 2015-04-29 小米科技有限责任公司 Face picture processing method and apparatus
CN105096353A (zh) * 2014-05-05 2015-11-25 腾讯科技(深圳)有限公司 Image processing method and apparatus
CN107705248A (zh) * 2017-10-31 2018-02-16 广东欧珀移动通信有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN108876732A (zh) * 2018-05-25 2018-11-23 北京小米移动软件有限公司 Face beautification method and apparatus
CN109584151A (zh) * 2018-11-30 2019-04-05 腾讯科技(深圳)有限公司 Face beautification method, apparatus, terminal, and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296571B (zh) * 2016-07-29 2019-06-04 厦门美图之家科技有限公司 Method, apparatus, and computing device for narrowing nose wings based on a face mesh
CN106919906B (zh) * 2017-01-25 2021-04-20 迈吉客科技(北京)有限公司 Image interaction method and interaction apparatus
CN107833177A (zh) * 2017-10-31 2018-03-23 维沃移动通信有限公司 Image processing method and mobile terminal
CN107730449B (zh) * 2017-11-07 2021-12-14 深圳市云之梦科技有限公司 Method and system for beautifying the facial features of a face
CN108717719A (zh) * 2018-05-23 2018-10-30 腾讯科技(深圳)有限公司 Method and apparatus for generating a cartoon face image, and computer storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210200304A1 (en) * 2019-12-31 2021-07-01 Lenovo (Beijing) Co., Ltd. Display method and electronic device
US12086305B2 (en) * 2019-12-31 2024-09-10 Lenovo (Beijing) Co., Ltd. Display method and electronic device comprising selecting and obtaining image data of a local area of a user's face
CN112330571A (zh) * 2020-11-27 2021-02-05 维沃移动通信有限公司 Image processing method and apparatus, and electronic device

Also Published As

Publication number Publication date
US20210192703A1 (en) 2021-06-24
US11410284B2 (en) 2022-08-09
CN109584151B (zh) 2022-12-13
CN109584151A (zh) 2019-04-05

Similar Documents

Publication Publication Date Title
WO2020108291A1 (zh) Face beautification method and apparatus, computer device, and storage medium
US11403763B2 (en) Image segmentation method and apparatus, computer device, and storage medium
CN110189340B (zh) Image segmentation method and apparatus, electronic device, and storage medium
CN110929651B (zh) Image processing method and apparatus, electronic device, and storage medium
US20200387698A1 (en) Hand key point recognition model training method, hand key point recognition method and device
US20200401790A1 (en) Face image processing method and device, and storage medium
CN108594997B (zh) Gesture skeleton construction method, apparatus, and device, and storage medium
US11436779B2 (en) Image processing method, electronic device, and storage medium
WO2021008456A1 (zh) Image processing method and apparatus, electronic device, and storage medium
CN110097576B (zh) Method for determining motion information of image feature points, task execution method, and device
US12008464B2 (en) Neural network based face detection and landmark localization
CN111541907B (zh) Article display method, apparatus, and device, and storage medium
US11030733B2 (en) Method, electronic device and storage medium for processing image
JP7487293B2 (ja) Method and apparatus for controlling movement of a virtual camera, and computer device and program
WO2022042290A1 (zh) Virtual model processing method and apparatus, electronic device, and storage medium
CN110335224B (zh) Image processing method and apparatus, computer device, and storage medium
WO2021218121A1 (zh) Image processing method and apparatus, electronic device, and storage medium
WO2019237747A1 (zh) Image cropping method and apparatus, electronic device, and computer-readable storage medium
CN111324250A (zh) Method, apparatus, and device for adjusting a three-dimensional figure, and readable storage medium
CN112581358B (zh) Training method for image processing model, image processing method, and apparatus
WO2020114097A1 (zh) Bounding box determination method and apparatus, electronic device, and storage medium
CN110807769B (zh) Image display control method and apparatus
CN112257594A (zh) Method and apparatus for displaying multimedia data, computer device, and storage medium
CN109472855B (zh) Volume rendering method, apparatus, and smart device
JP2023537721A (ja) Face image display method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19889411

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19889411

Country of ref document: EP

Kind code of ref document: A1