WO2021012596A1 - Image adjustment method, apparatus, storage medium, and device - Google Patents

Image adjustment method, apparatus, storage medium, and device

Info

Publication number
WO2021012596A1
Authority
WO
WIPO (PCT)
Prior art keywords: image, face, points, feature points, coordinates
Application number
PCT/CN2019/126539
Other languages
English (en)
French (fr)
Inventor
邹超洋
Original Assignee
广州视源电子科技股份有限公司
Application filed by 广州视源电子科技股份有限公司
Publication of WO2021012596A1

Classifications

    • G06T3/04
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 — Detection; localisation; normalisation
    • G06V40/168 — Feature extraction; face representation

Definitions

  • the embodiments of the present application relate to the field of image processing, and in particular, to an image adjustment method, device, storage medium, and equipment.
  • the inventor found that there are currently two main ways of slimming a face in a portrait.
  • the first achieves the slimming effect by moving the feature points of the parts of the face below the eyes. Face images processed this way mostly end up with a very uncoordinated face shape: the chin becomes sharp and the face loses its curvature and beauty.
  • the second maps the feature points onto a preset face model, achieving the face-shape adjustment through mapping onto a built-in template. This approach easily distorts the background, giving a poor adjustment result.
  • the embodiments of the present application provide an image adjustment method, device, storage medium, and equipment, which have the effect of preventing the face and background from being distorted in the process of face image processing.
  • an image adjustment method is provided, including the steps detailed below (steps S201-S206);
  • an image adjustment device including:
  • the face detection module is used to obtain the first image, and perform face detection on the first image to obtain a set of facial feature points;
  • the face deformation constraint point determination module is used to select part of the face feature points from the face feature point set as the face deformation constraint points;
  • the face deformation constraint amount determination module is configured to calculate the face deformation constraint amount according to the coordinates of the face deformation constraint point;
  • the coordinate adjustment module is configured to adjust the coordinates of the facial contour feature points in the facial feature point set according to the facial deformation constraint amount to obtain a second image;
  • a coordinate transformation module configured to perform affine transformation on the pixels in the first image to obtain the transformed coordinates of each pixel in the second image;
  • the third image acquisition module is configured to fill the pixel values in the second image according to the transformed coordinates in the first image to obtain a third image.
  • a computer-readable storage medium on which a computer program is stored, and the computer program is executed by a processor to implement the aforementioned image adjustment method.
  • a computer device including a memory, a processor, and a computer program stored in the memory and executable by the processor, where the processor implements the image adjustment method described above when executing the computer program.
  • some facial feature points are selected as face deformation constraint points, the face deformation constraint amount is calculated according to the coordinates of the face deformation constraint points, and the coordinates of the facial contour feature points in the feature point set are adjusted according to the face deformation constraint amount, effectively constraining the face adjustment.
  • this prevents the chin from becoming sharp or the cheeks from losing their curvature during face image processing, which would distort the face, and at the same time prevents the background from being distorted, better matching actual application scenarios.
  • FIG. 1 is a schematic block diagram of an application environment of an image adjustment method shown in an embodiment of the application;
  • FIG. 2 is a flowchart of an image adjustment method shown in an embodiment of the application;
  • FIG. 3 is a schematic diagram of the numbers and positions of facial feature points in an image adjustment method according to an embodiment of the application;
  • FIG. 4 is a flowchart of the affine transformation in an image adjustment method shown in an embodiment of the application;
  • FIG. 5 is a flowchart of obtaining an affine transformation matrix according to an embodiment of the application;
  • FIG. 6 is a flowchart of determining transformation coordinates shown in an embodiment of the application.
  • FIG. 7 is a structural block diagram of an image adjustment device shown in an embodiment of the application.
  • FIG. 8 is a structural block diagram of a coordinate transformation module shown in an embodiment of the application.
  • FIG. 9 is a structural block diagram of an affine transformation matrix determination module shown in an embodiment of the application.
  • FIG. 10 is a structural block diagram of a transformation coordinate determination module shown in an embodiment of the application.
  • FIG. 11 is a structural block diagram of a computer device shown in an embodiment of the application.
  • although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from each other.
  • first information may also be referred to as second information, and similarly, second information may also be referred to as first information.
  • the word "if" as used herein may be interpreted as "when", "while", or "in response to determining".
  • FIG. 1 is a schematic block diagram of an application environment of the image adjustment method shown in an embodiment of the present application.
  • the application environment of the image adjustment method includes a computer device 100, a first image 200 and a third image 300.
  • the computer device 100 runs an application program 110 that applies the image adjustment method of the embodiment of the present application.
  • the application program includes a face detection tool and the image adjustment method. The user inputs the first image 200 into the computer device; the face feature point set is then obtained by the face detection tool, and the face feature point set is adjusted by the image adjustment method to obtain the third image 300.
  • the computer device 100 may be any smart terminal, for example, it may be specifically a computer, a mobile phone, a tablet computer, an interactive smart tablet, a PDA (Personal Digital Assistant, personal digital assistant), an e-book reader, a multimedia player, etc.
  • the application program 110 may also be presented in other forms adapted to the smart terminal. In some examples, it may also be presented in the form of, for example, a system plug-in or a web page plug-in.
  • the face detection tool may use an existing face detection algorithm tool, such as Dlib, OpenCV, etc. In this embodiment, the Dlib face detection algorithm tool is preferably used.
  • the first image 200 is usually an image including a human face, which may be an image captured by a camera device, or an artificially synthesized image, and in the embodiment of the present application, the first image 200 is an input to be Adjusted image.
  • the third image 300 is the image processed by the image adjustment method of the embodiment of the present application; the face shape it displays is usually thinner than that of the first image 200, that is, it is the target image after face slimming.
  • FIG. 2 is a flowchart of an image adjustment method shown in an embodiment of this application, including the following steps:
  • Step S201: Obtain a first image, and perform face detection on the first image to obtain a set of face feature points.
  • Step S202: Select part of the face feature points from the set of face feature points as the face deformation constraint points.
  • Step S203: Calculate the face deformation constraint amount according to the coordinates of the face deformation constraint points.
  • Step S204: Adjust the coordinates of the face contour feature points in the face feature point set according to the face deformation constraint amount to obtain a second image.
  • Step S205: Perform affine transformation on the pixels in the first image to obtain the transformed coordinates of each pixel in the second image.
  • Step S206: Fill the pixel values in the second image according to the transformed coordinates to obtain a third image.
  • the first image in the embodiment of the present application is usually an image including a human face, which may be an image captured by a camera device or an artificially synthesized image; in the embodiment of the present application, the first image is the input image to be adjusted.
  • the second image is determined by adjusting, according to the face deformation constraint amount, the coordinates of the facial contour feature points in the facial feature point set of the first image; it serves as the intermediate image of the image adjustment method of the embodiment of the present application and is further transformed and adjusted to obtain the third image.
  • the third image is the image processed by the image adjustment method of the embodiment of the present application; the face shape it displays is usually thinner than that of the first image, that is, it is the target image after face slimming.
  • some facial feature points are selected as face deformation constraint points, the face deformation constraint amount is calculated according to the coordinates of the face deformation constraint points, and the coordinates of the facial contour feature points in the feature point set are adjusted according to the face deformation constraint amount, effectively constraining the face adjustment.
  • this prevents the chin from becoming sharp or the cheeks from losing their curvature during face image processing, which would distort the face, and at the same time prevents the background from being distorted, better matching actual application scenarios.
  • the face feature point set in step S201 of the embodiment of the present application includes multiple face feature points located in different parts of the face.
  • the face feature point set includes face key points
  • the key points of the face include: face contour feature points, eyebrow feature points, eye feature points, nose feature points, and mouth feature points. Selecting the above-mentioned face feature points can characterize most of the features of the face and effectively distinguish different faces.
  • the key points of the face can be detected and recognized on the first image through various existing face detection algorithms.
  • in an embodiment, the Dlib face detection algorithm tool is used to perform face detection on the first image to obtain 68 face key points, and the 68 face key points are numbered 0-67 according to preset rules to distinguish them: numbers 0-16 correspond to the face contour feature points, numbers 17-26 to the eyebrow feature points, numbers 27-35 to the nose feature points, numbers 36-47 to the eye feature points, and numbers 48-67 to the mouth feature points, as sketched in the example below.
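As a hedged illustration (not the patent's own code), the following sketch obtains the 68 numbered key points with the Dlib toolkit; the model file is Dlib's publicly distributed 68-landmark predictor, and the image path is a placeholder.

```python
# Sketch: detect 68 face landmarks with Dlib (paths are placeholders).
import dlib
import cv2

detector = dlib.get_frontal_face_detector()
# Pre-trained 68-landmark model distributed with Dlib (downloaded separately).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("first_image.jpg")             # the first image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = detector(gray)                          # detected face rectangles
if faces:
    shape = predictor(gray, faces[0])           # 68 landmarks, numbered 0-67
    landmarks = [(p.x, p.y) for p in shape.parts()]
    # 0-16: face contour, 17-26: eyebrows, 27-35: nose,
    # 36-47: eyes, 48-67: mouth (the numbering used throughout this patent)
```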
  • the face feature point set in step S201 also includes four outer feature points, obtained as follows: the smallest abscissa among the face key points, the largest ordinate among the face key points, the largest abscissa among the face key points, and the smallest ordinate among the face key points are combined in pairs to obtain four outer feature points outside the face.
  • the smallest abscissa of the face key points and the largest ordinate of the face key points are used to determine the first outer feature point;
  • the largest abscissa of the face key point and the largest ordinate of the face key point determine the second outer feature point;
  • the smallest abscissa of the face key point and the smallest ordinate of the face key point determine the third outer feature point;
  • the largest abscissa of the face key point and the smallest ordinate of the face key point determine the fourth outer feature point.
  • the obtained four outer feature points can also be numbered according to a preset rule to distinguish them. In this embodiment, the smallest abscissa of the face key points is the abscissa of the key point numbered 0, and the largest ordinate is the ordinate of the key point numbered 18: these two coordinates constitute the first outer feature point, numbered 68. The largest abscissa is the abscissa of the key point numbered 16, and together with the largest ordinate it constitutes the second outer feature point, numbered 69. The smallest abscissa (key point 0) and the smallest ordinate (key point 8) constitute the third outer feature point, numbered 70. The largest abscissa (key point 16) and the smallest ordinate (key point 8) constitute the fourth outer feature point, numbered 71. A sketch of this construction follows below.
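A minimal sketch of deriving the four outer feature points (numbers 68-71), assuming `landmarks` holds the 68 (x, y) tuples from the previous sketch.

```python
# Sketch: extend the 68 landmarks with four outer points numbered 68-71.
xs = [p[0] for p in landmarks]
ys = [p[1] for p in landmarks]
min_x, max_x = min(xs), max(xs)   # e.g. abscissas of points 0 and 16
min_y, max_y = min(ys), max(ys)   # e.g. ordinates of points 8 and 18 per the text

outer_points = [
    (min_x, max_y),  # No. 68: smallest abscissa, largest ordinate
    (max_x, max_y),  # No. 69: largest abscissa, largest ordinate
    (min_x, min_y),  # No. 70: smallest abscissa, smallest ordinate
    (max_x, min_y),  # No. 71: largest abscissa, smallest ordinate
]
landmarks = landmarks + outer_points   # 72 feature points in total
```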
  • during face slimming, the adjustment range of the chin and of the key points close to the eyes should be small, while the adjustment range of the cheek area should be larger; otherwise sensitive areas such as the face edges will appear distorted, with burrs and jagged edges. Therefore, in order to meet the needs of face slimming while preventing such artifacts, the face deformation constraint points selected in step S202 are the two outer eye corner feature points, or the two inner eye corner feature points, or the chin feature point and the brow center feature point.
  • since using the two outer eye corner feature points as face deformation constraint points effectively constrains the deformation of the face contour feature points, meets the needs of most users, and suits face-slimming adjustment of images of different sizes and different tilt angles, the embodiments of this application preferably use the two outer eye corner feature points as the face deformation constraint points; specifically, these are the face feature points numbered 39 and 42.
  • in an embodiment, step S203 may include: obtaining the difference of the abscissas of the two face deformation constraint points, and taking the arithmetic square root of the difference as the face deformation constraint amount.
  • the calculation formula for the face deformation constraint amount is degree = √(point1.x − point2.x), where degree represents the face deformation constraint amount, point1.x represents the abscissa of one face deformation constraint point, point2.x represents the abscissa of the other face deformation constraint point, and the value of point1.x − point2.x is greater than 0.
  • since face-slimming adjustment is mainly an adjustment of the face in the left-right lateral direction, taking the arithmetic square root of the difference between the abscissas of the two face deformation constraint points as the face deformation constraint amount effectively constrains the face-slimming adjustment and effectively prevents face deformation, as in the sketch below.
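A short sketch of the constraint amount under the formula above; `landmarks` comes from the earlier sketches, and indices 39 and 42 are the constraint points named in the text.

```python
import math

# Sketch: face deformation constraint amount (degree).
# Points numbered 39 and 42 are the constraint points named in the text.
p1, p2 = landmarks[42], landmarks[39]
dx = abs(p1[0] - p2[0])       # order operands so point1.x - point2.x > 0
degree = math.sqrt(dx)        # degree = sqrt(point1.x - point2.x)
```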
  • in an embodiment, the step in S204 of adjusting the coordinates of the facial contour feature points in the facial feature point set according to the facial deformation constraint amount may include: keeping the ordinate of each face contour feature point unchanged, and adjusting its abscissa as follows: the number of the face contour feature point is combined with preset numerical parameters to obtain a calculation result; the arithmetic square root of the product of this calculation result and the face deformation constraint amount is computed and used as the offset of the abscissa of the face contour feature point; the abscissa is then adjusted according to the offset.
  • in formula form, y = √(g(x) · degree), where y represents the offset of the abscissa of the face contour feature point, degree represents the face deformation constraint amount, x represents the number of the face contour feature point, and g(x) denotes the calculation of the point number with the preset numerical parameters (the concrete parameters are given in the original filing).
  • the result of superposing and summing the abscissa of the face contour point and the offset is used as the adjusted abscissa of the contour point.
  • adjusting the coordinates of the face contour feature points in the face feature point set in this way effectively constrains the face adjustment, preventing the chin from becoming sharp or the face from losing its curvature during face image processing, and thus prevents distortion of the face; a hedged sketch follows below.
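The concrete combination of the contour point number with the preset numerical parameters is only given as a formula image in the original filing, so `g(x)` below is a hypothetical stand-in chosen to respect the stated behaviour (small near the chin, point 8, and at the jaw ends; largest at the cheeks); the shift direction toward the face midline is likewise an assumption. `landmarks` and `degree` come from the earlier sketches.

```python
import math

def contour_offset(x, degree, scale=2.0):
    # Hypothetical g(x): zero at the chin (x == 8) and at the jaw ends
    # (x == 0, 16), largest at the cheeks, so sensitive points move least.
    g = scale * abs(math.sin(math.pi * x / 8.0))
    return math.sqrt(g * degree)             # y = sqrt(g(x) * degree)

adjusted = list(landmarks)
center_x = landmarks[8][0]                   # chin abscissa as the face midline
for x in range(17):                          # contour feature points 0-16
    px, py = landmarks[x]
    off = contour_offset(x, degree)
    # Ordinate unchanged; abscissa shifted toward the midline (slimming).
    adjusted[x] = (px + off if px < center_x else px - off, py)
```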
  • the step of performing affine transformation on the pixels in the first image in step S205 to obtain the transformed coordinates of each pixel in the second image may include:
  • Step S2051: Calculate an affine transformation matrix between the pixel points of the first image and the pixel points of the second image according to the facial feature points of the first image and the facial feature points of the second image.
  • Step S2052: Perform affine transformation on the pixels in the first image according to the affine transformation matrix to obtain the transformed coordinates of each pixel in the second image.
  • the step in S2051 of calculating, according to the facial feature points of the first image and the facial feature points of the second image, the affine transformation matrix between the pixel points of the first image and the pixel points of the second image may include:
  • Step S20511: Calculate the influence weight of each face feature point in the first image on the pixel points of the first image;
  • Step S20512: Obtain a weighted average of the face feature points of the first image according to each face feature point of the first image and the corresponding influence weight;
  • Step S20513: Obtain a weighted average of the face feature points of the second image according to each face feature point of the second image and the corresponding influence weight;
  • Step S20514: Calculate the affine transformation matrix according to the differences between the face feature points of the first image and their weighted average, the differences between the face feature points of the second image and their weighted average, and the influence weights.
  • the influence weight of the feature point numbered i on a pixel P of the first image may take an inverse-squared-distance form, for example w_i = 1 / ((Px − control[i].x)² + (Py − control[i].y)²), where P represents the pixel of the first image, Px represents the abscissa of the pixel, Py represents the ordinate of the pixel, and control[i].x and control[i].y represent the abscissa and ordinate of the face feature point numbered i in the first image; the centred feature points used in the matrix calculation are ĉontrol_i = control_i − control*, where control* is the weighted average of the face feature points of the first image.
  • the influence weight corresponding to each face feature point of the second image is the same as the influence weight corresponding to that feature point before adjustment, that is, the same as the influence weight corresponding to the corresponding face feature point of the first image.
  • with these weights, the affine transformation matrix of each pixel of the first image is calculated, as sketched below.
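The original formula images are not reproduced in this text, so the sketch below reconstructs a plausible per-pixel affine fit consistent with the variable definitions: inverse-squared-distance influence weights, weighted averages control* and target*, and a least-squares matrix M built from the centred points (a moving-least-squares style construction). The weight exponent and the least-squares form are assumptions; `landmarks` and `adjusted` come from the earlier sketches.

```python
import numpy as np

control = np.asarray(landmarks, dtype=np.float64)   # first-image points, shape (72, 2)
target = np.asarray(adjusted, dtype=np.float64)     # second-image points, shape (72, 2)

def affine_at(P):
    """Per-pixel affine matrix M and the weighted averages for pixel P.

    Reconstruction under stated assumptions (inverse-squared-distance
    weights, weighted least-squares affine fit).
    """
    d2 = np.sum((control - P) ** 2, axis=1)
    w = 1.0 / np.maximum(d2, 1e-8)                  # w_i, guarded against /0
    control_star = w @ control / w.sum()            # control*: weighted average
    target_star = w @ target / w.sum()              # target*: weighted average
    ch = control - control_star                     # control_i - control*
    th = target - target_star                       # target_i - target*
    A = (ch * w[:, None]).T @ ch                    # 2x2 weighted scatter
    B = (ch * w[:, None]).T @ th
    M = np.linalg.solve(A, B)                       # affine transformation matrix
    return M, control_star, target_star
```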
  • the step in S2052 of performing affine transformation on the pixels in the first image according to the affine transformation matrix to obtain the transformed coordinates of each pixel in the second image may include:
  • Step S20521: Calculate the difference between the pixel point coordinates of the first image and the weighted average value of the facial feature points of the first image;
  • Step S20522: Calculate the product of the difference and the affine transformation matrix, and superpose and sum the product result with the weighted average of the facial feature points of the second image to obtain the transformed coordinates of the pixel in the second image.
  • the calculation formula for the transformed coordinates of a pixel in the second image is L(P) = (P − control*) · M + target*, where L(P) is the transformed coordinate of the pixel in the second image, P is the coordinate of the pixel in the first image, M represents the affine transformation matrix of the pixel, and control* and target* are the weighted averages of the face feature points of the first and second images, respectively.
  • the transformed coordinates of the pixel points in the second image can be determined, and then the adjusted image can be determined.
  • for a pixel of the first image, affine transformation yields the coordinates of the corresponding pixel of the second image, so that pixel of the second image can be filled with the value of the corresponding pixel of the first image; alternatively, for each pixel of the second image, inverse affine transformation yields the coordinates of the nearest neighboring pixel of the first image, and the pixel value at that coordinate is filled into the pixel of the second image, thereby achieving image fusion.
  • the specific affine transformation method can refer to the aforementioned affine transformation between the facial feature point coordinates of the first image and the facial feature point coordinates of the second image.
  • the step in S206 of filling the pixel values in the second image according to the transformed coordinates to obtain the third image includes performing a nearest-neighbor interpolation operation on the second image, which fills the pixel values in the second image simply and quickly; a hedged end-to-end sketch follows below.
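Putting the pieces together, this sketch forward-maps every pixel of the first image through L(P) = (P − control*)·M + target* and fills the nearest output pixel, matching the nearest-neighbour fill described above; `img` and `affine_at` come from the earlier sketches, and the double loop is illustrative rather than efficient (the text also mentions the inverse-mapping variant).

```python
import numpy as np

h, w_img = img.shape[:2]
third = np.zeros_like(img)                   # the third (output) image

for y in range(h):
    for x in range(w_img):
        P = np.array([x, y], dtype=np.float64)
        M, c_star, t_star = affine_at(P)
        q = (P - c_star) @ M + t_star        # L(P) = (P - control*)M + target*
        qx, qy = int(round(q[0])), int(round(q[1]))  # nearest-neighbour rounding
        if 0 <= qx < w_img and 0 <= qy < h:
            third[qy, qx] = img[y, x]        # fill the second-image pixel value
```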
  • an image adjustment device 400 including:
  • the face detection module 401 is configured to obtain a first image, perform face detection on the first image, and obtain a face feature point set;
  • the face deformation constraint point determination module 402 is configured to select part of the face feature points from the face feature point set as the face deformation constraint points;
  • the face deformation constraint amount determining module 403 is configured to calculate the face deformation constraint amount according to the coordinates of the face deformation constraint point;
  • the coordinate adjustment module 404 is configured to adjust the coordinates of the facial contour feature points in the facial feature point set according to the facial deformation constraint amount to obtain a second image;
  • the coordinate transformation module 405 is configured to perform affine transformation on the pixels in the first image to obtain the transformed coordinates of each pixel in the second image;
  • the third image acquisition module 406 is configured to fill the pixel values in the second image according to the transformed coordinates in the first image to obtain a third image.
  • some facial feature points are selected as face deformation constraint points, the face deformation constraint amount is calculated according to the coordinates of the face deformation constraint points, and the coordinates of the facial contour feature points in the feature point set are adjusted according to the face deformation constraint amount, effectively constraining the face adjustment.
  • this prevents the chin from becoming sharp or the cheeks from losing their curvature during face image processing, which would distort the face, and at the same time prevents the background from being distorted, better matching actual application scenarios.
  • the face feature point set includes multiple face feature points located in different parts of the face.
  • the face feature point set includes face key points
  • the face key points include: face contour feature points, eyebrow feature points, eye feature points, nose feature points, and mouth feature points. Selecting the above face feature points can characterize most of the features of the face and effectively distinguish different faces.
  • the key points of the face can be detected and recognized on the first image through various existing face detection algorithms.
  • in an embodiment, the Dlib face detection algorithm tool is used to perform face detection on the first image to obtain 68 face key points, and the 68 face key points are numbered 0-67 according to preset rules to distinguish them: numbers 0-16 correspond to the face contour feature points, numbers 17-26 to the eyebrow feature points, numbers 27-35 to the nose feature points, numbers 36-47 to the eye feature points, and numbers 48-67 to the mouth feature points.
  • the face feature point set also includes four outer feature points
  • the face detection module 401 is also used to determine the four outer feature points, specifically: the smallest abscissa among the face key points, the largest ordinate among the face key points, the largest abscissa among the face key points, and the smallest ordinate among the face key points are combined in pairs to obtain four outer feature points outside the face.
  • the smallest abscissa of the face key points and the largest ordinate of the face key points are used to determine the first outer feature point;
  • the largest abscissa of the face key point and the largest ordinate of the face key point determine the second outer feature point;
  • the smallest abscissa of the face key point and the smallest ordinate of the face key point determine the third outer feature point;
  • the largest abscissa of the face key point and the smallest ordinate of the face key point determine the fourth outer feature point.
  • the obtained four outer feature points can also be numbered according to a preset rule to distinguish them. In this embodiment, the smallest abscissa of the face key points is the abscissa of the key point numbered 0, and the largest ordinate is the ordinate of the key point numbered 18: these two coordinates constitute the first outer feature point, numbered 68. The largest abscissa is the abscissa of the key point numbered 16, and together with the largest ordinate it constitutes the second outer feature point, numbered 69. The smallest abscissa (key point 0) and the smallest ordinate (key point 8) constitute the third outer feature point, numbered 70. The largest abscissa (key point 16) and the smallest ordinate (key point 8) constitute the fourth outer feature point, numbered 71.
  • during face slimming, the adjustment range of the chin and of the key points close to the eyes should be small, while the adjustment range of the cheek area should be larger; otherwise sensitive areas such as the face edges will appear distorted, with burrs and jagged edges. Therefore, in order to meet the needs of face slimming while preventing such artifacts, the selected face deformation constraint points are the two outer eye corner feature points, or the two inner eye corner feature points, or the chin feature point and the brow center feature point.
  • since using the two outer eye corner feature points as face deformation constraint points effectively constrains the deformation of the face contour feature points, meets the needs of most users, and suits face-slimming adjustment of images of different sizes and different tilt angles, this implementation preferably uses the two outer eye corner feature points as the face deformation constraint points; specifically, these are the face feature points numbered 39 and 42.
  • the face deformation constraint amount determination module 403 is configured to calculate the face deformation constraint amount according to the face deformation constraint points, including: obtaining the difference of the abscissas of the two face deformation constraint points, and taking the arithmetic square root of the difference as the face deformation constraint amount.
  • the calculation formula for the face deformation constraint amount is degree = √(point1.x − point2.x), where degree represents the face deformation constraint amount, point1.x represents the abscissa of one face deformation constraint point, point2.x represents the abscissa of the other face deformation constraint point, and the value of point1.x − point2.x is greater than 0.
  • since face-slimming adjustment is mainly an adjustment of the face in the left-right lateral direction, taking the arithmetic square root of the difference between the abscissas of the two face deformation constraint points as the face deformation constraint amount effectively constrains the face-slimming adjustment and effectively prevents face deformation.
  • the coordinate adjustment module 404 is configured to adjust the coordinates of the facial contour feature points in the facial feature point set according to the facial deformation constraint amount, including: keeping the ordinate of each face contour feature point unchanged, and adjusting its abscissa as follows: the number of the face contour feature point is combined with preset numerical parameters to obtain a calculation result; the arithmetic square root of the product of the calculation result and the face deformation constraint amount is computed and used as the offset of the abscissa of the face contour feature point; the abscissa of the face contour feature point is adjusted according to the offset.
  • in formula form, y = √(g(x) · degree), where y represents the offset of the abscissa of the face contour feature point, degree represents the face deformation constraint amount, x represents the number of the face contour feature point, and g(x) denotes the calculation of the point number with the preset numerical parameters.
  • the result of superposing and summing the abscissa of the face contour point and the offset is used as the adjusted abscissa of the contour point.
  • adjusting the coordinates of the face contour feature points in the face feature point set in this way effectively constrains the face adjustment, preventing the chin from becoming sharp or the face from losing its curvature, and thus preventing the face from being distorted.
  • the coordinate transformation module 405 includes:
  • the affine transformation matrix determination module 4051 is used to calculate the affine transformation matrix between the pixel points of the first image and the pixel points of the second image according to the face feature points of the first image and the face feature points of the second image .
  • the transformation coordinate determination module 4052 is configured to perform affine transformation on the pixels in the first image according to the affine transformation matrix to obtain the transformed coordinates of each pixel in the second image.
  • the affine transformation matrix determination module 4051 includes:
  • the influence weight determination module 40511 is configured to calculate the influence weight of each face feature point in the first image on the pixel points of the first image.
  • a first weighted average calculation module 40512 configured to obtain a weighted average of the face feature points of the first image according to each face feature point of the first image and the corresponding influence weight;
  • the second weighted average calculation module 40513 is configured to obtain the weighted average of the face feature points of the second image according to each face feature point of the second image and the corresponding influence weight;
  • the affine transformation matrix calculation module 40514 is used to calculate the difference between the facial feature points of the first image and the weighted average of the facial feature points of the first image, the facial feature points of the second image, and the The difference between the weighted average values of the facial feature points of the second image and the influence weight are calculated to obtain an affine transformation matrix.
  • the influence weight of the feature point numbered i on a pixel P of the first image may take an inverse-squared-distance form, for example w_i = 1 / ((Px − control[i].x)² + (Py − control[i].y)²), where P represents the pixel of the first image, Px represents the abscissa of the pixel, Py represents the ordinate of the pixel, and control[i].x and control[i].y represent the abscissa and ordinate of the face feature point numbered i in the first image; the centred feature points used in the matrix calculation are ĉontrol_i = control_i − control*, where control* is the weighted average of the face feature points of the first image.
  • according to the influence weights, the affine transformation matrix of each pixel of the first image is calculated.
  • the transformation coordinate determination module 4052 may include:
  • the difference calculation module 40521 is configured to calculate the difference between the pixel point coordinates of the first image and the weighted average value of the facial feature points of the first image;
  • the coordinate calculation module 40522 is configured to calculate the product of the difference and the affine transformation matrix, and superpose and sum the product result with the weighted average of the face feature points of the second image to obtain the transformed coordinates of the pixel point in the second image.
  • the calculation formula for the transformed coordinates of a pixel in the second image is L(P) = (P − control*) · M + target*, where L(P) is the transformed coordinate of the pixel in the second image, P is the coordinate of the pixel in the first image, M represents the affine transformation matrix of the pixel, and control* and target* are the weighted averages of the face feature points of the first and second images, respectively.
  • the transformed coordinates of the pixel points in the second image can be determined, and then the adjusted image can be determined.
  • for a pixel of the first image, affine transformation yields the coordinates of the corresponding pixel of the second image, so that pixel of the second image can be filled with the value of the corresponding pixel of the first image; alternatively, for each pixel of the second image, inverse affine transformation yields the coordinates of the nearest neighboring pixel of the first image, and the pixel value at that coordinate is filled into the pixel of the second image, thereby achieving image fusion.
  • the specific affine transformation method can refer to the aforementioned affine transformation between the facial feature point coordinates of the first image and the facial feature point coordinates of the second image.
  • the third image acquisition module 406, when filling the pixel values in the second image according to the transformed coordinates in the first image to obtain the third image, is configured to perform a nearest-neighbor interpolation operation on the second image.
  • the embodiment of the present application also provides a computer device, including: a processor, and a memory for storing a computer program executable by the processor.
  • FIG. 11 is a structural block diagram of a computer device shown in an embodiment of the present application.
  • the computer equipment includes: a processor 501, a memory 502, a display screen 503 with touch function, an input device 504, an output device 505, and a communication device 506.
  • the number of processors 501 in the computer device may be one or more; one processor 501 is taken as an example in FIG. 11.
  • the number of memories 502 in the computer device may be one or more; one memory 502 is taken as an example in FIG. 11.
  • the processor 501, the memory 502, the display screen 503 with touch function, the input device 504, the output device 505, and the communication device 506 of the computer device may be connected by a bus or in other ways; in FIG. 11, connection by a bus is taken as an example.
  • the electronic device may be a computer, a mobile phone, a tablet computer, an interactive smart tablet, a PDA (Personal Digital Assistant, personal digital assistant), an e-book reader, a multimedia player, etc.
  • the electronic device is an interactive smart tablet as an example for description.
  • the memory 502 can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the image adjustment method described in any embodiment of the present application (for example, the face detection module 401, the face deformation constraint point determination module 402, the face deformation constraint amount determination module 403, the coordinate adjustment module 404, the coordinate transformation module 405, and the third image acquisition module 406 in the image adjustment device).
  • the memory 502 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the device, and the like.
  • the memory 502 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • the memory 502 may further include a memory remotely provided with respect to the processor 501, and these remote memories may be connected to the device through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • the display screen 503 may be a display screen with a touch function, which may be a capacitive screen, an electromagnetic screen or an infrared screen.
  • the display screen 503 is used to display data according to the instructions of the processor 501, such as displaying the first image and the third image, and is also used to receive touch operations acting on the display screen 503 and send corresponding signals to the processor 501 or other devices.
  • when the display screen 503 is an infrared screen, it also includes an infrared touch frame arranged around the display screen 503, which can also be used to receive infrared signals and send them to the processor 501 or other devices.
  • the display screen 503 may also be a display screen without a touch function.
  • the input device 504 may be used to receive input images and generate key signal inputs related to user settings and function control of the electronic device. It may also be a camera used to obtain images and a sound pickup device to obtain audio data.
  • the output device 505 may include audio equipment such as speakers. The specific composition of the input device 504 and the output device 505 can be set according to actual conditions.
  • the communication device 506 is used to establish a communication connection with other devices; it may be a wired communication device and/or a wireless communication device.
  • the processor 501 executes various functional applications and data processing of the device by running software programs, instructions, and modules stored in the memory 502, that is, realizes the above-mentioned image adjustment method.
  • when the processor 501 executes one or more programs stored in the memory 502, it specifically implements the following operations: obtaining a first image and performing face detection on the first image to obtain a face feature point set; selecting part of the face feature points from the face feature point set as face deformation constraint points; calculating the face deformation constraint amount according to the coordinates of the face deformation constraint points; adjusting the coordinates of the facial contour feature points in the facial feature point set according to the facial deformation constraint amount to obtain the second image; performing affine transformation on the pixels in the first image to obtain the transformed coordinates of each pixel in the second image; and filling the pixel values in the second image according to the transformed coordinates to obtain the third image.
  • some facial feature points are selected as face deformation constraint points, the face deformation constraint amount is calculated according to the coordinates of the face deformation constraint points, and the coordinates of the facial contour feature points in the feature point set are adjusted according to the face deformation constraint amount, effectively constraining the face adjustment.
  • this prevents the chin from becoming sharp or the cheeks from losing their curvature during face image processing, which would distort the face, and at the same time prevents the background from being distorted, better matching actual application scenarios.
  • the face feature point set includes multiple face feature points located in different parts of the face.
  • the face feature point set includes face key points, and the face key points include: face contour feature points, eyebrow feature points, eye feature points, nose feature points, and mouth feature points. Selecting the above face feature points can characterize most of the features of the face and effectively distinguish different faces.
  • the key points of the face can be detected and recognized on the first image through various existing face detection algorithms.
  • in an embodiment, the Dlib face detection algorithm tool is used to perform face detection on the first image to obtain 68 face key points, and the 68 face key points are numbered 0-67 according to preset rules to distinguish them: numbers 0-16 correspond to the face contour feature points, numbers 17-26 to the eyebrow feature points, numbers 27-35 to the nose feature points, numbers 36-47 to the eye feature points, and numbers 48-67 to the mouth feature points.
  • in an embodiment, the face feature point set also includes four outer feature points, obtained through the following steps: the smallest abscissa among the face key points, the largest ordinate among the face key points, the largest abscissa among the face key points, and the smallest ordinate among the face key points are combined in pairs to obtain four outer feature points outside the face.
  • the smallest abscissa of the face key points and the largest ordinate of the face key points are used to determine the first outer feature point;
  • the largest abscissa of the face key point and the largest ordinate of the face key point determine the second outer feature point;
  • the smallest abscissa of the face key point and the smallest ordinate of the face key point determine the third outer feature point;
  • the largest abscissa of the face key point and the smallest ordinate of the face key point determine the fourth outer feature point.
  • the obtained four outer feature points can be numbered according to preset rules to distinguish them. In this embodiment, the smallest abscissa of the face key points is the abscissa of the key point numbered 0, and the largest ordinate is the ordinate of the key point numbered 18: these two coordinates constitute the first outer feature point, numbered 68. The largest abscissa is the abscissa of the key point numbered 16, and together with the largest ordinate it constitutes the second outer feature point, numbered 69. The smallest abscissa (key point 0) and the smallest ordinate (key point 8) constitute the third outer feature point, numbered 70. The largest abscissa (key point 16) and the smallest ordinate (key point 8) constitute the fourth outer feature point, numbered 71.
  • in an embodiment, the selected face deformation constraint points are the two outer eye corner feature points, or the two inner eye corner feature points, or the chin feature point and the brow center feature point. Since using the two outer eye corner feature points as face deformation constraint points effectively constrains the deformation of the face contour feature points, meets the needs of most users, and suits face-slimming adjustment of images of different sizes and different tilt angles, this implementation preferably uses the two outer eye corner feature points as the face deformation constraint points; specifically, these are the face feature points numbered 39 and 42.
  • when the processor executes the calculation of the face deformation constraint amount, the method includes executing: acquiring the difference of the abscissas of the two face deformation constraint points, and taking the arithmetic square root of the difference as the face deformation constraint amount.
  • the calculation formula for the face deformation constraint amount is degree = √(point1.x − point2.x), where degree represents the face deformation constraint amount, point1.x represents the abscissa of one face deformation constraint point, point2.x represents the abscissa of the other face deformation constraint point, and the value of point1.x − point2.x is greater than 0.
  • when the processor executes the adjustment of the coordinates of the facial contour feature points in the facial feature point set according to the facial deformation constraint amount, it may include executing: keeping the ordinate of each face contour feature point unchanged, and obtaining the abscissa as follows: the number of the face contour feature point is combined with the preset numerical parameters to obtain a calculation result; the arithmetic square root of the product of the calculation result and the face deformation constraint amount is computed and taken as the offset of the abscissa of the face contour feature point; the abscissa of the face contour feature point is adjusted according to the offset.
  • as before, y = √(g(x) · degree), where y represents the offset of the abscissa of the face contour feature point, degree represents the face deformation constraint amount, x represents the number of the face contour feature point, and g(x) denotes the calculation of the point number with the preset numerical parameters.
  • the result of superposing and summing the abscissa of the face contour point and the offset is used as the adjusted abscissa of the contour point.
  • adjusting the coordinates of the face contour feature points in the face feature point set in this way effectively constrains the face adjustment, preventing the chin from becoming sharp or the face from losing its curvature, and thus preventing the face from being distorted.
  • when the processor 501 executes one or more programs stored in the memory 502 to implement the affine transformation of the pixels in the first image and obtain the transformed coordinates of each pixel in the second image, the following operations are specifically implemented: calculating, according to the facial feature point coordinates of the first image and the facial feature point coordinates of the second image, the affine transformation matrix between the pixels of the first image and the pixels of the second image; and performing affine transformation on the pixels in the first image according to the affine transformation matrix to obtain the transformed coordinates of each pixel in the second image.
  • when the processor 501 executes the calculation of the affine transformation matrix between the pixels of the first image and the pixels of the second image according to the facial feature points of the first image and the facial feature points of the second image, it includes executing: calculating the influence weight of each face feature point in the first image on the pixel points of the first image; obtaining the weighted average of the face feature points of the first image according to each face feature point of the first image and the corresponding influence weight; obtaining the weighted average of the face feature points of the second image according to each face feature point of the second image and the corresponding influence weight; and calculating the affine transformation matrix according to the differences between the face feature points of the first image and their weighted average, the differences between the face feature points of the second image and their weighted average, and the influence weights.
  • the influence weight of the feature point numbered i on a pixel P of the first image may take an inverse-squared-distance form, for example w_i = 1 / ((Px − control[i].x)² + (Py − control[i].y)²), where P represents the pixel of the first image, Px represents the abscissa of the pixel, Py represents the ordinate of the pixel, and control[i].x and control[i].y represent the abscissa and ordinate of the face feature point numbered i in the first image; the centred feature points used in the matrix calculation are ĉontrol_i = control_i − control*, where control* is the weighted average of the face feature points of the first image.
  • according to the influence weights, the affine transformation matrix of each pixel of the first image is calculated.
  • when the processor 501 executes one or more programs stored in the memory 502 to perform affine transformation on the pixels in the first image and obtain the transformed coordinates of each pixel in the second image, it includes executing: calculating the difference between the pixel point coordinates of the first image and the weighted average of the facial feature points of the first image; calculating the product of the difference and the affine transformation matrix; and superposing and summing the product result with the weighted average of the facial feature points of the second image to obtain the transformed coordinates of the pixel points in the second image.
  • the calculation formula for the transformed coordinates of a pixel in the second image is L(P) = (P − control*) · M + target*, where L(P) is the transformed coordinate of the pixel in the second image, P is the coordinate of the pixel in the first image, M represents the affine transformation matrix of the pixel, and control* and target* are the weighted averages of the face feature points of the first and second images, respectively.
  • the transformed coordinates of the pixel points in the second image can be determined, and then the adjusted image can be determined.
  • for a pixel of the first image, affine transformation yields the coordinates of the corresponding pixel of the second image, so that pixel of the second image can be filled with the value of the corresponding pixel of the first image; alternatively, for each pixel of the second image, inverse affine transformation yields the coordinates of the nearest neighboring pixel of the first image, and the pixel value at that coordinate is filled into the pixel of the second image, thereby achieving image fusion.
  • the specific affine transformation method may refer to the affine transformation between the facial feature point coordinates of the first image and the facial feature point coordinates of the second image.
  • when the processor 501 performs the filling of the pixel values in the second image according to the transformed coordinates in the first image to obtain the third image, this includes performing a nearest-neighbor interpolation operation on the second image.
  • the pixel value in the second image can be filled simply and quickly through the nearest neighbor interpolation operation.
  • the embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of any one of the above-mentioned image adjustment methods are implemented, including: obtaining a first image and performing face detection on the first image to obtain a set of facial feature points; selecting part of the face feature points from the face feature point set as face deformation constraint points; calculating the face deformation constraint amount according to the coordinates of the face deformation constraint points; adjusting the coordinates of the facial contour feature points in the facial feature point set according to the facial deformation constraint amount to obtain the second image; performing affine transformation on the pixels in the first image to obtain the transformed coordinates of each pixel in the second image; and filling the pixel values in the second image according to the transformed coordinates to obtain the third image.
  • some facial feature points are selected as face deformation constraint points, the face deformation constraint amount is calculated according to the coordinates of the face deformation constraint points, and the coordinates of the facial contour feature points in the feature point set are adjusted according to the face deformation constraint amount, effectively constraining the face adjustment; this prevents the chin from becoming sharp or the face from losing its curvature, which would distort the face, and at the same time prevents the background from being distorted, better matching actual application scenarios.
  • the embodiments of the present application may adopt the form of a computer program product implemented on one or more storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing program codes.
  • Computer-readable storage media include permanent and non-permanent, removable and non-removable media, and information storage can be achieved by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to: phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by computing devices.
  • the computer device provided above can be used to execute the image adjustment method provided by any of the above embodiments, and has the corresponding functions and beneficial effects.
  • for the implementation process of the functions and roles of each component in the above-mentioned device, refer to the implementation process of the corresponding steps in the above-mentioned method, which will not be repeated here.
  • for related parts, reference may be made to the description of the method embodiments.
  • the device embodiments described above are merely illustrative; the components described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present application. Those of ordinary skill in the art can understand and implement them without creative work.

Abstract

Provided are an image adjustment method and apparatus, a storage medium, and a device. The method includes: acquiring a first image, and performing face detection on the first image to obtain a face feature point set; selecting some of the face feature points from the face feature point set as face deformation constraint points; calculating a face deformation constraint amount according to the coordinates of the face deformation constraint points; adjusting the coordinates of the face contour feature points in the face feature point set according to the face deformation constraint amount to obtain a second image; performing an affine transformation on the pixels in the first image to obtain the transformed coordinates of each pixel in the second image; and filling the pixel values in the second image according to the transformed coordinates to obtain a third image. By adjusting the coordinates of the face contour feature points in the face feature point set according to the face deformation constraint amount determined from the face deformation constraint points, the chin can be prevented from becoming sharp and the cheeks from losing their curvature during face image processing, and the background can be prevented from being distorted.

Description

Image adjustment method and apparatus, storage medium, and device
This application claims priority to Chinese Patent Application No. 201910670893.3, filed with the China National Intellectual Property Administration on July 24, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the field of image processing, and in particular to an image adjustment method, apparatus, storage medium, and device.
Background
Now that taking photos and shooting videos are extremely popular, users have increasingly high expectations for captured face images. In particular, by current aesthetic tastes a thinner face is usually considered more attractive, so the demand for a portrait face-thinning function has become more pronounced.
In the course of implementing the embodiments of the present application, the inventor found that there are currently two main approaches to portrait face thinning. The first achieves the thinning effect by moving the feature points of the parts of the face below the eyes; the face images processed in this way are mostly adjusted very incoherently, with the chin becoming sharp and the cheeks losing their curvature and attractiveness. The second maps the feature points onto a preset face model, achieving the face-shape adjustment by mapping onto a built-in template; this approach easily distorts the background and yields a poor adjustment effect.
Summary
The embodiments of the present application provide an image adjustment method, apparatus, storage medium, and device, which have the effect of preventing the face and the background from being distorted during face image processing.
According to a first aspect of the embodiments of the present application, an image adjustment method is provided, including the following steps:
acquiring a first image, and performing face detection on the first image to obtain a face feature point set;
selecting some of the face feature points from the face feature point set as face deformation constraint points;
calculating a face deformation constraint amount according to the coordinates of the face deformation constraint points;
adjusting the coordinates of the face contour feature points in the face feature point set according to the face deformation constraint amount to obtain a second image;
performing an affine transformation on the pixels in the first image to obtain the transformed coordinates of each pixel in the second image;
filling the pixel values in the second image according to the transformed coordinates to obtain a third image.
According to a second aspect of the embodiments of the present application, an image adjustment apparatus is provided, including:
a face detection module, configured to acquire a first image and perform face detection on the first image to obtain a face feature point set;
a face deformation constraint point determination module, configured to select some of the face feature points from the face feature point set as face deformation constraint points;
a face deformation constraint amount determination module, configured to calculate a face deformation constraint amount according to the coordinates of the face deformation constraint points;
a coordinate adjustment module, configured to adjust the coordinates of the face contour feature points in the face feature point set according to the face deformation constraint amount to obtain a second image;
a coordinate transformation module, configured to perform an affine transformation on the pixels in the first image to obtain the transformed coordinates of each pixel in the second image;
a third image acquisition module, configured to fill the pixel values in the second image according to the transformed coordinates in the first image to obtain a third image.
According to a third aspect of the embodiments of the present application, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the image adjustment method described above is implemented.
According to a fourth aspect of the embodiments of the present application, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable by the processor; the processor implements the image adjustment method described above when executing the computer program.
In the embodiments of the present application, some of the face feature points are selected as face deformation constraint points, a face deformation constraint amount is calculated according to the coordinates of the face deformation constraint points, and the coordinates of the face contour feature points in the face feature point set are adjusted according to the face deformation constraint amount, effectively constraining the face adjustment; this can prevent the chin from becoming sharp or the cheeks from losing their curvature, which would distort the face, during face image processing, and at the same time prevents the background from being distorted during face image processing, better matching practical application scenarios.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the embodiments of the present application.
For better understanding and implementation, the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
Fig. 1 is a schematic block diagram of the application environment of the image adjustment method according to an embodiment of the present application;
Fig. 2 is a flowchart of the image adjustment method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of the numbering and positions of the face feature points in the image adjustment method according to an embodiment of the present application;
Fig. 4 is a flowchart of the affine transformation in the image adjustment method according to an embodiment of the present application;
Fig. 5 is a flowchart of obtaining the affine transformation matrix according to an embodiment of the present application;
Fig. 6 is a flowchart of determining the transformed coordinates according to an embodiment of the present application;
Fig. 7 is a structural block diagram of the image adjustment apparatus according to an embodiment of the present application;
Fig. 8 is a structural block diagram of the coordinate transformation module according to an embodiment of the present application;
Fig. 9 is a structural block diagram of the affine transformation matrix determination module according to an embodiment of the present application;
Fig. 10 is a structural block diagram of the transformed coordinate determination module according to an embodiment of the present application;
Fig. 11 is a structural block diagram of the computer device according to an embodiment of the present application.
Detailed Description
Exemplary embodiments are described in detail here, and examples thereof are shown in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the embodiments of the present application; on the contrary, they are merely examples of apparatuses and methods consistent with some aspects of the embodiments of the present application as detailed in the appended claims.
The terms used in the embodiments of the present application are for the purpose of describing particular embodiments only and are not intended to limit the embodiments of the present application. The singular forms "a", "said" and "the" used in the embodiments of the present application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" used herein refers to and includes any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various kinds of information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the embodiments of the present application, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining".
Please refer to Fig. 1, which is a schematic block diagram of the application environment of the image adjustment method according to an embodiment of the present application.
As shown in Fig. 1, the application environment of the image adjustment method includes a computer device 100, a first image 200 and a third image 300. An application 110 applying the image adjustment method of the embodiments of the present application runs on the computer device 100; the application includes a face detection tool and the image adjustment method. After the user inputs the first image 200 into the computer device, the face feature point set is obtained through the face detection tool, and the face feature point set is adjusted through the image adjustment method to obtain the third image 300.
The computer device 100 may be any intelligent terminal, for example a computer, a mobile phone, a tablet computer, an interactive smart panel, a PDA (Personal Digital Assistant), an e-book reader, a multimedia player, or the like. Depending on the intelligent terminal, the application 110 may also be presented in other forms adapted to that terminal; in some examples it may be presented in the form of, for example, a system plug-in or a web-page plug-in. The face detection tool may be an existing face detection algorithm tool such as Dlib or OpenCV; in this embodiment, the Dlib face detection algorithm tool is preferably used.
The first image 200 is usually an image containing a face; it may be an image captured by a camera or an artificially synthesized image, and in the embodiments of the present application the first image 200 is the input image to be adjusted. The third image 300 is the image processed by the image adjustment method of the embodiments of the present application; the face shape it shows is usually thinner than that of the first image 200, i.e., it is the target image after face thinning.
Please refer to Fig. 2, which is a flowchart of the image adjustment method according to an embodiment of the present application, including the following steps:
Step S201: acquire a first image, and perform face detection on the first image to obtain a face feature point set.
Step S202: select some of the face feature points from the face feature point set as face deformation constraint points.
Step S203: calculate a face deformation constraint amount according to the coordinates of the face deformation constraint points.
Step S204: adjust the coordinates of the face contour feature points in the face feature point set according to the face deformation constraint amount to obtain a second image.
Step S205: perform an affine transformation on the pixels in the first image to obtain the transformed coordinates of each pixel in the second image.
Step S206: fill the pixel values in the second image according to the transformed coordinates to obtain a third image.
In the embodiments of the present application, the first image is usually an image containing a face; it may be an image captured by a camera or an artificially synthesized image, and it is the input image to be adjusted. The second image is the image determined after the coordinates of the face contour feature points in the face feature point set of the first image are adjusted according to the face deformation constraint amount; it serves as the intermediate image of the image adjustment method of the embodiments of the present application, and the third image is obtained by further transformation and adjustment on the basis of the second image. The third image is the image processed by the image adjustment method of the embodiments of the present application; the face shape it shows is usually thinner than that of the first image, i.e., it is the target image after face thinning.
In the embodiments of the present application, some of the face feature points are selected as face deformation constraint points, a face deformation constraint amount is calculated according to the coordinates of the face deformation constraint points, and the coordinates of the face contour feature points in the face feature point set are adjusted according to the face deformation constraint amount, effectively constraining the face adjustment; this can prevent the chin from becoming sharp or the cheeks from losing their curvature, which would distort the face, during face image processing, and at the same time prevents the background from being distorted during face image processing, better matching practical application scenarios.
Please refer to Fig. 3. The face feature point set in step S201 of the embodiments of the present application includes multiple face feature points located at different parts of the face. Optionally, the face feature point set includes face key points, and the face key points include: face contour feature points, eyebrow feature points, eye feature points, nose feature points and mouth feature points. Selecting these face feature points can characterize most of the features of a face and effectively distinguish different faces. The face key points can be detected and identified from the first image by various existing face detection algorithms. In the embodiments of the present application, face detection is performed on the first image with the Dlib face detection algorithm tool to obtain 68 face key points, and these 68 face key points are numbered 0-67 according to a preset rule for distinction. Specifically, numbers 0-16 correspond to the face contour feature points, numbers 17-26 to the eyebrow feature points, numbers 27-35 to the nose feature points, numbers 36-47 to the eye feature points, and numbers 48-67 to the mouth feature points. Further, facing the first image, a rectangular coordinate system is established with the leftmost vertex of the first image as the origin, the upper side of the first image as the positive x direction, and the left side of the first image as the negative y direction; each face key point of the first image can then be represented by coordinates, and the coordinate position of the face key point corresponding to each number can be obtained.
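As a concrete illustration of this detection step, the following is a minimal Python sketch, assuming the standard Dlib frontal face detector and the publicly distributed 68-landmark model file shape_predictor_68_face_landmarks.dat; the model path and the helper name detect_landmarks are illustrative assumptions, not part of the embodiment:

```python
import dlib

# Standard Dlib frontal face detector and the public 68-landmark predictor.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_landmarks(image):
    """Return the 68 face key points of the first detected face as (x, y)
    tuples, indexed 0-67 in the order described above (0-16 contour,
    17-26 eyebrows, 27-35 nose, 36-47 eyes, 48-67 mouth)."""
    faces = detector(image, 1)          # upsample once to catch small faces
    if not faces:
        return None
    shape = predictor(image, faces[0])  # landmarks of the first face found
    return [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```

Under these assumptions, calling detect_landmarks(dlib.load_rgb_image("face.jpg")) would return the 68 coordinate pairs used throughout the following steps.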
In an exemplary embodiment, in order to prevent the regions where the other, non-contour points are located from being deformed and distorted after the adjustment, and at the same time to prevent the background and the face edge region from being distorted, so as to isolate and protect the background and the face edge region, the face feature point set in step S201 further includes four outer feature points. The four outer feature points are obtained through the following step: the minimum abscissa among the face key points, the maximum ordinate among the face key points, the maximum abscissa among the face key points and the minimum ordinate among the face key points are combined pairwise to obtain the four outer feature points outside the face. Specifically, since a coordinate consists of an abscissa and an ordinate, among the selected coordinates the first outer feature point is determined by the minimum abscissa of the face key points and the maximum ordinate of the face key points; the second outer feature point is determined by the maximum abscissa of the face key points and the maximum ordinate of the face key points; the third outer feature point is determined by the minimum abscissa of the face key points and the minimum ordinate of the face key points; and the fourth outer feature point is determined by the maximum abscissa of the face key points and the minimum ordinate of the face key points. Further, the four obtained outer feature points may also be numbered according to a preset rule for distinction. Specifically, the minimum abscissa of the face key points, i.e., the abscissa of face key point 0, and the maximum ordinate of the face key points, i.e., the ordinate of face key point 18, constitute the first outer feature point outside the face, numbered 68; the maximum abscissa of the face key points, i.e., the abscissa of face key point 16, and the maximum ordinate, i.e., the ordinate of face key point 18, constitute the second outer feature point outside the face, numbered 69; the minimum abscissa, i.e., the abscissa of face key point 0, and the minimum ordinate, i.e., the ordinate of face key point 8, constitute the third outer feature point outside the face, numbered 70; and the maximum abscissa, i.e., the abscissa of face key point 16, and the minimum ordinate, i.e., the ordinate of face key point 8, constitute the fourth outer feature point outside the face, numbered 71.
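The pairwise combination of extreme coordinates described above reduces to a few lines of Python; the helper name outer_feature_points is an illustrative assumption:

```python
def outer_feature_points(landmarks):
    """A sketch of forming the four outer feature points (numbered 68-71
    in the text) by pairwise combining the extreme coordinates of the 68
    key points. In the text's coordinate convention, the maximum ordinate
    comes from key point 18 (eyebrow) and the minimum from key point 8 (chin)."""
    xs = [x for x, _ in landmarks]
    ys = [y for _, y in landmarks]
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)
    return [
        (min_x, max_y),  # point 68: min abscissa, max ordinate
        (max_x, max_y),  # point 69: max abscissa, max ordinate
        (min_x, min_y),  # point 70: min abscissa, min ordinate
        (max_x, min_y),  # point 71: max abscissa, min ordinate
    ]
```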
In an exemplary embodiment, considering the requirements of image face thinning, the key points at the chin and near the eyes should be adjusted by a small amount while the cheek region should be adjusted by a larger amount; moreover, if the cheek region is adjusted inappropriately, distortion and jagged artifacts will appear in sensitive regions such as the face edge. Therefore, to meet the face-thinning requirements and prevent distortion and jagged artifacts in sensitive regions such as the face edge, the face deformation constraint points selected in step S202 are the two outer eye corner feature points, or the two inner eye corner feature points, or the chin feature point and the between-the-eyebrows feature point. Since using the two outer eye corner feature points as the face deformation constraint points can effectively constrain the deformation of the face contour feature points, meet the needs of most users, and be applied to the face-thinning adjustment of images of different sizes and different tilt angles with a better constraint effect, the two outer eye corner feature points are preferably used as the face deformation constraint points in the embodiments of the present application; specifically, the two outer eye corner feature points are the face feature point numbered 39 and the face feature point numbered 42.
Further, the step in step S203 of calculating the face deformation constraint amount according to the face deformation constraint points may include: obtaining the difference of the abscissas of the two face deformation constraint points, and taking the arithmetic square root of the difference as the face deformation constraint amount. Specifically, the face deformation constraint amount is calculated as:

degree = √(point1.x − point2.x)

where degree denotes the face deformation constraint amount, point1.x denotes the abscissa of one of the face deformation constraint points, point2.x denotes the abscissa of the other face deformation constraint point, and the value of point1.x − point2.x is greater than 0.
Since the face-thinning adjustment is mainly an adjustment of the face in the horizontal (left-right) direction, taking the arithmetic square root of the difference of the abscissas of the two face deformation constraint points as the face deformation constraint amount can effectively constrain the face-thinning adjustment and effectively prevent the face from deforming.
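The constraint amount above reduces to a one-line computation; a hedged Python sketch, where the function name and the (x, y) tuple representation are assumptions:

```python
import math

def deformation_constraint(point1, point2):
    """The arithmetic square root of the difference of the abscissas of the
    two constraint points; the points are ordered so that the difference is
    positive (e.g. landmarks 42 and 39 in the numbering above)."""
    assert point1[0] > point2[0], "point1.x - point2.x must be greater than 0"
    return math.sqrt(point1[0] - point2[0])
```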
In an exemplary embodiment, the step in step S204 of adjusting the coordinates of the face contour feature points in the face feature point set according to the face deformation constraint amount may include: keeping the ordinates of the face contour feature points unchanged, and adjusting the abscissas of the face contour feature points in the following way: performing an operation on the number of the face contour feature point and a preset numeric parameter to obtain an operation result; calculating the arithmetic square root of the product of the operation result and the face deformation constraint amount, and taking the arithmetic square root of the product as the offset of the abscissa of the face contour feature point; and adjusting the abscissa of the face contour feature point according to the offset.
Specifically, the offset of the abscissa of a face contour feature point is calculated as:

y = √(f(x) · degree)

where y denotes the offset of the abscissa of the face contour feature point, degree denotes the face deformation constraint amount, x denotes the number of the face contour feature point, and f(x) denotes the result of the operation on the number x and the preset numeric parameter.
When the abscissa of a face contour feature point is adjusted according to the offset, the sum of the abscissa of the face contour point and the offset is taken as the adjusted abscissa of the face contour point.
By adjusting the coordinates of the face contour feature points in the face feature point set according to the face deformation constraint amount, the face adjustment is effectively constrained, which can prevent the chin from becoming sharp or the cheeks from losing their curvature during face image processing and prevents the face from being distorted.
Please refer to Fig. 4. In an exemplary embodiment, the step in step S205 of performing an affine transformation on the pixels in the first image to obtain the transformed coordinates of each pixel in the second image may include:
Step S2051: calculating the affine transformation matrix between the pixels of the first image and the pixels of the second image according to the face feature points of the first image and the face feature points of the second image.
Step S2052: performing the affine transformation on the pixels in the first image according to the affine transformation matrix to obtain the transformed coordinates of each pixel in the second image.
Through the affine transformation, the relative positions of the points in the image can be kept unchanged, preventing the image from being severely deformed and distorted during the transformation.
Further, please refer to Fig. 5. The step in step S2051 of calculating the affine transformation matrix between the pixels of the first image and the pixels of the second image according to the face feature points of the first image and the face feature points of the second image may include:
Step S20511: calculating the influence weight of each face feature point in the first image on a pixel of the first image;
Step S20512: obtaining the weighted average of the face feature points of the first image according to the face feature points of the first image and the corresponding influence weights;
Step S20513: obtaining the weighted average of the face feature points of the second image according to the face feature points of the second image and the corresponding influence weights;
Step S20514: calculating the affine transformation matrix according to the differences between the face feature points of the first image and the weighted average of the face feature points of the first image, the differences between the face feature points of the second image and the weighted average of the face feature points of the second image, and the influence weights.
Specifically, the affine transformation matrix is calculated as:

M = (Σ_i w_i · control′_i^T · control′_i)^(-1) · (Σ_j w_j · control′_j^T · current′_j)

where

control* = (Σ_i w_i · control_i) / (Σ_i w_i);

current* = (Σ_i w_i · current_i) / (Σ_i w_i);

w_i = 1 / ((control[i].x − P.x)² + (control[i].y − P.y)²);

control′_i = control_i − control*;

current′_i = current_i − current*.

In the above formulas, M denotes the affine transformation matrix of the pixel; control_i denotes the coordinates (as a row vector) of the face feature point numbered i in the first image; current_i denotes the coordinates of the face feature point numbered i in the second image; control_j and current_j denote the coordinates of the face feature points numbered j in the first and second images, respectively; w_i denotes the influence weight of the face feature point numbered i in the first image on the pixel; P denotes a pixel of the first image, P.x denotes the abscissa of the pixel and P.y denotes its ordinate; control[i].x and control[i].y denote the abscissa and the ordinate of the face feature point numbered i in the first image; and control′_i and current′_i denote the face feature points re-centered on the weighted averages control* and current*.
The influence weights corresponding to the face feature points of the second image are the same as the influence weights corresponding to the face feature points before adjustment, i.e., the same as the influence weights corresponding to the face feature points of the first image.
By calculating the influence weight of each face feature point in the first image on a pixel of the first image, and then calculating, from the influence weights, the face feature points of the first image and the face feature points of the second image, the affine transformation matrix between the pixels of the first image and the pixels of the second image, the affine transformation of the pixels in the first image is realized.
In an exemplary embodiment, please refer to Fig. 6. The step in step S2052 of performing the affine transformation on the pixels in the first image according to the affine transformation matrix to obtain the transformed coordinates of each pixel in the second image may include:
Step S20521: calculating the difference between the pixel coordinates of the first image and the weighted average of the face feature points of the first image;
Step S20522: multiplying the difference by the affine transformation matrix, and adding the product to the weighted average of the face feature points of the second image to obtain the transformed coordinates of the pixel in the second image.
Specifically, the transformed coordinates of a pixel in the second image are calculated as:

L(P) = (P − control*) · M + current*

where L(P) denotes the transformed coordinates of the pixel in the second image, P denotes the pixel coordinates in the first image, and M denotes the affine transformation matrix of the pixel.
Through the affine transformation matrix, the transformed coordinates of each pixel in the second image can be determined, and thus the adjusted image can be determined.
For each pixel of the first image, the coordinates of a corresponding pixel of the second image are obtained through the affine transformation, and the pixel value of the first-image pixel can be filled in at that pixel coordinate of the second image. By the same reasoning, for the coordinates of the pixels in the second image that have not been filled with pixel values, the coordinates of the corresponding nearest-neighbor pixel of the input image can be obtained in reverse through the inverse affine transformation, and the pixel value corresponding to the coordinates of the nearest-neighbor pixel of the first image is filled into the pixel of the second image, thereby realizing the fusion of the images. The specific affine transformation may refer to the affine transformation between the face feature point coordinates of the first image and those of the second image described above. The step in step S206 of filling the pixel values in the second image according to the transformed coordinates in the first image to obtain the third image includes performing a nearest-neighbor interpolation operation on the second image. Through the nearest-neighbor interpolation operation, the pixel values in the second image can be filled simply and quickly.
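The forward fill plus inverse nearest-neighbor fill described here can be sketched as below; transform and inverse_transform stand for the affine mapping above and its inverse (both assumptions about how the formulas would be wrapped), and the double loop is written for clarity rather than speed:

```python
import numpy as np

def fill_third_image(first, transform, inverse_transform):
    """Forward pass: scatter first-image pixel values to their transformed
    coordinates. Backward pass: fill any remaining hole by inverse-mapping
    its coordinate back to the first image and taking the nearest-neighbor
    pixel, which realizes the fusion of the images."""
    h, w = first.shape[:2]
    third = np.zeros_like(first)
    filled = np.zeros((h, w), dtype=bool)

    for y in range(h):
        for x in range(w):
            tx, ty = transform((x, y))
            ix, iy = int(round(tx)), int(round(ty))
            if 0 <= ix < w and 0 <= iy < h:
                third[iy, ix] = first[y, x]
                filled[iy, ix] = True

    for y in range(h):
        for x in range(w):
            if not filled[y, x]:
                sx, sy = inverse_transform((x, y))
                ix = min(max(int(round(sx)), 0), w - 1)   # clamp to the image
                iy = min(max(int(round(sy)), 0), h - 1)
                third[y, x] = first[iy, ix]               # nearest neighbor
    return third
```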
The beneficial effects of the embodiments of the present application include:
1. Some of the face feature points are selected as face deformation constraint points, the face deformation constraint amount is calculated according to the coordinates of the face deformation constraint points, and the coordinates of the face contour feature points in the face feature point set are adjusted according to the face deformation constraint amount, effectively constraining the face adjustment; this can prevent the chin from becoming sharp or the cheeks from losing their curvature, which would distort the face, during face image processing, and at the same time prevents the background from being distorted during face image processing, better matching practical application scenarios.
2. Four outer feature points are added outside the face, which isolate and protect the background and the face edge, further preventing the background and the face edge from being distorted.
Please refer to Fig. 7. An embodiment of the present application further provides an image adjustment apparatus 400, including:
a face detection module 401, configured to acquire a first image and perform face detection on the first image to obtain a face feature point set;
a face deformation constraint point determination module 402, configured to select some of the face feature points from the face feature point set as face deformation constraint points;
a face deformation constraint amount determination module 403, configured to calculate a face deformation constraint amount according to the coordinates of the face deformation constraint points;
a coordinate adjustment module 404, configured to adjust the coordinates of the face contour feature points in the face feature point set according to the face deformation constraint amount to obtain a second image;
a coordinate transformation module 405, configured to perform an affine transformation on the pixels in the first image to obtain the transformed coordinates of each pixel in the second image;
a third image acquisition module 406, configured to fill the pixel values in the second image according to the transformed coordinates in the first image to obtain a third image.
In the embodiments of the present application, some of the face feature points are selected as face deformation constraint points, a face deformation constraint amount is calculated according to the coordinates of the face deformation constraint points, and the coordinates of the face contour feature points in the face feature point set are adjusted according to the face deformation constraint amount, effectively constraining the face adjustment; this can prevent the chin from becoming sharp or the cheeks from losing their curvature, which would distort the face, during face image processing, and at the same time prevents the background from being distorted during face image processing, better matching practical application scenarios.
The face feature point set includes multiple face feature points located at different parts of the face. Optionally, the face feature point set includes face key points, and the face key points include: face contour feature points, eyebrow feature points, eye feature points, nose feature points and mouth feature points. Selecting these face feature points can characterize most of the features of a face and effectively distinguish different faces. The face key points can be detected and identified from the first image by various existing face detection algorithms. In the embodiments of the present application, face detection is performed on the first image with the Dlib face detection algorithm tool to obtain 68 face key points, and these 68 face key points are numbered 0-67 according to a preset rule for distinction. Specifically, numbers 0-16 correspond to the face contour feature points, numbers 17-26 to the eyebrow feature points, numbers 27-35 to the nose feature points, numbers 36-47 to the eye feature points, and numbers 48-67 to the mouth feature points. Further, facing the first image, a rectangular coordinate system is established with the leftmost vertex of the first image as the origin, the upper side of the first image as the positive x direction, and the left side of the first image as the negative y direction; each face key point of the first image can then be represented by coordinates, and the coordinate position of the face key point corresponding to each number can be obtained.
In an exemplary embodiment, in order to prevent the regions where the other, non-contour points are located from being deformed and distorted after the adjustment, and at the same time to prevent the background and the face edge region from being distorted, so as to isolate and protect the background and the face edge region, the face feature point set further includes four outer feature points, and the face detection module 401 is further configured to determine the four outer feature points, being specifically configured to: combine the minimum abscissa among the face key points, the maximum ordinate among the face key points, the maximum abscissa among the face key points and the minimum ordinate among the face key points pairwise to obtain the four outer feature points outside the face. Specifically, since a coordinate consists of an abscissa and an ordinate, among the selected coordinates the first outer feature point is determined by the minimum abscissa of the face key points and the maximum ordinate of the face key points; the second outer feature point is determined by the maximum abscissa of the face key points and the maximum ordinate of the face key points; the third outer feature point is determined by the minimum abscissa of the face key points and the minimum ordinate of the face key points; and the fourth outer feature point is determined by the maximum abscissa of the face key points and the minimum ordinate of the face key points. Further, the four obtained outer feature points may also be numbered according to a preset rule for distinction. Specifically, the minimum abscissa of the face key points, i.e., the abscissa of face key point 0, and the maximum ordinate of the face key points, i.e., the ordinate of face key point 18, constitute the first outer feature point outside the face, numbered 68; the maximum abscissa of the face key points, i.e., the abscissa of face key point 16, and the maximum ordinate, i.e., the ordinate of face key point 18, constitute the second outer feature point outside the face, numbered 69; the minimum abscissa, i.e., the abscissa of face key point 0, and the minimum ordinate, i.e., the ordinate of face key point 8, constitute the third outer feature point outside the face, numbered 70; and the maximum abscissa, i.e., the abscissa of face key point 16, and the minimum ordinate, i.e., the ordinate of face key point 8, constitute the fourth outer feature point outside the face, numbered 71.
In an exemplary embodiment, considering the requirements of image face thinning, the key points at the chin and near the eyes should be adjusted by a small amount while the cheek region should be adjusted by a larger amount; moreover, if the cheek region is adjusted inappropriately, distortion and jagged artifacts will appear in sensitive regions such as the face edge. Therefore, to meet the face-thinning requirements and prevent distortion and jagged artifacts in sensitive regions such as the face edge, the selected face deformation constraint points are the two outer eye corner feature points, or the two inner eye corner feature points, or the chin feature point and the between-the-eyebrows feature point. Since using the two outer eye corner feature points as the face deformation constraint points can effectively constrain the deformation of the face contour feature points, meet the needs of most users, and be applied to the face-thinning adjustment of images of different sizes and different tilt angles, the two outer eye corner feature points are preferably used as the face deformation constraint points in this embodiment; specifically, the two outer eye corner feature points are the face feature point numbered 39 and the face feature point numbered 42.
Further, when the face deformation constraint amount determination module 403 is configured to calculate the face deformation constraint amount according to the face deformation constraint points, it is configured to obtain the difference of the abscissas of the two face deformation constraint points and take the arithmetic square root of the difference as the face deformation constraint amount. Specifically, the face deformation constraint amount is calculated as:

degree = √(point1.x − point2.x)

where degree denotes the face deformation constraint amount, point1.x denotes the abscissa of one of the face deformation constraint points, point2.x denotes the abscissa of the other face deformation constraint point, and the value of point1.x − point2.x is greater than 0.
Since the face-thinning adjustment is mainly an adjustment of the face in the horizontal (left-right) direction, taking the arithmetic square root of the difference of the abscissas of the two face deformation constraint points as the face deformation constraint amount can effectively constrain the face-thinning adjustment and effectively prevent the face from deforming.
In an exemplary embodiment, when the coordinate adjustment module 404 is configured to adjust the coordinates of the face contour feature points in the face feature point set according to the face deformation constraint amount, it is configured to: keep the ordinates of the face contour feature points unchanged, and adjust the abscissas of the face contour feature points in the following way: performing an operation on the number of the face contour feature point and a preset numeric parameter to obtain an operation result; calculating the arithmetic square root of the product of the operation result and the face deformation constraint amount, and taking the arithmetic square root of the product as the offset of the abscissa of the face contour feature point; and adjusting the abscissa of the face contour feature point according to the offset.
Specifically, the offset of the abscissa of a face contour feature point is calculated as:

y = √(f(x) · degree)

where y denotes the offset of the abscissa of the face contour feature point, degree denotes the face deformation constraint amount, x denotes the number of the face contour feature point, and f(x) denotes the result of the operation on the number x and the preset numeric parameter.
When the abscissa of a face contour feature point is adjusted according to the offset, the sum of the abscissa of the face contour point and the offset is taken as the adjusted abscissa of the face contour point.
By adjusting the coordinates of the face contour feature points in the face feature point set according to the face deformation constraint amount, the face adjustment is effectively constrained, which can prevent the chin from becoming sharp or the cheeks from losing their curvature and prevents the face from being distorted.
Please refer to Fig. 8. In an exemplary embodiment, the coordinate transformation module 405 includes:
an affine transformation matrix determination module 4051, configured to calculate the affine transformation matrix between the pixels of the first image and the pixels of the second image according to the face feature points of the first image and the face feature points of the second image;
a transformed coordinate determination module 4052, configured to perform the affine transformation on the pixels in the first image according to the affine transformation matrix to obtain the transformed coordinates of each pixel in the second image.
Through the affine transformation, the relative positions of the points in the image can be kept unchanged, preventing the image from being severely deformed and distorted during the transformation.
Specifically, please refer to Fig. 9. The affine transformation matrix determination module 4051 includes:
an influence weight determination module 40511, configured to calculate the influence weight of each face feature point in the first image on a pixel of the first image;
a first weighted average calculation module 40512, configured to obtain the weighted average of the face feature points of the first image according to the face feature points of the first image and the corresponding influence weights;
a second weighted average calculation module 40513, configured to obtain the weighted average of the face feature points of the second image according to the face feature points of the second image and the corresponding influence weights;
an affine transformation matrix calculation module 40514, configured to calculate the affine transformation matrix according to the differences between the face feature points of the first image and the weighted average of the face feature points of the first image, the differences between the face feature points of the second image and the weighted average of the face feature points of the second image, and the influence weights.
Specifically, the affine transformation matrix is calculated as:

M = (Σ_i w_i · control′_i^T · control′_i)^(-1) · (Σ_j w_j · control′_j^T · current′_j)

where

control* = (Σ_i w_i · control_i) / (Σ_i w_i);

current* = (Σ_i w_i · current_i) / (Σ_i w_i);

w_i = 1 / ((control[i].x − P.x)² + (control[i].y − P.y)²);

control′_i = control_i − control*;

current′_i = current_i − current*.

In the above formulas, M denotes the affine transformation matrix of the pixel; control_i denotes the coordinates (as a row vector) of the face feature point numbered i in the first image; current_i denotes the coordinates of the face feature point numbered i in the second image; control_j and current_j denote the coordinates of the face feature points numbered j in the first and second images, respectively; w_i denotes the influence weight of the face feature point numbered i in the first image on the pixel; P denotes a pixel of the first image, P.x denotes the abscissa of the pixel and P.y denotes its ordinate; control[i].x and control[i].y denote the abscissa and the ordinate of the face feature point numbered i in the first image; and control′_i and current′_i denote the face feature points re-centered on the weighted averages control* and current*.
By calculating the influence weight of each face feature point in the first image on a pixel of the first image, and then calculating, from the influence weights, the face feature points of the first image and the face feature points of the second image, the affine transformation matrix between the pixels of the first image and the pixels of the second image, the affine transformation of the pixels in the first image is realized.
In an exemplary embodiment, please refer to Fig. 10. The transformed coordinate determination module 4052 may include:
a difference calculation module 40521, configured to calculate the difference between the pixel coordinates of the first image and the weighted average of the face feature points of the first image;
a coordinate calculation module 40522, configured to multiply the difference by the affine transformation matrix and add the product to the weighted average of the face feature points of the second image to obtain the transformed coordinates of the pixel in the second image.
Specifically, the transformed coordinates of a pixel in the second image are calculated as:

L(P) = (P − control*) · M + current*

where L(P) denotes the transformed coordinates of the pixel in the second image, P denotes the pixel coordinates in the first image, and M denotes the affine transformation matrix of the pixel.
Through the affine transformation matrix, the transformed coordinates of each pixel in the second image can be determined, and thus the adjusted image can be determined.
For each pixel of the first image, the coordinates of a corresponding pixel of the second image are obtained through the affine transformation, and the pixel value of the first-image pixel can be filled in at that pixel coordinate of the second image. By the same reasoning, for the coordinates of the pixels in the second image that have not been filled with pixel values, the coordinates of the corresponding nearest-neighbor pixel of the input image can be obtained in reverse through the inverse affine transformation, and the pixel value corresponding to the coordinates of the nearest-neighbor pixel of the first image is filled into the pixel of the second image, thereby realizing the fusion of the images. The specific affine transformation may refer to the affine transformation between the face feature point coordinates of the first image and those of the second image described above. The third image acquisition module 406, when configured to fill the pixel values in the second image according to the transformed coordinates in the first image to obtain the third image, is configured to perform a nearest-neighbor interpolation operation on the second image.
An embodiment of the present application further provides a computer device, including:
a processor;
a memory, configured to store a computer program executable by the processor;
wherein the processor implements the image adjustment method described in any of the above embodiments when executing the program.
As shown in Fig. 11, Fig. 11 is a structural block diagram of the computer device according to an embodiment of the present application.
The computer device includes: a processor 501, a memory 502, a display screen 503 with a touch function, an input device 504, an output device 505 and a communication device 506. The number of processors 501 in the computer device may be one or more, and one processor 501 is taken as an example in Fig. 11. The number of memories 502 in the electronic device may be one or more, and one memory 502 is taken as an example in Fig. 11. The processor 501, memory 502, display screen 503 with a touch function, input device 504, output device 505 and communication device 506 of the electronic device may be connected by a bus or in other ways; connection by a bus is taken as an example in Fig. 11. In this embodiment, the electronic device may be a computer, a mobile phone, a tablet computer, an interactive smart panel, a PDA (Personal Digital Assistant), an e-book reader, a multimedia player, or the like. In this embodiment, the description takes an interactive smart panel as the electronic device as an example.
As a computer-readable storage medium, the memory 502 may be used to store software programs, computer-executable programs and modules, such as the program of the image adjustment method described in any embodiment of the present application and the program instructions/modules corresponding to the image adjustment method described in any embodiment of the present application (for example, the face detection module 401, the face deformation constraint point determination module 402, the face deformation constraint amount determination module 403, the coordinate adjustment module 404, the coordinate transformation module 405 and the third image acquisition module 406 in the image adjustment apparatus). The memory 502 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required by at least one function, and the data storage area may store data created according to the use of the device, etc. In addition, the memory 502 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 502 may further include memories remotely located with respect to the processor 501, and these remote memories may be connected to the device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
The display screen 503 may be a display screen with a touch function, which may be a capacitive screen, an electromagnetic screen or an infrared screen. Generally, the display screen 503 is used to display data according to the instructions of the processor 501, such as displaying the first image and the third image, and is also used to receive touch operations acting on the display screen 503 and send the corresponding signals to the processor 501 or other devices. Optionally, when the display screen 503 is an infrared screen, it further includes an infrared touch frame arranged around the display screen 503, which may also be used to receive infrared signals and send them to the processor 501 or other devices. In other examples, the display screen 503 may also be a display screen without a touch function.
The input device 504 may be used to receive the input image and to generate key signal inputs related to the user settings and function control of the electronic device; it may also be a camera for acquiring images and a sound pickup device for acquiring audio data. The output device 505 may include audio equipment such as a speaker. It should be noted that the specific composition of the input device 504 and the output device 505 may be set according to the actual situation.
The communication device 506 is used to establish a communication connection with other devices, and may be a wired communication device and/or a wireless communication device.
The processor 501 executes various functional applications and data processing of the device by running the software programs, instructions and modules stored in the memory 502, i.e., implements the image adjustment method described above.
Specifically, in an exemplary embodiment, when the processor 501 executes one or more programs stored in the memory 502, the following operations are implemented: acquiring a first image, and performing face detection on the first image to obtain a face feature point set; selecting some of the face feature points from the face feature point set as face deformation constraint points; calculating a face deformation constraint amount according to the coordinates of the face deformation constraint points; adjusting the coordinates of the face contour feature points in the face feature point set according to the face deformation constraint amount to obtain a second image; performing an affine transformation on the pixels in the first image to obtain the transformed coordinates of each pixel in the second image; and filling the pixel values in the second image according to the transformed coordinates to obtain a third image.
In the embodiments of the present application, some of the face feature points are selected as face deformation constraint points, a face deformation constraint amount is calculated according to the coordinates of the face deformation constraint points, and the coordinates of the face contour feature points in the face feature point set are adjusted according to the face deformation constraint amount, effectively constraining the face adjustment; this can prevent the chin from becoming sharp or the cheeks from losing their curvature, which would distort the face, during face image processing, and at the same time prevents the background from being distorted during face image processing, better matching practical application scenarios.
On the basis of the above embodiments, the face feature point set includes multiple face feature points located at different parts of the face. Optionally, the face feature point set includes face key points, and the face key points include: face contour feature points, eyebrow feature points, eye feature points, nose feature points and mouth feature points. Selecting these face feature points can characterize most of the features of a face and effectively distinguish different faces. The face key points can be detected and identified from the first image by various existing face detection algorithms. In the embodiments of the present application, face detection is performed on the first image with the Dlib face detection algorithm tool to obtain 68 face key points, and these 68 face key points are numbered 0-67 according to a preset rule for distinction. Specifically, numbers 0-16 correspond to the face contour feature points, numbers 17-26 to the eyebrow feature points, numbers 27-35 to the nose feature points, numbers 36-47 to the eye feature points, and numbers 48-67 to the mouth feature points. Further, facing the first image, a rectangular coordinate system is established with the leftmost vertex of the first image as the origin, the upper side of the first image as the positive x direction, and the left side of the first image as the negative y direction; each face key point of the first image can then be represented by coordinates, and the coordinate position of the face key point corresponding to each number can be obtained.
On the basis of the above embodiments, in order to prevent the regions where the other, non-contour points are located from being deformed and distorted after the adjustment, and at the same time to prevent the background and the face edge region from being distorted, so as to isolate and protect the background and the face edge region, the face feature point set further includes four outer feature points. The four outer feature points are obtained through the following step: combining the minimum abscissa among the face key points, the maximum ordinate among the face key points, the maximum abscissa among the face key points and the minimum ordinate among the face key points pairwise to obtain the four outer feature points outside the face. Specifically, since a coordinate consists of an abscissa and an ordinate, among the selected coordinates the first outer feature point is determined by the minimum abscissa of the face key points and the maximum ordinate of the face key points; the second outer feature point is determined by the maximum abscissa of the face key points and the maximum ordinate of the face key points; the third outer feature point is determined by the minimum abscissa of the face key points and the minimum ordinate of the face key points; and the fourth outer feature point is determined by the maximum abscissa of the face key points and the minimum ordinate of the face key points. Further, the four obtained outer feature points may also be numbered according to a preset rule for distinction. Specifically, the minimum abscissa of the face key points, i.e., the abscissa of face key point 0, and the maximum ordinate of the face key points, i.e., the ordinate of face key point 18, constitute the first outer feature point outside the face, numbered 68; the maximum abscissa of the face key points, i.e., the abscissa of face key point 16, and the maximum ordinate, i.e., the ordinate of face key point 18, constitute the second outer feature point outside the face, numbered 69; the minimum abscissa, i.e., the abscissa of face key point 0, and the minimum ordinate, i.e., the ordinate of face key point 8, constitute the third outer feature point outside the face, numbered 70; and the maximum abscissa, i.e., the abscissa of face key point 16, and the minimum ordinate, i.e., the ordinate of face key point 8, constitute the fourth outer feature point outside the face, numbered 71.
On the basis of the above embodiments, to meet the face-thinning requirements and prevent distortion and jagged artifacts in sensitive regions such as the face edge, the selected face deformation constraint points are the two outer eye corner feature points, or the two inner eye corner feature points, or the chin feature point and the between-the-eyebrows feature point. Since using the two outer eye corner feature points as the face deformation constraint points can effectively constrain the deformation of the face contour feature points, meet the needs of most users, and be applied to the face-thinning adjustment of images of different sizes and different tilt angles, the two outer eye corner feature points are preferably used as the face deformation constraint points in this embodiment; specifically, the two outer eye corner feature points are the face feature point numbered 39 and the face feature point numbered 42.
Further, when the processor executes the calculating of the face deformation constraint amount according to the face deformation constraint points, the execution includes: obtaining the difference of the abscissas of the two face deformation constraint points, and taking the arithmetic square root of the difference as the face deformation constraint amount. Specifically, the face deformation constraint amount is calculated as:

degree = √(point1.x − point2.x)

where degree denotes the face deformation constraint amount, point1.x denotes the abscissa of one of the face deformation constraint points, point2.x denotes the abscissa of the other face deformation constraint point, and the value of point1.x − point2.x is greater than 0.
On the basis of the above embodiments, when the processor adjusts the coordinates of the face contour feature points in the face feature point set according to the face deformation constraint amount, the execution may include: keeping the ordinates of the face contour feature points unchanged, and obtaining the abscissas of the face contour points in the following way: performing an operation on the number of the face contour feature point and a preset numeric parameter to obtain an operation result; calculating the arithmetic square root of the product of the operation result and the face deformation constraint amount, and taking the arithmetic square root of the product as the offset of the abscissa of the face contour feature point; and adjusting the abscissa of the face contour feature point according to the offset.
Specifically, the offset of the abscissa of a face contour feature point is calculated as:

y = √(f(x) · degree)

where y denotes the offset of the abscissa of the face contour feature point, degree denotes the face deformation constraint amount, x denotes the number of the face contour feature point, and f(x) denotes the result of the operation on the number x and the preset numeric parameter.
When the abscissa of a face contour feature point is adjusted according to the offset, the sum of the abscissa of the face contour point and the offset is taken as the adjusted abscissa of the face contour point.
By adjusting the coordinates of the face contour feature points in the face feature point set according to the face deformation constraint amount, the face adjustment is effectively constrained, which can prevent the chin from becoming sharp or the cheeks from losing their curvature and prevents the face from being distorted.
On the basis of the above embodiments, when the processor 501 executes one or more programs stored in the memory 502 and implements the performing of the affine transformation on the pixels in the first image to obtain the transformed coordinates of each pixel in the second image, the following operations are specifically implemented: calculating the affine transformation matrix between the pixels of the first image and the pixels of the second image according to the face feature point coordinates of the first image and the face feature point coordinates of the second image; and performing the affine transformation on the pixels in the first image according to the affine transformation matrix to obtain the transformed coordinates of each pixel in the second image.
Specifically, when the processor 501 executes one or more programs stored in the memory 502 and implements the calculating of the affine transformation matrix between the pixels of the first image and the pixels of the second image according to the face feature points of the first image and the face feature points of the second image, the execution includes: calculating the influence weight of each face feature point in the first image on a pixel of the first image; obtaining the weighted average of the face feature points of the first image according to the face feature points of the first image and the corresponding influence weights; obtaining the weighted average of the face feature points of the second image according to the face feature points of the second image and the corresponding influence weights; and calculating the affine transformation matrix according to the differences between the face feature points of the first image and the weighted average of the face feature points of the first image, the differences between the face feature points of the second image and the weighted average of the face feature points of the second image, and the influence weights.
Specifically, the affine transformation matrix is calculated as:

M = (Σ_i w_i · control′_i^T · control′_i)^(-1) · (Σ_j w_j · control′_j^T · current′_j)

where

control* = (Σ_i w_i · control_i) / (Σ_i w_i);

current* = (Σ_i w_i · current_i) / (Σ_i w_i);

w_i = 1 / ((control[i].x − P.x)² + (control[i].y − P.y)²);

control′_i = control_i − control*;

current′_i = current_i − current*.

In the above formulas, M denotes the affine transformation matrix of the pixel; control_i denotes the coordinates (as a row vector) of the face feature point numbered i in the first image; current_i denotes the coordinates of the face feature point numbered i in the second image; control_j and current_j denote the coordinates of the face feature points numbered j in the first and second images, respectively; w_i denotes the influence weight of the face feature point numbered i in the first image on the pixel; P denotes a pixel of the first image, P.x denotes the abscissa of the pixel and P.y denotes its ordinate; control[i].x and control[i].y denote the abscissa and the ordinate of the face feature point numbered i in the first image; and control′_i and current′_i denote the face feature points re-centered on the weighted averages control* and current*.
By calculating the influence weight of each face feature point in the first image on a pixel of the first image, and then calculating, from the influence weights, the face feature points of the first image and the face feature points of the second image, the affine transformation matrix between the pixels of the first image and the pixels of the second image, the affine transformation of the pixels in the first image is realized.
When the processor 501 executes one or more programs stored in the memory 502 and implements the performing of the affine transformation on the pixels in the first image to obtain the transformed coordinates of each pixel in the second image, the execution includes: calculating the difference between the pixel coordinates of the first image and the weighted average of the face feature points of the first image; and multiplying the difference by the affine transformation matrix and adding the product to the weighted average of the face feature points of the second image to obtain the transformed coordinates of the pixel in the second image.
Specifically, the transformed coordinates of a pixel in the second image are calculated as:

L(P) = (P − control*) · M + current*

where L(P) denotes the transformed coordinates of the pixel in the second image, P denotes the pixel coordinates in the first image, and M denotes the affine transformation matrix of the pixel.
Through the affine transformation matrix, the transformed coordinates of each pixel in the second image can be determined, and thus the adjusted image can be determined.
For each pixel of the first image, the coordinates of a corresponding pixel of the second image are obtained through the affine transformation, and the pixel value of the first-image pixel can be filled in at that pixel coordinate of the second image. By the same reasoning, for the coordinates of the pixels in the second image that have not been filled with pixel values, the coordinates of the corresponding nearest-neighbor pixel of the input image can be obtained in reverse through the inverse affine transformation, and the pixel value corresponding to the coordinates of the nearest-neighbor pixel of the first image is filled into the pixel of the second image, thereby realizing the fusion of the images. The specific affine transformation may refer to the affine transformation between the face feature point coordinates of the first image and those of the second image described above. The step in which the processor 501 fills the pixel values in the second image according to the transformed coordinates in the first image to obtain the third image includes performing a nearest-neighbor interpolation operation on the second image. Through the nearest-neighbor interpolation operation, the pixel values in the second image can be filled simply and quickly.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of any of the image adjustment methods described above are implemented, including: acquiring a first image, and performing face detection on the first image to obtain a face feature point set; selecting some of the face feature points from the face feature point set as face deformation constraint points; calculating a face deformation constraint amount according to the coordinates of the face deformation constraint points; adjusting the coordinates of the face contour feature points in the face feature point set according to the face deformation constraint amount to obtain a second image; performing an affine transformation on the pixels in the first image to obtain the transformed coordinates of each pixel in the second image; and filling the pixel values in the second image according to the transformed coordinates to obtain a third image.
In the embodiments of the present application, some of the face feature points are selected as face deformation constraint points, a face deformation constraint amount is calculated according to the coordinates of the face deformation constraint points, and the coordinates of the face contour feature points in the face feature point set are adjusted according to the face deformation constraint amount, effectively constraining the face adjustment; this can prevent the chin from becoming sharp or the cheeks from losing their curvature, which would distort the face, and at the same time prevents the background from being distorted, better matching practical application scenarios.
The embodiments of the present application may take the form of a computer program product implemented on one or more storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing program code. Computer-readable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be realized by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device.
The computer device provided above can be used to execute the image adjustment method provided by any of the above embodiments, and has the corresponding functions and beneficial effects. For the implementation process of the functions and roles of each component in the above device, refer to the implementation process of the corresponding steps in the above method, which will not be repeated here.
As for the apparatus embodiments, since they basically correspond to the method embodiments, reference may be made to the relevant description of the method embodiments. The apparatus embodiments described above are merely illustrative; the components described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present application. Those of ordinary skill in the art can understand and implement them without creative work.
Other implementations of the embodiments of the present application will readily occur to those skilled in the art upon consideration of the specification and practice of the present disclosure. The embodiments of the present application are intended to cover any variations, uses or adaptations that follow the general principles of the embodiments of the present application and include common knowledge or customary technical means in the technical field not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the embodiments of the present application being indicated by the following claims.
It should be understood that the embodiments of the present application are not limited to the precise structures described above and shown in the accompanying drawings, and various modifications and changes may be made without departing from their scope. The scope of the embodiments of the present application is limited only by the appended claims.
The above-described embodiments express only several implementations of the embodiments of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the disclosure. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the embodiments of the present application, all of which fall within the protection scope of the embodiments of the present application.

Claims (13)

  1. An image adjustment method, comprising the following steps:
    acquiring a first image, and performing face detection on the first image to obtain a face feature point set;
    selecting some of the face feature points from the face feature point set as face deformation constraint points;
    calculating a face deformation constraint amount according to the coordinates of the face deformation constraint points;
    adjusting the coordinates of the face contour feature points in the face feature point set according to the face deformation constraint amount to obtain a second image;
    performing an affine transformation on the pixels in the first image to obtain the transformed coordinates of each of the pixels in the second image;
    filling the pixel values in the second image according to the transformed coordinates to obtain a third image.
  2. The image adjustment method according to claim 1, wherein the face feature point set comprises face key points, and the face key points comprise: face contour feature points, eyebrow feature points, eye feature points, nose feature points and mouth feature points.
  3. The image adjustment method according to claim 2, wherein the face feature point set further comprises four outer feature points, the four outer feature points being obtained through the following step: combining the minimum abscissa among the face key points, the maximum ordinate among the face key points, the maximum abscissa among the face key points and the minimum ordinate among the face key points pairwise to obtain the four outer feature points outside the face.
  4. The image adjustment method according to claim 3, wherein the face deformation constraint points are two outer eye corner feature points, or two inner eye corner feature points, or a chin feature point and a between-the-eyebrows feature point.
  5. The image adjustment method according to claim 4, wherein the step of calculating the face deformation constraint amount according to the face deformation constraint points comprises:
    obtaining the difference of the abscissas of the two face deformation constraint points, and taking the arithmetic square root of the difference as the face deformation constraint amount.
  6. The image adjustment method according to any one of claims 1 to 5, wherein the step of adjusting the coordinates of the face contour feature points in the face feature point set according to the face deformation constraint amount comprises: keeping the ordinates of the face contour feature points unchanged, and adjusting the abscissas of the face contour feature points in the following way:
    performing an operation on the number of the face contour feature point and a preset numeric parameter to obtain an operation result;
    calculating the arithmetic square root of the product of the operation result and the face deformation constraint amount, and taking the arithmetic square root of the product as the offset of the abscissa of the face contour feature point;
    adjusting the abscissa of the face contour feature point according to the offset.
  7. The image adjustment method according to any one of claims 1 to 5, wherein the step of performing an affine transformation on the pixels in the first image to obtain the transformed coordinates of each of the pixels in the second image comprises:
    calculating the affine transformation matrix between the pixels of the first image and the pixels of the second image according to the face feature points of the first image and the face feature points of the second image;
    performing the affine transformation on the pixels in the first image according to the affine transformation matrix to obtain the transformed coordinates of each of the pixels in the second image.
  8. The image adjustment method according to claim 7, wherein the step of calculating the affine transformation matrix between the pixels of the first image and the pixels of the second image according to the face feature points of the first image and the face feature points of the second image comprises:
    calculating the influence weight of each face feature point in the first image on a pixel of the first image;
    obtaining the weighted average of the face feature points of the first image according to the face feature points of the first image and the corresponding influence weights;
    obtaining the weighted average of the face feature points of the second image according to the face feature points of the second image and the corresponding influence weights;
    calculating the affine transformation matrix according to the differences between the face feature points of the first image and the weighted average of the face feature points of the first image, the differences between the face feature points of the second image and the weighted average of the face feature points of the second image, and the influence weights.
  9. The image adjustment method according to claim 8, wherein the step of performing the affine transformation on the pixels in the first image according to the affine transformation matrix to obtain the transformed coordinates of each of the pixels in the second image comprises:
    calculating the difference between the pixel coordinates of the first image and the weighted average of the face feature points of the first image;
    multiplying the difference by the affine transformation matrix, and adding the product to the weighted average of the face feature points of the second image to obtain the transformed coordinates of the pixel in the second image.
  10. The image adjustment method according to claim 1, wherein the step of filling the pixel values in the second image according to the transformed coordinates in the first image to obtain the third image comprises performing a nearest-neighbor interpolation operation on the second image.
  11. An image adjustment apparatus, comprising:
    a face detection module, configured to acquire a first image and perform face detection on the first image to obtain a face feature point set;
    a face deformation constraint point determination module, configured to select some of the face feature points from the face feature point set as face deformation constraint points;
    a face deformation constraint amount determination module, configured to calculate a face deformation constraint amount according to the coordinates of the face deformation constraint points;
    a coordinate adjustment module, configured to adjust the coordinates of the face contour feature points in the face feature point set according to the face deformation constraint amount to obtain a second image;
    a coordinate transformation module, configured to perform an affine transformation on the pixels in the first image to obtain the transformed coordinates of each of the pixels in the second image;
    a third image acquisition module, configured to fill the pixel values in the second image according to the transformed coordinates in the first image to obtain a third image.
  12. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the image adjustment method according to any one of claims 1 to 10.
  13. A computer device, comprising a memory, a processor and a computer program stored in the memory and executable by the processor, wherein the processor implements the image adjustment method according to any one of claims 1 to 10 when executing the computer program.
PCT/CN2019/126539 2019-07-24 2019-12-19 Image adjustment method and apparatus, storage medium, and device WO2021012596A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910670893.3 2019-07-24
CN201910670893.3A CN110555796B (zh) 2019-07-24 Image adjustment method and apparatus, storage medium, and device

Publications (1)

Publication Number Publication Date
WO2021012596A1 true WO2021012596A1 (zh) 2021-01-28

Family

ID=68735952

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/126539 WO2021012596A1 (zh) 2019-07-24 2019-12-19 Image adjustment method and apparatus, storage medium, and device

Country Status (2)

Country Link
CN (1) CN110555796B (zh)
WO (1) WO2021012596A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112766215A (zh) * 2021-01-29 2021-05-07 北京字跳网络技术有限公司 Face fusion method and apparatus, electronic device, and storage medium
CN113591562A (zh) * 2021-06-23 2021-11-02 北京旷视科技有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN113674139A (zh) * 2021-08-17 2021-11-19 北京京东尚科信息技术有限公司 Face image processing method and apparatus, electronic device, and storage medium
CN113808249A (zh) * 2021-08-04 2021-12-17 北京百度网讯科技有限公司 Image processing method, apparatus, device, and computer storage medium

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555796B (zh) 2019-07-24 2021-07-06 广州视源电子科技股份有限公司 Image adjustment method and apparatus, storage medium, and device
CN111507890B (zh) * 2020-04-13 2022-04-19 北京字节跳动网络技术有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN112750071B (zh) * 2020-11-04 2023-11-24 上海序言泽网络科技有限公司 User-defined expression creation method and system
CN112634165B (zh) * 2020-12-29 2024-03-26 广州光锥元信息科技有限公司 Method and apparatus for adapting images to a VI environment
CN113689325A (zh) * 2021-07-12 2021-11-23 深圳数联天下智能科技有限公司 Method for digital eyebrow beautification, electronic device, and storage medium
CN116310146B (zh) * 2023-05-16 2023-10-27 北京邃芒科技有限公司 Face image reenactment method and system, electronic device, and storage medium
CN116616817B (zh) * 2023-07-21 2023-10-03 深圳华声医疗技术股份有限公司 Ultrasonic heart rate detection method and apparatus, ultrasonic device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103208133A (zh) * 2013-04-02 2013-07-17 浙江大学 Method for adjusting face fatness and thinness in an image
CN103824253A (zh) * 2014-02-19 2014-05-28 中山大学 Facial feature deformation method based on locally precise image warping
CN107154030A (zh) * 2017-05-17 2017-09-12 腾讯科技(上海)有限公司 Image processing method and apparatus, electronic device, and storage medium
CN107203963A (zh) * 2016-03-17 2017-09-26 腾讯科技(深圳)有限公司 Image processing method and apparatus, and electronic device
CN109376671A (zh) * 2018-10-30 2019-02-22 北京市商汤科技开发有限公司 Image processing method, electronic device, and computer-readable medium
CN110555796A (zh) * 2019-07-24 2019-12-10 广州视源电子科技股份有限公司 Image adjustment method and apparatus, storage medium, and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100902995B1 (ko) * 2007-10-23 2009-06-15 에스케이 텔레콤주식회사 Method for forming a face image with an optimized ratio and apparatus applied thereto
KR102013928B1 (ko) * 2012-12-28 2019-08-23 삼성전자주식회사 Image deformation apparatus and method therefor
CN104751404B (zh) * 2013-12-30 2019-04-12 腾讯科技(深圳)有限公司 Image transformation method and apparatus
CN105894446A (zh) * 2016-05-09 2016-08-24 西安北升信息科技有限公司 Automatic facial contour retouching method in video
CN108171244A (zh) * 2016-12-07 2018-06-15 北京深鉴科技有限公司 Object recognition method and system
CN107818543B (zh) * 2017-11-09 2021-03-30 北京小米移动软件有限公司 Image processing method and apparatus

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103208133A (zh) * 2013-04-02 2013-07-17 浙江大学 Method for adjusting face fatness and thinness in an image
CN103824253A (zh) * 2014-02-19 2014-05-28 中山大学 Facial feature deformation method based on locally precise image warping
CN107203963A (zh) * 2016-03-17 2017-09-26 腾讯科技(深圳)有限公司 Image processing method and apparatus, and electronic device
CN107154030A (zh) * 2017-05-17 2017-09-12 腾讯科技(上海)有限公司 Image processing method and apparatus, electronic device, and storage medium
CN109376671A (zh) * 2018-10-30 2019-02-22 北京市商汤科技开发有限公司 Image processing method, electronic device, and computer-readable medium
CN110555796A (zh) * 2019-07-24 2019-12-10 广州视源电子科技股份有限公司 Image adjustment method and apparatus, storage medium, and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112766215A (zh) * 2021-01-29 2021-05-07 北京字跳网络技术有限公司 Face fusion method and apparatus, electronic device, and storage medium
CN113591562A (zh) * 2021-06-23 2021-11-02 北京旷视科技有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN113808249A (zh) * 2021-08-04 2021-12-17 北京百度网讯科技有限公司 Image processing method, apparatus, device, and computer storage medium
CN113674139A (zh) * 2021-08-17 2021-11-19 北京京东尚科信息技术有限公司 Face image processing method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN110555796A (zh) 2019-12-10
CN110555796B (zh) 2021-07-06

Similar Documents

Publication Publication Date Title
WO2021012596A1 (zh) Image adjustment method and apparatus, storage medium, and device
US9639914B2 (en) Portrait deformation method and apparatus
WO2016065632A1 (zh) Image processing method and device
US10970821B2 (en) Image blurring methods and apparatuses, storage media, and electronic devices
EP3992919B1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
CN112288665B (zh) Image fusion method and apparatus, storage medium, and electronic device
JP7441917B2 (ja) Projection distortion correction for faces
US10929982B2 (en) Face pose correction based on depth information
WO2021012599A1 (zh) Image adjustment method and apparatus, and computer device
WO2022116397A1 (zh) Virtual viewpoint depth map processing method, device, apparatus, and storage medium
WO2018076172A1 (zh) Image display method and terminal
CN111836058B (zh) Method, apparatus, device and storage medium for real-time video playback
CN108765551B (zh) Method and terminal for face sculpting on a 3D model
CN116152121B (zh) Curved-screen generation and correction methods based on distortion parameters
US20220360707A1 (en) Photographing method, photographing device, storage medium and electronic device
US9786030B1 (en) Providing focal length adjustments
US10152818B2 (en) Techniques for stereo three dimensional image mapping
JP2009251634A (ja) Image processing apparatus, image processing method, and program
CN115984445A (zh) Image processing method and related apparatus, device, and storage medium
CN112507766B (zh) Face image extraction method, storage medium, and terminal device
WO2017101570A1 (zh) Photo processing method and processing system
US9563940B2 (en) Smart image enhancements
WO2023023960A1 (zh) Image processing and neural network training method and apparatus
JP2021064043A (ja) Image processing apparatus, image processing system, image processing method, and image processing program
WO2021042375A1 (zh) Face liveness detection method, chip, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19938472

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19938472

Country of ref document: EP

Kind code of ref document: A1