CN108734126B - Beautifying method, beautifying device and terminal equipment - Google Patents

Beautifying method, beautifying device and terminal equipment

Info

Publication number
CN108734126B
CN108734126B
Authority
CN
China
Prior art keywords
lip
image
kth
beautifying
curvature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810487620.0A
Other languages
Chinese (zh)
Other versions
CN108734126A (en)
Inventor
舒倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Montnets Technology Co ltd
Original Assignee
Shenzhen Montnets Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Montnets Technology Co ltd filed Critical Shenzhen Montnets Technology Co ltd
Priority to CN201810487620.0A priority Critical patent/CN108734126B/en
Publication of CN108734126A publication Critical patent/CN108734126A/en
Application granted granted Critical
Publication of CN108734126B publication Critical patent/CN108734126B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/94Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a beautifying method, a beautifying device, a terminal device and a computer-readable storage medium. The method comprises: detecting whether a skin color region exists in an image to be beautified; if a facial feature image exists in the skin color region, obtaining the number Y of lips in the facial feature image; determining a first beautifying strength according to the texture features of the face corresponding to the kth lip; calculating the curvature of the kth lip according to the features of the kth lip; if the curvature of the kth lip is smaller than a preset curvature, beautifying the face corresponding to the kth lip with the first beautifying strength; and if the curvature of the kth lip is greater than or equal to the preset curvature, beautifying the face corresponding to the kth lip with a second beautifying strength. Because the face corresponding to the kth lip is beautified with different beautifying intensities according to the lip curvature, the beautifying intensity can be adjusted adaptively to the facial expression, which improves the user's satisfaction with the beautifying result.

Description

Beautifying method, beautifying device and terminal equipment
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a beautifying method, a beautifying device, terminal equipment and a computer-readable storage medium.
Background
In recent years, face beautification has been widely applied in image processing because it can improve the appearance of a person in an image. Both online video beautification and still-image beautification follow the same pattern: a face is first detected in a video frame or a still image, and the face is then beautified with a certain beautification intensity.
However, the beautification intensity currently applied to a detected face is usually determined by age analysis: the age is estimated from the texture complexity of the face, and the beautification intensity is then adjusted according to the estimated age. In practical applications, changes in facial expression produce textures of varying degrees, so an age estimate derived from texture complexity may set the beautification intensity too high or too low. The beautified image then looks unnatural and fails to meet the user's beautification requirements.
Disclosure of Invention
In view of this, embodiments of the present invention provide a beautifying method, an apparatus, a terminal device, and a computer-readable storage medium, which can detect facial expressions through lip curvature, and adaptively adjust a beautifying parameter according to the facial expressions, thereby improving satisfaction of a user in using a beautifying function.
The first aspect of the embodiments of the present invention provides a method for beautifying a face, where the method includes:
acquiring an image to be beautified, and detecting whether a skin color area exists in the image to be beautified;
if a skin color area exists in the image to be beautified, detecting whether a facial feature image exists in the skin color area;
if the facial features image exists in the skin color area, determining the position of the facial features image in the skin color area, and acquiring the number Y of lips in the facial features image, wherein Y is not less than 1 and is an integer;
determining a face corresponding to the kth lip according to the five sense organs associated with the kth lip and the kth lip, wherein k is more than or equal to 1 and less than or equal to Y;
determining a first beautifying strength according to the texture features of the face corresponding to the kth lip;
calculating the curvature of the kth lip according to the characteristics of the kth lip;
if the curvature of the kth lip is smaller than the preset curvature, beautifying the face corresponding to the kth lip by the first beautifying strength;
and if the curvature of the kth lip is greater than or equal to the preset curvature, beautifying the face corresponding to the kth lip with a second beautifying strength, wherein the second beautifying strength is less than the first beautifying strength.
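For readability, the control flow of this first aspect can be summarized in the short Python sketch below. Every callable parameter (detect_skin_regions, detect_lips_and_faces, intensity_from_texture, lip_curvature, lower_intensity, apply_beauty) is a hypothetical placeholder for the corresponding step described above, not a function defined by this disclosure.

```python
from typing import Callable

def beautify_image(image,
                   detect_skin_regions: Callable,
                   detect_lips_and_faces: Callable,
                   intensity_from_texture: Callable,
                   lip_curvature: Callable,
                   lower_intensity: Callable,
                   apply_beauty: Callable,
                   preset_curvature: float):
    """Control-flow sketch of the steps above; all callables are stand-ins."""
    for region in detect_skin_regions(image):             # skin color detection
        for lip, face in detect_lips_and_faces(region):   # Y lips, k = 1..Y, with their faces
            first = intensity_from_texture(face)          # first beautifying strength
            curve = lip_curvature(lip)                    # curvature of the kth lip
            if curve < preset_curvature:
                apply_beauty(face, first)
            else:
                # second strength is strictly lower than the first
                apply_beauty(face, lower_intensity(first, curve - preset_curvature))
    return image
```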
Based on the first aspect, in a first possible implementation manner, the calculating a k-th lip curvature according to features of a k-th lip includes:
dividing the kth lip into M × N image blocks, wherein M and N respectively represent the number of rows and the number of columns into which the kth lip is divided, M ≧ 1 and is an integer, and N ≧ 1 and is an integer;
acquiring a row number i and a column number j of each image block in the kth lip, wherein i is used for indicating the image block in the ith row of the kth lip, and j is used for indicating the image block in the jth column of the kth lip;
if the minimum column number among the image blocks in the minimum row of the kth lip is equal to the minimum column number among the image blocks of the kth lip, determining that the face mode corresponding to the kth lip is a first side mode;
if the maximum column number among the image blocks in the minimum row of the kth lip is equal to the maximum column number among the image blocks of the kth lip, determining that the face mode corresponding to the kth lip is a second side mode;
the formula for calculating the curvature of the kth lip according to the characteristics of the kth lip is as follows:
[Curvature formula: rendered as image GDA0002613013210000031 in the original publication; it expresses curve through arctan() and the quantities defined below, with a case distinction between the side modes.]
where f(i) = i_min + (i_max - i_min)/2, and
[a further auxiliary term of the formula is rendered as image GDA0002613013210000032 in the original publication,]
curve denotes the curvature of the kth lip, arctan() denotes the arctangent function, i_min denotes the row number of the image block in the smallest row of the kth lip, i_max denotes the row number of the image block in the largest row of the kth lip, j_min denotes the column number of the image block in the smallest column of the kth lip, j_max denotes the column number of the image block in the largest column of the kth lip, lm denotes the row number of the middle row of the smallest column of the kth lip, rm denotes the row number of the middle row of the largest column of the kth lip, and the side modes include the first side mode and the second side mode.
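The published curvature formula itself is only available as an image, so the sketch below is limited to the quantities the text does define (i_min, i_max, j_min, j_max, lm, rm and f(i)) and combines them with arctan() in one assumed way; the final expression is an illustrative reading, not the formula of this disclosure.

```python
import math

def lip_curvature(block_rows, block_cols, lm, rm):
    """Hedged sketch of the curvature step. block_rows / block_cols hold the row
    and column numbers of the image blocks belonging to the kth lip; lm and rm
    are the row numbers of the middle rows of the smallest and largest columns.
    The arctan combination below is an assumption, not the published formula."""
    i_min, i_max = min(block_rows), max(block_rows)
    j_min, j_max = min(block_cols), max(block_cols)
    f_i = i_min + (i_max - i_min) / 2    # f(i) as given in the text
    width = max(j_max - j_min, 1)        # horizontal extent of the lip, in blocks
    # Assumed reading: angle between the lip mid-row and its corner rows.
    return math.degrees(math.atan(abs(f_i - (lm + rm) / 2) / width))
```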
Based on the first aspect or the first implementation manner of the first aspect, in a second possible implementation manner, if the curvature of the kth lip is greater than or equal to the preset curvature, beautifying a face corresponding to the kth lip with a second beautifying intensity includes:
if the curvature of the kth lip is larger than or equal to the preset curvature, calculating a difference value between the curvature of the kth lip and the preset curvature;
and acquiring a preset adjustment amplitude corresponding to the difference value, reducing the first beautifying intensity to a second beautifying intensity according to the preset adjustment amplitude, and beautifying the face corresponding to the kth lip by the second beautifying intensity.
Based on the first aspect or the first implementation manner of the first aspect, in a third possible implementation manner, if the scene detected by the beautifying method includes multiple frames of images, the beautifying method further includes:
when faces corresponding to Y lips in an image to be beautified are beautified, detecting whether a next frame of image exists or not;
and if the next frame image exists, detecting the scene similarity between the next frame image and the image to be beautified.
Based on the third implementation manner of the first aspect, in a fourth possible implementation manner, after detecting a scene similarity between the next frame image and the image to be beautified if the next frame image exists, the method includes:
if the scene similarity is smaller than or equal to the preset scene similarity, judging that the next frame of image is a scene switching image, and setting the next frame of image as an image to be beautified;
and returning to the step of obtaining the image to be beautified, and detecting whether a skin color area exists in the image to be beautified and the subsequent steps.
Based on the third implementation manner of the first aspect, in a fifth possible implementation manner, after detecting a scene similarity between the next frame image and the image to be beautified if the next frame image exists, the method further includes:
if the scene similarity is larger than the preset scene similarity, judging that the next frame of image is not a scene switching image;
performing beautification, in the next frame image, on the faces that also appear in the image to be beautified, using the beautifying intensities already determined for those faces in the image to be beautified; meanwhile, acquiring the faces in the next frame image that are different from those in the image to be beautified, and determining the lip positions corresponding to the different faces;
and setting the lips corresponding to the different faces as the kth lip, and returning to the step of determining the first beautifying intensity according to the texture features of the face corresponding to the kth lip and the subsequent steps.
A second aspect of embodiments of the present invention provides a beauty apparatus, including:
the first acquisition module is used for acquiring an image to be beautified and detecting whether a skin color area exists in the image to be beautified;
the first detection module is used for detecting whether a facial feature image exists in the skin color area if the skin color area exists in the image to be beautified;
the first determining module is used for determining the position of the facial features image in the skin color region and acquiring the number Y of lips in the facial features image if the facial features image exists in the skin color region, wherein Y is not less than 1 and is an integer;
the second determining module is used for determining the face corresponding to the kth lip according to the kth lip and the five sense organs related to the kth lip, wherein k is more than or equal to 1 and less than or equal to Y;
the first calculation module is used for determining a first beautifying intensity according to the texture features of the face corresponding to the kth lip;
a second calculation module for calculating the k-th lip curvature according to the characteristics of the k-th lip;
the first beautifying module is used for beautifying the face corresponding to the kth lip with the first beautifying strength if the curvature of the kth lip is smaller than a preset curvature;
and the second beautifying module is used for beautifying the face corresponding to the kth lip with a second beautifying strength if the curvature of the kth lip is greater than or equal to a preset curvature, wherein the second beautifying strength is less than the first beautifying strength.
Based on the second aspect, in a first possible implementation manner, the second calculating module specifically includes:
a first dividing unit, configured to divide the kth lip into M × N image blocks, wherein M and N respectively indicate the number of rows and the number of columns into which the kth lip is divided, M ≧ 1 and is an integer, and N ≧ 1 and is an integer;
a first obtaining unit, configured to obtain a row number i and a column number j of each image block in the kth lip, wherein i is used to indicate the image block in the ith row of the kth lip, and j is used to indicate the image block in the jth column of the kth lip;
a first determining unit, configured to determine that the face mode corresponding to the kth lip is a first side mode if the minimum column number among the image blocks in the minimum row of the kth lip is equal to the minimum column number among the image blocks of the kth lip;
a second determining unit, configured to determine that the face mode corresponding to the kth lip is a second side mode if the maximum column number among the image blocks in the minimum row of the kth lip is equal to the maximum column number among the image blocks of the kth lip;
a first calculating unit, configured to calculate the k-th lip curvature according to the characteristics of the k-th lip, where the calculation formula is:
[Curvature formula: rendered as image GDA0002613013210000061 in the original publication; it expresses curve through arctan() and the quantities defined below, with a case distinction between the side modes.]
where f(i) = i_min + (i_max - i_min)/2, and
[a further auxiliary term of the formula is rendered as image GDA0002613013210000062 in the original publication,]
curve denotes the curvature of the kth lip, arctan() denotes the arctangent function, i_min denotes the row number of the image block in the smallest row of the kth lip, i_max denotes the row number of the image block in the largest row of the kth lip, j_min denotes the column number of the image block in the smallest column of the kth lip, j_max denotes the column number of the image block in the largest column of the kth lip, lm denotes the row number of the middle row of the smallest column of the kth lip, rm denotes the row number of the middle row of the largest column of the kth lip, and the side modes include the first side mode and the second side mode.
A third aspect of the embodiments of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium, in which a computer program is stored, which, when executed by a processor, implements the steps of the above-described method.
Compared with the prior art, the embodiment of the invention has the following beneficial effects: the embodiment of the invention detects whether a skin color area exists in the image to be beautified by acquiring the image to be beautified; if a skin color area exists in the image to be beautified, detecting whether a facial feature image exists in the skin color area; if the facial features image exists in the skin color area, determining the position of the facial features image in the skin color area, and acquiring the number Y of lips in the facial features image, wherein Y is not less than 1 and is an integer; determining a face corresponding to the kth lip according to the five sense organs associated with the kth lip and the kth lip, wherein k is more than or equal to 1 and less than or equal to Y; determining a first beautifying strength according to the texture features of the face corresponding to the kth lip; calculating the curvature of the kth lip according to the characteristics of the kth lip; if the curvature of the kth lip is smaller than the preset curvature, beautifying the face corresponding to the kth lip by the first beautifying strength; and if the curvature of the kth lip is greater than or equal to the preset curvature, beautifying the face corresponding to the kth lip with a second beautifying strength, wherein the second beautifying strength is less than the first beautifying strength. According to the embodiment of the invention, the k-th lip curvature can be calculated according to the characteristics of the k-th lip, the face corresponding to the k-th lip is beautified with different beautifying intensities according to the lip curvature, and the beautifying intensity can be adaptively adjusted according to the facial expression of the face because the lip curvature can reflect the facial expression of the face, so that the satisfaction degree of a user in beautifying is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flow chart of an implementation of a method for providing a beauty treatment according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of an implementation of a second method for beautifying according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a beauty device according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a second computing module in the beauty device according to the third embodiment of the present invention;
fig. 5 is a schematic diagram of a terminal device according to a fourth embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It should be understood that the sequence numbers of the steps in the method embodiments described below do not mean the execution sequence, and the execution sequence of each process should be determined by the function and the inherent logic of the process, and should not constitute any limitation on the implementation process of each embodiment.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Example one
An embodiment of the present invention provides a method for beautifying, as shown in fig. 1, the method for beautifying in the embodiment of the present invention includes:
step 101, acquiring an image to be beautified, and detecting whether a skin color area exists in the image to be beautified;
in the embodiment of the present invention, the image to be beautified may be an image obtained by shooting through a camera, or a picture obtained from a local database, or a picture obtained from a related server, or of course, may also be a video frame decoded from a video file. In color space, chromaticity is typically represented by three components. After the image to be beautified is obtained, whether the skin color exists in the image to be beautified can be detected according to the distribution condition of the three components of the skin color in the YUV color space; or detecting whether the skin color exists in the image to be beautified according to the distribution condition of the skin color in the three components of the RGB color space. Of course, whether the skin color exists in the image to be beautified can also be detected according to the distribution condition of the skin color in the three components of the HSI color space.
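As one concrete illustration of the color-space options described above, the sketch below builds a rough skin-color mask with a widely used fixed Cr/Cb threshold rule in the YCrCb space; the thresholds are an assumption for illustration and are not values specified by this disclosure.

```python
import cv2
import numpy as np

def detect_skin_mask(bgr_image):
    """Rough skin-color mask in the YCrCb space. The Cr/Cb ranges below are a
    common heuristic, not thresholds taken from this patent."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bounds (assumed)
    upper = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bounds (assumed)
    return cv2.inRange(ycrcb, lower, upper)            # non-zero pixels: candidate skin region
```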
Step 102, if a skin color area exists in the image to be beautified, detecting whether a facial feature image exists in the skin color area;
in the embodiment of the present invention, after determining that the skin color region exists in the image to be beautified in step 101, whether the facial features exist in the skin color region can be detected according to the features of the facial features. For example, a color model of five sense organs is established by a statistical method, the skin color region is traversed during searching, and whether the five sense organs exist or not can be judged by matching the skin color region with the color model of the five sense organs in the traversing process. Or a geometric model of parameters to be changed is constructed according to the shape feature points of the five sense organs, an evaluation function is set to measure the matching degree of the skin color area and the model, the skin color area is continuously searched for adjustment parameters to minimize the evaluation function, and whether the five sense organs exist in the skin color area is detected according to whether the model converges in the skin color area. Of course, the presence or absence of the five sense organs can also be detected by other methods, which are not limited herein.
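As a stand-in for the color-model or geometric-model matching described above, the sketch below checks a skin-color region for a face with OpenCV's stock Haar cascade; this is a swapped-in off-the-shelf detector used only to illustrate the step, not the detection scheme of this disclosure.

```python
import cv2

def facial_features_present(gray_region):
    """Returns True if a face (and hence its five sense organs) is detected
    inside the skin-color region, using OpenCV's bundled frontal-face cascade."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray_region, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0
```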
Step 103, if there is a facial feature image in the skin color region, determining the position of the facial feature image in the skin color region, and obtaining the number Y of lips in the facial feature image, wherein Y is ≧ 1 and is an integer;
In the embodiment of the present invention, after it is determined in step 102 that a facial feature image exists in the skin color region, the specific position of the five sense organs in the skin color region is determined, for example, the specific row numbers and column numbers of the image blocks of the five sense organs in the skin color region. Since there may be multiple faces in the skin color region, there may correspondingly be multiple sets of five sense organs. After the positions of the five sense organs are determined, the number of lips identified among the five sense organs is obtained and recorded as Y.
Step 104, determining a face corresponding to the kth lip according to the five sense organs associated with the kth lip and the kth lip, wherein k is more than or equal to 1 and less than or equal to Y;
In the embodiment of the invention, after the number Y of lips among the five sense organs is obtained, each lip region is labeled, for example from 1 to Y. A loop variable k is set and takes values from 1 to Y; for each value of k, the kth lip is taken and the five sense organs associated with the kth lip are determined, which can be done according to the preset positional relationships among the five sense organs.
Step 105, determining a first beautifying intensity according to the texture features of the face corresponding to the kth lip;
In the embodiment of the invention, once the face corresponding to the kth lip has been determined (the face image can be located through the five sense organs associated with the kth lip), the texture complexity of the face is detected. Beautifying intensity levels corresponding to different texture complexities are established in advance, the beautifying intensity of the face is determined according to the texture complexity of the corresponding face, and this intensity is recorded as the first beautifying intensity. Alternatively, the original beautifying intensity of the face can be determined by an existing beautifying technique and recorded as the first beautifying intensity.
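A minimal sketch of this texture-to-intensity step follows, assuming Laplacian variance as the texture-complexity measure and three illustrative intensity levels; both the measure and the thresholds are assumptions, since the disclosure only requires that intensity levels be established in advance for different texture complexities.

```python
import cv2

def first_beauty_intensity(face_gray, levels=(0.9, 0.6, 0.3)):
    """Maps texture complexity of the face image to a first beautifying
    intensity. Laplacian variance and the thresholds are illustrative only."""
    complexity = cv2.Laplacian(face_gray, cv2.CV_64F).var()
    if complexity > 500:        # strongly textured face: strongest smoothing
        return levels[0]
    if complexity > 150:        # moderately textured face
        return levels[1]
    return levels[2]            # lightly textured face
```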
Step 106, calculating the curvature of the kth lip according to the characteristics of the kth lip;
in an embodiment of the invention, the k-th lip curvature may be calculated from characteristics of the k-th lip. For example, the k-th lip curvature may be calculated from the radian characteristics of the k-th lip, or from other characteristics of the k-th lip.
In one embodiment, the calculating the curvature of the kth lip according to the characteristics of the kth lip includes: dividing the kth lip into M × N image blocks, wherein M and N respectively represent the number of rows and the number of columns into which the kth lip is divided, M ≧ 1 and is an integer, and N ≧ 1 and is an integer; acquiring a row number i and a column number j of each image block in the kth lip, wherein i is used for indicating the image block in the ith row of the kth lip, and j is used for indicating the image block in the jth column of the kth lip; if the minimum column number among the image blocks in the minimum row of the kth lip is equal to the minimum column number among the image blocks of the kth lip, determining that the face mode corresponding to the kth lip is a first side mode; if the maximum column number among the image blocks in the minimum row of the kth lip is equal to the maximum column number among the image blocks of the kth lip, determining that the face mode corresponding to the kth lip is a second side mode. The formula for calculating the curvature of the kth lip according to the characteristics of the kth lip is as follows:
[Curvature formula: rendered as image GDA0002613013210000101 in the original publication; it expresses curve through arctan() and the quantities defined below, with a case distinction between the side modes.]
where f(i) = i_min + (i_max - i_min)/2, and
[a further auxiliary term of the formula is rendered as image GDA0002613013210000102 in the original publication,]
curve denotes the curvature of the kth lip, arctan() denotes the arctangent function, i_min denotes the row number of the image block in the smallest row of the kth lip, i_max denotes the row number of the image block in the largest row of the kth lip, j_min denotes the column number of the image block in the smallest column of the kth lip, j_max denotes the column number of the image block in the largest column of the kth lip, lm denotes the row number of the middle row of the smallest column of the kth lip, rm denotes the row number of the middle row of the largest column of the kth lip, and the side modes include the first side mode and the second side mode.
Step 107, if the curvature of the kth lip is smaller than a preset curvature, beautifying the face corresponding to the kth lip with the first beautifying strength;
in the embodiment of the present invention, a threshold of curvature is preset, the curvature of the lips calculated in step 106 is compared with the preset curvature, and if the curvature of the kth lip is smaller than the preset curvature, the face corresponding to the kth lip is beautified with the first beautifying strength. When the curvature of the lips is relatively small, the beautifying strength determined according to the texture complexity of the face is considered to meet the beautifying requirement of the user, and then the first beautifying strength is used for beautifying.
And 108, if the curvature of the kth lip is greater than or equal to the preset curvature, beautifying the face corresponding to the kth lip with a second beautifying strength, wherein the second beautifying strength is smaller than the first beautifying strength.
In the embodiment of the present invention, the lip curvature calculated in step 106 is compared with the preset curvature. If the curvature of the kth lip is greater than or equal to the preset curvature, it is considered that the facial expression has produced additional texture; when the beautifying intensity is determined from the texture complexity of the face, the face is then mistakenly judged to be older than it actually is, and an image beautified with the first beautifying intensity determined in this way may look visually uncomfortable (i.e., the beautifying effect is unnatural). Therefore, when the curvature of the kth lip is greater than or equal to the preset curvature, the first beautifying intensity is adjusted downward to the second beautifying intensity for beautification, which improves the beautifying effect.
In an embodiment, if the curvature of the kth lip is greater than or equal to the preset curvature, beautifying the face corresponding to the kth lip with a second beautifying intensity includes: if the curvature of the kth lip is greater than or equal to the preset curvature, calculating the difference between the curvature of the kth lip and the preset curvature; and acquiring a preset adjustment amplitude corresponding to the difference, reducing the first beautifying intensity to the second beautifying intensity according to the preset adjustment amplitude, and beautifying the face corresponding to the kth lip with the second beautifying intensity. A correspondence between difference ranges and preset adjustment amplitudes is established in advance; for example, if the difference in lip curvature falls within a range of 1-5 degrees, the corresponding adjustment amplitude is one level of beautifying intensity. The first beautifying intensity is then reduced to the second beautifying intensity according to the preset adjustment amplitude, and the face corresponding to the kth lip is beautified with the second beautifying intensity.
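A minimal sketch of this downward adjustment follows, assuming the curvature difference is measured in degrees and that each bucket lowers the intensity by one level of a fixed size; the bucket boundaries beyond the 1-5 degree example and the per-level step are assumptions.

```python
def second_beauty_intensity(first_intensity, curvature, preset_curvature,
                            level_step=0.15):
    """Reduces the first beautifying intensity by a preset adjustment amplitude
    chosen from the curvature difference (all constants are illustrative)."""
    diff = curvature - preset_curvature      # non-negative in this branch
    if diff <= 5:
        levels_down = 1                      # 1-5 degrees: drop one intensity level
    elif diff <= 10:
        levels_down = 2                      # assumed further bucket
    else:
        levels_down = 3                      # assumed further bucket
    return max(first_intensity - levels_down * level_step, 0.0)
```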
In one embodiment, if the scene detected by the beautifying method includes multiple frames of images, the beautifying method further includes: after the faces corresponding to the Y lips in the image to be beautified have all been beautified, detecting whether a next frame image exists; and if the next frame image exists, detecting the scene similarity between the next frame image and the image to be beautified. If the next frame image exists, after the scene similarity between the next frame image and the image to be beautified is detected, the method includes: if the scene similarity is less than or equal to a preset scene similarity, judging that the next frame image is a scene switching image, setting the next frame image as the image to be beautified, and returning to the step of acquiring the image to be beautified and detecting whether a skin color region exists in the image to be beautified, and to the subsequent steps. If the next frame image exists, after the scene similarity between the next frame image and the image to be beautified is detected, the method further includes: if the scene similarity is greater than the preset scene similarity, judging that the next frame image is not a scene switching image; beautifying, in the next frame image, the faces that also appear in the image to be beautified with the beautifying intensities already determined for those faces; meanwhile, acquiring the faces in the next frame image that are different from those in the image to be beautified, determining the lip positions corresponding to the different faces, setting the lips corresponding to the different faces as the kth lip, and returning to the step of determining the first beautifying intensity according to the texture features of the face corresponding to the kth lip and the subsequent steps.
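The multi-frame handling above can be sketched as follows. Histogram correlation is used here as a stand-in scene-similarity measure, and reuse_intensities is a hypothetical callback that re-applies the stored per-face intensities; neither is fixed by this disclosure.

```python
import cv2

def scene_similarity(frame_a, frame_b):
    """Stand-in similarity measure: correlation of coarse intensity histograms."""
    h1 = cv2.calcHist([frame_a], [0], None, [64], [0, 256])
    h2 = cv2.calcHist([frame_b], [0], None, [64], [0, 256])
    return cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)

def handle_next_frame(current_frame, next_frame, preset_similarity, reuse_intensities):
    """Scene cut: restart from skin-color detection on the next frame.
    Otherwise: reuse stored intensities for faces seen before; new faces
    re-enter the pipeline at the texture/intensity step."""
    if scene_similarity(current_frame, next_frame) <= preset_similarity:
        return "scene_cut"
    reuse_intensities(next_frame)
    return "same_scene"
```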
Therefore, in the embodiment of the invention, whether a skin color area exists in the image to be beautified is detected by acquiring the image to be beautified; if a skin color area exists in the image to be beautified, detecting whether a facial feature image exists in the skin color area; if the facial features image exists in the skin color area, determining the position of the facial features image in the skin color area, and acquiring the number Y of lips in the facial features image, wherein Y is not less than 1 and is an integer; determining a face corresponding to the kth lip according to the five sense organs associated with the kth lip and the kth lip, wherein k is more than or equal to 1 and less than or equal to Y; determining a first beautifying strength according to the texture features of the face corresponding to the kth lip; calculating the curvature of the kth lip according to the characteristics of the kth lip; if the curvature of the kth lip is smaller than the preset curvature, beautifying the face corresponding to the kth lip by the first beautifying strength; and if the curvature of the kth lip is greater than or equal to the preset curvature, beautifying the face corresponding to the kth lip with a second beautifying strength, wherein the second beautifying strength is less than the first beautifying strength. According to the embodiment of the invention, the k-th lip curvature can be calculated according to the characteristics of the k-th lip, the face corresponding to the k-th lip is beautified with different beautifying intensities according to the lip curvature, and the beautifying intensity can be adaptively adjusted according to the facial expression of the face because the lip curvature can reflect the facial expression of the face, so that the satisfaction degree of a user in beautifying is improved.
Example two
An embodiment of the present invention provides a method for beautifying, as shown in fig. 2, the method for beautifying in the embodiment of the present invention includes:
step 201, acquiring an image to be beautified, and detecting whether a skin color area exists in the image to be beautified;
if the image to be beautified has a skin color area, entering step 202; if there is no skin color region in the image to be beautified, go to step 210.
Step 202, detecting whether a facial feature image exists in the skin color area;
if there is a facial feature image in the skin color region, go to step 203; if there is no facial feature image in the skin color region, go to step 210.
Step 203, determining the position of the facial features image in the skin color area, and acquiring the number Y of lips in the facial features image, wherein Y is not less than 1 and is an integer;
step 204, determining a face corresponding to the kth lip according to the five sense organs associated with the kth lip and the kth lip, wherein k is more than or equal to 1 and less than or equal to Y;
step 205, determining a first beautifying intensity according to the texture features of the face corresponding to the kth lip;
step 206, calculating the curvature of the kth lip according to the characteristics of the kth lip;
step 207, if the curvature of the kth lip is smaller than a preset curvature, beautifying the face corresponding to the kth lip with the first beautifying strength;
step 208, if the curvature of the kth lip is greater than or equal to the preset curvature, performing face beautifying on the face corresponding to the kth lip with a second face beautifying intensity, wherein the second face beautifying intensity is smaller than the first face beautifying intensity;
In the embodiment of the present invention, for the parts of steps 201 to 208 that are the same as or similar to steps 101 to 108, reference may be made to the description of steps 101 to 108, and they are not repeated here.
Step 209, detecting whether the faces corresponding to the Y lips in the image to be beautified are beautified;
if the faces corresponding to the Y lips in the image to be beautified are all beautified, then go to step 210; if the faces corresponding to the Y lips in the image to be beautified are not all beautified, the step 204 is returned.
In the embodiment of the invention, it is detected whether the faces corresponding to the Y lips in the image to be beautified have all been beautified. Detection can be done by counting down: a value is initialized to Y according to the number of lips, and is decremented by 1 each time a face finishes beautification; when the value reaches 0, it is judged that the faces corresponding to all Y lips have been beautified, and if the value is not 0, it is judged that they have not all been beautified. Alternatively, detection can be done by counting up with a counter whose preset value is 0; when the counter reaches Y, it is judged that the faces corresponding to the Y lips have all been beautified. Of course, whether all the faces corresponding to the Y lips in the image to be beautified have been beautified may also be detected by other related algorithms, which is not limited here.
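A minimal sketch of the count-up variant described above; the count-down variant is equivalent.

```python
def all_faces_beautified(y_total, beautified_count):
    """Counter check: all Y faces are done once the counter reaches Y."""
    return beautified_count >= y_total
```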
Step 210, detecting whether a next frame image exists;
in the embodiment of the present invention, if the scene detected by the beauty method includes multiple frames of images, it is detected whether there is a next frame of image to be beautified.
Step 211, if there is a next frame image, detecting whether the scene similarity between the next frame image and the image to be beautified is greater than a preset scene similarity.
If the scene similarity is less than or equal to the preset scene similarity, go to step 212; if the scene similarity is greater than the preset scene similarity, step 213 is performed.
In the embodiment of the present invention, detecting whether the scene similarity between the next frame image and the image to be beautified is greater than a preset scene similarity can be understood as detecting the correlation between the current image to be beautified and the next frame image: a next frame image that is strongly correlated with the current image to be beautified is very likely to contain the faces that exist in the current image to be beautified.
Step 212, judging that the next frame image is a scene switching image, and setting the next frame image as an image to be beautified;
and after the next frame image is set as the image to be beautified, returning to the step 201.
In one embodiment, a scene switching image can be understood as a frame that changes substantially from, and is quite different from, the previous frame. The change may also be local: for example, the background changes while the foreground does not, or only the foreground changes; both cases are treated as scene switching. Scene switching can also be detected by a related scene-change detection algorithm.
Step 213, judging that the next frame image is not a scene switching image; beautifying, in the next frame image, the faces that also appear in the image to be beautified with the beautifying intensities already determined for those faces; meanwhile, acquiring the faces in the next frame image that are different from those in the image to be beautified, determining the lip positions corresponding to the different faces, and setting the lips corresponding to the different faces as the kth lip;
and setting the lip corresponding to the different face as the kth lip, and returning to step 205.
In the embodiment of the invention, when the next frame image is not the scene switching image, if the face which is the same as the face of the current image to be beautified exists in the next frame image, the face beautifying intensity corresponding to the face of the current image to be beautified is used for beautifying, and the face beautifying intensity of the face is not required to be recalculated. If a face different from the current image to be beautified is detected in the next frame of image, setting the lip corresponding to the different face as the kth lip, and returning to step 205.
In one embodiment, an image area with small relevance with a current image to be beautified in a next frame image is obtained and is used as a new sub-image, and whether a face different from the face existing in the current image to be beautified, namely a new face, exists in the new sub-image is detected; if yes, setting the lip corresponding to the new face as the kth lip, and returning to step 205.
Therefore, in the embodiment of the invention, on one hand, the k-th lip curvature can be calculated according to the characteristics of the k-th lip, the face corresponding to the k-th lip is beautified with different beautifying intensities according to the lip curvature, and the lip curvature can reflect the facial expression of the face, so that the beautifying intensity is adaptively adjusted according to the facial expression of the face, and the satisfaction degree of a user in beautifying is improved. On the other hand, the face corresponding to the face in the image to be beautified in the next frame of image is beautified according to the beautification intensity of the face in the image to be beautified, and the same face in the next frame of image can be beautified by using the corresponding beautification intensity of the face in the image to be beautified, so that the calculation efficiency can be improved.
EXAMPLE III
As shown in fig. 3, the beauty device 300 of the embodiment of the present invention includes:
a first obtaining module 301, configured to obtain an image to be beautified, and detect whether a skin color region exists in the image to be beautified;
a first detecting module 302, configured to detect whether a facial feature image exists in the skin color area if the skin color area exists in the image to be beautified;
a first determining module 303, configured to determine, if a facial feature image exists in the skin color region, a position of the facial feature image in the skin color region, and obtain the number Y of lips in the facial feature image, where Y is ≧ 1 and an integer;
a second determining module 304, configured to determine a face corresponding to the kth lip according to the kth lip and the five sense organs associated with the kth lip, where k is greater than or equal to 1 and less than or equal to Y;
a first calculating module 305, configured to determine a first beauty intensity according to texture features of a face corresponding to the kth lip;
a second calculation module 306, configured to calculate a curvature of the kth lip according to characteristics of the kth lip;
In an embodiment, as shown in fig. 4, the second calculating module 306 specifically includes: a first dividing unit 3061, configured to divide the kth lip into M × N image blocks, wherein M and N respectively indicate the number of rows and the number of columns into which the kth lip is divided, M ≧ 1 and is an integer, and N ≧ 1 and is an integer;
a first obtaining unit 3062, configured to obtain a row number i and a column number j of each image block in the kth lip, wherein i is used to indicate the image block in the ith row of the kth lip, and j is used to indicate the image block in the jth column of the kth lip;
a first determining unit 3063, configured to determine that the face mode corresponding to the kth lip is a first side mode if the minimum column number among the image blocks in the minimum row of the kth lip is equal to the minimum column number among the image blocks of the kth lip;
a second determining unit 3064, configured to determine that the face mode corresponding to the kth lip is a second side mode if the maximum column number among the image blocks in the minimum row of the kth lip is equal to the maximum column number among the image blocks of the kth lip;
a first calculation unit 3065, configured to calculate the k-th lip curvature according to the characteristics of the k-th lip, where the calculation formula is:
[Curvature formula: rendered as image GDA0002613013210000171 in the original publication; it expresses curve through arctan() and the quantities defined below, with a case distinction between the side modes.]
where f(i) = i_min + (i_max - i_min)/2, and
[a further auxiliary term of the formula is rendered as image GDA0002613013210000172 in the original publication,]
curve denotes the curvature of the kth lip, arctan() denotes the arctangent function, i_min denotes the row number of the image block in the smallest row of the kth lip, i_max denotes the row number of the image block in the largest row of the kth lip, j_min denotes the column number of the image block in the smallest column of the kth lip, j_max denotes the column number of the image block in the largest column of the kth lip, lm denotes the row number of the middle row of the smallest column of the kth lip, rm denotes the row number of the middle row of the largest column of the kth lip, and the side modes include the first side mode and the second side mode.
A first beautifying module 307, configured to beautify, with the first beautifying strength, a face corresponding to the kth lip if the curvature of the kth lip is smaller than a preset curvature;
a second beautifying module 308, configured to, if the curvature of the kth lip is greater than or equal to a preset curvature, beautify a face of the person corresponding to the kth lip with a second beautifying intensity, where the second beautifying intensity is smaller than the first beautifying intensity.
In one embodiment, if the scene detected by the beauty device includes multiple frames of images, the beauty device further includes:
the second detection module is used for detecting whether a next frame of image exists or not after the faces corresponding to the Y lips in the image to be beautified are beautified;
and the third detection module is used for detecting the scene similarity between the next frame of image and the image to be beautified if the next frame of image exists.
The first judging module is used for judging that the next frame of image is a scene switching image if the scene similarity is smaller than or equal to a preset scene similarity, and setting the next frame of image as an image to be beautified; returning to the first obtaining module to continue processing.
The second judgment module is used for judging that the next frame image is not a scene switching image if the scene similarity is larger than the preset scene similarity;
a third determining module, configured to beautify, in the next frame image, the faces that also appear in the image to be beautified with the beautification intensities already determined for those faces in the image to be beautified, to acquire the faces in the next frame image that are different from those in the image to be beautified, and to determine the lip positions corresponding to the different faces; and to set the lips corresponding to the different faces as the kth lip and return to the first beautifying module for processing.
Therefore, in the embodiment of the present invention, the second calculating module 306 may calculate the k-th lip curvature according to the characteristics of the k-th lip, and beautify the face corresponding to the k-th lip with different beautifying strengths according to the lip curvature, where the lip curvature may reflect the facial expression of the face, and adaptively adjust the beautifying strength according to the facial expression, so as to improve the satisfaction of the user in beautifying.
Example four
Fig. 5 is a terminal device according to an embodiment of the present invention. As shown in fig. 5, the terminal device 500 in the embodiment of the present invention includes: a processor 501, a memory 502 and a computer program 503 stored in the memory 502 and executable on the processor 501. The processor 501 executes the computer program 503 to implement the steps in the embodiment of the beautifying method, such as the steps 101 to 108 shown in fig. 1 or the steps 201 to 213 shown in fig. 2.
Illustratively, the computer program 503 may be divided into one or more units/modules, which are stored in the memory 502 and executed by the processor 501 to implement the present invention. The one or more units/modules may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 503 in the terminal device 500. For example, the computer program 503 may be divided into a first obtaining module, a first detecting module, a first determining module, a second determining module, a first calculating module, a second calculating module, a first beautifying module, and a second beautifying module, and specific functions of the modules are described in the third embodiment, which is not repeated herein.
The terminal device 500 may be a shooting device, a mobile terminal, a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal device 500 may include, but is not limited to, a processor 501 and a memory 502. Those skilled in the art will appreciate that fig. 5 is only an example of the terminal device 500 and does not constitute a limitation to the terminal device 500, and may include more or less components than those shown, or combine some components, or different components, for example, the terminal device 500 may further include an input-output device, a network access device, a bus, etc.
The processor 501 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The storage 502 may be an internal storage unit of the terminal device 500, such as a hard disk or a memory of the terminal device 500. The memory 502 may also be an external storage device of the terminal device 500, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 500. Further, the memory 502 may include both an internal storage unit and an external storage device of the terminal device 500. The memory 502 is used to store the computer programs and other programs and data required by the terminal device 500. The memory 502 described above may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the intelligent terminal may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the above-described modules or units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium and which implements the steps of the method embodiments when executed by a processor. The computer program includes computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, the computer-readable medium excludes electrical carrier signals and telecommunication signals in accordance with legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A beautifying method, comprising:
acquiring an image to be beautified, and detecting whether a skin color area exists in the image to be beautified;
if a skin color area exists in the image to be beautified, detecting whether a facial features image exists in the skin color area;
if the facial features image exists in the skin color area, determining the position of the facial features image in the skin color area, and acquiring the number Y of lips in the facial features image, wherein Y is not less than 1 and is an integer;
determining a face corresponding to the kth lip according to the kth lip and the facial features associated with the kth lip, wherein k is more than or equal to 1 and less than or equal to Y;
determining a first beautifying strength according to the texture features of the face corresponding to the kth lip;
calculating the curvature of the kth lip according to the characteristics of the kth lip;
if the curvature of the kth lip is smaller than a preset curvature, beautifying the face corresponding to the kth lip with the first beautifying strength;
and if the curvature of the kth lip is greater than or equal to the preset curvature, beautifying the face corresponding to the kth lip with a second beautifying strength, wherein the second beautifying strength is less than the first beautifying strength.
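By way of illustration only, the following Python sketch mirrors the branch of claim 1: a lip whose curvature stays below the preset curvature receives the full first beautifying strength, while a more curved (e.g. smiling) lip receives the weaker second strength. The preset threshold, the strength ratio and the toy mean-blend smoothing are assumptions made for this sketch and are not taken from the patent; the upstream skin-colour and lip detection is assumed to have already produced the per-lip records.

import numpy as np

PRESET_CURVATURE = 0.35      # assumed threshold (radians); the claim leaves the value open
SECOND_RATIO = 0.6           # assumed: second strength is a fixed fraction of the first

def choose_strength(first_strength, curvature):
    """Claim-1 branch: full first strength below the preset curvature,
    a weaker second strength at or above it."""
    if curvature < PRESET_CURVATURE:
        return first_strength
    return first_strength * SECOND_RATIO

def beautify_frame(image, lips):
    """image: HxWx3 uint8 array; lips: iterable of dicts with keys
    'face_mask' (HxW bool), 'first_strength' (0..1) and 'curvature' (radians),
    produced by upstream detection that is not shown here."""
    out = image.astype(np.float32)
    for lip in lips:
        mask = lip["face_mask"]
        if not mask.any():
            continue
        s = choose_strength(lip["first_strength"], lip["curvature"])
        # toy 'beautifying': blend the face region toward its mean colour,
        # scaled by the chosen strength (a stand-in for a real smoothing filter)
        face_mean = out[mask].mean(axis=0)
        out[mask] = (1.0 - s) * out[mask] + s * face_mean
    return np.clip(out, 0, 255).astype(np.uint8)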
2. The beautifying method according to claim 1, wherein the calculating the curvature of the kth lip according to the characteristics of the kth lip comprises:
dividing the kth lip into M × N image blocks, wherein M and N respectively represent the number of rows and the number of columns into which the kth lip is divided, M ≧ 1 and is an integer, and N ≧ 1 and is an integer;
acquiring a row number i and a column number j of each image block in the kth lip, wherein i is used for indicating the image block in the ith row of the kth lip, and j is used for indicating the image block in the jth column of the kth lip;
if the minimum column number among the image blocks in the minimum row of the kth lip is equal to the minimum column number among all the image blocks of the kth lip, determining that the face mode corresponding to the kth lip is a first side mode;
if the maximum column number among the image blocks in the minimum row of the kth lip is equal to the maximum column number among all the image blocks of the kth lip, determining that the face mode corresponding to the kth lip is a second side mode;
the formula for calculating the curvature of the kth lip according to the characteristics of the kth lip is as follows:
[Formula drawing FDA0002613013200000021 — the arctangent-based curvature expression is provided as a formula image and is not reproduced in the text]
wherein f(i) = i_min + (i_max - i_min)/2,
[Formula drawing FDA0002613013200000022 — provided as a formula image, not reproduced in the text]
curve represents the curvature of the kth lip, arctan() represents the arctangent function, i_min represents the row number corresponding to the image block of the smallest row in the kth lip, i_max represents the row number corresponding to the image block of the largest row in the kth lip, j_min represents the column number corresponding to the image block of the smallest column in the kth lip, j_max represents the column number corresponding to the image block of the largest column in the kth lip, lm represents the row number corresponding to the middle row of the smallest column in the kth lip, rm represents the row number corresponding to the middle row of the largest column in the kth lip, and the side modes include the first side mode and the second side mode.
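The exact curvature formula of claim 2 is supplied only as formula drawings (FDA0002613013200000021/0022) that the text does not reproduce. The sketch below therefore only assumes an arctangent of the vertical offset between the lip's middle row f(i) and its corner rows, normalised by the lip width; the block division and the quantities i_min, i_max, j_min, j_max, lm and rm follow the claim, while the way they are combined is an assumption.

import numpy as np

def lip_curvature(lip_mask, block=4):
    """lip_mask: HxW boolean mask of one lip; block: side length of an image block.
    Returns an assumed curvature measure in radians (0 = flat mouth line)."""
    if not lip_mask.any():
        return 0.0
    rows, cols = np.nonzero(lip_mask)
    bi, bj = rows // block, cols // block          # block-level row/column numbers (claim 2)
    i_min, i_max = bi.min(), bi.max()
    j_min, j_max = bj.min(), bj.max()
    f_i = i_min + (i_max - i_min) / 2.0            # middle row of the lip, as in claim 2
    lm = np.median(bi[bj == j_min])                # middle row of the smallest column
    rm = np.median(bi[bj == j_max])                # middle row of the largest column
    # for a frontal face both corners are visible; in a side mode (claim 2),
    # only the visible corner (lm or rm) would be used instead of their average
    corner = (lm + rm) / 2.0
    width = max(j_max - j_min, 1)
    return float(np.arctan(abs(f_i - corner) / width))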
3. The beautifying method according to claim 1 or 2, wherein, if the curvature of the kth lip is greater than or equal to the preset curvature, the beautifying the face corresponding to the kth lip with a second beautifying strength comprises:
if the curvature of the kth lip is greater than or equal to the preset curvature, calculating a difference value between the curvature of the kth lip and the preset curvature;
and acquiring a preset adjustment amplitude corresponding to the difference value, reducing the first beautifying strength to a second beautifying strength according to the preset adjustment amplitude, and beautifying the face corresponding to the kth lip with the second beautifying strength.
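Claim 3 maps the excess curvature to a preset adjustment amplitude; the patent does not give the mapping, so the lookup table in the following sketch is invented purely for illustration.

def second_strength(first_strength, curvature, preset_curvature):
    """Reduce the first beautifying strength by an amplitude looked up from the
    difference between the measured and the preset curvature (claim 3)."""
    adjustment_table = [   # (upper bound of the difference, reduction amplitude) - assumed values
        (0.10, 0.10),
        (0.25, 0.25),
        (float("inf"), 0.40),
    ]
    diff = curvature - preset_curvature
    for upper, amplitude in adjustment_table:
        if diff <= upper:
            return first_strength * (1.0 - amplitude)
    return first_strength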
4. The beautifying method according to claim 1 or 2, wherein, if the scene processed by the beautifying method includes a plurality of frames of images, the beautifying method further comprises:
when the faces corresponding to the Y lips in the image to be beautified have been beautified, detecting whether a next frame of image exists;
and if the next frame of image exists, detecting the scene similarity between the next frame of image and the image to be beautified.
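Claims 4 to 6 do not specify how the scene similarity between two frames is measured. As a stand-in only, the sketch below compares normalised grey-level histograms, returning 1.0 for identical distributions and values near 0 for very different scenes.

import numpy as np

def scene_similarity(frame_a, frame_b, bins=32):
    """Histogram-based similarity of two HxWx3 (or HxW) uint8 frames; an assumed
    metric, not the one used by the patent."""
    def norm_hist(img):
        grey = img.mean(axis=2) if img.ndim == 3 else img
        hist, _ = np.histogram(grey, bins=bins, range=(0, 255))
        return hist / max(hist.sum(), 1)
    ha, hb = norm_hist(frame_a), norm_hist(frame_b)
    return float(1.0 - 0.5 * np.abs(ha - hb).sum())   # 1 minus half the L1 distance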
5. The beautifying method according to claim 4, wherein, after the detecting the scene similarity between the next frame of image and the image to be beautified if the next frame of image exists, the beautifying method further comprises:
if the scene similarity is smaller than or equal to a preset scene similarity, determining that the next frame of image is a scene switching image, and setting the next frame of image as the image to be beautified;
and returning to the step of acquiring an image to be beautified and detecting whether a skin color area exists in the image to be beautified, and the subsequent steps.
6. The beautifying method according to claim 4, wherein, after the detecting the scene similarity between the next frame of image and the image to be beautified if the next frame of image exists, the beautifying method further comprises:
if the scene similarity is greater than the preset scene similarity, determining that the next frame of image is not a scene switching image;
beautifying, in the next frame of image, the faces that also appear in the image to be beautified according to the beautifying strengths of those faces in the image to be beautified, meanwhile acquiring the faces in the next frame of image that are different from the faces in the image to be beautified, and determining the lip positions corresponding to the different faces;
and setting the lips corresponding to the different faces as the kth lip, and returning to the step of determining a first beautifying strength according to the texture features of the face corresponding to the kth lip, and the subsequent steps.
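Taken together, claims 5 and 6 describe two branches for a new frame: restart the whole pipeline on a scene switch, or reuse the strengths already chosen for known faces and analyse only the lips of faces that are new. The sketch below wires those branches together; the callbacks and the threshold are hypothetical, and the earlier sketches would supply the similarity measure and the beautifying step.

PRESET_SCENE_SIMILARITY = 0.8     # assumed threshold; the claims leave the value open

def handle_next_frame(similarity, next_frame, prev_lips, detect_new_lips, beautify_frame):
    """similarity: scene similarity between next_frame and the frame already beautified;
    prev_lips: lip records (with their chosen strengths) of that previous frame;
    detect_new_lips(next_frame, prev_lips): hypothetical callback returning records
    only for faces not present in prev_lips; beautify_frame: per-face beautifier."""
    if similarity <= PRESET_SCENE_SIMILARITY:
        # claim 5: scene switch - next_frame becomes the new image to be beautified
        # and the pipeline restarts from skin-colour detection
        return "scene_switch", next_frame
    # claim 6: same scene - reuse known strengths, analyse only the new faces' lips
    lips = list(prev_lips) + list(detect_new_lips(next_frame, prev_lips))
    return "same_scene", beautify_frame(next_frame, lips)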
7. A beautifying device, characterized in that it comprises:
the device comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring an image to be beautified and detecting whether a skin color area exists in the image to be beautified;
the first detection module is used for detecting whether a facial features image exists in the skin color area if the skin color area exists in the image to be beautified;
the first determining module is used for determining the position of the facial features image in the skin color area and acquiring the number Y of lips in the facial features image if the facial features image exists in the skin color area, wherein Y is not less than 1 and is an integer;
the second determining module is used for determining the face corresponding to the kth lip according to the kth lip and the facial features associated with the kth lip, wherein k is more than or equal to 1 and less than or equal to Y;
the first calculation module is used for determining a first beautifying strength according to the texture features of the face corresponding to the kth lip;
the second calculation module is used for calculating the curvature of the kth lip according to the characteristics of the kth lip;
the first beautifying module is used for beautifying the face corresponding to the kth lip with the first beautifying strength if the curvature of the kth lip is smaller than a preset curvature;
and the second beautifying module is used for beautifying the face corresponding to the kth lip with a second beautifying strength if the curvature of the kth lip is greater than or equal to the preset curvature, wherein the second beautifying strength is less than the first beautifying strength.
8. The beautifying device according to claim 7, wherein the second calculation module specifically comprises:
a first dividing unit, configured to divide the kth lip into M × N image blocks, wherein M and N respectively represent the number of rows and the number of columns into which the kth lip is divided, M ≧ 1 and is an integer, and N ≧ 1 and is an integer;
a first obtaining unit, configured to obtain a row number i and a column number j of each image block in the kth lip, wherein i is used for indicating the image block in the ith row of the kth lip, and j is used for indicating the image block in the jth column of the kth lip;
a first determining unit, configured to determine that the face mode corresponding to the kth lip is a first side mode if the minimum column number among the image blocks in the minimum row of the kth lip is equal to the minimum column number among all the image blocks of the kth lip;
a second determining unit, configured to determine that the face mode corresponding to the kth lip is a second side mode if the maximum column number among the image blocks in the minimum row of the kth lip is equal to the maximum column number among all the image blocks of the kth lip;
a first calculating unit, configured to calculate the curvature of the kth lip according to the characteristics of the kth lip, wherein the calculation formula is:
[Formula drawing FDA0002613013200000051 — the arctangent-based curvature expression is provided as a formula image and is not reproduced in the text]
wherein f(i) = i_min + (i_max - i_min)/2,
[Formula drawing FDA0002613013200000052 — provided as a formula image, not reproduced in the text]
curve represents the curvature of the kth lip, arctan() represents the arctangent function, i_min represents the row number corresponding to the image block of the smallest row in the kth lip, i_max represents the row number corresponding to the image block of the largest row in the kth lip, j_min represents the column number corresponding to the image block of the smallest column in the kth lip, j_max represents the column number corresponding to the image block of the largest column in the kth lip, lm represents the row number corresponding to the middle row of the smallest column in the kth lip, rm represents the row number corresponding to the middle row of the largest column in the kth lip, and the side modes include the first side mode and the second side mode.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN201810487620.0A 2018-05-21 2018-05-21 Beautifying method, beautifying device and terminal equipment Active CN108734126B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810487620.0A CN108734126B (en) 2018-05-21 2018-05-21 Beautifying method, beautifying device and terminal equipment

Publications (2)

Publication Number Publication Date
CN108734126A CN108734126A (en) 2018-11-02
CN108734126B true CN108734126B (en) 2020-11-13

Family

ID=63937691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810487620.0A Active CN108734126B (en) 2018-05-21 2018-05-21 Beautifying method, beautifying device and terminal equipment

Country Status (1)

Country Link
CN (1) CN108734126B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109274983A (en) * 2018-12-06 2019-01-25 广州酷狗计算机科技有限公司 The method and apparatus being broadcast live
CN111507142A (en) * 2019-01-31 2020-08-07 北京字节跳动网络技术有限公司 Facial expression image processing method and device and electronic equipment
CN110097622B (en) * 2019-04-23 2022-02-25 北京字节跳动网络技术有限公司 Method and device for rendering image, electronic equipment and computer readable storage medium
CN110992283A (en) * 2019-11-29 2020-04-10 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, electronic device, and readable storage medium
CN111861875A (en) * 2020-07-30 2020-10-30 北京金山云网络技术有限公司 Face beautifying method, device, equipment and medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009064423A (en) * 2007-08-10 2009-03-26 Shiseido Co Ltd Makeup simulation system, makeup simulation device, makeup simulation method, and makeup simulation program
CN103605975B (en) * 2013-11-28 2018-10-19 小米科技有限责任公司 A kind of method, apparatus and terminal device of image procossing
CN104966267B (en) * 2015-07-02 2018-01-19 广东欧珀移动通信有限公司 A kind of method and device of U.S. face user images
CN106331509B (en) * 2016-10-31 2019-08-20 维沃移动通信有限公司 A kind of photographic method and mobile terminal
CN106920211A (en) * 2017-03-09 2017-07-04 广州四三九九信息科技有限公司 U.S. face processing method, device and terminal device
CN107454267A (en) * 2017-08-31 2017-12-08 维沃移动通信有限公司 The processing method and mobile terminal of a kind of image
CN107995415A (en) * 2017-11-09 2018-05-04 深圳市金立通信设备有限公司 A kind of image processing method, terminal and computer-readable medium

Also Published As

Publication number Publication date
CN108734126A (en) 2018-11-02

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant