CN114663290A - Face image processing method and device, electronic equipment and storage medium - Google Patents
Face image processing method and device, electronic equipment and storage medium
- Publication number
- CN114663290A (application CN202011401695.6A)
- Authority
- CN
- China
- Prior art keywords
- face image
- target face
- template material
- weight value
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
The disclosure provides a face image processing method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a target face image and extracting key point data of the target face image from it; acquiring a template material and drawing the template material into a blank texture in a preset manner, so that the template material corresponds to a region to be processed of the target face image; and performing weighting processing on at least two color channels of the region to be processed based on the template material. According to the face image processing method provided by the embodiments of the disclosure, the region to be processed of the target face image can be accurately determined, and at least two color channels of that region are weighted based on the template material, so the final weighted target face image has a better beautifying effect and the user experience is improved.
Description
Technical Field
The present invention relates to the field of image processing, and in particular, to a method and an apparatus for processing a face image, an electronic device, and a storage medium.
Background
With the development of image processing technology, more and more users process face images through beauty software.
Among these beautifying functions, some users apply a beautifying technique to make the makeup of a selected target area of the face more attractive.
The existing beautifying technology has the following defect:

because errors may occur when the position of a selected target area is determined on the user's face image, the determined position of the target area is inaccurate; as a result, the beautifying effect obtained by processing the user's face image is poor, which degrades the user experience.
Disclosure of Invention
Based on this, it is necessary to provide a face image processing method and apparatus, an electronic device, and a storage medium, to solve the problem of a poor beautifying effect caused by the inaccurate position of the selected target area in the existing lying-silkworm (the under-eye "aegyo-sal" area) beautifying technology.
In a first aspect, an embodiment of the present application provides a method for processing a face image, where the method includes:
acquiring a target face image, and extracting key point data of the target face image from the target face image;
acquiring a template material, and drawing the template material into blank textures in a preset mode to enable the template material to correspond to a region to be processed of a target face image;
and performing weighting processing on at least two color channels of the region to be processed of the target face image based on the template material.
In one embodiment, the drawing the template material to a blank texture in a preset manner includes:
determining a fusion area of the target face image in the blank texture based on the key point data of the target face image;
acquiring texture data of the template material;
and drawing the template material to a fusion area in the blank texture according to an index mode and texture data of the template material.
In one embodiment, the weighting, based on the template material, at least two color channels of the region to be processed of the target face image includes:
at least two color channels of the region to be processed of the target face image are weighted based on the template material in the first channel, and
and in a second channel, performing weighting processing on at least two color channels of the region to be processed of the target face image based on the template material.
In one embodiment, the first channel is a blue channel, and the weighting, based on the template material, at least two color channels of the region to be processed of the target face image includes:
determining a first brightening weight value for brightening the target area within the blue channel;
according to the first brightening weight value and the pixel value corresponding to each key point of the target face image, weighting processing is carried out on the region to be processed of the target face image based on the template material in the first channel, so that the region to be processed of the target face image is brightened in the blue channel through the first brightening weight value.
In one embodiment, the determining the first brightening weight value includes:
acquiring a first preset input weight value, a second preset input weight value, blue extracted from the template material and a first weighting coefficient for weighting in the blue channel, wherein the sum of the first preset input weight value and the second preset input weight value is 1;
and performing weighting processing according to the first preset input weight value, the second preset input weight value, the blue extracted from the template material and the first weighting coefficient for weighting processing in the blue channel, and determining the first brightening weight value.
In one embodiment, the second channel is a green channel, and the weighting, in the second channel, of at least two color channels of the region to be processed of the target face image based on the template material includes:

determining a second brightening weight value for brightening the target region within the green channel;

according to the second brightening weight value and the pixel value corresponding to each key point of the target face image, performing weighting processing on the region to be processed of the target face image based on the template material in the second channel, so that the region to be processed of the target face image is brightened in the green channel through the second brightening weight value.
In one embodiment, the determining the second brightening weight value includes:
acquiring the first preset input weight value, the second preset input weight value, green extracted from the template material and a second weighting coefficient for weighting in the green channel, wherein the sum of the first preset input weight value and the second preset input weight value is 1;
and performing weighting processing according to the first preset input weight value, the second preset input weight value, the green color extracted from the template material and the second weighting coefficient for performing weighting processing in the green channel to determine a second brightening weight value.
In one embodiment, the target area is the lying-silkworm region under the eye.
In a second aspect, an embodiment of the present application provides an apparatus for processing a face image, where the apparatus includes: the device comprises an acquisition unit, an extraction unit, a drawing unit and a weighting processing unit;
the target face image is obtained, and a template material is obtained;
the extracting unit is used for extracting the key point data of the target face image from the target face image acquired by the acquiring unit;
the drawing unit is used for drawing the template material acquired by the acquisition unit into blank textures in a preset mode, so that the template material corresponds to a region to be processed of the target face image;
the weighting processing unit is used for weighting at least two color channels of the region to be processed of the target face image based on the template material acquired by the acquisition unit.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the method steps described above.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, the program being executed by a processor to implement the method steps as described above.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
in the embodiment of the application, a target face image is obtained, and key point data of the target face image is extracted from the target face image; acquiring a template material, and drawing the template material into blank textures in a preset mode to enable the template material to correspond to a region to be processed of a target face image; and weighting at least two color channels of the region to be processed of the target face image based on the template material. According to the method for processing the face image provided by the embodiment of the disclosure, the region to be processed of the target face image can be accurately determined, and the weighting processing is performed on at least two color channels of the region to be processed of the target face image based on the template material, so that the finally obtained weighted target face image has a better beautifying effect, and the user experience is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is an application scene diagram of a method for processing a face image according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of a method for processing a face image according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a target face image identified by 106 key points in an application scenario according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a lying-silkworm image used for beautifying a target face image in an application scenario according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a standard face image in an application scenario according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a processed standard face image obtained after standard mesh segmentation processing in an application scenario according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a face image processing apparatus according to an embodiment of the present disclosure;
fig. 8 shows an electronic device connection structure schematic according to an embodiment of the present disclosure.
Detailed Description
The following description and the drawings sufficiently illustrate specific embodiments of the invention to enable those skilled in the art to practice them.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Alternative embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
As shown in fig. 1, which is an application scenario diagram according to an embodiment of the present disclosure, multiple users operate clients installed on terminal devices such as mobile phones, and the clients perform data communication with a background server through a network. One specific application scenario is the processing of a face image, namely lying-silkworm beautifying of the face image; however, this is not the only application scenario, and any scenario to which the present embodiment can be applied is included.
As shown in fig. 2, an embodiment of the present disclosure provides a method for processing a face image, which is applied to a client, and specifically includes the following steps:
s202: and acquiring a target face image, and extracting key point data of the target face image from the target face image.
Fig. 3 is a schematic diagram of a target face image identified by 106 target face key points in an application scenario according to an embodiment of the present disclosure; as shown in fig. 3, 106 key points in the target face image are shown, each having a corresponding numerical identifier. For example, the numeric identifiers 52, 57, 73, 56, and 55 represent target face key points around the lower corner of the eye on one side in the target face image, respectively, and correspondingly, the numeric identifiers 58, 63, 76, 62, and 61 represent target face key points around the lower corner of the eye on the other side in the target face image, respectively.
Because standard face key points and target face key points with the same identifier have a one-to-one index relationship, the target face key points associated with the lower eye corner can be indexed from the standard face key points associated with the lower eye corner according to this relationship.
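The index relationship described above can be sketched as follows. This is an illustrative sketch, not code from the patent; the function and variable names are assumptions, and the coordinates are invented for the example.

```python
# Sketch of the one-to-one index relationship: standard and target face key
# points that share the same numeric identifier index one another.
def index_target_keypoints(standard_ids, target_keypoints):
    """standard_ids: identifiers of standard key points (e.g. around the lower
    eye corner); target_keypoints: {identifier: (x, y)} from the target image."""
    return {i: target_keypoints[i] for i in standard_ids if i in target_keypoints}

# The lower-eye-corner identifiers named in the text (coordinates invented).
target = {52: (120, 210), 57: (130, 214), 73: (140, 216), 56: (150, 214), 55: (160, 210)}
lower_eye = index_target_keypoints([52, 57, 73, 56, 55], target)
```

Indexing by identifier rather than by position is what makes the lookup robust to differences between the standard and target face geometry.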
S204: and acquiring a template material, and drawing the template material into blank textures in a preset mode so that the template material corresponds to a region to be processed of the target face image.
In the embodiment of the application, the step of drawing the template material to the blank texture in a preset mode comprises the following steps:
determining a fusion area of the target face image in the blank texture based on the key point data of the target face image;
acquiring texture data of a template material;
and drawing the template material to a fusion area in the blank texture according to the indexing mode and the texture data of the template material.
In the embodiment of the present application, the indexing manner is specifically: according to the one-to-one index relationship between each data point of the template material and each data point of the blank texture (points with the same label index one another), the template material is drawn to the fusion area in the blank texture according to the texture data of the template material, so that the fusion area carries the texture data of the template material and a color fusion effect is achieved.
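The drawing step can be sketched as below. This is an illustrative sketch under assumed data structures (textures as coordinate-to-RGBA dictionaries); the names are not from the patent.

```python
# Draw the template material into the fusion area of a blank texture through a
# one-to-one index, where points sharing the same label correspond.
def draw_to_fusion_area(blank, template, fusion_points, template_points):
    """blank, template: dicts {(row, col): rgba}; fusion_points and
    template_points are equal-length lists linked position-by-position."""
    out = dict(blank)
    for fp, tp in zip(fusion_points, template_points):
        out[fp] = template[tp]  # the fusion area takes on the template's texture data
    return out

blank = {(r, c): (0, 0, 0, 0) for r in range(4) for c in range(4)}
template = {(0, 0): (200, 180, 170, 255), (0, 1): (210, 190, 180, 255)}
result = draw_to_fusion_area(blank, template, [(1, 1), (1, 2)], [(0, 0), (0, 1)])
```

Only the indexed fusion points receive template data; the rest of the blank texture is left untouched.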
In the embodiment of the present application, the template material may be a lying-silkworm image.

Fig. 4 is a schematic diagram of a lying-silkworm image used for beautifying a target face image in an application scenario according to an embodiment of the present disclosure.

The lying-silkworm image shown in fig. 4 has a raised, light-and-shade beautifying effect, so the processed target face image obtained by fusing the lying-silkworm image with the target face image is more natural, has a better beautifying effect, and improves the user experience.

In the embodiment of the present application, a lying-silkworm image matching a user's preference may be selected from an image library containing various lying-silkworm images; the lying-silkworm image shown in fig. 4 is merely an example.
In the embodiment of the present application, the target face image is preferably in RGBA format.
Fig. 5 is a schematic diagram of a standard face image in an application scenario according to an embodiment of the present disclosure. In the embodiment of the present application, the preset number of standard face key points of the standard face image may be set to 106; in different application scenarios, different preset numbers of standard face key points can be configured. In fig. 5, the image is simplified and the preset number of standard face key points are not marked; this is merely an example. The standard face image is preferably in RGBA format.
Fig. 6 is a schematic diagram of a processed standard face image obtained after standard mesh segmentation in an application scenario according to an embodiment of the present disclosure. Through the standard meshes shown in fig. 6, each facial feature in the standard face image can be precisely located.
Acquire a first data set of a preset number of standard face key points, where the first data set includes the identifier, abscissa, and ordinate of each standard face key point; acquire a second data set of a preset number of target face key points, where the second data set includes the identifier, abscissa, and ordinate of each target face key point, and standard face key points and target face key points with the same identifier have a one-to-one index relationship;

perform mesh segmentation on the standard face image of a standard object according to the data in the first data set to obtain divided meshes, and paste each divided mesh onto a blank picture;

attach the lying-silkworm image to the blank picture carrying the divided meshes to obtain a processed lying-silkworm image with standardized meshes, i.e., the lying-silkworm image subjected to standard mesh segmentation.
S206: and weighting at least two color channels of the region to be processed of the target face image based on the template material.
In the embodiment of the present application, the output fused target face image is assigned a weight value, and the user can manually adjust this weight value according to personal preference, replacing the previous value.
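The adjustable weight can be sketched as follows. The clamping range [0, 1] is an assumption for illustration; the patent does not state the valid range.

```python
# Replace the previous weight value with the user's manually chosen one,
# clamped to [0, 1] (the clamping range is an assumption, not from the patent).
def replace_weight(previous_weight, user_weight):
    return min(max(float(user_weight), 0.0), 1.0)

w = replace_weight(0.5, 0.8)  # the user's new preference replaces the old 0.5
```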
In the embodiment of the present application, the target area may be the lying-silkworm region. For a description of the lying silkworm, refer to the corresponding parts above, which are not repeated here.
In one possible implementation manner, the weighting processing of at least two color channels of the region to be processed of the target face image based on the template material includes the following steps:
at least two color channels of the region to be processed of the target face image are weighted based on the template material in the first channel, and at least two color channels of the region to be processed of the target face image are weighted based on the template material in the second channel.
In the embodiment of the application, the first channel is a blue channel, and the weighting processing of at least two color channels of the to-be-processed region of the target face image based on the template material comprises the following steps:
determining a first brightening weight value for brightening a target area in a blue channel;
according to the first brightening weight value and the pixel value corresponding to each key point of the target face image, weighting processing is carried out on the region to be processed of the target face image based on the template material in the first channel, so that the region to be processed of the target face image is brightened in the blue channel through the first brightening weight value.
In one possible implementation, determining the first brightening weight value includes:
acquiring a first preset input weight value, a second preset input weight value, blue extracted from the template material and a first weighting coefficient for weighting in a blue channel, wherein the sum of the first preset input weight value and the second preset input weight value is 1;
and performing weighting processing according to the first preset input weight value, the second preset input weight value, the blue extracted from the template material and a first weighting coefficient for performing weighting processing in the blue channel to determine a first brightening weight value.
In this embodiment of the application, when the first channel is a blue channel, the weighting formula used for weighting the region to be processed of the target face image based on the template material may be:

color2 = color1*n + color1*m*B*Alp1;

m = 1 - n;

Alp1 = 1.6;

where color1 is the pixel value corresponding to any key point selected from the target face image, color2 is the pixel value of that key point after brightening in the blue channel, n is the first preset input weight value, m is the second preset input weight value, B is the blue component extracted from the template material, and Alp1 is the first weighting coefficient for weighting in the blue channel. The above is merely an example; a certain number of key points may be selected in the region to be processed and traversed in sequence using the above formula, which is not repeated here.
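The blue-channel formula above can be written as runnable code. This is a sketch of the formula only; the value ranges (pixel value as a float, B normalized to [0, 1]) are assumptions.

```python
# Blue-channel brightening: color2 = color1*n + color1*m*B*Alp1, with m = 1 - n
# and Alp1 = 1.6, as given in the text.
def brighten_blue(color1, n, blue, alp1=1.6):
    m = 1.0 - n
    return color1 * n + color1 * m * blue * alp1

# One key point: pixel value 100, input weight n = 0.5, template blue B = 0.5.
color2 = brighten_blue(100.0, 0.5, 0.5)  # 100*0.5 + 100*0.5*0.5*1.6 = 90.0
```

With n = 1 the second term vanishes and the pixel is left unchanged, so n controls how strongly the template's blue component brightens the key point.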
In this embodiment of the present application, the second channel is a green channel, and in the second channel, performing weighting processing on at least two color channels of the region to be processed of the target face image based on the template material includes the following steps:
determining a second brightening weight value for brightening the target area in the green channel;
and according to the second brightening weight value and the pixel value corresponding to each key point of the target face image, weighting the region to be processed of the target face image based on the template material in the second channel, so that the region to be processed of the target face image is brightened in the green channel through the second brightening weight value.
In one possible implementation, determining the second brightening weight value includes:
acquiring a first preset input weight value, a second preset input weight value, green extracted from the template material and a second weighting coefficient for weighting in a green channel, wherein the sum of the first preset input weight value and the second preset input weight value is 1;
and performing weighting processing according to the first preset input weight value, the second preset input weight value, the green color extracted from the template material and a second weighting coefficient for performing weighting processing in the green channel to determine a second brightening weight value.
In this embodiment of the application, when the second channel is a green channel, the weighting formula used for weighting the region to be processed of the target face image based on the template material may be:

color3 = color1*n + color1*m*G*Alp2;

m = 1 - n;

Alp2 = 1.3;

where color1 is the pixel value corresponding to any key point selected from the target face image, color3 is the pixel value of that key point after brightening in the green channel, n is the first preset input weight value, m is the second preset input weight value, G is the green component extracted from the template material, and Alp2 is the second weighting coefficient for weighting in the green channel. The above is merely an example; a certain number of key points may be selected in the region to be processed and traversed in sequence using the above formula, which is not repeated here.
It should be noted that, besides the two listed channels, a third channel, for example, a red channel, may be further provided, where the formula is similar to the foregoing formula, and the difference is that the weighting coefficient selected for performing weighting processing in the red channel is different, for example, the weighting coefficient for performing weighting processing in the red channel may be set to 1.2, which is not described herein again.
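The three per-channel formulas share one shape and differ only in the coefficient, which the following sketch makes explicit. The coefficients 1.6 (blue) and 1.3 (green) come from the formulas above; 1.2 for red is the text's own example value; the normalized component ranges are an assumption.

```python
# Per-channel coefficients: 1.6 and 1.3 from the text's formulas; 1.2 is the
# text's example value for a red channel.
COEFFS = {"blue": 1.6, "green": 1.3, "red": 1.2}

def weight_channel(color1, n, component, coeff):
    """Generic form: color_out = color1*n + color1*(1 - n)*component*coeff."""
    return color1 * n + color1 * (1.0 - n) * component * coeff

def weight_all_channels(color1, n, rgb):
    r, g, b = rgb  # components extracted from the template material, in [0, 1]
    return {"blue": weight_channel(color1, n, b, COEFFS["blue"]),
            "green": weight_channel(color1, n, g, COEFFS["green"]),
            "red": weight_channel(color1, n, r, COEFFS["red"])}

out = weight_all_channels(100.0, 0.5, (1.0, 1.0, 1.0))
```

Because the coefficient is the only per-channel difference, adding a further channel only requires one more entry in the coefficient table.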
In the embodiment of the disclosure, a target face image is obtained, and key point data of the target face image is extracted from the target face image; acquiring a template material, and drawing the template material into blank textures in a preset mode to enable the template material to correspond to a region to be processed of a target face image; and weighting at least two color channels of the region to be processed of the target face image based on the template material. According to the method for processing the face image, the region to be processed of the target face image can be accurately determined, and the weighting processing is performed on at least two color channels of the region to be processed of the target face image based on the template material, so that the finally obtained weighted target face image has a good beautifying effect, and the user experience is improved; in addition, according to the processing method of the face image provided by the embodiment of the disclosure, each frame of the target face image does not need to be processed in real time, and only the pixel values of each key point of the region to be processed of the target face image need to be subjected to pixel correction processing, so that the processing efficiency of the face image is improved.
The following is an embodiment of a processing apparatus for a face image according to the embodiment of the present disclosure, which can be used to execute the embodiment of the processing method for a face image according to the embodiment of the present disclosure. For details that are not disclosed in the embodiment of the processing apparatus for a face image in the embodiment of the present disclosure, please refer to the embodiment of the processing method for a face image in the embodiment of the present disclosure.
Referring to fig. 7, which shows a schematic structural diagram of a face image processing apparatus according to an exemplary embodiment of the present invention. The face image processing apparatus can be implemented as all or part of a terminal through software, hardware, or a combination of the two. The face image processing apparatus includes an acquisition unit 702, an extraction unit 704, a drawing unit 706, and a weighting processing unit 708;
specifically, the obtaining unit 702 is configured to obtain a target face image and obtain a template material;
an extracting unit 704, configured to extract the key point data of the target face image from the target face image acquired by the acquiring unit 702;
the drawing unit 706 is configured to draw the template material acquired by the acquiring unit 702 into a blank texture in a preset manner, so that the template material corresponds to a region to be processed of the target face image;
a weighting processing unit 708, configured to perform weighting processing on at least two color channels of the region to be processed of the target face image based on the template material acquired by the acquisition unit 702.
Optionally, the apparatus further comprises:
a determining unit (not shown in fig. 7) configured to determine a fusion region of the target face image within the blank texture based on the key point data of the target face image extracted by the extracting unit;
the obtaining unit 702 is further configured to obtain texture data of the template material;
the drawing unit 706 is specifically configured to: and drawing the template material to a fusion area in the blank texture according to the indexing mode and the texture data of the template material.
Optionally, the obtaining unit 702 is further configured to:
acquiring texture data of a template material;
the drawing unit 706 is configured to: draw the template material obtained by the obtaining unit 702 into the blank texture in an indexed manner, wherein the texture data of the template material is used to index the key point data of the target face image.
Optionally, the weighting processing unit 708 is configured to:
at least two color channels of the region to be processed of the target face image are weighted based on the template material in the first channel, and
and in the second channel, weighting at least two color channels of the region to be processed of the target face image based on the template material.
Optionally, the first channel is a blue channel, and the apparatus further includes:
the determining unit is further used for determining a first brightening weight value for brightening the target area in the blue channel;
the weighting processing unit 708 is further configured to: according to the first brightening weight value determined by the determining unit and the pixel value corresponding to each key point of the target face image, weighting processing is carried out on the to-be-processed area of the target face image based on the template material in the first channel, so that the to-be-processed area of the target face image is brightened in the blue channel through the first brightening weight value.
Optionally, the obtaining unit 702 is further configured to:
acquiring a first preset input weight value, a second preset input weight value, blue extracted from the template material and a first weighting coefficient for weighting in a blue channel, wherein the sum of the first preset input weight value and the second preset input weight value is 1;
the determining unit is specifically configured to: perform weighting processing according to the first preset input weight value, the second preset input weight value, the blue color extracted from the template material, and the first weighting coefficient for weighting processing in the blue channel, which are obtained by the obtaining unit 702, to determine the first brightening weight value.
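The patent lists the inputs to the first brightening weight value but does not spell out the exact formula. One plausible reading, shown here purely as an assumption, is a convex blend (the two preset input weights sum to 1) of full brightness and the blue component extracted from the template, scaled by the blue channel's weighting coefficient:

```python
def brighten_weight(w1, w2, template_channel, coeff):
    """Hypothetical formula: convex blend (w1 + w2 = 1) of full brightness
    and the channel value extracted from the template material, scaled by
    the channel's weighting coefficient."""
    assert abs(w1 + w2 - 1.0) < 1e-9  # the claims require w1 + w2 == 1
    return coeff * (w1 * 1.0 + w2 * template_channel)

# e.g. template blue of 0.5, equal preset weights, coefficient 0.8:
w = brighten_weight(0.5, 0.5, 0.5, 0.8)
print(round(w, 3))  # 0.6
```

The same function would serve for the second (green-channel) weight with the green component and the second weighting coefficient substituted.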
Optionally, the second channel is a green channel, and the determining unit is further configured to: determine a second brightening weight value for brightening the target region in the green channel;
the weighting processing unit 708 is further configured to: according to the second brightening weight value determined by the determining unit and the pixel value corresponding to each key point of the target face image, perform weighting processing on the region to be processed of the target face image based on the template material in the second channel, so that the region to be processed of the target face image is brightened in the green channel by the second brightening weight value.
Optionally, the obtaining unit 702 is further configured to: acquiring a first preset input weight value, a second preset input weight value, green extracted from the template material and a second weighting coefficient for weighting in a green channel, wherein the sum of the first preset input weight value and the second preset input weight value is 1;
the determining unit is further configured to: perform weighting processing according to the first preset input weight value, the second preset input weight value, the green color extracted from the template material, and the second weighting coefficient for weighting processing in the green channel, which are obtained by the obtaining unit 702, to determine the second brightening weight value.
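The green channel is handled symmetrically to the blue channel. The sketch below, again an assumption rather than the patent's actual formula, applies a blue and a green brightening weight to an RGB region; the blend `pixel + w * (1 - pixel)` is one common way to brighten a value while keeping it within [0, 1]:

```python
import numpy as np

def brighten_region(region, w_blue, w_green):
    """region: HxWx3 float RGB in [0, 1]; channels 2 and 1 are blue/green.
    Hypothetical brightening: move each channel toward 1.0 by its weight."""
    out = region.copy()
    out[..., 2] += w_blue * (1.0 - out[..., 2])   # brighten blue channel
    out[..., 1] += w_green * (1.0 - out[..., 1])  # brighten green channel
    return np.clip(out, 0.0, 1.0)

region = np.full((2, 2, 3), 0.5)  # mid-gray region to be processed
out = brighten_region(region, 0.6, 0.4)
print(out[0, 0].round(2).tolist())  # [0.5, 0.7, 0.8]
```

Brightening blue and green while leaving red unchanged shifts the region toward a lighter, cooler tone, which is consistent with the under-eye beautification effect the disclosure describes.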
Optionally, the target region is the region of the lying silkworm (the raised area just below the lower eyelid).
It should be noted that, when the processing apparatus for a face image provided in the foregoing embodiment executes the processing method for a face image, the division of the above functional units is merely used as an example, and in practical applications, the above functions may be distributed to different functional units according to needs, that is, the internal structure of the device may be divided into different functional units to complete all or part of the functions described above. In addition, the embodiment of the processing apparatus for a face image and the embodiment of the processing method for a face image provided in the foregoing embodiments belong to the same concept, and the implementation process is detailed in the embodiment of the processing method for a face image, and is not described herein again.
In the embodiment of the disclosure, the acquisition unit is used for acquiring a target face image and acquiring a template material; the extraction unit is used for extracting key point data of the target face image from the target face image acquired by the acquisition unit; the drawing unit is used for drawing the template material acquired by the acquisition unit into blank textures in a preset mode, so that the template material corresponds to a region to be processed of the target face image; and the weighting processing unit is used for weighting at least two color channels of the region to be processed of the target face image based on the template material acquired by the acquisition unit. The processing device for the face image provided by the embodiment of the disclosure can accurately determine the region to be processed of the target face image, and perform weighting processing on at least two color channels of the region to be processed of the target face image based on the template material, so that the finally obtained weighted target face image has a better beautifying effect, and the user experience is improved.
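Putting the four units together, the overall flow can be sketched as a small pipeline; every callable here is an illustrative placeholder standing in for the corresponding unit, not the patent's implementation:

```python
class FaceImagePipeline:
    """Mirrors the four units: obtain inputs → extract key points →
    draw template into a texture → weight color channels."""
    def __init__(self, extract_keypoints, draw, weight):
        self.extract_keypoints = extract_keypoints  # extraction unit
        self.draw = draw                            # drawing unit
        self.weight = weight                        # weighting processing unit

    def process(self, face_image, template):
        keypoints = self.extract_keypoints(face_image)
        texture = self.draw(template, keypoints)
        return self.weight(face_image, texture, keypoints)

# Toy stand-ins so the pipeline is runnable end to end:
pipeline = FaceImagePipeline(
    extract_keypoints=lambda img: [(1, 1)],
    draw=lambda tpl, kps: tpl,
    weight=lambda img, tex, kps: [p + t for p, t in zip(img, tex)],
)
result = pipeline.process([1, 2], [3, 4])
print(result)  # [4, 6]
```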
As shown in fig. 8, the present embodiment provides an electronic device, which includes a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement the method steps as described above.
Embodiments of the present disclosure provide a storage medium having stored thereon a computer program of instructions for implementing the method steps as described above.
Referring now to FIG. 8, shown is a block diagram of an electronic device suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 8, an electronic device may include a processing apparatus (e.g., a central processing unit, a graphics processor, etc.) 801 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage apparatus 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data necessary for the operation of the electronic device. The processing apparatus 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage 808 including, for example, magnetic tape, hard disk, etc.; and a communication device 809. The communication means 809 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 8 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 809, or installed from the storage means 808, or installed from the ROM 802. The computer program, when executed by the processing apparatus 801, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote-computer case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself.
Claims (11)
1. A method for processing a face image, the method comprising: acquiring a target face image, and extracting key point data of the target face image from the target face image;
acquiring a template material, and drawing the template material into blank textures in a preset mode to enable the template material to correspond to a region to be processed of a target face image;
and performing weighting processing on at least two color channels of the region to be processed of the target face image based on the template material.
2. The method of claim 1, wherein the drawing the template material to a blank texture in a preset manner comprises:
determining a fusion area of the target face image in the blank texture based on the key point data of the target face image;
acquiring texture data of the template material;
and drawing the template material to a fusion area in the blank texture according to an index mode and texture data of the template material.
3. The method of claim 1, wherein weighting at least two color channels of the region of the target face image to be processed based on the template material comprises:
at least two color channels of the region to be processed of the target face image are weighted based on the template material in the first channel, and
and in a second channel, performing weighting processing on at least two color channels of the region to be processed of the target face image based on the template material.
4. The method of claim 3, wherein the first channel is a blue channel, and wherein weighting at least two color channels of the region of the target face image to be processed based on the template material comprises:
determining a first brightening weight value for brightening the target region within the blue channel;
according to the first brightening weight value and the pixel value corresponding to each key point of the target face image, weighting processing is carried out on the region to be processed of the target face image based on the template material in the first channel, so that the region to be processed of the target face image is brightened in the blue channel through the first brightening weight value.
5. The method of claim 4, wherein the determining of the first brightening weight value comprises:
acquiring a first preset input weight value, a second preset input weight value, blue extracted from the template material and a first weighting coefficient for weighting in the blue channel, wherein the sum of the first preset input weight value and the second preset input weight value is 1;
and performing weighting processing according to the first preset input weight value, the second preset input weight value, the blue color extracted from the template material and the first weighting coefficient for performing weighting processing in the blue channel to determine the first brightening weight value.
6. The method of claim 4, wherein the second channel is a green channel, and wherein weighting at least two color channels of the region of the target face image to be processed based on the template material in the second channel comprises:
determining a second brightening weight value for brightening the target region within the green channel;
according to the second brightening weight value and the pixel value corresponding to each key point of the target face image, weighting processing is carried out on the region to be processed of the target face image based on the template material in the second channel, so that the region to be processed of the target face image is brightened in the green channel through the second brightening weight value.
7. The method of claim 6, wherein the determining of the second brightening weight value comprises:
acquiring the first preset input weight value, the second preset input weight value, green extracted from the template material and a second weighting coefficient for weighting in the green channel, wherein the sum of the first preset input weight value and the second preset input weight value is 1;
and performing weighting processing according to the first preset input weight value, the second preset input weight value, the green color extracted from the template material and the second weighting coefficient for performing weighting processing in the green channel to determine a second brightening weight value.
8. The method of claim 1,
the target region is the region of the lying silkworm (the raised area just below the lower eyelid).
9. An apparatus for processing a face image, the apparatus comprising: the device comprises an acquisition unit, an extraction unit, a drawing unit and a weighting processing unit;
the acquisition unit is configured to acquire a target face image and acquire a template material;
the extraction unit is used for extracting key point data of the target face image from the target face image acquired by the acquisition unit;
the drawing unit is used for drawing the template material acquired by the acquisition unit into blank textures in a preset mode, so that the template material corresponds to a region to be processed of the target face image;
the weighting processing unit is used for weighting at least two color channels of the region to be processed of the target face image based on the template material acquired by the acquisition unit.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the method of any one of claims 1-8.
11. A computer-readable storage medium, on which a computer program is stored, characterized in that the program is executed by a processor to implement the method according to any of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011401695.6A CN114663290A (en) | 2020-12-03 | 2020-12-03 | Face image processing method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011401695.6A CN114663290A (en) | 2020-12-03 | 2020-12-03 | Face image processing method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114663290A true CN114663290A (en) | 2022-06-24 |
Family
ID=82025464
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011401695.6A Pending CN114663290A (en) | 2020-12-03 | 2020-12-03 | Face image processing method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114663290A (en) |
2020-12-03: application CN202011401695.6A filed; published as CN114663290A (status: Pending)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112954450B (en) | Video processing method and device, electronic equipment and storage medium | |
CN110070896B (en) | Image processing method, device and hardware device | |
CN112241714B (en) | Method and device for identifying designated area in image, readable medium and electronic equipment | |
CN111260601B (en) | Image fusion method and device, readable medium and electronic equipment | |
CN110062157B (en) | Method and device for rendering image, electronic equipment and computer readable storage medium | |
CN110211030B (en) | Image generation method and device | |
CN110070496A (en) | Generation method, device and the hardware device of image special effect | |
US11893770B2 (en) | Method for converting a picture into a video, device, and storage medium | |
EP4460022A1 (en) | Video generation method and apparatus, and device and storage medium | |
EP4171027A1 (en) | Method and apparatus for converting picture into video, and device and storage medium | |
CN112258622B (en) | Image processing method and device, readable medium and electronic equipment | |
CN110636331B (en) | Method and apparatus for processing video | |
CN110582021B (en) | Information processing method and device, electronic equipment and storage medium | |
WO2023035973A1 (en) | Video processing method and apparatus, device, and medium | |
CN110555799A (en) | Method and apparatus for processing video | |
CN114663290A (en) | Face image processing method and device, electronic equipment and storage medium | |
CN114125485B (en) | Image processing method, device, equipment and medium | |
CN110209861A (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
CN111292247A (en) | Image processing method and device | |
CN113225586B (en) | Video processing method and device, electronic equipment and storage medium | |
JP2023550970A (en) | Methods, equipment, storage media, and program products for changing the background in the screen | |
CN115457024A (en) | Method and device for processing cryoelectron microscope image, electronic equipment and storage medium | |
CN111292276B (en) | Image processing method and device | |
CN111489769B (en) | Image processing method, device and hardware device | |
CN111292245A (en) | Image processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||