CN113379623B - Image processing method, device, electronic equipment and storage medium

Info

Publication number: CN113379623B (application CN202110603394.XA; earlier publication CN113379623A)
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: face, image, point, map, determining
Inventor: 郭赛南
Assignee (original and current): Beijing Dajia Internet Information Technology Co Ltd
Legal status: Active (granted)

Classifications

    • G06T5/77
    • G06N3/045 Combinations of networks (computing arrangements based on neural networks)
    • G06N3/08 Learning methods (neural networks)
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/70
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30201 Face (subject of image: human being; person)

Abstract

The embodiments of the disclosure relate to an image processing method and apparatus, an electronic device, and a storage medium, belonging to the field of computer graphics. The embodiments of the disclosure take into account the size of the face within the whole image, that is, they calculate the aspect ratio of the face relative to the whole image; then, according to the aspect ratio, the original face image is blurred to different degrees to obtain a first blur map and a second blur map; a target region, and a target pixel value for brightening that region, are then determined in the original face image from the first blur map and the second blur map, and the target pixel value is superimposed on the part of the target region that belongs to the region to be processed according to a mask map of the original face image, completing the processing. Because the image processing method considers the size of the face within the whole image, it adapts region brightening to different image resolutions and different face sizes; in other words, face retouching and brightening can be performed more cleanly and effectively, which in turn ensures the face beautification effect.

Description

Image processing method, device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of computer graphics, and in particular to an image processing method, an image processing apparatus, an electronic device, and a storage medium.
Background
With the wide popularization of terminals and the steady improvement of terminal performance, shooting videos or live streaming with a terminal has become a new internet culture, and users' requirements on portrait beautification techniques and effects keep rising accordingly. For example, beyond overall skin smoothing of the face, users increasingly care about the beautification of details: whether dark circles can be removed effectively, so that the face looks full, clean, and younger, is an important index by which users judge a beautification effect. It follows that a new image processing method is needed for face retouching and brightening, for example one that removes dark circles more cleanly and effectively, so as to guarantee the portrait beautification effect.
Disclosure of Invention
The disclosure provides an image processing method, an image processing apparatus, an electronic device, and a storage medium, which can remove dark circles more cleanly and effectively and thereby ensure the portrait beautification effect. The technical scheme of the present disclosure is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided an image processing method including:
acquiring an original face image, wherein the original face image comprises a target face; and performing face key point detection on the original face image to obtain a face key point detection result;
performing key point expansion on the target face according to the face key point detection result to obtain expanded key points; and determining a face outer frame according to the expanded key points and acquiring a first size of the face outer frame, wherein the face outer frame is the minimum bounding box containing the expanded key points;
determining an aspect ratio of the target face in the original face image according to the first size and a second size of the original face image, wherein the aspect ratio comprises a longitudinal ratio and a transverse ratio, the longitudinal ratio being the ratio of the height in the first size to the height in the second size, and the transverse ratio being the ratio of the width in the first size to the width in the second size;
performing image blurring of different degrees on the original face image according to the aspect ratio to obtain a first blur map and a second blur map, wherein the sharpness of the first blur map is greater than that of the second blur map;
determining, in the original face image according to the first blur map and the second blur map, a target region and a target pixel value for brightening the target region;
acquiring a first mask map of the original face image, wherein the first mask map is used for distinguishing a region to be processed from a non-processed region in the original face image; and superimposing, according to the first mask map, the target pixel value on the part of the target region that belongs to the region to be processed, to obtain a target face image.
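For orientation, the sketch below strings the claimed steps together in Python. It is a minimal illustration under stated assumptions, not the patented implementation: the expanded face outer frame and the first mask map are taken as inputs, the blurs are plain mean filters whose windows scale with the aspect ratio, and k1, k2, w1, w2 and cap are illustrative values that the patent leaves open.

    import cv2
    import numpy as np

    def enhance(image, face_size, mask, k1=1.0, k2=3.0, w1=0.6, w2=0.1, cap=40.0):
        """Minimal sketch of the claimed pipeline.

        image:     H x W x 3 uint8 original face image
        face_size: (W_f, H_f) of the face outer frame, assumed already
                   computed from the expanded key points (steps 1-2)
        mask:      H x W map, 1 = region to be processed, 0 = non-processed
        """
        img = image.astype(np.float32)
        h_img, w_img = img.shape[:2]                     # second size
        w_f, h_f = face_size                             # first size
        r_w, r_h = w_f / w_img, h_f / h_img              # transverse / longitudinal ratio (step 3)

        # Step 4: two mean blurs whose windows scale with the face's share of
        # the image; the first window is smaller, so blur1 stays sharper.
        win1 = (max(1, int(k1 * r_w * w_img)), max(1, int(k1 * r_h * h_img)))
        win2 = (max(1, int(k2 * r_w * w_img)), max(1, int(k2 * r_h * h_img)))
        blur1, blur2 = cv2.blur(img, win1), cv2.blur(img, win2)

        # Step 5: pixels darker in the sharper blur than in the heavier blur
        # form the target region; diff is the brightening value.
        target = (blur1 < blur2).astype(np.float32)
        diff = np.clip((blur2 - blur1) * w1 + blur2 * w2, 0.0, cap)

        # Step 6: superimpose only on the part inside the region to be processed.
        out = img + diff * target * mask[..., None]
        return np.clip(out, 0, 255).astype(np.uint8)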
In some embodiments, the face key point detection result includes in-plane key points, eyebrow key points, and cheek key points, the in-plane key point being used to indicate the center of the target face;
performing key point expansion on the target face according to the face key point detection result to obtain expanded key points, and determining the face outer frame according to the expanded key points and acquiring the first size of the face outer frame, comprises the following steps:
for any eyebrow key point, taking the in-plane key point as a starting point, connecting the in-plane key point with the eyebrow key point, and determining a point on the extension of that connecting line as a first-type expanded key point; and acquiring coordinates of the first-type expanded key point according to the coordinates of the in-plane key point and a first distance between the in-plane key point and the eyebrow key point;
for any cheek key point, taking the in-plane key point as a starting point, connecting the in-plane key point with the cheek key point, and determining a point on the extension of that connecting line as a second-type expanded key point; and acquiring coordinates of the second-type expanded key point according to the coordinates of the in-plane key point and a second distance between the in-plane key point and the cheek key point;
and taking the minimum bounding box containing the first-type expanded key points and the second-type expanded key points as the face outer frame, and determining the first size of the face outer frame according to the coordinates of the first-type expanded key points and the coordinates of the second-type expanded key points.
In some embodiments, performing image blurring on the original face image according to the aspect ratio to obtain the first blur map and the second blur map includes:
performing downsampling on the original face image to obtain a first sample map and a second sample map, wherein the first sample map and the second sample map are the same size;
determining a first step size according to a first coefficient, a specified constant, and the aspect ratio, and performing first mean filtering on the first sample map according to the first step size to obtain the first blur map, wherein the specified constant is determined according to face structure information, and the first step size comprises a first transverse step size and a first longitudinal step size;
determining a second step size according to a second coefficient, the specified constant, and the aspect ratio, and performing second mean filtering on the second sample map according to the second step size to obtain the second blur map, wherein the second coefficient is greater than the first coefficient, the second step size comprises a second transverse step size and a second longitudinal step size, the second transverse step size is greater than the first transverse step size, and the second longitudinal step size is greater than the first longitudinal step size.
In some embodiments, determining the first step size according to the first coefficient, the specified constant, and the aspect ratio, and performing first mean filtering on the first sample map according to the first step size to obtain the first blur map, includes:
determining the first transverse step size according to the first coefficient and the transverse ratio in the aspect ratio, and determining the first longitudinal step size according to the first coefficient, the specified constant, and the longitudinal ratio in the aspect ratio;
in response to the value of the current position point in the first mask map being a first value, performing mean filtering at the corresponding pixel position in the first sample map according to the first step size, wherein the first value is used to indicate the region to be processed;
and repeating the above step until all position points in the first mask map have been traversed, to obtain the first blur map.
In some embodiments, determining the second step size according to the second coefficient, the specified constant, and the aspect ratio, and performing second mean filtering on the second sample map according to the second step size to obtain the second blur map, includes:
determining the second transverse step size according to the second coefficient and the transverse ratio in the aspect ratio, and determining the second longitudinal step size according to the second coefficient, the specified constant, and the longitudinal ratio in the aspect ratio;
in response to the value of the current position point in the first mask map being the first value, performing mean filtering at the corresponding pixel position in the second sample map according to the second step size, wherein the first value is used to indicate the region to be processed;
and repeating the above step until all position points in the first mask map have been traversed, to obtain the second blur map.
In some embodiments, determining the target region and the target pixel value in the original face image according to the first blur map and the second blur map includes:
determining the target region and the target pixel value in the original face image according to the pixel difference between the first blur map and the second blur map.
In some embodiments, determining the target region and the target pixel value in the original face image according to the pixel difference between the first blur map and the second blur map includes:
for any pixel point in the original face image, determining that the pixel point belongs to the target region in response to the first pixel value of the pixel point in the first blur map being smaller than the second pixel value of the pixel point in the second blur map;
the target pixel value corresponding to the pixel point is determined by the following formula:
diff = (B₂ - B₁) * ω₁ + B₂ * ω₂
where B₁ is the first pixel value and B₂ is the second pixel value; ω₁ is a first weight and ω₂ is a second weight, the first weight controlling the influence of the pixel difference (the difference between the second pixel value and the first pixel value) on the degree of brightening of the pixel point, and the second weight controlling the influence of the second pixel value on the degree of brightening of the pixel point; diff is the target pixel value corresponding to the pixel point, and its value is not smaller than 0.
In some embodiments, the method further comprises:
in response to the brightening value computed for the pixel point being greater than a brightening upper limit, updating the target pixel value corresponding to the pixel point to the brightening upper limit.
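As an illustration with made-up numbers (not taken from the patent): with B₂ = 140, B₁ = 120, ω₁ = 0.6 and ω₂ = 0.1, diff = (140 - 120) * 0.6 + 140 * 0.1 = 26; if the brightening upper limit were 20, the target pixel value would be clamped from 26 down to 20.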
In some embodiments, the method further comprises:
acquiring a second mask map of the original face image, wherein the second mask map is used for distinguishing the visible face region that is not occluded from the invisible face region that is occluded in the original face image;
and superimposing, according to the first mask map, the target pixel value on the part of the target region that belongs to the region to be processed to obtain the target face image includes:
superimposing, according to the first mask map and the second mask map, the target pixel value on the part of the target region that belongs to the region to be processed and is not occluded, to obtain the target face image.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:
a first processing module configured to acquire an original face image, the original face image comprising a target face, and to perform face key point detection on the original face image to obtain a face key point detection result;
a first determining module configured to perform key point expansion on the target face according to the face key point detection result to obtain expanded key points, and to determine a face outer frame according to the expanded key points and acquire a first size of the face outer frame, the face outer frame being the minimum bounding box containing the expanded key points;
a second determining module configured to determine an aspect ratio of the target face in the original face image according to the first size and a second size of the original face image, the aspect ratio comprising a longitudinal ratio and a transverse ratio, the longitudinal ratio being the ratio of the height in the first size to the height in the second size, and the transverse ratio being the ratio of the width in the first size to the width in the second size;
a second processing module configured to perform image blurring of different degrees on the original face image according to the aspect ratio to obtain a first blur map and a second blur map, the sharpness of the first blur map being greater than that of the second blur map;
a third determining module configured to determine, in the original face image according to the first blur map and the second blur map, a target region and a target pixel value for brightening the target region;
a third processing module configured to acquire a first mask map of the original face image, the first mask map being used for distinguishing a region to be processed from a non-processed region in the original face image, and to superimpose, according to the first mask map, the target pixel value on the part of the target region that belongs to the region to be processed to obtain a target face image.
In some embodiments, the face key point detection result includes in-plane key points, eyebrow key points, and cheek key points, the in-plane key point being used to indicate the center of the target face;
the first determining module includes:
a first determining unit configured to, for any eyebrow key point, take the in-plane key point as a starting point, connect the in-plane key point with the eyebrow key point, and determine a point on the extension of that connecting line as a first-type expanded key point;
a first acquiring unit configured to acquire coordinates of the first-type expanded key point according to the coordinates of the in-plane key point and a first distance between the in-plane key point and the eyebrow key point;
a second determining unit configured to, for any cheek key point, take the in-plane key point as a starting point, connect the in-plane key point with the cheek key point, and determine a point on the extension of that connecting line as a second-type expanded key point;
a second acquiring unit configured to acquire coordinates of the second-type expanded key point according to the coordinates of the in-plane key point and a second distance between the in-plane key point and the cheek key point;
a third determining unit configured to take the minimum bounding box containing the first-type expanded key points and the second-type expanded key points as the face outer frame;
and a third acquiring unit configured to determine the first size of the face outer frame according to the coordinates of the first-type expanded key points and the coordinates of the second-type expanded key points.
In some embodiments, the second processing module comprises:
a first processing unit configured to perform downsampling on the original face image to obtain a first sample map and a second sample map, the first sample map and the second sample map being the same size;
a second processing unit configured to determine a first step size according to a first coefficient, a specified constant, and the aspect ratio, and to perform first mean filtering on the first sample map according to the first step size to obtain the first blur map, the specified constant being determined according to face structure information and the first step size comprising a first transverse step size and a first longitudinal step size;
a third processing unit configured to determine a second step size according to a second coefficient, the specified constant, and the aspect ratio, and to perform second mean filtering on the second sample map according to the second step size to obtain the second blur map, the second coefficient being greater than the first coefficient, the second step size comprising a second transverse step size and a second longitudinal step size, the second transverse step size being greater than the first transverse step size, and the second longitudinal step size being greater than the first longitudinal step size.
In some embodiments, the second processing unit is configured to: determine the first transverse step size according to the first coefficient and the transverse ratio in the aspect ratio, and determine the first longitudinal step size according to the first coefficient, the specified constant, and the longitudinal ratio in the aspect ratio;
in response to the value of the current position point in the first mask map being a first value, perform mean filtering at the corresponding pixel position in the first sample map according to the first step size, wherein the first value is used to indicate the region to be processed;
and repeat the above step until all position points in the first mask map have been traversed, to obtain the first blur map.
In some embodiments, the third processing unit is configured to: determine the second transverse step size according to the second coefficient and the transverse ratio in the aspect ratio, and determine the second longitudinal step size according to the second coefficient, the specified constant, and the longitudinal ratio in the aspect ratio;
in response to the value of the current position point in the first mask map being the first value, perform mean filtering at the corresponding pixel position in the second sample map according to the second step size, wherein the first value is used to indicate the region to be processed;
and repeat the above step until all position points in the first mask map have been traversed, to obtain the second blur map.
In some embodiments, the third determining module is configured to: determine the target region and the target pixel value in the original face image according to the pixel difference between the first blur map and the second blur map.
In some embodiments, the third determining module is configured to: for any pixel point in the original face image, determine that the pixel point belongs to the target region in response to the first pixel value of the pixel point in the first blur map being smaller than the second pixel value of the pixel point in the second blur map;
the target pixel value corresponding to the pixel point is determined by the following formula:
diff = (B₂ - B₁) * ω₁ + B₂ * ω₂
where B₁ is the first pixel value and B₂ is the second pixel value; ω₁ is a first weight and ω₂ is a second weight, the first weight controlling the influence of the pixel difference (the difference between the second pixel value and the first pixel value) on the degree of brightening of the pixel point, and the second weight controlling the influence of the second pixel value on the degree of brightening of the pixel point; diff is the target pixel value corresponding to the pixel point, and its value is not smaller than 0.
In some embodiments, the third determining module is further configured to: in response to the brightening value computed for the pixel point being greater than the brightening upper limit, update the target pixel value corresponding to the pixel point to the brightening upper limit.
In some embodiments, the apparatus further comprises:
an acquiring module configured to acquire a second mask map of the original face image, the second mask map being used for distinguishing the visible face region that is not occluded from the invisible face region that is occluded in the original face image;
the third processing module being configured to: superimpose, according to the first mask map and the second mask map, the target pixel value on the part of the target region that belongs to the region to be processed and is not occluded, to obtain the target face image.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the image processing method described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the image processing method described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the above-described image processing method.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
During image processing, the embodiments of the disclosure take into account the size of the face within the whole image; that is, they calculate the aspect ratio of the face relative to the whole image. Then, according to the aspect ratio, the original face image is blurred to different degrees to obtain a first blur map and a second blur map; a target region to be brightened, and a target pixel value for brightening that region, are determined in the original face image from the first blur map and the second blur map, and the target pixel value is superimposed on the part of the target region that belongs to the region to be processed according to a mask map of the original face image, completing the processing. The mask map is used to distinguish the region to be processed from the non-processed region in the original face image. In summary, because this image processing method considers the size of the face within the whole image, it adapts region brightening to different image resolutions and different face sizes; in other words, face retouching and brightening can be performed more cleanly and effectively, which in turn ensures the face beautification effect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
Fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating an image processing method according to an exemplary embodiment.
Fig. 3 is a schematic overall flow diagram illustrating an image processing method according to an exemplary embodiment.
Fig. 4 is a block diagram of an image processing apparatus according to an exemplary embodiment.
Fig. 5 is a block diagram of an electronic device, according to an example embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The user information referred to in the present disclosure may be information authorized by the user or sufficiently authorized by each party.
In recent years, the short video and live streaming industries have developed rapidly and become a new form of internet culture. More and more individuals and enterprises are joining the ranks of short video and live streaming, a large number of internet celebrities have sprung up, and people's technical and quality requirements for face beautification keep rising. Besides overall skin smoothing of the face, users also care more about the beautification of details. For example, whether dark circles can be removed effectively, making the face look full, clean, and youthful, is an important index by which users judge a beautification effect.
On this basis, the embodiments of the disclosure provide a scale-adaptive face beautification method for face retouching and brightening (for example, removing dark circles under the eyes). The scheme takes reasonable account of the proportion of the face within the image and of face structure information, so it can adaptively handle face retouching and brightening under different image resolutions and different face sizes.
Some terms involved in the embodiments of the present disclosure are explained first below.
Portrait beautification: beautifying and retouching a face. Portrait beautification processing includes, but is not limited to, face slimming, face retouching, blemish removal, skin-tone retouching and brightening, and the like.
An implementation environment related to an image processing scheme according to an embodiment of the present disclosure is described below.
The image processing scheme provided by the embodiments of the disclosure is used for face retouching and brightening, such as removing dark circles from a face.
In some embodiments, the scheme may be applied to any scene where face retouching and brightening is required, such as a medical-aesthetics scene, a live streaming scene, or an online or offline portrait beautification scene; the embodiments of the disclosure do not limit this.
The scheme is executed by an electronic device. Illustratively, the electronic device is a terminal, the types of which include, but are not limited to, a smartphone, a desktop computer, a notebook computer, a tablet computer, and the like. The electronic device may also be a server; for example, the terminal uploads the face image to the server, and the server performs face retouching and brightening. The embodiments of the present disclosure do not specifically limit this.
In other embodiments, taking the electronic device being a terminal as an example, the face image currently processed by the terminal may be a locally stored image, a newly captured image, a video frame in a video call or live video stream, or an image sent by another terminal; the embodiments of the disclosure do not limit this. Taking the electronic device being a server as an example, the face image currently processed by the server may be a face image uploaded by a terminal.
The portrait beautifying method for removing facial dark circles provided by the embodiments of the present disclosure will be described in detail by the following embodiments.
Fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment. The method is used in an electronic device and, as shown in fig. 1, includes the following steps.
In 101, an original face image is acquired, the original face image comprising a target face; and face key point detection is performed on the original face image to obtain a face key point detection result.
In 102, key point expansion is performed on the target face according to the face key point detection result to obtain expanded key points; and a face outer frame is determined according to the expanded key points and a first size of the face outer frame is acquired, the face outer frame being the minimum bounding box containing the expanded key points.
In 103, an aspect ratio of the target face in the original face image is determined based on the first size and a second size of the original face image, the aspect ratio comprising a longitudinal ratio and a transverse ratio, the longitudinal ratio being the ratio of the height in the first size to the height in the second size, and the transverse ratio being the ratio of the width in the first size to the width in the second size.
In 104, image blurring of different degrees is performed on the original face image according to the aspect ratio to obtain a first blur map and a second blur map, the sharpness of the first blur map being greater than that of the second blur map.
In 105, a target region, and a target pixel value for brightening the target region, are determined in the original face image based on the first blur map and the second blur map.
In 106, a first mask map of the original face image is acquired, the first mask map being used for distinguishing a region to be processed from a non-processed region in the original face image; and the target pixel value is superimposed, according to the first mask map, on the part of the target region that belongs to the region to be processed, to obtain a target face image.
During image processing, the embodiments of the disclosure take into account the size of the face within the whole image; that is, they calculate the aspect ratio of the face relative to the whole image. Then, according to the aspect ratio, the original face image is blurred to different degrees to obtain a first blur map and a second blur map; a target region to be brightened, and a target pixel value for brightening that region, are determined in the original face image from the first blur map and the second blur map, and the target pixel value is superimposed on the part of the target region that belongs to the region to be processed according to a mask map of the original face image, completing the processing. The mask map is used to distinguish the region to be processed from the non-processed region in the original face image. In summary, because this image processing method considers the size of the face within the whole image, it adapts region brightening to different image resolutions and different face sizes; in other words, face retouching and brightening can be performed more cleanly and effectively, which in turn ensures the face beautification effect.
In some embodiments, the face key point detection result includes in-plane key points, eyebrow key points, and cheek key points, the in-plane key point being used to indicate the center of the target face;
performing key point expansion on the target face according to the face key point detection result to obtain expanded key points, and determining the face outer frame according to the expanded key points and acquiring the first size of the face outer frame, comprises the following steps:
for any eyebrow key point, taking the in-plane key point as a starting point, connecting the in-plane key point with the eyebrow key point, and determining a point on the extension of that connecting line as a first-type expanded key point; and acquiring coordinates of the first-type expanded key point according to the coordinates of the in-plane key point and a first distance between the in-plane key point and the eyebrow key point;
for any cheek key point, taking the in-plane key point as a starting point, connecting the in-plane key point with the cheek key point, and determining a point on the extension of that connecting line as a second-type expanded key point; and acquiring coordinates of the second-type expanded key point according to the coordinates of the in-plane key point and a second distance between the in-plane key point and the cheek key point;
and taking the minimum bounding box containing the first-type expanded key points and the second-type expanded key points as the face outer frame, and determining the first size of the face outer frame according to the coordinates of the first-type expanded key points and the coordinates of the second-type expanded key points.
In some embodiments, performing image blurring on the original face image according to the aspect ratio to obtain the first blur map and the second blur map includes:
performing downsampling on the original face image to obtain a first sample map and a second sample map, wherein the first sample map and the second sample map are the same size;
determining a first step size according to a first coefficient, a specified constant, and the aspect ratio, and performing first mean filtering on the first sample map according to the first step size to obtain the first blur map, wherein the specified constant is determined according to face structure information, and the first step size comprises a first transverse step size and a first longitudinal step size;
determining a second step size according to a second coefficient, the specified constant, and the aspect ratio, and performing second mean filtering on the second sample map according to the second step size to obtain the second blur map, wherein the second coefficient is greater than the first coefficient, the second step size comprises a second transverse step size and a second longitudinal step size, the second transverse step size is greater than the first transverse step size, and the second longitudinal step size is greater than the first longitudinal step size.
In some embodiments, determining the first step size according to the first coefficient, the specified constant, and the aspect ratio, and performing first mean filtering on the first sample map according to the first step size to obtain the first blur map, includes:
determining the first transverse step size according to the first coefficient and the transverse ratio in the aspect ratio, and determining the first longitudinal step size according to the first coefficient, the specified constant, and the longitudinal ratio in the aspect ratio;
in response to the value of the current position point in the first mask map being a first value, performing mean filtering at the corresponding pixel position in the first sample map according to the first step size, wherein the first value is used to indicate the region to be processed;
and repeating the above step until all position points in the first mask map have been traversed, to obtain the first blur map.
In some embodiments, determining the second step size according to the second coefficient, the specified constant, and the aspect ratio, and performing second mean filtering on the second sample map according to the second step size to obtain the second blur map, includes:
determining the second transverse step size according to the second coefficient and the transverse ratio in the aspect ratio, and determining the second longitudinal step size according to the second coefficient, the specified constant, and the longitudinal ratio in the aspect ratio;
in response to the value of the current position point in the first mask map being the first value, performing mean filtering at the corresponding pixel position in the second sample map according to the second step size, wherein the first value is used to indicate the region to be processed;
and repeating the above step until all position points in the first mask map have been traversed, to obtain the second blur map.
In some embodiments, determining the target region and the target pixel value in the original face image according to the first blur map and the second blur map includes:
determining the target region and the target pixel value in the original face image according to the pixel difference between the first blur map and the second blur map.
In some embodiments, determining the target region and the target pixel value in the original face image according to the pixel difference between the first blur map and the second blur map includes:
for any pixel point in the original face image, determining that the pixel point belongs to the target region in response to the first pixel value of the pixel point in the first blur map being smaller than the second pixel value of the pixel point in the second blur map;
the target pixel value corresponding to the pixel point is determined by the following formula:
diff = (B₂ - B₁) * ω₁ + B₂ * ω₂
where B₁ is the first pixel value and B₂ is the second pixel value; ω₁ is a first weight and ω₂ is a second weight, the first weight controlling the influence of the pixel difference (the difference between the second pixel value and the first pixel value) on the degree of brightening of the pixel point, and the second weight controlling the influence of the second pixel value on the degree of brightening of the pixel point; diff is the target pixel value corresponding to the pixel point, and its value is not smaller than 0.
In some embodiments, the method further comprises:
in response to the brightening value computed for the pixel point being greater than the brightening upper limit, updating the target pixel value corresponding to the pixel point to the brightening upper limit.
In some embodiments, the method further comprises:
acquiring a second mask map of the original face image, wherein the second mask map is used for distinguishing the visible face region that is not occluded from the invisible face region that is occluded in the original face image;
and superimposing, according to the first mask map, the target pixel value on the part of the target region that belongs to the region to be processed to obtain the target face image includes:
superimposing, according to the first mask map and the second mask map, the target pixel value on the part of the target region that belongs to the region to be processed and is not occluded, to obtain the target face image.
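Continuing the naming of the sketch given under the first aspect (diff and target derived from the two blur maps), a hedged sketch of this occlusion-aware variant, where mask1 is the first mask map and mask2 the second:

    import numpy as np

    def superimpose_visible(image, diff, target, mask1, mask2):
        """Brighten a pixel only if it lies in the target region, belongs to
        the region to be processed (mask1 == 1), and is not occluded
        (mask2 == 1). image is H x W x 3; the masks are H x W."""
        combined = (mask1 * mask2)[..., None]        # both conditions must hold
        out = image.astype(np.float32) + diff * target * combined
        return np.clip(out, 0, 255).astype(np.uint8)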
Any combination of the above-mentioned optional solutions may be adopted to form an optional embodiment of the present disclosure, which is not described herein in detail.
Fig. 2 is a flowchart illustrating an image processing method according to an exemplary embodiment. The method is used in an electronic device and, as shown in fig. 2, includes the following steps.
In 201, an original face image is acquired, the original face image including a target face.
In the embodiments of the present disclosure, the face image acquired by the electronic device may be a video frame in a video call or live video stream, an image currently or previously captured by the user, or a video frame in a previously captured video; the embodiments of the present disclosure do not specifically limit this. This embodiment illustrates the whole image processing flow with a single face image; by extension, the flow can be applied to multiple images or to each video frame of a video.
As shown in fig. 3, the input to the whole image processing flow is face image data I, referred to in the embodiments of the present disclosure as the original face image, with width W_I and height H_I. The face included in the original image is referred to herein as the target face, i.e., the face to be subjected to the brightening processing.
In 202, face key point detection is performed on the original face image, and a face key point detection result is obtained.
Optionally, this step uses a trained deep neural network to detect the face key points in the original face image. Face key points include, but are not limited to, points on the eyebrows, eyes, nose, mouth, and face contour. In some embodiments, a deep neural network may be trained on a number of face images annotated with face key point coordinates, yielding a deep neural network with face key point detection capability. Subsequently, when the face key points in an image are to be detected, the image is input into the deep neural network, and the face key point coordinates in the image are determined based on its detection output.
In 203, a first mask map of the original face image is acquired, where the first mask map is used to distinguish between a to-be-processed area and a non-processed area in the original face image.
In some embodiments, the first mask map of the original face image is acquired in, but not limited to, the following manner:
a standard mask map of a standard face image is acquired, the standard mask map being used for distinguishing the region to be processed from the non-processed region in the standard face image; for example, the standard mask map marks the region to be processed with the value 1 and the non-processed region with the value 0. Then, the first mask map of the original face image is acquired according to the face key point detection result and the standard mask map.
As shown in fig. 3, taking dark-circle removal as the example of face retouching and brightening, the region to be processed is the dark-circle region and the non-processed region is the non-dark-circle region. The standard mask map is also referred to herein as the dark-circle-region expert material; it acts as a mask map marking the dark-circle region, distinguishing it from the non-dark-circle region.
Optionally, acquiring the first mask map of the original face image according to the face key point detection result and the standard mask map may proceed as follows: the expert material is fitted onto the target face included in the original face image by putting the face key points of the original face image and those of the standard face image in one-to-one correspondence, thereby accurately locating the dark-circle region in the original face image.
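As an illustration, one way to realize this one-to-one fitting is to estimate a transform from the standard face's key points to the detected key points and warp the expert material with it. Using a similarity/affine transform (cv2.estimateAffinePartial2D) here is an assumption; the patent only states that the key points are placed in correspondence.

    import cv2
    import numpy as np

    def fit_expert_material(std_mask, std_pts, face_pts, out_hw):
        """Warp the standard (expert-material) mask onto the target face.

        std_mask:          mask of the standard face image (1 = dark-circle region)
        std_pts, face_pts: matching key points, shape (K, 2)
        out_hw:            (H, W) of the original face image
        """
        # Similarity transform from standard-face key points to detected ones.
        m, _ = cv2.estimateAffinePartial2D(std_pts.astype(np.float32),
                                           face_pts.astype(np.float32))
        h, w = out_hw
        # Nearest-neighbour interpolation keeps the warped mask binary.
        return cv2.warpAffine(std_mask, m, (w, h), flags=cv2.INTER_NEAREST)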
In 204, a second mask map of the original face image is obtained, where the second mask map is used to distinguish between a visible face region in the original face image that is not occluded and an invisible face region that is occluded.
The second mask map is the face occlusion probability map in fig. 3. In some embodiments, the second mask map of the original face image is acquired in, but not limited to, the following manner: semantic segmentation is performed on the original face image based on an image semantic segmentation model to obtain the visible face region and the invisible face region; and the second mask map of the original face image is generated, in which pixel positions taking a third value indicate the visible face region and pixel positions taking a fourth value indicate the invisible face region.
In other words, image semantic segmentation is performed on the original face image to generate its face occlusion probability map, which reflects the occlusion information of the original face image.
After the image semantic segmentation, the segmentation result distinguishes the visible face region, which is not covered by any occluder, from the invisible face region, which is covered by an occluder.
In some embodiments, a pixel position with the third value in the face occlusion probability map indicates the visible face region, and a pixel position with the fourth value indicates the invisible face region.
For example, the face occlusion probability map may be a binary mask: when a pixel in the original face image belongs to the non-occluded visible face region, the corresponding position in the face occlusion probability map is 1; conversely, when the pixel belongs to the occluded invisible face region, the corresponding position is 0. That is, the third value may be 1 and the fourth value may be 0. Put differently, regions with value 1 in the face occlusion probability map indicate the visible face region, and regions with value 0 indicate the invisible face region.
In the embodiments of the disclosure, semantic segmentation is performed on the original face image based on a pre-trained image semantic segmentation model to obtain the visible face region and the invisible face region. Image semantic segmentation models are usually sensitive to edges, so using one yields more accurate segmentation edges and thus guarantees the segmentation quality.
In one possible implementation, the training process of the image semantic segmentation model includes, but is not limited to:
Step a: acquire training sample images and the labeled segmentation results of the training sample images.
Here, the training sample images include a large number of face regions occluded by occluders such as hands or other objects, and the labeled segmentation results are obtained by manual annotation: for each training sample image, the non-occluded visible face region and the occluded invisible face region are marked by a human annotator.
Step b: input the training sample images into a deep neural network, and determine, based on a target loss function, whether the predicted segmentation result output by the deep neural network matches the labeled segmentation result.
As one example, the target loss function may be a cross-entropy loss function, and the deep neural network may be a convolutional neural network, such as a fully convolutional network; the embodiments of the present disclosure are not specifically limited in this regard.
Step c: whenever the predicted segmentation result does not match the labeled segmentation result, update the network parameters of the deep neural network, and repeat the cycle until the model converges, yielding the image semantic segmentation model.
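A minimal PyTorch sketch of steps a to c, assuming a small fully convolutional network and a cross-entropy loss; the actual architecture, data pipeline, and hyperparameters are not fixed by the patent.

    import torch
    import torch.nn as nn

    # Two output channels: class 0 = occluded invisible face, class 1 = visible face.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 2, 1),
    )
    criterion = nn.CrossEntropyLoss()               # the target loss function (step b)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    def train_step(images, labels):
        """images: (N, 3, H, W) float; labels: (N, H, W) int64 class map."""
        optimizer.zero_grad()
        logits = model(images)                      # predicted segmentation result
        loss = criterion(logits, labels)            # compare with labeled result (step b)
        loss.backward()                             # update network parameters (step c)
        optimizer.step()
        return loss.item()

    # Repeat train_step over the labeled training sample images until the loss
    # converges; the converged model serves as the image semantic segmentation model.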
In 205, key point expansion is performed on the target face according to the face key point detection result to obtain expanded key points; and the face outer frame is determined according to the expanded key points and the first size of the face outer frame is acquired, the face outer frame being the minimum bounding box containing the expanded key points.
Based on the face key point information output in step 202 above, this step first linearly expands the face key points, estimating forehead key points (also referred to herein as first-type expanded key points) and face outer frame points (also referred to herein as second-type expanded key points).
In some embodiments, specific implementations of this step include, but are not limited to, the following:
2051. For any eyebrow key point, take the in-plane key point as the starting point, connect the in-plane key point with the eyebrow key point, and take a point on the extension of that connecting line as a first-type expanded key point; then acquire the coordinates of the first-type expanded key point according to the coordinates of the in-plane key point and the first distance between the in-plane key point and the eyebrow key point.
In some embodiments, the second distance between the first-type expanded key point and the in-plane key point is N times the first distance, N being a positive integer greater than 1. For example, take the in-plane key point as the starting point Xo and any eyebrow key point as Xm; the forehead extension point Xe is defined as a point on the extension line through the two points Xo and Xm, with the extension multiple set to N. The coordinates of the forehead extension point Xe can then be calculated based on the following formula:
Xe = Xo + N * (Xm - Xo)
Step 2051 above is performed once for each eyebrow key point on the eyebrows, so that a plurality of first-type expanded key points are obtained.
2052. For any cheek key point, take the in-plane key point as the starting point, connect the in-plane key point with the cheek key point, and take a point on the extension of that connecting line as a second-type expanded key point; then acquire the coordinates of the second-type expanded key point according to the coordinates of the in-plane key point and the third distance between the in-plane key point and the cheek key point.
In some embodiments, the fourth distance between the second-type expanded key point and the in-plane key point is N times the third distance. Step 2052 above is performed once for each cheek key point on the cheeks, so that a plurality of second-type expanded key points are obtained.
2053. Take the minimum bounding box containing the first-type expanded key points and the second-type expanded key points as the face outer frame, and determine the first size of the face outer frame according to the coordinates of the first-type expanded key points and the coordinates of the second-type expanded key points.
Here, the face outer frame is the minimum circumscribed rectangle containing the first-type and second-type expanded key points, and the first size of the face outer frame comprises its width W_f and height H_f. It should be noted that the face outer frame is determined from the expanded key points obtained by traversing the key points as above, rather than directly from the face key point detection result obtained in step 202.
In addition, because the embodiments of the disclosure expand the key points, locate the face outer frame with the expanded face key points, and compute its size from them, the face region covered by the blur maps calculated in the subsequent steps contains more useful face information, and the influence of non-face pixels (such as the background) is reduced.
In 206, an aspect ratio of the target face in the original face image is determined based on the first size of the face frame and the second size of the original face image.
This step calculates the aspect ratio of the target face in the original face image. The aspect ratio includes a longitudinal ratio and a transverse ratio: the longitudinal ratio is the ratio of the height in the first size to the height in the second size, and the transverse ratio is the ratio of the width in the first size to the width in the second size. That is, with W_I and H_I denoting the width and height of the original face image, the transverse ratio is R_fW = W_f / W_I and the longitudinal ratio is R_fH = H_f / H_I. A sketch combining this step with the face outer frame computation follows below.
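A minimal sketch of steps 2053 and 206, under the assumption that the expanded key points are given as (x, y) coordinates; the function name and signature are illustrative:

```python
import numpy as np

def face_frame_and_ratios(expanded_points, image_w, image_h):
    """Minimum axis-aligned circumscribed rectangle of the expanded key
    points (step 2053) and the transverse/longitudinal ratios (step 206)."""
    pts = np.asarray(expanded_points, dtype=np.float64)
    w_f = pts[:, 0].max() - pts[:, 0].min()   # face outer frame width  W_f
    h_f = pts[:, 1].max() - pts[:, 1].min()   # face outer frame height H_f
    r_fw = w_f / image_w                      # transverse ratio   R_fW = W_f / W_I
    r_fh = h_f / image_h                      # longitudinal ratio R_fH = H_f / H_I
    return (w_f, h_f), (r_fw, r_fh)
```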
The size and aspect ratio of the face outer frame are calculated to better estimate the proportion of the face in the whole image, which in turn determines the step size of the blur maps used in step 207 below. For images with different face proportions, using the same step size makes the coverage of the neighborhood pixels inaccurate when calculating the low-frequency map. Taking dark-circle removal as an example of face retouching and brightening: ideally, when the blur map of the dark-circle region is calculated, the neighborhood coverage should contain only skin-color pixels; but if the face proportion is too small, the same step size may cover irrelevant background pixels, so that the dark parts of the dark-circle region are subsequently located inaccurately. The step size therefore needs to be scaled adaptively in proportion to the face size, ensuring that the sampled neighborhood pixels stay within the skin region and that the subsequent dark-circle removal is cleaner.
In 207, image blurring of different degrees is performed on the original face image according to the aspect ratio to obtain a first blur map and a second blur map, wherein the image definition of the first blur map is greater than that of the second blur map.
In some embodiments, the first blur map and the second blur map are obtained according to the aspect ratio, including but not limited to:
2071. Performing downsampling processing on the original face image to obtain a first sampling map and a second sampling map; wherein the first sampling map and the second sampling map have the same size.
This step performs downsampling processing on the original face image to obtain two downsampled maps of the same size (each reduced by a certain proportion relative to the original face image), D_I1 and D_I2. The downsampled map D_I1 is referred to herein as the first sampling map, and D_I2 as the second sampling map. A crude sketch follows below.
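For illustration, a crude numpy-only downsampling sketch (simple stride subsampling; the reduction factor is an assumption, and a real implementation would more likely use an interpolating resize):

```python
import numpy as np

def downsample(image, factor=2):
    """Reduce the image by an integer factor via stride subsampling."""
    return image[::factor, ::factor]

# Hypothetical single-channel original face image
original = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
d_i1 = downsample(original)   # first sampling map
d_i2 = downsample(original)   # second sampling map, same size as d_i1
```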
2072. Determining a first step size according to the first coefficient, the specified constant and the aspect ratio; performing first average filtering on the first sampling graph according to a first step length to obtain a first fuzzy graph; the appointed constant is determined according to the face structure information, and the first step length comprises a first transverse step length and a first longitudinal step length.
In some embodiments, specific implementations of this step include, but are not limited to, the following: determining a first lateral step size according to the first coefficient and the lateral proportion in the aspect ratio; and determining a first longitudinal step size according to the first coefficient, the specified constant and the longitudinal proportion in the aspect ratio; in response to the value of the current position point in the first mask map being a first numerical value, average filtering is carried out on the corresponding pixel position in the first sampling map according to a first step length; responding to the value of the current position point in the first mask map as a second numerical value, and keeping the pixel value of the corresponding pixel position in the first sampling map unchanged; and repeatedly executing the steps until all the position points in the first mask map are traversed, and obtaining a first fuzzy map.
For the first sampling map D_I1, taking the first numerical value as 1 and the second numerical value as 0 as an example: if the value of the current position point in the first mask map is 1, mean filtering is performed with a first step size (a*R_fW, a*R_fH*t) to obtain the first blur map. That is, for a point at pixel position (x, y), the calculated value is:

B_1(x, y) = (1 / (2k+1)^2) * Σ_{i=-k..k} Σ_{j=-k..k} D_I1(x + i*a*R_fW, y + j*a*R_fH*t)

wherein i and j are integer offsets, a is the first coefficient, a*R_fW is the first transverse step, a*R_fH*t is the first longitudinal step, k determines the window size of the mean filtering, and t is the specified constant with a value between 0 and 1.
Taking dark-circle removal as an example: the dark-circle region of a face is generally an elliptical region whose transverse width is greater than its longitudinal height, analogous to an ellipse whose major axis is longer than its minor axis. Therefore, when the mean filtering is calculated, if the longitudinal step size is the same as the transverse step size, longitudinal sampling will cover non-effective areas such as the eyeball and the sclera; the pixels there are not skin-colored and negatively affect the final removal result, so this situation should be avoided as much as possible. The specified constant is determined from the elliptical structure information of the dark-circle region and is used to restrict the longitudinal step.
If the value of the current position point in the first mask map is 0, the pixel value at the corresponding pixel position in the first sampling map is kept unchanged. A sketch of this masked, strided mean filtering is given below.
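A minimal single-channel sketch of the masked mean filtering with separate transverse and longitudinal steps; rounding the sampled coordinates to the nearest pixel and clamping them to the image border are assumptions, as are the function name and signature:

```python
import numpy as np

def strided_mean_blur(sample, mask, coeff, r_fw, r_fh, t, k):
    """Mean-filter the positions whose mask value is 1, sampling with a
    transverse step coeff*R_fW and a longitudinal step coeff*R_fH*t
    (0 < t < 1 shortens the vertical reach so samples stay inside the
    skin region); positions whose mask value is 0 are left unchanged."""
    h, w = sample.shape
    out = sample.astype(np.float64)   # astype copies, so `sample` is untouched
    sx = coeff * r_fw                 # transverse step
    sy = coeff * r_fh * t             # longitudinal step
    ys, xs = np.nonzero(mask)         # positions whose mask value is 1
    for y, x in zip(ys, xs):
        acc = 0.0
        for j in range(-k, k + 1):
            for i in range(-k, k + 1):
                # clamp the sampled coordinate to the image border (assumption)
                u = min(max(int(round(x + i * sx)), 0), w - 1)
                v = min(max(int(round(y + j * sy)), 0), h - 1)
                acc += float(sample[v, u])
        out[y, x] = acc / (2 * k + 1) ** 2
    return out
```

Calling this routine on the first sampling map with the first coefficient a, and on the second sampling map with the larger coefficient b, would yield the first and second blur maps respectively.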
2073. Determining a second step size according to the second coefficient, the designated constant and the aspect ratio; performing second average filtering on the second sampling graph according to a second step length to obtain a second fuzzy graph; the second coefficient is larger than the first coefficient, the second step length comprises a second transverse step length and a second longitudinal step length, the second transverse step length is larger than the first transverse step length, and the second longitudinal step length is larger than the first longitudinal step length.
In some embodiments, specific implementations of this step include, but are not limited to, the following: determining a second lateral step size according to the second coefficient and the lateral proportion in the aspect ratio; and determining a second longitudinal step size according to the second coefficient, the specified constant and the longitudinal proportion in the aspect ratio; in response to the value of the current position point in the first mask map being a first numerical value, average filtering is carried out on the corresponding pixel position in the second sampling map according to the second step length; responding to the value of the current position point in the first mask map as a second numerical value, and keeping the pixel value of the corresponding pixel position in the second sampling map unchanged; and repeating the steps until all the position points in the first mask map are traversed, and obtaining a second fuzzy map.
For the second sampling map D_I2, again taking the first numerical value as 1 and the second numerical value as 0 as an example: if the value of the current position point in the first mask map is 1, mean filtering is performed with a second step size (b*R_fW, b*R_fH*t) to obtain the second blur map. That is, for a point at pixel position (x, y), the calculated value is:

B_2(x, y) = (1 / (2k+1)^2) * Σ_{i=-k..k} Σ_{j=-k..k} D_I2(x + i*b*R_fW, y + j*b*R_fH*t)

wherein b is the second coefficient, b*R_fW is the second transverse step, b*R_fH*t is the second longitudinal step, and b is greater than a.
If the value of the current position point in the first mask map is 0, the pixel value at the corresponding pixel position in the second sampling map is kept unchanged.
In the related art, when the blur map (also referred to herein as a low-frequency map) is calculated, the same step size is used in the transverse and longitudinal directions. When sampling transversely, the window area covered by the set step size is basically a skin-color area of the face; if the same step size is used longitudinally, however, the eye area (eyeball, sclera, etc.) is covered, whose structure is inconsistent with the skin color of the dark-circle region. Performing the subsequent calculation with such data may leave the dark circles incompletely removed, or even introduce eyeball-shaped ghosting. Setting t makes better use of the face structure information, so that the longitudinal mean filtering stays within the skin-color area without introducing the influence of non-skin-color pixels, and the dark-circle removal effect is better.
In 208, a target region in the original face image and a target pixel value for brightening the target region are determined according to the first blur map and the second blur map; then, according to the first mask map and the second mask map, the target pixel value is superimposed on the portion of the target region that belongs to the region to be processed and is not occluded, obtaining the target face image.
In the embodiment of the disclosure, the two blur maps are used to locate the pixel points to be brightened and the degree of brightening through the pixel difference between them. Taking dark-circle removal as an example, the under-eye dark circle to be removed is in fact a relatively continuous block-shaped low-frequency dark area. In the embodiment of the disclosure, two blur maps are obtained by the two low-frequency calculations with different parameters shown in step 207, from which the target region and the target pixel value used to brighten it can be obtained. This both locates the target region accurately and determines the brightening degree accurately, thereby ensuring the final beautifying effect. Based on the above description, determining the target region and the target pixel value in the original face image according to the first blur map and the second blur map includes: determining the target region and the target pixel value in the original face image according to the pixel difference value between the first blur map and the second blur map, including but not limited to the following:
For any pixel point, in response to the first pixel value of the pixel point in the first blur map being smaller than its second pixel value in the second blur map, it is determined that the pixel point belongs to the target region. In some embodiments, the target pixel value corresponding to the pixel point is determined by the following formula:
diff = (B_2 - B_1) * w_1 + B_2 * w_2
wherein B_1 refers to the first pixel value and B_2 to the second pixel value; w_1 refers to a first weight and w_2 to a second weight. The first weight is used to control the influence of the pixel difference value on the brightening degree of the pixel point, the pixel difference value being the difference between the second pixel value and the first pixel value; the second weight is used to control the influence of the second pixel value on the brightening degree of the pixel point; diff refers to the target pixel value corresponding to the pixel point. The first weight and the second weight are experimentally derived parameters.
In other embodiments, the value range of diff is further clamped: in response to the pixel value to be brightened being greater than the upper limit of the brightening degree, the target pixel value of the pixel point is updated to that upper limit, namely:
diff=max(diff,0)
diff=min(diff,m)
where m is the maximum value of the brightening degree. Clamping the lower limit of diff to 0 and its upper limit to m keeps the dark-circle removal natural and unobtrusive: relatively bright parts of the dark-circle region receive weak brightening, possibly 0, while relatively dark parts receive strong brightening that never exceeds m, which ensures the naturalness of the removal effect.
Optionally, taking dark-circle removal as an example, the target pixel value is superimposed, according to the first mask map and the second mask map, on the unoccluded portion belonging to the dark-circle region. The obtained brightening degree is added to the original face image by superposition; by controlling the superposition area (unoccluded dark-circle areas are superimposed, other areas are not), the face can be retouched and brightened cleanly and effectively, yielding the brightened face image with the dark circles removed and a good image processing effect. A sketch of this step follows below.
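A minimal sketch of step 208 for a single-channel image, assuming that a value of 1 in the first mask map marks the region to be processed and a value of 1 in the second mask map marks the unoccluded (visible) face region; the 8-bit value range, the function name, and the signature are likewise assumptions:

```python
import numpy as np

def brighten_target_region(original, blur1, blur2, mask1, mask2, w1, w2, m):
    """Locate the target region (first pixel value < second pixel value),
    compute diff = (B_2 - B_1) * w_1 + B_2 * w_2, clamp it to [0, m], and
    superimpose it only where the pixel belongs to the region to be
    processed (mask1 == 1) and is not occluded (mask2 == 1)."""
    b1 = blur1.astype(np.float64)
    b2 = blur2.astype(np.float64)
    diff = (b2 - b1) * w1 + b2 * w2
    diff = np.clip(diff, 0.0, float(m))       # diff = min(max(diff, 0), m)
    region = (b1 < b2) & (mask1 == 1) & (mask2 == 1)
    out = original.astype(np.float64)
    out[region] += diff[region]               # superimpose the brightening
    return np.clip(out, 0, 255).astype(original.dtype)  # assumes 8-bit data
```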
In the image processing procedure, the embodiments of the disclosure take into account the proportion of the face in the whole image, that is, the aspect ratio of the face in the whole image is calculated; the original face image is then blurred to different degrees according to this aspect ratio to obtain the first blur map and the second blur map; next, the target region to be brightened and the target pixel value for brightening it are determined in the original face image according to the two blur maps, and the target pixel value is superimposed on the portion of the target region belonging to the region to be processed according to the mask map of the original face image, completing the image processing. The mask map is used to distinguish the region to be processed from the non-processing region in the original face image. In summary, because the image processing method considers the size of the face in the whole image, it is suitable for brightening these regions at different image resolutions and different face sizes; that is, face retouching and brightening can be carried out more cleanly and effectively, which ensures the beautifying effect. In other words, the embodiments of the disclosure handle face retouching and brightening well for images of multiple resolutions and faces of multiple scales. In addition, the embodiments of the disclosure also consider the face structure information; by using it, the longitudinal mean filtering stays within the skin-color area without introducing the influence of non-skin-color pixels, and the retouching and brightening effect is better. The embodiments of the disclosure reasonably account for both the proportion of the portrait in the whole image and the face structure information, and can therefore adaptively handle face retouching and brightening at different resolutions and different face sizes.
Fig. 4 is a block diagram of an image processing apparatus according to an exemplary embodiment. Referring to fig. 4, the apparatus includes a first processing module 401, a first determining module 402, a second determining module 403, a second processing module 404, a third determining module 405, and a third processing module 406.
A first processing module 401 configured to perform acquiring an original face image, where the original face image includes a target face; detecting the key points of the face of the original face image to obtain a detection result of the key points of the face;
a first determining module 402, configured to perform keypoint expansion on the target face according to the face keypoint detection result, so as to obtain expanded keypoints; determining a face outer frame according to the expansion key points and acquiring a first size of the face outer frame, wherein the face outer frame is a minimum external frame comprising the expansion key points;
a second determining module 403 configured to perform determining an aspect ratio of the target face in the original face image according to the first size and a second size of the original face image, the aspect ratio including a longitudinal ratio, which is a ratio of a height in the first size to a height in the second size, and a lateral ratio, which is a ratio of a width in the first size to a width in the second size;
A second processing module 404, configured to perform image blurring processing on the original face image according to the aspect ratio to obtain a first blurred image and a second blurred image, where the image sharpness of the first blurred image is greater than that of the second blurred image;
a third determining module 405 configured to determine a target region in the original face image and a target pixel value that performs a brightening process on the target region according to the first blur map and the second blur map;
a third processing module 406 configured to perform acquiring a first mask map of the original face image, where the first mask map is used to distinguish a to-be-processed area from a non-processed area in the original face image; and according to the first mask diagram, overlapping the target pixel value on the part, belonging to the region to be processed, of the target region to obtain a target face image.
In the image processing procedure, the embodiments of the disclosure take into account the proportion of the face in the whole image, that is, the aspect ratio of the face in the whole image is calculated; the original face image is then blurred to different degrees according to this aspect ratio to obtain the first blur map and the second blur map; next, the target region to be brightened and the target pixel value for brightening it are determined in the original face image according to the two blur maps, and the target pixel value is superimposed on the portion of the target region belonging to the region to be processed according to the mask map of the original face image, completing the image processing. The mask map is used to distinguish the region to be processed from the non-processing region in the original face image. In summary, because the image processing method considers the size of the face in the whole image, it is suitable for brightening these regions at different image resolutions and different face sizes; that is, face retouching and brightening can be carried out more cleanly and effectively, which ensures the beautifying effect.
In some embodiments, the face key point detection result includes in-plane key points, eyebrow key points, and cheek key points, the in-plane key points being used to indicate a center of the target face;
the first determining module includes:
a first determining unit configured to perform, for any one eyebrow key point, taking the in-plane key point as a starting point, connecting the in-plane key point and the eyebrow key point, and determining a point on the extension line of the line connecting them as a first type of expansion key point;
a first acquisition unit configured to perform acquisition of coordinates of the first-type expansion key points according to coordinates of the key points in the face, a first distance between the key points in the face and the eyebrow key points;
a second determining unit configured to perform, for any one cheek key point, taking the in-plane key point as a starting point, connecting the in-plane key point and the cheek key point, and determining a point on the extension line of the line connecting them as a second type of expansion key point;
a second acquisition unit configured to perform acquisition of coordinates of the second-type expansion key points according to coordinates of the key points in the face, second distances between the key points in the face and the cheek key points;
A third determining unit configured to perform, as the face outline, a minimum circumscribed frame including the first type expansion key points and the second type expansion key points;
and a third obtaining unit configured to determine a first size of the face frame according to the coordinates of the first type of expansion key points and the coordinates of the second type of expansion key points.
In some embodiments, the second processing module comprises:
the first processing unit is configured to perform downsampling processing on the original face image to obtain a first sampling image and a second sampling image; wherein the first sample pattern and the second sample pattern are the same size;
a second processing unit configured to perform determining a first step size according to the first coefficient, the specified constant and the aspect ratio; performing first average filtering on the first sampling graph according to the first step length to obtain the first fuzzy graph; the appointed constant is determined according to face structure information, and the first step length comprises a first transverse step length and a first longitudinal step length;
a third processing unit configured to perform determining a second step size according to a second coefficient, the specified constant, and the aspect ratio; performing second average filtering on the second sampling graph according to the second step length to obtain a second fuzzy graph; the second coefficient is larger than the first coefficient, the second step size comprises a second transverse step size and a second longitudinal step size, the second transverse step size is larger than the first transverse step size, and the second longitudinal step size is larger than the first longitudinal step size.
In some embodiments, the second processing unit is configured to perform: determining the first transverse step size according to the first coefficient and the transverse proportion in the aspect ratio; and determining the first longitudinal step size according to the first coefficient, the specified constant and the longitudinal proportion in the aspect ratio;
responding to the value of the current position point in the first mask map as a first numerical value, and carrying out average filtering on the corresponding pixel position in the first sampling map according to the first step length; wherein the first value is used for indicating the region to be treated;
and repeatedly executing the steps until all the position points in the first mask map are traversed, and obtaining the first fuzzy map.
In some embodiments, the third processing unit is configured to perform: determining the second lateral step size according to the second coefficient and the lateral proportion in the aspect ratio; and determining the second longitudinal step size according to the second coefficient, the specified constant and the longitudinal proportion in the aspect ratio;
responding to the value of the current position point in the first mask map as a first numerical value, and carrying out average filtering on the corresponding pixel position in the second sampling map according to the second step length; wherein the first value is used for indicating the region to be treated;
And repeatedly executing the steps until all the position points in the first mask map are traversed, and obtaining the second fuzzy map.
In some embodiments, the third determination module is configured to perform: and determining the target area and the target pixel value in the original face image according to the pixel difference value between the first fuzzy graph and the second fuzzy graph.
In some embodiments, the third determination module is configured to perform: for any pixel point in the original face image, determining that the pixel point belongs to the target area in response to the fact that a first pixel value of the pixel point in the first fuzzy graph is smaller than a second pixel value of the pixel point in the second fuzzy graph;
the target pixel value corresponding to the pixel point is determined by the following formula:
diff = (B_2 - B_1) * w_1 + B_2 * w_2
B_1 refers to the first pixel value and B_2 to the second pixel value; w_1 refers to a first weight and w_2 to a second weight. The first weight is used to control the influence of the pixel difference value on the brightening degree of the pixel point, the pixel difference value being the difference between the second pixel value and the first pixel value; the second weight is used to control the influence of the second pixel value on the brightening degree of the pixel point; diff refers to the target pixel value corresponding to the pixel point, and the value of diff is not smaller than 0.
In some embodiments, the third determination module is further configured to perform: and in response to the pixel value to be lightened of the pixel point being larger than the upper limit of the degree of lightening, updating the target pixel value corresponding to the pixel point to be the upper limit of the degree of lightening.
In some embodiments, the apparatus further comprises:
an acquisition module configured to perform acquisition of a second mask map of the original face image, the second mask map being used for distinguishing a visible face region which is not blocked from a non-visible face region which is blocked in the original face image;
the third processing module is configured to perform: and according to the first mask map and the second mask map, overlapping the target pixel value to a part which belongs to the to-be-processed area and is not shielded on the target area, so as to obtain the target face image.
Any combination of the above-mentioned optional solutions may be adopted to form an optional embodiment of the present disclosure, which is not described herein in detail.
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method, and will not be elaborated here.
Fig. 5 shows a block diagram of an electronic device 500 provided by an exemplary embodiment of the present disclosure. In general, the device 500 includes: a processor 501 and a memory 502.
Processor 501 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 501 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 501 may also include a main processor and a coprocessor: the main processor is a processor for processing data in an awake state, also referred to as a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 501 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 501 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 502 may include one or more computer-readable storage media, which may be non-transitory. Memory 502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 502 is used to store at least one instruction for execution by processor 501 to implement the image processing methods provided by the method embodiments in the present disclosure.
In some embodiments, the device 500 may optionally further include: a peripheral interface 503 and at least one peripheral. The processor 501, the memory 502, and the peripheral interface 503 may be connected by buses or signal lines. Each peripheral may be connected to the peripheral interface 503 by a bus, a signal line, or a circuit board. Specifically, the peripheral includes: a power supply 504.
Peripheral interface 503 may be used to connect at least one Input/Output (I/O) related peripheral to processor 501 and memory 502. In some embodiments, processor 501, memory 502, and peripheral interface 503 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 501, memory 502, and peripheral interface 503 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The power supply 504 is used to power the various components in the device 500. The power supply 504 may use alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 504 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery: the former is charged through a wired line, the latter through a wireless coil. The rechargeable battery may also be used to support fast-charge technology.
Those skilled in the art will appreciate that the structure shown in fig. 5 is not limiting of the apparatus 500 and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
In an exemplary embodiment, a computer-readable storage medium is also provided, such as a memory, comprising instructions executable by a processor of the electronic device 500 to perform the above-described image processing method. Alternatively, the storage medium may be a non-transitory computer-readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, the instructions in which, when executed by a processor of the electronic device 500, enable the electronic device 500 to perform the image processing method as in the method embodiments described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (18)

1. An image processing method, comprising:
acquiring an original face image, wherein the original face image comprises a target face; detecting the key points of the face of the original face image to obtain a detection result of the key points of the face;
According to the face key point detection result, performing key point expansion on the target face to obtain expanded key points; determining a face outer frame according to the expansion key points and acquiring a first size of the face outer frame, wherein the face outer frame is a minimum external frame comprising the expansion key points;
determining an aspect ratio of the target face in the original face image according to the first size and a second size of the original face image, wherein the aspect ratio comprises a longitudinal ratio and a transverse ratio, the longitudinal ratio is a ratio of a height in the first size to a height in the second size, and the transverse ratio is a ratio of a width in the first size to a width in the second size;
according to the aspect ratio, carrying out image blurring processing on the original face image to different degrees to obtain a first blurring map and a second blurring map, wherein the image definition of the first blurring map is larger than that of the second blurring map;
determining a target area and a target pixel value for carrying out brightening treatment on the target area in the original face image according to the pixel difference value between the first fuzzy graph and the second fuzzy graph;
Acquiring a first mask image of the original face image, wherein the first mask image is used for distinguishing a region to be processed and a non-processing region in the original face image; and according to the first mask diagram, overlapping the target pixel value on the part, belonging to the region to be processed, of the target region to obtain a target face image.
2. The image processing method according to claim 1, wherein the face key point detection result includes an in-plane key point, an eyebrow key point, and a cheek key point, the in-plane key point being used to indicate a center of the target face;
performing key point expansion on the target face according to the face key point detection result to obtain expanded key points; determining the face outer frame according to the expansion key points and obtaining the first size of the face outer frame comprises the following steps:
for any one eyebrow key point, taking the in-plane key point as a starting point, connecting the in-plane key point with the eyebrow key point, and determining a point on a connecting line extension line of the in-plane key point and the eyebrow key point as a first type of expansion key point; acquiring coordinates of the first type of expansion key points according to the coordinates of the key points in the surface and the first distance between the key points in the surface and the eyebrow key points;
For any cheek key point, taking the in-plane key point as a starting point, connecting the in-plane key point and the cheek key point, and determining a point on a connecting line extension line of the in-plane key point and the cheek key point as a second type of expansion key point; acquiring coordinates of the second type of expansion key points according to the coordinates of the key points in the surface and the second distance between the key points in the surface and the cheek key points;
and taking the minimum circumscribed frame comprising the first type of expansion key points and the second type of expansion key points as the face outer frame, and determining the first size of the face outer frame according to the coordinates of the first type of expansion key points and the coordinates of the second type of expansion key points.
3. The image processing method according to claim 1, wherein the performing image blurring processes of different degrees on the original face image according to the aspect ratio to obtain a first blur map and a second blur map includes:
performing downsampling processing on the original face image to obtain a first sampling image and a second sampling image; wherein the first sample pattern and the second sample pattern are the same size;
Determining a first step size according to the first coefficient, the designated constant and the aspect ratio; performing first average filtering on the first sampling graph according to the first step length to obtain the first fuzzy graph; the appointed constant is determined according to face structure information, and the first step length comprises a first transverse step length and a first longitudinal step length;
determining a second step size according to a second coefficient, the specified constant and the aspect ratio; performing second average filtering on the second sampling graph according to the second step length to obtain a second fuzzy graph; the second coefficient is larger than the first coefficient, the second step size comprises a second transverse step size and a second longitudinal step size, the second transverse step size is larger than the first transverse step size, and the second longitudinal step size is larger than the first longitudinal step size.
4. The image processing method according to claim 3, wherein the determining a first step size is based on a first coefficient, a specified constant, and the aspect ratio; performing first mean filtering on the first sampling graph according to the first step length to obtain the first fuzzy graph, including:
determining the first transverse step size according to the first coefficient and the transverse proportion in the aspect ratio; and determining the first longitudinal step size according to the first coefficient, the specified constant and the longitudinal proportion in the aspect ratio;
Responding to the value of the current position point in the first mask map as a first numerical value, and carrying out average filtering on the corresponding pixel position in the first sampling map according to the first step length; wherein the first value is used for indicating the region to be treated;
and repeatedly executing the steps until all the position points in the first mask map are traversed, and obtaining the first fuzzy map.
5. The image processing method according to claim 3, wherein the determining a second step size is based on a second coefficient, the specified constant, and the aspect ratio; and performing a second mean filtering on the second sampling graph according to the second step length to obtain the second fuzzy graph, including:
determining the second lateral step size according to the second coefficient and the lateral proportion in the aspect ratio; and determining the second longitudinal step size according to the second coefficient, the specified constant and the longitudinal proportion in the aspect ratio;
responding to the value of the current position point in the first mask map as a first numerical value, and carrying out average filtering on the corresponding pixel position in the second sampling map according to the second step length; wherein the first value is used for indicating the region to be treated;
And repeatedly executing the steps until all the position points in the first mask map are traversed, and obtaining the second fuzzy map.
6. The image processing method according to claim 1, wherein the determining a target area and a target pixel value for the target area to be subjected to the brightening process in the original face image based on a pixel difference value between the first blur map and the second blur map includes:
for any pixel point in the original face image, determining that the pixel point belongs to the target area in response to the fact that a first pixel value of the pixel point in the first fuzzy graph is smaller than a second pixel value of the pixel point in the second fuzzy graph;
the target pixel value corresponding to the pixel point is determined by the following formula:
diff = (B_2 - B_1) * w_1 + B_2 * w_2
B_1 refers to the first pixel value and B_2 to the second pixel value; w_1 refers to a first weight and w_2 to a second weight. The first weight is used to control the influence of the pixel difference value on the brightening degree of the pixel point, the pixel difference value being the difference between the second pixel value and the first pixel value; the second weight is used to control the influence of the second pixel value on the brightening degree of the pixel point; diff refers to the target pixel value corresponding to the pixel point, and the value of diff is not smaller than 0.
7. The image processing method according to claim 6, characterized in that the method further comprises:
and in response to the pixel value to be lightened of the pixel point being larger than the upper limit of the degree of lightening, updating the target pixel value corresponding to the pixel point to be the upper limit of the degree of lightening.
8. The image processing method according to claim 1, characterized in that the method further comprises:
acquiring a second mask image of the original face image, wherein the second mask image is used for distinguishing a visible face area which is not shielded from a non-visible face area which is shielded in the original face image;
and according to the first mask diagram, overlapping the target pixel value to a portion of the target region belonging to the region to be processed to obtain a target face image, including:
and according to the first mask map and the second mask map, overlapping the target pixel value to a part which belongs to the to-be-processed area and is not shielded on the target area, so as to obtain the target face image.
9. An image processing apparatus, comprising:
the first processing module is configured to acquire an original face image, wherein the original face image comprises a target face; detecting the key points of the face of the original face image to obtain a detection result of the key points of the face;
The first determining module is configured to execute the key point expansion of the target face according to the face key point detection result to obtain expanded key points; determining a face outer frame according to the expansion key points and acquiring a first size of the face outer frame, wherein the face outer frame is a minimum external frame comprising the expansion key points;
a second determining module configured to perform determining an aspect ratio of the target face in the original face image from the first size and a second size of the original face image, the aspect ratio including a longitudinal ratio that is a ratio of a height in the first size to a height in the second size and a lateral ratio that is a ratio of a width in the first size to a width in the second size;
the second processing module is configured to execute image blurring processing of different degrees on the original face image according to the aspect ratio to obtain a first blurring map and a second blurring map, and the image definition of the first blurring map is larger than that of the second blurring map;
a third determining module configured to perform determining a target region in the original face image and a target pixel value that performs a brightening process on the target region according to a pixel difference value between the first blur map and the second blur map;
A third processing module configured to perform acquiring a first mask map of the original face image, the first mask map being used for distinguishing a region to be processed from a non-processed region in the original face image; and according to the first mask diagram, overlapping the target pixel value on the part, belonging to the region to be processed, of the target region to obtain a target face image.
10. The image processing apparatus according to claim 9, wherein the face key point detection result includes an in-plane key point, an eyebrow key point, and a cheek key point, the in-plane key point being used to indicate a center of the target face;
the first determining module includes:
a first determining unit configured to perform, for any one of the eyebrow keypoints, determining, with the in-plane keypoint as a starting point, a point on a line extension of the in-plane keypoint and the eyebrow keypoint, as a first type of expansion keypoint, connecting the in-plane keypoint and the eyebrow keypoint;
a first acquisition unit configured to perform acquisition of coordinates of the first-type expansion key points according to coordinates of the key points in the face, a first distance between the key points in the face and the eyebrow key points;
A second determining unit configured to perform, for any one cheek key point, taking the in-plane key point as a starting point, connecting the in-plane key point and the cheek key point, and determining a point on the extension line of the line connecting them as a second type of expansion key point;
a second acquisition unit configured to perform acquisition of coordinates of the second-type expansion key points according to coordinates of the key points in the face, second distances between the key points in the face and the cheek key points;
a third determining unit configured to perform, as the face outline, a minimum circumscribed frame including the first type expansion key points and the second type expansion key points;
and a third obtaining unit configured to determine a first size of the face frame according to the coordinates of the first type of expansion key points and the coordinates of the second type of expansion key points.
11. The image processing apparatus of claim 9, wherein the second processing module comprises:
the first processing unit is configured to perform downsampling processing on the original face image to obtain a first sampling image and a second sampling image; wherein the first sample pattern and the second sample pattern are the same size;
A second processing unit configured to perform determining a first step size according to the first coefficient, the specified constant and the aspect ratio; performing first average filtering on the first sampling graph according to the first step length to obtain the first fuzzy graph; the appointed constant is determined according to face structure information, and the first step length comprises a first transverse step length and a first longitudinal step length;
a third processing unit configured to perform determining a second step size according to a second coefficient, the specified constant, and the aspect ratio; performing second average filtering on the second sampling graph according to the second step length to obtain a second fuzzy graph; the second coefficient is larger than the first coefficient, the second step size comprises a second transverse step size and a second longitudinal step size, the second transverse step size is larger than the first transverse step size, and the second longitudinal step size is larger than the first longitudinal step size.
12. The image processing apparatus according to claim 11, wherein the second processing unit is configured to perform: determining the first transverse step size according to the first coefficient and the transverse proportion in the aspect ratio; and determining the first longitudinal step size according to the first coefficient, the specified constant and the longitudinal proportion in the aspect ratio;
Responding to the value of the current position point in the first mask map as a first numerical value, and carrying out average filtering on the corresponding pixel position in the first sampling map according to the first step length; wherein the first value is used for indicating the region to be treated;
and repeatedly executing the steps until all the position points in the first mask map are traversed, and obtaining the first fuzzy map.
13. The image processing apparatus according to claim 11, wherein the third processing unit is configured to perform: determining the second lateral step size according to the second coefficient and the lateral proportion in the aspect ratio; and determining the second longitudinal step size according to the second coefficient, the specified constant and the longitudinal proportion in the aspect ratio;
responding to the value of the current position point in the first mask map as a first numerical value, and carrying out average filtering on the corresponding pixel position in the second sampling map according to the second step length; wherein the first value is used for indicating the region to be treated;
and repeatedly executing the steps until all the position points in the first mask map are traversed, and obtaining the second fuzzy map.
14. The image processing apparatus according to claim 9, wherein the third determination module is configured to perform: for any pixel point in the original face image, determining that the pixel point belongs to the target area in response to the fact that a first pixel value of the pixel point in the first fuzzy graph is smaller than a second pixel value of the pixel point in the second fuzzy graph;
the target pixel value corresponding to the pixel point is determined by the following formula:
diff = (B_2 - B_1) * w_1 + B_2 * w_2
B_1 refers to the first pixel value and B_2 to the second pixel value; w_1 refers to a first weight and w_2 to a second weight. The first weight is used to control the influence of the pixel difference value on the brightening degree of the pixel point, the pixel difference value being the difference between the second pixel value and the first pixel value; the second weight is used to control the influence of the second pixel value on the brightening degree of the pixel point; diff refers to the target pixel value corresponding to the pixel point, and the value of diff is not smaller than 0.
15. The image processing apparatus of claim 14, wherein the third determination module is further configured to perform: and in response to the pixel value to be lightened of the pixel point being larger than the upper limit of the degree of lightening, updating the target pixel value corresponding to the pixel point to be the upper limit of the degree of lightening.
16. The image processing apparatus according to claim 9, wherein the apparatus further comprises:
an acquisition module configured to perform acquisition of a second mask map of the original face image, the second mask map being used for distinguishing a visible face region which is not blocked from a non-visible face region which is blocked in the original face image;
the third processing module is configured to perform: and according to the first mask map and the second mask map, overlapping the target pixel value to a part which belongs to the to-be-processed area and is not shielded on the target area, so as to obtain the target face image.
17. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the image processing method of any one of claims 1 to 8.
18. A computer readable storage medium, characterized in that instructions in the computer readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method of any one of claims 1 to 8.
CN202110603394.XA 2021-05-31 2021-05-31 Image processing method, device, electronic equipment and storage medium Active CN113379623B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110603394.XA CN113379623B (en) 2021-05-31 2021-05-31 Image processing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110603394.XA CN113379623B (en) 2021-05-31 2021-05-31 Image processing method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113379623A CN113379623A (en) 2021-09-10
CN113379623B true CN113379623B (en) 2023-12-19

Family

ID=77575038

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110603394.XA Active CN113379623B (en) 2021-05-31 2021-05-31 Image processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113379623B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113657357B (en) * 2021-10-20 2022-02-25 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009999A (en) * 2017-11-30 2018-05-08 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and electronic equipment
WO2018137623A1 (en) * 2017-01-24 2018-08-02 深圳市商汤科技有限公司 Image processing method and apparatus, and electronic device
CN110378846A (en) * 2019-06-28 2019-10-25 北京字节跳动网络技术有限公司 A kind of method, apparatus, medium and the electronic equipment of processing image mill skin
CN110929651A (en) * 2019-11-25 2020-03-27 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN112102198A (en) * 2020-09-17 2020-12-18 广州虎牙科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112348736A (en) * 2020-10-12 2021-02-09 武汉斗鱼鱼乐网络科技有限公司 Method, storage medium, device and system for removing black eye

Also Published As

Publication number Publication date
CN113379623A (en) 2021-09-10

Similar Documents

Publication Publication Date Title
US11250241B2 (en) Face image processing methods and apparatuses, and electronic devices
CN108229279B (en) Face image processing method and device and electronic equipment
EP3338217B1 (en) Feature detection and masking in images based on color distributions
Arbel et al. Shadow removal using intensity surfaces and texture anchor points
CN109952594B (en) Image processing method, device, terminal and storage medium
CN108012081B (en) Intelligent beautifying method, device, terminal and computer readable storage medium
KR101446975B1 (en) Automatic face and skin beautification using face detection
CN113205568B (en) Image processing method, device, electronic equipment and storage medium
CN108830892B (en) Face image processing method and device, electronic equipment and computer readable storage medium
CN107665482B (en) Video data real-time processing method and device for realizing double exposure and computing equipment
CN108734126B (en) Beautifying method, beautifying device and terminal equipment
CN112258440B (en) Image processing method, device, electronic equipment and storage medium
CN111563435A (en) Sleep state detection method and device for user
RU2697627C1 (en) Method of correcting illumination of an object on an image in a sequence of images and a user's computing device which implements said method
CN107705279B (en) Image data real-time processing method and device for realizing double exposure and computing equipment
CN113379623B (en) Image processing method, device, electronic equipment and storage medium
CN114581979A (en) Image processing method and device
CN116612263B (en) Method and device for sensing consistency dynamic fitting of latent vision synthesis
CN114862729A (en) Image processing method, image processing device, computer equipment and storage medium
CN111652792B (en) Local processing method, live broadcasting method, device, equipment and storage medium for image
CN107316281B (en) Image processing method and device and terminal equipment
WO2022258013A1 (en) Image processing method and apparatus, electronic device and readable storage medium
CN114742725A (en) Image processing method, image processing device, electronic equipment and storage medium
CN110675413A (en) Three-dimensional face model construction method and device, computer equipment and storage medium
CN114187202A (en) Image processing method, apparatus, device, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant