CN112766204A - Image processing method, image processing apparatus, and computer-readable storage medium - Google Patents
- Publication number
- CN112766204A (application CN202110114651.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- area
- skin
- mask
- buffing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V40/161: Human faces; Detection; Localisation; Normalisation
- G06V40/168: Human faces; Feature extraction; Face representation
- G06F18/23: Pattern recognition; Clustering techniques
- G06T5/20: Image enhancement or restoration using local operators
- G06T5/70: Denoising; Smoothing
- G06T5/73: Deblurring; Sharpening
- G06T7/136: Segmentation; Edge detection involving thresholding
- G06T7/90: Determination of colour characteristics
- G06T2207/20028: Bilateral filtering
- G06T2207/20081: Training; Learning
- G06T2207/20104: Interactive definition of region of interest [ROI]
- G06T2207/30201: Face
Abstract
Embodiments of the present disclosure provide an image processing method, an image processing apparatus, and a computer-readable storage medium. The image processing method includes: acquiring an input image containing a human face and determining a skin area; detecting the face in the input image to obtain a face area, extracting key points of the face in the face area, and determining a reserved area and a non-reserved area of the face area according to the key points; and fusing the image of the reserved area, a first skin-smoothing image of the non-reserved area, and a second skin-smoothing image of the skin area to obtain an output image, where the smoothing strength of the second skin-smoothing image is lower than that of the first skin-smoothing image. By distinguishing the reserved area and the non-reserved area of the face area, different regions of the face are processed differently, so that a natural beautification effect can be obtained.
Description
Technical Field
The embodiment of the disclosure relates to the technical field of image processing, in particular to an image processing method, an image processing device and a computer-readable storage medium.
Background
To obtain a satisfactory photograph, people often rely on photo-retouching software. As such software has grown in popularity, expectations for its beautification features have risen accordingly: the result should look both true to life and better than life.
However, current beautification effects are often unnatural and prone to distortion.
Disclosure of Invention
The disclosed embodiments provide an image processing method, apparatus, and computer-readable storage medium that achieve a more natural beautification effect.
In one aspect, an embodiment of the present disclosure provides an image processing method, including:
acquiring an input image containing a human face and determining a skin area; detecting the face in the input image to obtain a face area, extracting key points of the face in the face area, and determining a reserved area and a non-reserved area of the face area according to the key points;
and fusing the image of the reserved area, a first skin-smoothing image of the non-reserved area, and a second skin-smoothing image of the skin area to obtain an output image, where the smoothing strength of the second skin-smoothing image is lower than that of the first skin-smoothing image.
In another aspect, an embodiment of the present disclosure further provides a display device, which includes a processor and a memory storing a computer program executable on the processor, where the processor implements the steps of the above image processing method when executing the program.
In still another aspect, an embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program executable on a processor, where the computer program implements the above image processing method when executed by the processor.
According to the method provided by the embodiments of the present disclosure, different regions of the face are processed differently by distinguishing the reserved area and the non-reserved area of the face area as well as the skin area: the texture of the reserved area is kept while the non-reserved area and the skin area are smoothed with different strengths, so that a natural beautification effect can be obtained.
Of course, not all advantages described above need to be achieved at the same time to practice any one product or method of the present disclosure. Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the disclosure. The objectives and other advantages of the disclosed embodiments may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosed embodiments and are incorporated in and constitute a part of this specification. They illustrate embodiments of the disclosure and, together with the description, serve to explain the principles of the disclosure without limiting it. The shapes and sizes of the elements in the drawings do not reflect true proportions and are intended only to illustrate the present disclosure.
FIG. 1 is a flowchart of an image processing method according to an embodiment of the disclosure;
FIG. 2 is a flow chart of another image processing method according to an embodiment of the disclosure;
FIG. 3a is an example of an original input image according to an embodiment of the present disclosure;
FIG. 3b is a mask image obtained after skin detection of the original input image in FIG. 3a;
FIG. 3c is another example of an original input image according to an embodiment of the present disclosure;
FIG. 3d is a mask image obtained after skin detection of the original input image in FIG. 3c;
FIG. 4a is a schematic diagram of a face key point and a face ROI in an embodiment of the disclosure;
FIG. 4b is a diagram of a reserved area mask according to an embodiment of the present disclosure;
FIG. 4c is a diagram of a mask of an unreserved region according to an embodiment of the present disclosure;
FIG. 5a is an example of an original input image according to an embodiment of the present disclosure;
FIG. 5b is a grayscale image of the original input image of FIG. 5a;
FIG. 6a is an initial feature point diagram of an original input image according to an embodiment of the disclosure;
FIG. 6b is a feature point diagram after feature point enhancement according to an embodiment of the present disclosure;
FIGS. 7a and 7b are diagrams comparing an original input image with a curve-enhanced image according to an embodiment of the present disclosure;
FIG. 8a is a diagram showing the effect of high-contrast skin smoothing according to an embodiment of the present disclosure;
FIG. 8b is a diagram showing the effect of bilateral-filtering skin smoothing according to an embodiment of the present disclosure;
FIG. 8c is a diagram showing the effect of edge-preserving-filtering skin smoothing according to an embodiment of the present disclosure;
FIG. 9 is a schematic sharpening flow chart according to an embodiment of the disclosure;
FIG. 10a is a high-contrast skin-smoothing image before sharpening according to an embodiment of the present disclosure;
FIG. 10b is a high-contrast skin-smoothing image after sharpening according to an embodiment of the present disclosure;
FIG. 11 is a color wheel;
FIGS. 12a and 12b are diagrams comparing an original input image with the image after color adjustment according to an embodiment of the present disclosure;
FIGS. 13a and 13b are diagrams comparing an original input image with the processed image according to an embodiment of the present disclosure;
fig. 14 is a schematic structural diagram of an image processing apparatus according to an embodiment of the disclosure.
Detailed Description
The present disclosure describes embodiments, but the description is illustrative rather than limiting and it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible within the scope of the embodiments described in the present disclosure. Although many possible combinations of features are shown in the drawings and discussed in the embodiments, many other combinations of the disclosed features are possible. Any feature or element of any embodiment may be used in combination with or instead of any other feature or element in any other embodiment, unless expressly limited otherwise.
The present disclosure includes and contemplates combinations of features and elements known to those of ordinary skill in the art. The embodiments, features and elements of the present disclosure that have been disclosed may also be combined with any conventional features or elements to form unique inventive aspects as defined by the claims. Any feature or element of any embodiment may also be combined with features or elements from other inventive aspects to form yet another unique inventive aspect, as defined by the claims. Thus, it should be understood that any features shown and/or discussed in this disclosure may be implemented alone or in any suitable combination. Accordingly, the embodiments are not limited except as by the appended claims and their equivalents. Furthermore, various modifications and changes may be made within the scope of the appended claims.
Further, in describing representative embodiments, the specification may have presented the method and/or process as a particular sequence of steps. However, to the extent that the method or process does not rely on the particular order of steps set forth herein, the method or process should not be limited to the particular sequence of steps described. Other orders of steps are possible as will be understood by those of ordinary skill in the art. Therefore, the particular order of the steps set forth in the specification should not be construed as limitations on the claims. Further, the claims directed to the method and/or process should not be limited to the performance of their steps in the order written, and one skilled in the art can readily appreciate that the sequences may be varied and still remain within the spirit and scope of the embodiments of the present disclosure.
Unless defined otherwise, technical or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The use of "first," "second," and similar terms in this disclosure is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. In the present disclosure, "a plurality" may mean two or more numbers. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items.
To keep the following description of the embodiments of the present disclosure clear and concise, detailed descriptions of some known functions and components have been omitted. The drawings of the embodiments relate only to the structures involved in the embodiments; other structures may follow conventional designs.
A common beautification approach operates on the entire image without distinguishing regions, which can degrade the contrast of, and distort, the whole image. Even when the face area and the background area are separated, handling the boundary between them remains a problem. Moreover, a face itself contains distinct regions: if areas such as the eyes, eyebrows, and lips are smoothed, the sparkle of the eyes and the texture of the eyebrows and lips are lost, causing unnatural distortion. Conversely, if the eye-bag area and the nasolabial-fold area are not treated more heavily, the smoothing is insufficient and the beautification effect is not achieved.
An embodiment of the present disclosure provides an image processing method, as shown in FIG. 1, including the following steps:
Step 11, acquiring an input image containing a human face and determining a skin area; detecting the face to obtain a face area, extracting key points of the face, and determining a reserved area and a non-reserved area of the face area according to the key points. The input image containing a human face may be, for example, an image extracted from a video frame.
For example, the reserved area includes one or more of the following: the eye area, the eyebrow area, the lip area; the non-reserved area includes one or more of the following: the nasolabial-fold area, the eye-bag area.
In an exemplary embodiment, acquiring an input image containing a human face and determining a skin area may include: performing skin detection on the input image and, combined with the face area, taking the skin inside the face area as the skin area. Combining skin detection with the face area locates the facial skin accurately, avoids the background being mistaken for skin (a failure that can occur with skin detection alone), and prevents inconsistency between the beautified face and the background.
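As a minimal sketch of this intersection (the names `skin_detection_mask` and `face_roi_mask` are hypothetical; a real pipeline would obtain them from a color-space skin detector and a face detector), the skin-area mask can be computed in NumPy as:

```python
import numpy as np

def skin_region_mask(skin_detection_mask: np.ndarray,
                     face_roi_mask: np.ndarray) -> np.ndarray:
    """Intersect a whole-image skin-detection mask with the face ROI,
    keeping only skin pixels that lie inside the detected face box."""
    return skin_detection_mask & face_roi_mask

# Toy 4x4 example: the skin detector also fires on background (top-left);
# the face ROI restricts the result to the face box (bottom-right 2x2).
skin = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 1, 1],
                 [0, 0, 1, 1]], dtype=np.uint8)
face = np.zeros((4, 4), dtype=np.uint8)
face[2:, 2:] = 1
mask = skin_region_mask(skin, face)
```

Here the false positives in the background are suppressed because they fall outside the face ROI.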
Step 12, fusing the image of the reserved area, the first skin-smoothing image of the non-reserved area, and the second skin-smoothing image of the skin area to obtain an output image, where the smoothing strength of the second skin-smoothing image is lower than that of the first skin-smoothing image.
Skin smoothing is an image processing technique that removes blemishes such as spots or uneven color and renders the face finer and smoother. The smoothing strength reflects the degree of blemish removal and skin refinement: the stronger the smoothing, the fewer the blemishes and the finer the skin.
The embodiments of the present disclosure beautify the face through region-by-region refinement: by distinguishing the reserved area, the non-reserved area, and the skin area of the face region, different regions are processed differently; the texture of the reserved area is kept while the non-reserved area and the skin area are smoothed with different strengths, so that a natural beautification effect can be obtained.
In an exemplary embodiment, fusing the image of the reserved area, the first skin-smoothing image of the non-reserved area, and the second skin-smoothing image of the skin area to obtain an output image includes:
Step 121, performing a first smoothing process to obtain the first skin-smoothing image of the non-reserved area.
In an exemplary embodiment, the first smoothing process may be applied to the input image to obtain a first smoothing result map, and the first skin-smoothing image of the non-reserved area is obtained using the non-reserved-area mask: first skin-smoothing image = first smoothing result map × non-reserved-area mask.
In the embodiments of the present disclosure, a mask is a binary image (containing 0s and 1s). Multiplying a smoothing result map by a mask keeps the image in the region of interest and blocks out the regions that do not participate in processing. For example, for a pixel A whose value in the first smoothing result map is P1 and whose value in the non-reserved-area mask is 1, the value of pixel A in the resulting first skin-smoothing image is still P1; for a pixel B whose value in the first smoothing result map is P2 and whose mask value is 0, the value of pixel B after multiplication is 0. Thus, in step 121, multiplying the first smoothing result map by the non-reserved-area mask yields the first smoothing result of the non-reserved area, i.e., the first skin-smoothing image.
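The pixel-A/pixel-B behavior described above is plain element-wise multiplication. A two-pixel toy sketch (the values are illustrative, not the patent's data):

```python
import numpy as np

# A smoothed result map and a binary non-reserved-area mask (0/1).
smoothed = np.array([[200, 180],
                     [150, 120]], dtype=np.uint8)
mask = np.array([[1, 0],
                 [0, 1]], dtype=np.uint8)

# Element-wise multiplication keeps the smoothed value where mask == 1
# (pixel "A") and zeroes it where mask == 0 (pixel "B").
first_smoothing_image = smoothed * mask
```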
Step 122, performing a second smoothing process to obtain the second skin-smoothing image of the areas other than the non-reserved area.
In an exemplary embodiment, the second smoothing process may be applied to the input image to obtain a second smoothing result map, and the second skin-smoothing image of the areas other than the non-reserved area is obtained using the non-reserved-area mask: second skin-smoothing image = second smoothing result map × (1 − non-reserved-area mask). The strength of the second smoothing process is lower than that of the first smoothing process.
Step 123, fusing the first skin-smoothing image and the second skin-smoothing image to obtain a third skin-smoothing image;
that is, third skin-smoothing image = first skin-smoothing image + second skin-smoothing image.
Step 124, obtaining the output image from the third skin-smoothing image according to the skin-area mask and the reserved-area mask.
For example, one of the following ways may be used:
based on the third skin-smoothing image, obtaining a first result image as the output image according to the skin-area mask and the reserved-area mask: first result image = third skin-smoothing image × (skin-area mask − reserved-area mask); or
sharpening the third skin-smoothing image to obtain a sharpened third skin-smoothing image, and obtaining the first result image as the output image from it: first result image = sharpened third skin-smoothing image × (skin-area mask − reserved-area mask). Sharpening compensates the contours of an image and enhances its edges and the parts with abrupt gray-level changes, making the image clearer; a crisper beautification effect is obtained through sharpening.
In the output image obtained this way, the skin area within the face has a light smoothing effect, the non-reserved area has a heavy smoothing effect, and the reserved area and the background area have no smoothing effect.
In an exemplary embodiment, after the first result image is obtained, the method may further include: performing color adjustment on the input image to obtain a color-adjusted image, and obtaining a second result image according to the skin-area mask and the reserved-area mask: second result image = color-adjusted image × (1 − skin-area mask + reserved-area mask).
Obtaining the output image then includes fusing the first result image and the second result image: output image = first result image + second result image.
The color adjustment of the input image may be, for example, reducing the cyan of red-region pixels, a red-region pixel being one whose red channel has the largest component value.
In this embodiment, color adjustment (for example, reducing the cyan in the input image) increases the red in the image; for the face, it deepens the redness of the lips, producing a better beautification effect.
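Since cyan is the complement of red, one simple reading of "reducing cyan" is boosting the red channel on red-dominant pixels. This is an assumption about the adjustment, not the patent's stated formula; a minimal sketch:

```python
import numpy as np

def reduce_cyan(img_rgb: np.ndarray, amount: int = 10) -> np.ndarray:
    """Boost the red channel (cyan's complement) only on pixels whose
    red component is the largest of the three channels."""
    out = img_rgb.astype(np.int16)
    r, g, b = out[..., 0], out[..., 1], out[..., 2]
    red_dominant = (r >= g) & (r >= b)
    r[red_dominant] = np.minimum(r[red_dominant] + amount, 255)
    return out.astype(np.uint8)

img = np.array([[[200, 80, 80],     # reddish pixel -> boosted
                 [80, 200, 80]]],   # greenish pixel -> unchanged
               dtype=np.uint8)
adjusted = reduce_cyan(img)
```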
In an exemplary embodiment, the first smoothing process is a high-contrast smoothing process.
In an exemplary embodiment, the second smoothing process is a high-contrast-retention smoothing process.
Alternatively, the first and second smoothing processes may use the same or different smoothing methods.
In an exemplary embodiment, the high-contrast-retention smoothing process includes:
acquiring a feature-point map of the abrupt gray-level changes of the input image;
performing feature-point enhancement on the feature-point map multiple times to obtain a high-contrast retention mask, the smoothing strength being controlled by the number of enhancement passes;
and computing the smoothed image from the high-contrast retention mask,
where: smoothed image = input image × high-contrast retention mask.
To obtain the feature-point map of abrupt gray-level changes, the input image may be smooth-filtered, and the smooth-filtered grayscale map subtracted from the grayscale map of the input image; the difference is the feature-point map of abrupt gray-level changes.
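The feature-point map described above (the original grayscale minus its smoothed version) is a plain high-pass residual. A minimal NumPy sketch, using a naive box filter as a stand-in for whatever smoothing filter an implementation might actually choose:

```python
import numpy as np

def box_blur(gray: np.ndarray, k: int = 3) -> np.ndarray:
    """Naive k x k mean filter with edge replication (a stand-in for
    any smoothing filter, e.g. a Gaussian)."""
    pad = k // 2
    padded = np.pad(gray.astype(np.float32), pad, mode="edge")
    out = np.zeros(gray.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    return out / (k * k)

def feature_point_map(gray: np.ndarray) -> np.ndarray:
    """High-pass residual: original minus smoothed.  Flat regions give
    values near 0; abrupt gray-level changes give large magnitudes."""
    return gray.astype(np.float32) - box_blur(gray)

# A flat image has a zero feature map; a bright spot produces a response.
flat = np.full((5, 5), 100, dtype=np.uint8)
spot = flat.copy()
spot[2, 2] = 255
```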
Feature-point enhancement increases the pixel contrast of an image: through one or more functional operations, the bright places in the image become brighter and the dark places darker.
The high-contrast-retention smoothing used in this example strikes an impressive balance between smoothing strength and detail fidelity.
In an exemplary embodiment, after the multiple feature-point enhancement passes, the method further includes: scaling the enhanced feature map to obtain the high-contrast retention mask, the smoothing strength being further controlled by the scaling factor.
In an exemplary embodiment, before computing the smoothed image from the high-contrast retention mask, the method further includes: brightening the input image to obtain a brightened image.
Computing the smoothed image from the high-contrast retention mask then includes computing it from the mask together with the brightened image:
smoothed image = input image × high-contrast retention mask + brightened image × (1 − high-contrast retention mask).
In this embodiment, the brightening process achieves a whitening effect. Brightening may use curve adjustment, or OpenCV (a computer vision library) may be used to adjust brightness and contrast.
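The blend above can be sketched directly. This assumes a continuous retention mask in [0, 1] and a precomputed brightened image; the values are toy data, not the patent's:

```python
import numpy as np

def blend_with_brightened(img: np.ndarray, bright: np.ndarray,
                          retain_mask: np.ndarray) -> np.ndarray:
    """smoothed = img * mask + bright * (1 - mask), mask in [0, 1].
    Where the mask is near 1 (strong detail) the original is kept;
    where it is near 0 (flat skin) the brightened image shows through."""
    m = retain_mask.astype(np.float32)
    out = img.astype(np.float32) * m + bright.astype(np.float32) * (1.0 - m)
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.full((2, 2), 100, dtype=np.uint8)
bright = np.full((2, 2), 160, dtype=np.uint8)   # e.g. after a brightness curve
mask = np.array([[1.0, 0.0], [0.5, 0.25]], dtype=np.float32)
result = blend_with_brightened(img, bright, mask)
```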
The embodiments of the present disclosure refine the input image region by region. Combining skin detection with the face ROI separates the face area from the background, solving the boundary problem between them and avoiding background distortion. Different facial regions are then handled separately: the reserved areas (such as the eyes, eyebrows, and lips) are extracted from the face key points and left unsmoothed, preserving the sparkle of the eyes and the texture of the eyebrows and lips, while the non-reserved areas (such as the eye-bag and nasolabial-fold areas) are smoothed heavily for a better beautification effect. Color adjustment can add rosiness to the lips, further enhancing the skin-beautification result. Compared with other beautification methods, the high-contrast-retention smoothing and region-by-region treatment used in the embodiments of the present disclosure perform impressively in both beautification quality and texture clarity of the output image.
The following application example illustrates an embodiment of the present disclosure; as shown in FIG. 2, the implementation steps include:
The face ROI may be obtained, for example, by face recognition, as the area indicated by the rectangular box in FIG. 4a.
Optionally, the intersection of the face region and the skin detection result may be taken as the skin mask.
The skin detection algorithm is explained below.
Skin color detection analyzes and computes the skin-color pixels of the human body. Approaches include color-space-based methods, spectral-feature-based methods, and skin-color-reflection-model-based methods; the mainstream approach first transforms the color space and then builds a skin-color model. Color spaces used in skin detection include RGB, YCrCb, HSV, and Lab, the RGB input being converted to the corresponding space during processing. Methods based on skin-color clustering or threshold segmentation are widely used, thresholding in the YCbCr, HSV, RGB, or LAB color space being the most common.
Taking the YCbCr color space as an example: YCbCr is a color model commonly used for skin color detection, where Y represents luminance, Cr the red-difference chroma component, and Cb the blue-difference chroma component. The apparent differences in human skin color are mainly differences in chromaticity, and the skin colors of different people cluster within a small chromaticity region. Binarizing the Cr component with the Otsu algorithm (Otsu's method, the maximum inter-class variance method) therefore yields an accurate skin-color clustering result.
The Otsu algorithm applies a clustering idea: the gray levels of an image are split into two classes by a threshold such that the variance between the two classes is as large as possible and the variance within each class is as small as possible; the threshold is found by a variance calculation over the candidate gray levels. The Otsu algorithm can thus automatically select the threshold for binarization. It is widely regarded as a near-optimal threshold-selection method for image segmentation, is simple to compute, and is insensitive to image brightness and contrast. Maximizing the inter-class variance is equivalent to minimizing the probability of misclassification.
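Otsu thresholding on the Cr channel can be sketched as follows. This is a minimal pure-NumPy version, assuming an 8-bit Cr channel already extracted from a YCbCr conversion and that skin occupies the higher Cr range; the helper name `otsu_threshold` is illustrative, not from the embodiment:

```python
import numpy as np

def otsu_threshold(channel: np.ndarray) -> int:
    """Return the gray level that maximizes inter-class variance."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * prob[:t]).sum() / w0
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2  # inter-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy Cr channel: left half background-like, right half skin-like values.
cr = np.array([[80, 82, 150, 152],
               [81, 83, 151, 149]], dtype=np.uint8)
t = otsu_threshold(cr)
skin_mask = (cr >= t).astype(np.uint8)  # assume skin has the higher Cr values
```

In practice the resulting binary mask would then be intersected with the face ROI, as described below, to obtain the skin mask within the face area.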
Fig. 3 shows a schematic diagram of skin detection masks: fig. 3a is original image 1, fig. 3b is the mask obtained by skin detection on original image 1, fig. 3c is original image 2, and fig. 3d is the mask obtained by skin detection on original image 2. The skin mask within the face area is subsequently obtained by combining the face area.
The accurate skin area mask can be obtained by taking the intersection of the face area and the area obtained by skin detection.
reserved regions include, for example, but are not limited to, one or more of the following: eyebrow area, eye area, lip area.
The unreserved region includes, for example, but is not limited to, one or more of the following regions: the nasolabial-fold area and the eye-bag area.
The required area can be obtained, for example, by connecting key points.
This embodiment uses Dlib, an open-source facial feature recognition library, for face detection. Dlib's detector is trained with machine learning on images annotated with face key points (eyebrows, eyes, nose, lips, and jaw), and the trained model is packaged as a recognition library. The latest Dlib-based model used here generates 81 feature points, numbered 0-80. FIG. 4a shows the face key points labeled by Dlib detection, where the box is the face ROI area and the skin inside it is the object to be beautified. By connecting face key points, the mask of the reserved area and the mask of the non-reserved area can be extracted at the same time. In this example, the reserved-area mask covers the eyes, lips, and eyebrows of the face, as shown in fig. 4b, and the non-reserved-area mask covers the nasolabial folds and eye bags, as shown in fig. 4 c.
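Building the reserved-area and non-reserved-area masks from detected key points can be sketched as follows. This is a simplified sketch: the embodiment connects key points into polygons, which is replaced here by bounding rectangles for brevity, and the point groups and coordinates are illustrative, not Dlib's actual numbering:

```python
import numpy as np

def rect_mask(shape, points):
    """Binary mask covering the bounding rectangle of one key-point group."""
    mask = np.zeros(shape, dtype=np.uint8)
    ys = [p[1] for p in points]  # points are (x, y) pairs
    xs = [p[0] for p in points]
    mask[min(ys):max(ys) + 1, min(xs):max(xs) + 1] = 1
    return mask

h, w = 10, 10
left_eye = [(2, 2), (4, 3)]   # illustrative key points only
lips = [(4, 7), (6, 8)]
eye_bag = [(2, 4), (4, 5)]

# Reserved mask = union of eye/lip (and eyebrow) groups;
# non-reserved mask = eye-bag (and nasolabial-fold) groups.
reserved = rect_mask((h, w), left_eye) | rect_mask((h, w), lips)
non_reserved = rect_mask((h, w), eye_bag)
```

With an actual landmark detector, filling the polygon traced through each key-point group (rather than its bounding box) gives the tighter masks shown in figs. 4b and 4c.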
Step 24, performing two high-contrast-retention dermabrasion passes with different parameters on the input image, including: performing a first high-contrast-retention dermabrasion pass on the input image to obtain a first high-contrast dermabrasion image (namely, a first dermabrasion image), and performing a second high-contrast-retention dermabrasion pass on the input image to obtain a second high-contrast dermabrasion image (namely, a second dermabrasion image), wherein the effect of the first pass is stronger than that of the second, i.e. its smoothing strength is higher; the first pass can be regarded as heavy dermabrasion and the second as light dermabrasion;
the high contrast retention peeling process used in this example is explained below.
Generally, a face may have acne marks and patches of dark or rough skin, and the purpose of beautification is to remove the acne marks as far as possible, whiten the dark skin, and refine the rough skin, finally achieving the beautifying effect. Therefore, the feature points needing processing on the face, such as acne marks, dark skin, and rough skin, must first be located, and these feature points are then processed to achieve beautification.
Careful observation shows that acne marks and dark skin differ in color from the surrounding normal skin. From the image's perspective, the gray value of an acne mark or dark patch is low while that of normal skin is high, so moving from normal skin into a feature point the gray value drops sharply from the high level. Similarly, locally rough skin produces abrupt changes in gray level, as shown in FIG. 5: fig. 5a is the original input image and fig. 5b is its gray-scale diagram, in which a gray-level jump is visible at the acne-mark positions.
To extract these feature points, the input image may first be smoothed to obtain a filtered image; for example, mean filtering, Gaussian filtering, bilateral filtering, or guided filtering may be used. The smoothing fills in the abrupt positions so that the feature points can subsequently be extracted by comparison. The present embodiment employs Gaussian filtering.
The filtered gray value is then subtracted from the original gray value at every pixel position. At a feature point, the filtered gray value is higher than the original gray value, so the difference is negative, while at normal skin positions the difference is non-negative.
For the convenience of subsequent processing, all results are shifted, for example, uniformly adding 0.5, that is:
feature map value = original gray value - filtered gray value + 0.5 (1)
Thus, the values at the feature-point positions are less than 0.5, while the values of normal skin are greater than 0.5. Fig. 6a shows the feature point diagram of the abrupt gray-level changes; the black points are the feature-point positions, i.e., the objects to be extracted.
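Equation (1) can be sketched as follows. A simple 3x3 box blur stands in for the Gaussian filter the embodiment uses, and gray values are assumed normalized to [0, 1]:

```python
import numpy as np

def box_blur3(img: np.ndarray) -> np.ndarray:
    """3x3 box blur with edge replication (stand-in for Gaussian filtering)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

gray = np.full((5, 5), 0.8)  # normal skin
gray[2, 2] = 0.2             # a dark feature point, e.g. an acne mark

# Equation (1): original - filtered + 0.5
feature_map = gray - box_blur3(gray) + 0.5
# At the feature point the value falls below 0.5;
# normal skin stays at or slightly above 0.5.
```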
The whole picture in fig. 6a appears a fairly uniform gray. To make the feature points stand out, they may be enhanced; the high-contrast Mask obtained after enhancement is shown in fig. 6 b.
Feature point enhancement strengthens the feature-point information so that the points to be extracted differ clearly from normal skin. For example, the feature point diagram may be put through the following highlight processing (the effect resembles a bright spotlight shone on the image, hence the name) 3-5 times: for values x1 less than 0.5, the enhanced value is x1' = 2 x x1 x x1; for values x2 greater than or equal to 0.5, the enhanced value is x2' = 1 - 2 x (1 - x2) x (1 - x2). This processing function is only an example; other functions may be used in other embodiments, and the disclosure is not limited in this respect. The dermabrasion strength can be controlled through the number of enhancement iterations: for example, heavy dermabrasion may use 5 highlight passes and light dermabrasion 3 passes; the more iterations, the stronger the smoothing effect.
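The highlight enhancement above can be sketched as a direct transcription of the piecewise function, with the iteration count controlling the strength as described:

```python
import numpy as np

def highlight_pass(x: np.ndarray) -> np.ndarray:
    """One highlight pass: values below 0.5 are pushed toward 0,
    values at or above 0.5 are pushed toward 1."""
    low = 2.0 * x * x                          # branch for x < 0.5
    high = 1.0 - 2.0 * (1.0 - x) * (1.0 - x)   # branch for x >= 0.5
    return np.where(x < 0.5, low, high)

def enhance(feature_map: np.ndarray, passes: int = 3) -> np.ndarray:
    out = feature_map
    for _ in range(passes):  # e.g. 3 passes for light, 5 for heavy dermabrasion
        out = highlight_pass(out)
    return np.clip(out, 0.0, 1.0)

fm = np.array([0.25, 0.5, 0.75])
mask = enhance(fm, passes=3)
# Feature points (below 0.5) approach 0; normal skin (above 0.5) approaches 1;
# 0.5 itself is a fixed point of the mapping.
```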
Optionally, all values x of the feature map can additionally be scaled once; adjusting the scaling strength adjusts the enhancement of the feature map and thereby controls the dermabrasion strength. For example, either of the following scaling approaches may be employed:
x_new = k x (2)
where k ∈ (0, 1); the larger the value of k, the stronger the dermabrasion effect.
In the second approach, c1, c2 ∈ [0, 1] are scaling factors with c2 > c1, and c1 and c2 are adjusted to control the scaling strength.
After the high-contrast Mask is obtained, the feature points can be brightened: their gray values are raised so that they become as close as possible to, ideally the same as, the normal skin color. In the present embodiment, the original image is brightened and whitened as a whole by curve adjustment; as shown in fig. 7, fig. 7a is the original and fig. 7b is the image after curve brightening. Curve adjustment mainly reshapes the gray-level histogram of the image: anchor points are set, a curve is interpolated through the anchor points, and the histogram is remapped according to the curve's shape. With suitable anchor points, the brightness of the whole image is raised, achieving an overall brightening effect.
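Curve adjustment via anchor points can be sketched with a piecewise-linear lookup table. The anchor positions below are illustrative, not from the embodiment; a real curve tool would interpolate a smooth spline through the anchors:

```python
import numpy as np

# Anchor points (input gray -> output gray); pulling mid-tones up brightens.
anchors_in = np.array([0, 128, 255], dtype=np.float64)
anchors_out = np.array([0, 170, 255], dtype=np.float64)

# Build a 256-entry lookup table from the anchor-point curve.
lut = np.interp(np.arange(256), anchors_in, anchors_out).astype(np.uint8)

img = np.array([[10, 128, 240]], dtype=np.uint8)
brightened = lut[img]  # apply the curve to every pixel
# Mid-tones rise (128 -> 170) while the black and white endpoints are kept.
```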
After curve brightening, the original image and the brightened image are blended using the high-contrast Mask to obtain the dermabrasion image:
high-contrast dermabrasion image = original image x high-contrast Mask + brightened image x (1 - high-contrast Mask) (3)
Using this high-contrast-retention method, the original image is smoothed twice to obtain the first and second high-contrast dermabrasion images with different strengths; in this embodiment, the first pass is heavy dermabrasion and the second is light dermabrasion.
With a single dermabrasion strength there is a trade-off: if the strength is too low, a good smoothing result may not be obtained in the nasolabial-fold and/or eye-bag areas; if the strength is high, those areas look better but other areas may be distorted by over-smoothing. Therefore, in the embodiment of the disclosure, heavily textured areas such as the nasolabial-fold and/or eye-bag areas are smoothed with the heavy pass while the rest of the facial skin is smoothed lightly, obtaining a good result without distortion.
Besides high-contrast dermabrasion, bilateral-filtering or edge-preserving-filtering dermabrasion may be used. Fig. 8a shows the result of high-contrast dermabrasion, fig. 8b the result of bilateral filtering, and fig. 8c the result of edge-preserving filtering. By comparison, high-contrast retention gives superior smoothing and detail fidelity; the smoothing strength of fig. 8a is stronger than that of figs. 8b and 8 c.
Alternatively, the two dermabrasion treatments may employ different dermabrasion methods.
Step 25, fusing the two dermabrasion images according to the mask of the unreserved region to obtain a high-contrast dermabrasion image (namely a third dermabrasion image) of the input image;
and, according to the non-reserved-area mask, taking the first high-contrast dermabrasion image for the non-reserved area and the second high-contrast dermabrasion image for the remaining areas other than the non-reserved area, and fusing the two to obtain the high-contrast dermabrasion image of the input image:
high-contrast dermabrasion image = first high-contrast dermabrasion image x non-reserved-area Mask + second high-contrast dermabrasion image x (1 - non-reserved-area Mask)
the sharpening process sharpens the original image using an unsharp mask obtained by blur filtering. The sharpening process comprises: applying Gaussian filtering or another low-pass filter to the input image to obtain a blurred image, subtracting the blurred image from the original image to obtain an edge image, and linearly combining the edge image with the original image to obtain the sharpened image. As shown in fig. 9, x(n, m) represents the original; the Filter in the figure is a linear high-pass filter (Linear HP Filter); the filter output z(n, m) is the edge image, equivalent to the original minus its low-pass-filtered version; and y(n, m) is the sharpened image obtained by combining the original with the edge image. Fig. 10 compares the result before and after sharpening following high-contrast-retention dermabrasion: fig. 10a is before sharpening and fig. 10b is after sharpening; the sharpness of the image is clearly much improved.
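Unsharp masking as described can be sketched as follows. A box blur again stands in for the Gaussian low-pass filter, and `lam` is an illustrative strength for the linear combination:

```python
import numpy as np

def unsharp_mask(img: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """y = x + lam * (x - blur(x)): add the edge image back to the original."""
    padded = np.pad(img, 1, mode="edge")
    blur = np.zeros_like(img, dtype=np.float64)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            blur += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blur /= 9.0
    edge = img - blur                              # z(n, m): high-pass output
    return np.clip(img + lam * edge, 0.0, 1.0)     # y(n, m): sharpened image

img = np.full((4, 6), 0.3)
img[:, 3:] = 0.7                # a vertical step edge
sharp = unsharp_mask(img)
# The contrast across the edge grows: values overshoot on both sides,
# while flat regions far from the edge are unchanged.
```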
This step may be an optional step.
Step 27, performing color adjustment (for example, red adjustment) on the input image to obtain a color-adjusted input image, wherein the degree of redness can be increased through the color adjustment;
selective color adjustment (Selective Color) simulates increasing or decreasing the amount of ink in a certain color area of an image during printing, so as to change that color.
With the selective color command, the ink amount of any primary color can be modified selectively and subtly without affecting the other primaries. The primary colors that selective color adjustment can adjust include nine categories: R (red), G (green), B (blue), C (cyan), M (magenta), Y (yellow), white, black, and neutral (colors other than pure black and pure white).
When the adjustment component is one of R, G, B, the dominant color of a pixel is defined as the channel with the largest value among its R, G, B channels, and only pixels whose dominant color matches the adjustment component are adjusted. When the adjustment component is one of C, M, Y, the dominant color is defined by the channel with the smallest value among R, G, B, because CMY is complementary to RGB: the channel with the smallest RGB component is the channel with the largest CMY component. Only pixels whose smallest channel matches the adjustment component are adjusted. When the adjustment component is white, the adjustable pixels are those whose R, G, and B values are all greater than 128; when it is black, those whose R, G, and B values are all less than 128; when it is neutral, all pixels except pure black and pure white are adjusted.
The specific implementation process of color adjustment comprises the following steps:
(1) Parameter adjustment principle: the sliders are adjusted according to the complementary-color principle. For example, decreasing cyan is equivalent to increasing its complementary color red; decreasing magenta is equivalent to increasing its complementary color green; decreasing yellow is equivalent to increasing its complementary color blue; decreasing black is equivalent to brightening; increasing black is equivalent to darkening. The relationship of CMY to RGB can be seen in the color wheel shown in FIG. 11;
(2) determining the adjustable dominant color from the parameters: the adjustable dominant colors are R/G/B, or C/M/Y, or black, white, and gray. They differ only in how the adjustable range is calculated;
(3) determining, for each pixel, the maximum, median, and minimum of the channel values:
max = max(R, G, B)
mid = mid(R, G, B)
min = min(R, G, B)
- if max > mid > min, the dominant hue is the hue taking max, or the mixed color of max and mid
- if max = mid > min, the dominant hue is the mixed color of max and mid
- if max > mid = min, the dominant hue is the hue taking max
- if the adjustable dominant color is one of R, G, B, the adjustable range is range = max - mid
- if the adjustable dominant color is one of C, M, Y, the adjustable range is range = mid - min
(4) computing CMY from RGB as CMY = 1 - RGB:
the original image is converted from an RGB image to a CMY image
(5) For each slider to be adjusted (for example, three or four sliders may be provided: with three, they are the C, M, and Y sliders; with four, the fourth slider adjusts C, M, and Y simultaneously), calculate the adjustable proportion: the maximum adjustable proportion of the slider for color C is pmax = C, for color M it is pmax = M, and for color Y it is pmax = Y, where C, M, Y ∈ (0, 1)
- when the adjustment direction is negative, i.e. one of the C, M, Y colors is decreased by a proportion x, the applied proportion is min(pmax, x):
new value = old value - range x min(pmax, x)
- when the adjustment direction is positive, i.e. one of the C, M, Y colors is increased by a proportion x, the applied proportion is min(1 - pmax, x):
new value = old value + range x min(1 - pmax, x)
(6) converting back with RGB = 1 - CMY:
the adjusted CMY image is converted back to an RGB image.
For skin adjustment, the adjustable color component is red: only pixels whose red channel has the largest value can be adjusted, while the values of all other pixels are unchanged. That is, the cyan of the red-region pixels (the pixels whose red channel component is the largest) is reduced.
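The red adjustment above, reducing cyan only at red-dominant pixels, can be sketched as follows. This is a simplified reading of steps (3)-(6) for a single cyan slider with a negative adjustment proportion x; values are normalized to [0, 1] and the function name is illustrative:

```python
import numpy as np

def reduce_cyan_in_reds(rgb: np.ndarray, x: float) -> np.ndarray:
    """Decrease cyan (i.e. increase red) only where red is the dominant channel.

    rgb: float array of shape (..., 3) in [0, 1]; x: slider proportion in (0, 1].
    """
    out = rgb.copy()
    r = rgb[..., 0]
    mx = rgb.max(axis=-1)
    mid = np.median(rgb, axis=-1)          # the median of the three channels
    red_dominant = (r >= mx) & (mx > mid)  # red-region pixels
    rng = mx - mid                         # adjustable range (RGB dominant color)
    c = 1.0 - r                            # cyan channel: C = 1 - R
    pmax = c                               # max adjustable proportion of the C slider
    new_c = c - rng * np.minimum(pmax, x)  # negative adjustment direction
    out[..., 0] = np.where(red_dominant, 1.0 - new_c, r)  # back to RGB: R = 1 - C
    return out

px = np.array([[0.8, 0.4, 0.3],   # red-dominant pixel: gets redder
               [0.2, 0.7, 0.5]])  # green-dominant pixel: unchanged
adjusted = reduce_cyan_in_reds(px, x=1.0)
```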
Fig. 12a is the original image, and fig. 12b is a schematic diagram of the input image after red adjustment, in which the cyan slider is decreased by 100%; the lip region is visibly reddened.
This step may be an optional step.
first result image = sharpen(high-contrast dermabrasion image) x (skin Mask - reserved-area Mask)
where sharpen(high-contrast dermabrasion image) denotes the image obtained by sharpening the high-contrast dermabrasion image, and (skin Mask - reserved-area Mask) denotes the skin area excluding the reserved areas (e.g., the eye, eyebrow, and lip areas).
second result image = color-adjusted(original image) x (1 - skin Mask + reserved-area Mask)
where color-adjusted(original image) denotes the original image after color adjustment, and (1 - skin Mask + reserved-area Mask) is equivalent to 1 - (skin Mask - reserved-area Mask).
The objects adjusted in the second result image are primarily the lips; background areas may also be affected, depending on the color of the background.
And step 30, fusing the first result graph and the second result graph to obtain an output image.
output image = first result image + second result image
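The final composition of the two result images can be sketched as follows. Because the two region masks partition the image, every pixel comes from exactly one branch; sharpening and color adjustment are replaced by identity stand-ins here:

```python
import numpy as np

def compose(original, dermabrasion, skin_mask, reserved_mask,
            sharpen=lambda im: im, color_adjust=lambda im: im):
    """output = sharpen(dermabrasion) * (skin - reserved)
              + color_adjust(original) * (1 - skin + reserved)"""
    region1 = skin_mask - reserved_mask            # skin minus reserved areas
    first = sharpen(dermabrasion) * region1        # first result image
    second = color_adjust(original) * (1.0 - region1)  # second result image
    return first + second

orig = np.array([[0.3, 0.6, 0.9]])
derm = np.array([[0.4, 0.7, 0.8]])
skin = np.array([[1.0, 1.0, 0.0]])      # third pixel is background
reserved = np.array([[0.0, 1.0, 0.0]])  # second pixel is, e.g., a lip pixel
out = compose(orig, derm, skin, reserved)
# The plain-skin pixel takes the dermabrasion value; the reserved (lip) pixel
# and the background pixel keep the (color-adjusted) original values.
```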
The image with beautified skin can be obtained through the steps. Fig. 13 is a comparison graph of the beauty effect of the embodiment of the present disclosure, in which fig. 13a is an original graph, and fig. 13b is an effect graph after image processing is performed by using the present disclosure.
The embodiment of the disclosure provides a partitioned, refined method for beautifying facial skin, which addresses the problems of face beautification and an inconsistent background from two directions: face detection and skin detection. For face beautification, different areas of the face are treated differently, achieving an excellent result in preserving the texture and definition of each facial part. Combining skin detection with the face area distinguishes the facial skin area from the background area, which resolves the boundary between the two and avoids background distortion. The eye, eyebrow, and lip areas are extracted via face key points and are not smoothed, preserving their texture, while the eye-bag and nasolabial-fold areas are smoothed heavily, achieving a better beautifying effect. Color adjustment, for example quickly reddening the lips, can further enhance the skin-beautifying effect.
In an exemplary embodiment, the present disclosure further provides an image processing apparatus, which may include a processor and a memory, the memory storing a computer program executable on the processor, wherein the processor implements the steps of the image processing method in any of the above embodiments of the present disclosure when executing the computer program.
In an exemplary embodiment, fig. 14 is a schematic structural diagram of an image processing apparatus in an embodiment of the present disclosure. As shown in fig. 14, the apparatus 60 includes: at least one processor 601; and at least one memory 602 and a bus 603 connected to the processor 601; the processor 601 and the memory 602 communicate with each other through the bus 603; the processor 601 is configured to call program instructions in the memory 602 to perform the steps of the image processing method in any of the above embodiments.
The Processor may be a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a transistor logic device, or the like, which is not limited in this disclosure.
The Memory may include Read Only Memory (ROM) and Random Access Memory (RAM), and provides instructions and data to the processor. A portion of the memory may also include non-volatile random access memory; for example, the memory may also store device type information.
In addition to the data bus, the bus may include a power bus, a control bus, a status signal bus, and the like; for clarity of illustration, however, the various buses are all labeled as the bus in figure 14.
In implementation, the processing performed by the processing device may be performed by instructions in the form of hardware integrated logic circuits or software in the processor. That is, the method steps of the embodiments of the present disclosure may be implemented by a hardware processor, or implemented by a combination of hardware and software modules in a processor. The software module may be located in a storage medium such as a random access memory, a flash memory, a read only memory, a programmable read only memory or an electrically erasable programmable memory, a register, etc. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor. To avoid repetition, it is not described in detail here.
In an exemplary embodiment, the disclosed embodiments also provide a non-transitory computer readable storage medium having stored thereon a computer program executable on a processor, the computer program, when executed by the processor, implementing the steps of the aforementioned image processing method.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, and functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media, as is well known to those skilled in the art.
Although embodiments of the present disclosure are described above, the descriptions are only for the convenience of understanding the present disclosure and are not intended to limit it. It will be understood by those skilled in the art that various changes in form and detail may be made without departing from the spirit and scope of the disclosure, and that the scope of the disclosure is defined by the appended claims.
Claims (16)
1. An image processing method, comprising:
acquiring an input image containing a human face, and determining a skin area; detecting a face in the input image to obtain a face area, extracting key points of the face in the face area, and determining a reserved area and a non-reserved area of the face area according to the key points;
and fusing the image of the reserved area, the first skin grinding image of the non-reserved area and the second skin grinding image of the skin area to obtain an output image, wherein the skin grinding intensity of the second skin grinding image is smaller than that of the first skin grinding image.
2. The method of claim 1, wherein fusing the image of the preserved area, the first dermabrasion image of the unreserved region, and the second dermabrasion image of the skin area to obtain an output image comprises:
performing first buffing treatment to obtain a first buffing image of the non-reserved area;
performing second buffing treatment to obtain second buffing images of other areas except the non-reserved area;
fusing the first buffing image and the second buffing image to obtain a third buffing image;
and obtaining an output image according to the skin area mask and the reserved area mask based on the third skin grinding image.
3. The method of claim 2, wherein said performing a first buffing pass to obtain a first buffing image of the unreserved region comprises:
carrying out first buffing processing on the input image to obtain a first buffing processing result image, and obtaining the first buffing image of the non-reserved area according to the non-reserved-area mask: first buffing image = first buffing processing result image x non-reserved-area mask.
4. The method of claim 2, wherein said performing a second buffing pass to obtain a second buffing image of the area other than the non-retained area comprises:
carrying out second buffing processing on the input image to obtain a second buffing processing result image, and obtaining the second buffing images of the areas other than the non-reserved area according to the non-reserved-area mask: second buffing image = second buffing processing result image x (1 - non-reserved-area mask).
5. The method of claim 2, wherein deriving an output image from a skin region mask and a reserve region mask based on the third dermabrasion image comprises:
based on the third dermabrasion image, obtaining a first result image as an output image according to the skin area mask and the reserved area mask:
first result image = third dermabrasion image x (skin-area mask - retained-area mask).
6. The method of claim 2, further comprising: sharpening the third buffing image to obtain a sharpened third buffing image;
obtaining an output image according to a skin area mask and a reserved area mask based on the third dermabrasion image, wherein the output image comprises:
based on the sharpened third dermabrasion image, obtaining a first result image as an output image according to a skin area mask and a reserved area mask:
first result image = sharpened third dermabrasion image x (skin-area mask - retained-area mask).
7. The method of any of claims 2-6, further comprising:
carrying out color adjustment on the input image to obtain an image after color adjustment;
obtaining a second result graph according to the skin area mask and the reserved area mask:
second result image = (color-adjusted image) x (1 - skin-area mask + reserved-area mask);
the obtaining of the output image includes: and fusing the first result image and the second result image to obtain an output image.
8. The method of claim 7, wherein the color adjusting the input image comprises:
reducing the cyan of red-region pixels, wherein a red-region pixel is a pixel whose red channel component has the largest value.
9. The method of claim 1, wherein the obtaining an input image containing a human face and determining a skin region comprises:
and performing skin detection on the input image, and determining a skin area in the face area as the skin area by combining the face area.
10. The method of claim 1,
the reserved area includes one or more of: eye area, eyebrow area, lip area;
the unreserved region comprises one or more of: a nasolabial-fold area, an eye-bag area.
11. The method of claim 2, wherein the first dermabrasion process is a high contrast retained dermabrasion process and the second dermabrasion process is a high contrast retained dermabrasion process.
12. The method of claim 11, wherein the high contrast retention peeling treatment comprises:
acquiring a feature point diagram of the abrupt gray-level changes of the input image;
performing feature point enhancement processing on the feature point diagram for multiple times to obtain a high-contrast reserved mask, and controlling the intensity of buffing processing through the times of the feature point enhancement processing;
and calculating according to the high-contrast reserved mask to obtain a buffing image.
13. The method of claim 12, wherein after performing a plurality of feature point enhancement processes, the method further comprises: and zooming the feature map subjected to the feature point strengthening treatment to obtain a high-contrast reserved mask, and controlling the intensity of the peeling treatment through the zooming intensity.
14. The method of claim 12,
further comprising: brightening an input image to obtain a brightening image;
the obtaining of the buffing image according to the high contrast retained mask calculation comprises: calculating according to the high contrast reserved mask and the brightening image to obtain a buffing image:
buffing image = input image x high-contrast retention mask + brightened image x (1 - high-contrast retention mask).
15. An image processing apparatus comprising a processor and a memory storing a computer program runnable on the processor, wherein the processor, when executing the program, implements the steps of the image processing method of any one of claims 1 to 14.
16. A computer-readable storage medium storing a computer program runnable on a processor, the computer program, when executed by the processor, implementing the steps of the image processing method of any one of claims 1 to 14.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110114651.3A CN112766204A (en) | 2021-01-26 | 2021-01-26 | Image processing method, image processing apparatus, and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112766204A true CN112766204A (en) | 2021-05-07 |
Family
ID=75706297
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110114651.3A Pending CN112766204A (en) | 2021-01-26 | 2021-01-26 | Image processing method, image processing apparatus, and computer-readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112766204A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113362344A (en) * | 2021-06-30 | 2021-09-07 | 展讯通信(天津)有限公司 | Face skin segmentation method and device |
CN113362344B (en) * | 2021-06-30 | 2023-08-11 | 展讯通信(天津)有限公司 | Face skin segmentation method and equipment |
CN113744145A (en) * | 2021-08-20 | 2021-12-03 | 武汉瓯越网视有限公司 | Method for improving image definition, storage medium, electronic device and system |
CN113744145B (en) * | 2021-08-20 | 2024-05-10 | 武汉瓯越网视有限公司 | Method, storage medium, electronic device and system for improving image definition |
CN113763284A (en) * | 2021-09-27 | 2021-12-07 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
CN113763285A (en) * | 2021-09-27 | 2021-12-07 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
CN113763285B (en) * | 2021-09-27 | 2024-06-11 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
CN117274498A (en) * | 2023-10-16 | 2023-12-22 | 北京百度网讯科技有限公司 | Image processing method, device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112766204A (en) | Image processing method, image processing apparatus, and computer-readable storage medium | |
JP5547730B2 (en) | Automatic facial and skin beautification using face detection | |
US10304166B2 (en) | Eye beautification under inaccurate localization | |
US8520089B2 (en) | Eye beautification | |
CN112784773B (en) | Image processing method and device, storage medium and terminal | |
EP2615577A1 (en) | Image-processing device, image-processing method, and control program | |
CN107369133B (en) | Face image beautifying method and device | |
EP1318475A1 (en) | A method and system for selectively applying enhancement to an image | |
JP2001126075A (en) | Method and device for picture processing, and recording medium | |
US20210374925A1 (en) | Image Enhancement System and Method | |
CN112686800A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
JP4742068B2 (en) | Image processing method, image processing system, and image processing program | |
JP2009050035A (en) | Image processing method, image processing system, and image processing program | |
CN114240743A (en) | Skin beautifying method based on high-contrast buffing human face image | |
CN115908106A (en) | Image processing method, device, equipment and storage medium | |
CN114596213A (en) | Image processing method and device | |
Choudhury et al. | Perceptually motivated automatic color contrast enhancement based on color constancy estimation | |
CN108986052A (en) | A kind of adaptive image removes illumination method and system | |
JP2005094452A (en) | Method, system, and program for processing image | |
CN116523818A (en) | Image processing method and device | |
Kotera | Image-dependent Quality and Preference Control. | |
CN116612036A (en) | Method for realizing portrait peeling and whitening based on Unity | |
CN114494066A (en) | Human image sharpening method, device, equipment and medium based on Hessian filter | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||