WO2022052862A1 - Image edge enhancement processing method and application - Google Patents
Image edge enhancement processing method and application
- Publication number
- WO2022052862A1 (PCT/CN2021/116307)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- skin color
- edge
- face
- value
- image
- Prior art date
Classifications
- G06T5/73
- G06T5/70
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis; G06T7/10—Segmentation; Edge detection; G06T7/13—Edge detection
- G06T7/90—Determination of colour characteristics
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing; G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/40—Extraction of image or video features; G06V10/60—Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data; G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands; G06V40/16—Human faces, e.g. facial parts, sketches or expressions; G06V40/161—Detection; Localisation; Normalisation
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality; G06T2207/10024—Color image
- G06T2207/20—Special algorithmic details; G06T2207/20172—Image enhancement details; G06T2207/20192—Edge enhancement; Edge preservation
- G06T2207/30—Subject of image; Context of image processing; G06T2207/30196—Human being; Person; G06T2207/30201—Face
Definitions
- the invention relates to the technical field of digital image processing, in particular to an image edge enhancement processing method and application.
- the skin area of a human face often contains many details, such as fine lines, acne marks, freckles and shadow boundaries. These details generally have relatively weak contrast and usually should not be enhanced too much, nor should the enhanced edges be too wide, otherwise the face will look unnatural. At the same time, for the non-face parts of the image, such as scenery and buildings, the weak-contrast detail edges are often exactly what enhancement should focus on to make details more visible. If both parts of the image use uniform enhancement parameters, the final effect cannot strike a good balance between the two.
- the prior art also provides face enhancement schemes that distinguish skin color points from non-skin-color points. Taking the published Chinese patent application CN102542538A as an example, it provides an edge enhancement method: color detection is used to distinguish skin color points from non-skin-color points, and the enhancement strength at skin color points is weakened to improve the enhancement effect on human faces.
- however, the color detection method can only distinguish skin color points from non-skin-color points; it cannot accurately locate the skin color points of the face, and its false detection rate is high.
- for example, the color of a common indoor beige textured floor is also within the skin color range. If it is treated as a skin color point and its edge enhancement strength is weakened, the floor texture that should be enhanced more will not be effectively enhanced, which affects the overall enhancement of the image.
- the purpose of the present invention is to overcome the deficiencies of the prior art, and to provide an image edge enhancement processing method and application.
- the invention exploits the characteristics of the face image to apply independent edge enhancement parameters to face skin points, improving the edge enhancement effect at face skin points without affecting the edge enhancement effect at non-face skin color points.
- the present invention provides the following technical solutions:
- An image edge enhancement processing method comprising the following steps:
- the luminance signal corresponds to the input luminance image
- the chrominance signal corresponds to the input chrominance image
- the first edge value is obtained by processing the input luminance image through the first parameter group applicable to the non-face skin color points
- the second edge value is obtained by processing the input luminance image through the second parameter group applicable to the human face skin color point
- perform skin color detection on the input chrominance image to obtain the skin color weight value of each pixel, and obtain the face skin color weight value of each pixel according to the face area information of the input image;
- where the face skin color weight value of a skin color point in the face area equals the skin color weight value of that point, and the face skin color weight values of all points outside the face area are cleared to zero;
- the first edge value and the second edge value are mixed according to the aforementioned face skin color weight value, and the edge value obtained by mixing is combined with the input luminance value to perform edge enhancement.
- the first parameter group is a parameter group a including an edge detection operator, a noise suppression parameter and an intensity adjustment parameter; parameter group a is used to sequentially perform edge detection, noise suppression and intensity adjustment processing on the input luminance image to obtain the edge value E 2a.
- the second parameter group is a parameter group b including an edge detection operator, a noise suppression parameter and an intensity adjustment parameter; parameter group b is used to sequentially perform edge detection, noise suppression and intensity adjustment processing on the input luminance image to obtain the edge value E 2b.
- the edge value is calculated according to the following formula:
- T 0 is the noise threshold, T 0 >0;
- the parameter Gain p is the adjustment gain of the positive edge
- the parameter Gain n is the gain of the negative edge. A gain greater than 1 increases the edge strength; a gain less than 1 weakens it.
- skin color detection is performed on the input chrominance image in the (H, S) coordinate system, where H represents the chromaticity (hue) of the pixel and S represents its saturation;
- Cr(i,j) is the red difference at point (i,j), -128 ≤ Cr(i,j) ≤ 127;
- Cb(i,j) is the blue difference at point (i,j), -128 ≤ Cb(i,j) ≤ 127.
- a rectangular area determined by the preset parameters H 0, H 1, S 0 and S 1 is taken as the skin color area; if the H and S values of pixel (i, j) fall within this rectangle in the (H, S) coordinate system, the pixel is considered a skin color point, otherwise a non-skin-color point;
- the skin color weight value W skin (i,j) of the point is calculated by the following formula:
- H 0, H 1, S 0, S 1, D s, D h are preset parameter values, and satisfy H 0+2×D h < H 1, S 0+2×D s < S 1;
- D s is used to adjust the width of the weight transition interval in the S direction, and D h is used to adjust the width of the weight transition interval in the H direction;
- Step 1 perform face detection on the input image through the face detection module, and obtain the face area information of the input image.
- for N faces detected in the input image, the face area information of the k-th (0 ≤ k < N) face, F(x k, y k, w k, h k), indicates that the coordinates of the upper-left corner of the face area are (x k, y k), the area width is w k, and the height is h k;
- Step 3: according to the skin color weight W skin(i,j) and the face weight W face(i,j) of each pixel (i,j), calculate the face skin color weight W(i,j) of each pixel according to the following formula,
- E(i,j) = E 2a(i,j)×(1-W(i,j)) + E 2b(i,j)×W(i,j),
- the processed image is obtained as an output luminance image for output.
- step 2 includes,
- Step 22 Obtain the face area information F(x k , y k , w k , h k ) of the kth face;
- Step 23: for pixel (i, j), judge whether x 0 ≤ j ≤ x 0+w k and y 0 ≤ i ≤ y 0+h k hold; if true, the point falls within the k-th face area, jump to step 27; otherwise, continue to step 24;
- Step 25: judge whether k < N holds; if true, jump to step 22; otherwise, continue to step 26;
- the present invention also provides an image edge enhancement processing device, comprising:
- memory for storing processor executable instructions and parameters
- the processor includes an edge analysis unit, a skin color analysis unit and an enhancement processing unit,
- the edge analysis unit is used to receive an input image separated into a luminance signal and a chrominance signal, the luminance signal corresponds to the input luminance image, and the chrominance signal corresponds to the input chrominance image;
- and to process the input luminance image through a first parameter group applicable to non-face skin color points to obtain a first edge value, and through a second parameter group applicable to face skin color points to obtain a second edge value;
- the skin color analysis unit is used to perform skin color detection on the input chrominance image to obtain the skin color weight value of each pixel, and to obtain the face skin color weight value of each pixel according to the face area information of the input image;
- where the face skin color weight value of a skin color point in the face area equals the skin color weight value of that point, and the face skin color weight values of all points outside the face area are cleared to zero;
- the enhancement processing unit is configured to mix the first edge value and the second edge value according to the aforementioned face skin color weight value, and combine the mixed edge value with the input luminance value to perform edge enhancement.
- the present invention also provides an image edge enhancement processing system, comprising an edge detection module, an edge enhancement module, and a face area modulation module arranged between the edge detection module and the edge enhancement module; the face area modulation module is connected to a skin color detection module;
- the skin color detection module is configured to: perform skin color detection on the input chrominance image to obtain the skin color weight value of each pixel, and obtain the face skin color weight value of each pixel according to the face area information of the input image;
- where the face skin color weight value of a skin color point in the face area equals the skin color weight value of that point, and the face skin color weight values of all points outside the face area are cleared to zero;
- the face area modulation module is configured to: process the input luminance image through a first parameter group applicable to non-face skin color points to obtain a first edge value, and process the input luminance image through a second parameter group applicable to face skin color points to obtain a second edge value; mix the first edge value and the second edge value according to the aforementioned face skin color weight value, and transmit the mixed edge value to the edge enhancement module for edge enhancement processing.
- owing to the adoption of the above technical solutions, the present invention has, by way of example, the following advantages and positive effects: the characteristics of the face image are exploited to apply independent edge enhancement parameters to face skin points, improving the edge enhancement effect at face skin points without affecting the effect at non-face skin color points.
- the scheme provided by the present invention combines face detection and skin color detection to accurately locate face skin points, and can significantly reduce the false detection rate by excluding skin color points that do not belong to a face.
- by using different edge enhancement parameters for face skin points and non-face skin points, preferably including edge detection parameters, noise parameters and intensity parameters, the present invention can more finely differentiate the enhancement effect between skin color and non-skin-color points, and also makes it convenient for users to flexibly adjust the enhancement effect according to the characteristics and preferences of the face, with wide applicability and strong flexibility.
- FIG. 1 is a flowchart of an image edge enhancement processing method provided by the present invention.
- FIG. 2 is an information processing flowchart of an image edge enhancement processing method provided by an embodiment of the present invention.
- FIG. 3 is a schematic diagram of the skin color region in the (H, S) coordinate system provided by the present invention.
- FIG. 4 is a schematic diagram of the weight transition area in the (H, S) coordinate system provided by the present invention.
- FIG. 5 is an example diagram of a detected face region provided by the present invention.
- FIG. 6 is a flowchart of detecting whether a pixel falls into a face area provided by the present invention.
- this embodiment provides an image edge enhancement processing method.
- the method includes the following steps:
- S100 Receive an input image separated into a luminance signal and a chrominance signal, where the luminance signal corresponds to the input luminance image, and the chrominance signal corresponds to the input chrominance image.
- the image data separated into the luminance signal and the chrominance signal may be one of YCbCr image data, HSV image data, and HSI image data.
- the luminance signal is a Y signal
- the chrominance signal is a chrominance (C) signal.
- the luminance signal refers to an electrical signal representing the luminance of a picture in a video system.
- the signal representing the chrominance information is usually superimposed with the luminance signal to save the frequency bandwidth of the transmitted signal.
- a signal representing luminance information is referred to as a Y signal
- a signal component representing chrominance information is referred to as a C signal.
- the input image is separated into an input luminance image corresponding to the luminance signal (ie, Y in ), and an input chrominance image corresponding to the chrominance signal (ie, Cr and Cb).
- the first parameter group is a parameter group a including an edge detection operator, a noise suppression parameter and an intensity adjustment parameter; parameter group a is used to sequentially perform edge detection, noise suppression and intensity adjustment processing on the input luminance image to obtain the edge value E 2a.
- the second parameter group is a parameter group b including an edge detection operator, a noise suppression parameter and an intensity adjustment parameter, and the parameter group b is used to sequentially perform edge detection, noise suppression, and intensity adjustment processing on the input luminance image to obtain an edge value E 2b .
- the skin color weight of each pixel is calculated, and the larger the weight value, the greater the possibility of skin color.
- the skin color points in non-face areas are excluded; that is, only the skin color weights of points in face areas are retained, and the weights of all other skin color points outside the face areas are cleared to zero.
- S300 Mix the first edge value and the second edge value according to the aforementioned face skin color weight value, and combine the mixed edge value with the input luminance value to perform edge enhancement.
- the obtained edge values E 2a and E 2b are mixed according to the face skin color weight value to obtain the final edge value.
- the final edge value is then applied to the input luminance value to obtain an edge-enhanced luminance value to generate an enhanced edge.
- the edge value is calculated according to the following formula:
- E 0(i,j) is the edge value of point (i,j);
- Y in(i,j) is the luminance value of the input image at point (i,j), 0 ≤ Y in(i,j) ≤ 255;
- C Q is an edge detection operator of size Q, Q = (2×L+1), where L is a set parameter value; m, n, L are integers, -L ≤ m ≤ L, -L ≤ n ≤ L.
- C 3 and C 5 are not unique, and those skilled in the art can select and adjust the calculation matrix of the edge detection operator C Q as required.
- the user can preset the calculation matrices corresponding to the size-Q edge detection operator C Q in the memory, and call the corresponding calculation matrix according to the Q value (computed from the L value) when needed.
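As an illustration of this edge detection step, the double sum over the operator window can be sketched as plain Python. The operator matrix is not fixed by the patent; the 3×3 Laplacian-style kernel `C3` below is an assumed example (any user-chosen C Q of size Q = 2×L+1 would do):

```python
def edge_value(y, i, j, c):
    """E0(i,j): correlate the luminance image with the edge detection
    operator centered on pixel (i, j).

    Assumed sketch: y is a 2-D list of luminance values (0..255),
    c is a (2L+1) x (2L+1) operator matrix."""
    L = len(c) // 2
    total = 0
    for m in range(-L, L + 1):
        for n in range(-L, L + 1):
            total += c[m + L][n + L] * y[i + m][j + n]
    return total

# An assumed example operator C_3 (Laplacian-style high-pass kernel);
# it responds with 0 on flat regions and nonzero values near edges.
C3 = [[ 0, -1,  0],
      [-1,  4, -1],
      [ 0, -1,  0]]
```

On a flat patch the response is zero; across a luminance step it is nonzero, which is exactly the behavior the later noise suppression and gain stages operate on.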
- T 0 is the noise threshold, T 0 >0.
- the intensity of positive and negative edges can be adjusted separately using the following formula:
- the parameter Gain p is the adjustment gain of the positive edge
- the parameter Gain n is the gain of the negative edge.
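The noise suppression and intensity adjustment formulas themselves appear only as figures in the original and are not reproduced in this text. The sketch below assumes a common "coring" form (responses within ±T0 are treated as noise and zeroed, larger magnitudes are shrunk by T0), followed by separate positive/negative gains; the exact shape of the patent's formulas may differ:

```python
def suppress_noise(e0, t0):
    """Assumed coring: zero edge responses whose magnitude is within
    the noise threshold T0 (t0 > 0), and shrink the rest toward zero
    by T0 so the output is continuous."""
    if abs(e0) <= t0:
        return 0
    return e0 - t0 if e0 > 0 else e0 + t0

def adjust_strength(e1, gain_p, gain_n):
    """Scale positive and negative edges separately: a gain above 1
    strengthens the edge, a gain below 1 weakens it."""
    return e1 * gain_p if e1 > 0 else e1 * gain_n
```

Applying the two stages in sequence reproduces the described pipeline order: edge detection, then noise suppression, then intensity adjustment.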
- skin color is detected in the (H, S) coordinate system, where H describes the chromaticity (hue) of a pixel and S describes its saturation.
- Cr(i,j) is the red difference at point (i,j), -128 ≤ Cr(i,j) ≤ 127;
- Cb(i,j) is the blue difference at point (i,j), -128 ≤ Cb(i,j) ≤ 127.
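The (Cr, Cb) → (H, S) conversion formula itself appears only as a figure in the original. A common polar-coordinate reading of the color-difference pair (hue as the angle of the (Cb, Cr) vector, saturation as its magnitude) is sketched below as an assumption; the patent's exact formula may differ:

```python
import math

def to_hs(cr, cb):
    """Assumed mapping of a pixel's color differences to the (H, S)
    coordinate system: H is the angle of the (Cb, Cr) vector in
    degrees (0..360), S its magnitude. Inputs: -128 <= cr, cb <= 127."""
    h = math.degrees(math.atan2(cr, cb)) % 360.0
    s = math.hypot(cr, cb)
    return h, s
```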
- a rectangular area may be determined as a skin color area by preset parameters H 0 , H 1 , S 0 and S 1 , as shown in FIG. 3 .
- if the H and S values of the pixel fall within this rectangle, the pixel is considered a skin color point; otherwise it is a non-skin-color point.
- the skin color weight value W skin (i, j) of the point can be calculated by the following formula:
- H 0, H 1, S 0, S 1, D s, and D h are all preset parameter values, and satisfy H 0+2×D h < H 1 and S 0+2×D s < S 1.
- the parameter D s is used to adjust the width of the weight transition interval in the S direction
- the parameter D h is used to adjust the width of the weight transition interval in the H direction, as shown in FIG. 4 .
- the user can personalize the parameter values H 0, H 1, S 0, S 1, D s, D h as required, or have the system set these parameter values adaptively.
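The W skin formula is elided in this text, but the stated constraints (a rectangle [H0, H1]×[S0, S1], transition-band widths D h and D s, weights in [0, 1] rising from the rectangle border to full weight) suggest a trapezoidal ramp. The exact expression below is therefore an assumption consistent with those constraints:

```python
def ramp(v, lo, hi, d):
    """1 inside [lo+d, hi-d], 0 outside (lo, hi), linear in between.
    Assumes lo + 2*d < hi, matching the patent's stated constraint."""
    if v <= lo or v >= hi:
        return 0.0
    if v < lo + d:
        return (v - lo) / d
    if v > hi - d:
        return (hi - v) / d
    return 1.0

def skin_weight(h, s, h0, h1, s0, s1, dh, ds):
    """Assumed trapezoidal skin color weight W_skin in [0, 1]:
    full weight deep inside the (H, S) rectangle, linear falloff
    across the transition bands of width dh (H direction) and
    ds (S direction), zero outside the rectangle."""
    return ramp(h, h0, h1, dh) * ramp(s, s0, s1, ds)
```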
- the face detection result can come from a dedicated face detection module.
- a face detection algorithm module is usually provided in such systems.
- the face area information can be obtained, including the face position (usually represented by coordinate values) and size information (usually represented by width and height).
- the steps of obtaining the weight value of the face skin color of each pixel point according to the face area information of the input image are as follows:
- Step 1 perform face detection on the input image through the face detection module, and obtain the face area information of the input image.
- N faces are detected in the image to be processed, which are face 0, face 1, face 2, . . . , face N-1.
- the face area information of the k-th (0 ≤ k < N) face is represented as F(x k, y k, w k, h k), indicating that the coordinates of the upper-left corner of the face area are (x k, y k), the area width is w k, and the height is h k.
- Step 2: for each pixel (i, j) in the image, from face 0 through face N-1, determine whether the pixel falls within any of the above face areas.
- Step 22 Obtain the face region information F(x k , y k , w k , h k ) of the kth face.
- Step 23: for pixel (i, j), judge whether x 0 ≤ j ≤ x 0+w k and y 0 ≤ i ≤ y 0+h k hold; if true, the point falls within the k-th face area, skip to step 27; otherwise, continue to step 24.
- Step 25: determine whether k < N holds; if true, skip to step 22; otherwise, continue to step 26.
- Step 3: finally, according to the skin color weight W skin(i,j) and the face weight W face(i,j) of each pixel (i,j), calculate the face skin color weight W(i,j) of each pixel according to the following formula:
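The per-pixel face test described in the steps above can be sketched as a loop over the detected face rectangles. The combining formula for W(i,j) is elided in this text; the product W skin × W face used below is an assumption, but it matches the stated behavior (the skin weight is kept inside face areas and cleared to zero elsewhere):

```python
def face_weight(i, j, faces):
    """W_face(i, j): 1 if pixel (i, j) falls inside any detected face
    rectangle F(x, y, w, h) with upper-left corner (x, y), else 0.
    Mirrors the step 21-27 loop: scan faces 0..N-1, stop at first hit."""
    for (x, y, w, h) in faces:
        if x <= j <= x + w and y <= i <= y + h:
            return 1
    return 0

def face_skin_weight(w_skin, w_face):
    """Assumed combination W(i,j) = W_skin(i,j) * W_face(i,j): keep the
    skin weight only inside face areas, clear it elsewhere."""
    return w_skin * w_face
```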
- the final edge value is obtained by mixing E 2a(i,j) and E 2b(i,j) with the weight W(i,j).
- E(i,j) = E 2a(i,j)×(1-W(i,j)) + E 2b(i,j)×W(i,j),
- the first edge value E 2a and the second edge value E 2b are mixed according to the weight value of the skin color of the face, so as to obtain the final edge value E(i,j) of each pixel point.
- the final edge value E(i,j) of each pixel is summed with the input luminance value for edge enhancement processing.
- the calculation formula is as follows: Y out(i,j) = Y in(i,j) + E(i,j).
- the processed image can then be obtained as an output luminance image for output.
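Putting the blending and output formulas above together, a minimal per-pixel sketch follows. Clamping the result to the 0–255 luminance range is an added practical assumption; the text does not state the overflow behavior:

```python
def enhance_pixel(y_in, e2a, e2b, w):
    """Blend the two edge values by the face skin color weight W and
    add the result to the input luminance:
        E = E2a*(1-W) + E2b*W,   Y_out = Y_in + E.
    Clamping to [0, 255] is an added practical assumption."""
    e = e2a * (1 - w) + e2b * w
    y_out = y_in + e
    return max(0, min(255, round(y_out)))
```

At W = 0 only the non-face parameter group contributes; at W = 1 only the face parameter group does, which is the stated purpose of the weight.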
- the above technical solution provided by the present invention combines face detection and skin color detection to accurately locate the skin points of the human face, and can significantly reduce the false detection rate by excluding the skin color points that are not of the human face. Further, by using different edge enhancement parameters for human face skin points and non-human face skin points, preferably including edge detection parameters, noise parameters, and intensity parameters, the present invention can more finely distinguish the enhancement effect of skin color points and non-skin color points. At the same time, it is also convenient for users to flexibly adjust the enhancement effect according to the characteristics and preferences of the human face, with wide applicability and strong flexibility.
- the apparatus includes a processor and a memory for storing processor-executable instructions and parameters.
- the processor includes an edge analysis unit, a skin color analysis unit and an enhancement processing unit.
- the edge analysis unit is used to receive an input image separated into a luminance signal and a chrominance signal, the luminance signal corresponds to the input luminance image, and the chrominance signal corresponds to the input chrominance image;
- and to process the input luminance image through a first parameter group applicable to non-face skin color points to obtain a first edge value, and through a second parameter group applicable to face skin color points to obtain a second edge value.
- the skin color analysis unit is used to perform skin color detection on the input chrominance image to obtain the skin color weight value of each pixel, and to obtain the face skin color weight value of each pixel according to the face area information of the input image;
- where the face skin color weight value of a skin color point in the face area equals the skin color weight value of that point, and the face skin color weight values of all points outside the face area are cleared to zero.
- the enhancement processing unit is configured to mix the first edge value and the second edge value according to the aforementioned face skin color weight value, and combine the mixed edge value with the input luminance value to perform edge enhancement.
- the edge analysis unit may further include an edge detection subunit, a noise suppression subunit, and an intensity adjustment subunit.
- the edge detection subunit is configured to: for a pixel located in the i-th row and j-th column of the image, denoted (i, j), calculate the edge value of the point according to the following formula:
- E 0 (i,j) is the edge value of point (i,j);
- Y in(i,j) is the luminance value of the input image at point (i,j), 0 ≤ Y in(i,j) ≤ 255;
- C Q is an edge detection operator of size Q, Q = (2×L+1), where L is a set parameter value; m, n, L are integers, -L ≤ m ≤ L, -L ≤ n ≤ L.
- the noise suppression subunit is configured to remove or attenuate noise in edge values using the following formula:
- T 0 is the noise threshold, T 0 >0.
- the intensity adjustment subunit is configured to adjust the intensities of positive and negative edges separately using the following formula:
- the parameter Gain p is the adjustment gain of the positive edge
- the parameter Gain n is the gain of the negative edge.
- the skin color analysis unit may further include a skin color detection subunit and a face skin color detection subunit.
- the skin color detection subunit is configured to detect skin color in the (H, S) coordinate system; for a pixel (i, j) located in the i-th row and j-th column of the image, the corresponding H and S values are calculated according to the following formula:
- Cr(i,j) is the red difference at point (i,j), -128 ≤ Cr(i,j) ≤ 127;
- Cb(i,j) is the blue difference at point (i,j), -128 ≤ Cb(i,j) ≤ 127.
- the skin color detection subunit is also configured to: judge whether the H and S values of the current pixel (i, j) fall within the skin color area in the (H, S) coordinate system; when they fall within the skin color area, the pixel is considered a skin color point, otherwise a non-skin-color point;
- the skin color weight value W skin (i,j) of the point is calculated by the following formula:
- H 0, H 1, S 0, S 1, D s, D h are all preset parameter values, and satisfy H 0+2×D h < H 1, S 0+2×D s < S 1;
- the face skin color detection subunit is configured to: perform face detection on the input image through the face detection module, and obtain the face area information of the input image, wherein the face area information of the k-th (0 ≤ k < N) face is expressed as F(x k, y k, w k, h k), meaning that the coordinates of the upper-left corner of the face area are (x k, y k), the area width is w k, and the height is h k;
- the enhancement processing unit includes an edge synthesis subunit and an enhanced edge generation subunit.
- the edge synthesis subunit is configured as: according to the following formula
- E(i,j) = E 2a(i,j)×(1-W(i,j)) + E 2b(i,j)×W(i,j),
- the first edge value E 2a and the second edge value E 2b are mixed according to the weight value of the skin color of the face, so as to obtain the final edge value E(i,j) of each pixel point.
- the enhanced edge generation subunit is configured to: according to the following formula,
- the final edge value E(i,j) of each pixel is summed with the input luminance value for edge enhancement processing.
- Another embodiment of the present invention also provides an image edge enhancement processing system.
- the system includes an edge detection module and an edge enhancement module, and a face region modulation module arranged between the edge detection module and the edge enhancement module, and the face region modulation module is connected to the skin color detection module.
- the skin color detection module is configured to: perform skin color detection on the input chrominance image to obtain the skin color weight value of each pixel, and obtain the face skin color weight value of each pixel according to the face area information of the input image;
- where the face skin color weight value of a skin color point in the face area equals the skin color weight value of that point, and the face skin color weight values of all points outside the face area are cleared to zero.
- the face area modulation module is configured to: process the input luminance image through a first parameter group applicable to non-face skin color points to obtain a first edge value, and process the input luminance image through a second parameter group applicable to face skin color points to obtain a second edge value; mix the first edge value and the second edge value according to the aforementioned face skin color weight value, and transmit the mixed edge value to the edge enhancement module for edge enhancement processing.
- each module of the system may be configured to include a plurality of sub-modules to perform the information processing process described in the previous embodiments, which will not be repeated here.
Abstract
Description
Claims (10)
- 一种图像的边缘增强处理方法,其特征在于包括如下步骤:接收被分离成亮度信号和色度信号的输入图像,亮度信号对应输入亮度图像,色度信号对应输入色度图像;通过适用于非人脸肤色点的第一参数组对输入亮度图像进行处理获得第一边缘值,通过适用于人脸肤色点的第二参数组对输入亮度图像进行处理获得第二边缘值;以及,对输入色度图像进行肤色检测获得每一个像素点的肤色权重值,根据输入图像的人脸区域信息获取每个像素点的人脸肤色权重值;其中,人脸区域中的肤色点的人脸肤色权重值等于该点的肤色权重值,人脸区域外的所有点的人脸肤色权重值被清零;将第一边缘值与第二边缘值按前述人脸肤色权重值进行混合,将混合得到的边缘值结合到输入亮度值上进行边缘增强。
- 根据权利要求1所述方法,其特征在于:所述第一参数组为包括边缘检测算子、噪声抑制参数和强度调整参数的参数组a,使用参数组a对输入亮度图像依次进行边缘检测、噪声抑制、强度调整处理,得到边缘值E 2a;所述第二参数组为包括边缘检测算子、噪声抑制参数和强度调整参数的参数组b,使用参数组b对输入亮度图像依次进行边缘检测、噪声抑制、强度调整处理,得到边缘值E 2b。
- 根据权利要求2所述方法,其特征在于:对于图像中某一位于第i行、第j列的像素点,记为(i,j),按以下公式计算边缘值:其中,E 0(i,j)为(i,j)点的边缘值;Y in(i,j)输入图像在(i,j)点的亮度值,0≤Y in(i,j)≤255;C Q为大小为Q的边缘检测算子,Q=(2×L+1),L为设定参数值;m,n,L为整数,-L≤m≤L,-L≤n≤L;进行噪声抑制处理时,使用以下公式去除或减弱边缘值中的噪声:其中,T 0为噪声阈值,T 0>0;进行强度调整时,使用以下公式对正、负边缘的强度分别调整:其中,参数Gain p为正边缘的调整增益,参数Gain n为负边缘增益,当增益大于1时,表示增强边缘强度;当增益小于1时,表示减弱边缘强度。
- The method according to claim 4, characterized in that: in an (H, S) coordinate system, a rectangular region determined by preset parameters H_0, H_1, S_0 and S_1 is taken as the skin color region; if the H and S values of a pixel (i, j) fall inside this rectangular region in the (H, S) coordinate system, the pixel is regarded as a skin color point, otherwise as a non-skin-color point; for a skin color point, its skin color weight value W_skin(i, j) is calculated by the following formula, where 0 ≤ W_skin(i, j) ≤ 1, a larger value indicating a higher likelihood that the point is a skin color point; H_0, H_1, S_0, S_1, D_s and D_h are preset parameter values satisfying H_0 + 2×D_h < H_1 and S_0 + 2×D_s < S_1; D_s adjusts the width of the weight transition interval in the S direction, and D_h adjusts the width of the weight transition interval in the H direction; for a non-skin-color point, the corresponding skin color weight value W_skin(i, j) = 0.
- The method according to claim 5, characterized in that the step of obtaining the face skin color weight value of each pixel according to the face region information of the input image comprises: step 1, performing face detection on the input image through a face detection module to obtain the face region information of the input image, where, for N faces detected in the input image, the face region information F(x_k, y_k, w_k, h_k) of the k-th face (0 ≤ k < N) indicates a face region whose upper-left corner is at (x_k, y_k), with width w_k and height h_k; step 2, for each pixel (i, j), checking from the 0th face through the (N-1)-th face whether the pixel falls inside any of the above face regions; if the pixel falls inside a face region, its face weight W_face(i, j) = 1; otherwise its face weight W_face(i, j) = 0; step 3, calculating the face skin color weight W(i, j) of each pixel from its skin color weight W_skin(i, j) and face weight W_face(i, j) according to the following formula.
- 根据权利要求6所述方法,其特征在于:按公式E(i,j)=E 2a(i,j)×(1-W(i,j))+E 2b(i,j)×W(i,j),将第一边缘值E 2a与第二边缘值E 2b按前述人脸肤色权重值进行混合,以获得每个像素点的最终边缘值E(i,j);以及,将每个像素点的最终边缘值E(i,j)与输入亮度值进行和运算以进行边缘增强处理,计算公式如下Y out(i,j)=Y in(i,j)+E(i,j),获得处理后的图像作为输出亮度图像进行输出。
- 根据权利要求6所述方法,其特征在于:所述步骤2包括,步骤21,初始化,令k=0;步骤22,获取第k张人脸的人脸区域信息F(x k,y k,w k,h k);步骤23,对于像素点(i,j),判断x 0≤j≤x 0+w k且y 0≤i≤y 0+h k是否为真,判定为真时,该点落在第k张人脸区域中,跳转执行步骤27;否则,继续执行步骤24;步骤24,令k=k+1;步骤25,判断k<N是否为真,判定为真时,跳转执行步骤22;否则继续执行步骤26;步骤26,令W face(i,j)=0,结束当前像素点的判定;步骤27,令W face(i,j)=1,结束当前像素点的判定。
- An image edge enhancement processing apparatus, characterized by comprising: a processor; and a memory for storing processor-executable instructions and parameters; the processor comprising an edge analysis unit, a skin color analysis unit and an enhancement processing unit; the edge analysis unit being configured to receive an input image separated into a luminance signal and a chrominance signal, the luminance signal corresponding to an input luminance image and the chrominance signal corresponding to an input chrominance image, and to process the input luminance image with a first parameter group suited to non-face skin color points to obtain a first edge value and with a second parameter group suited to face skin color points to obtain a second edge value; the skin color analysis unit being configured to perform skin color detection on the input chrominance image to obtain a skin color weight value for each pixel and to obtain a face skin color weight value for each pixel according to the face region information of the input image, wherein the face skin color weight value of a skin color point inside a face region equals the skin color weight value of that point and the face skin color weight values of all points outside the face regions are cleared to zero; and the enhancement processing unit being configured to blend the first edge value and the second edge value according to the aforementioned face skin color weight values and to combine the blended edge value with the input luminance value to perform edge enhancement.
- An image edge enhancement processing system comprising an edge detection module and an edge enhancement module, characterized by further comprising a face region modulation module arranged between the edge detection module and the edge enhancement module, the face region modulation module being connected to a skin color detection module; the skin color detection module being configured to perform skin color detection on the input chrominance image to obtain a skin color weight value for each pixel and to obtain a face skin color weight value for each pixel according to the face region information of the input image, wherein the face skin color weight value of a skin color point inside a face region equals the skin color weight value of that point and the face skin color weight values of all points outside the face regions are cleared to zero; the face region modulation module being configured to process the input luminance image with a first parameter group suited to non-face skin color points to obtain a first edge value and with a second parameter group suited to face skin color points to obtain a second edge value, to blend the first edge value and the second edge value according to the aforementioned face skin color weight values, and to transmit the blended edge value to the edge enhancement module for edge enhancement processing.
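The three-stage per-parameter-group pipeline of claims 2 and 3 (edge detection with operator C_Q, noise suppression by threshold T_0, then separate positive/negative gains) can be sketched as follows. The claims elide their exact formulas, so the convolution, coring, and gain steps below are common textbook forms consistent with the stated parameters; all names are illustrative:

```python
import numpy as np

def edge_pipeline(y_in, c_q, t0, gain_p, gain_n):
    """Edge detection, noise suppression, and strength adjustment sketch.

    y_in:   2-D luminance image, values in [0, 255]
    c_q:    Q x Q edge detection operator, Q = 2*L + 1
    t0:     noise threshold T_0 > 0
    gain_p: adjustment gain for positive edges
    gain_n: adjustment gain for negative edges
    """
    q = c_q.shape[0]
    l = q // 2  # operator half-size L
    h, w = y_in.shape
    padded = np.pad(y_in.astype(np.float64), l, mode="edge")
    # Edge detection: correlate the luminance image with the operator C_Q.
    e0 = np.zeros((h, w))
    for m in range(-l, l + 1):
        for n in range(-l, l + 1):
            e0 += c_q[m + l, n + l] * padded[l + m : l + m + h, l + n : l + n + w]
    # Noise suppression: zero out edge responses whose magnitude is below T_0.
    e1 = np.where(np.abs(e0) <= t0, 0.0, e0)
    # Strength adjustment: apply separate gains to positive and negative edges.
    e2 = np.where(e1 > 0, e1 * gain_p, e1 * gain_n)
    return e2
```

Running this once with parameter group a and once with parameter group b yields the two edge maps E_2a and E_2b that the later claims blend.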
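The weighting and blending logic of claims 5 through 8 can likewise be sketched per pixel. The claims elide the skin-weight ramp and the W_skin/W_face combining formula, so this sketch assumes a linear trapezoid for the transition bands and a simple product for the combination; all function names are illustrative:

```python
def skin_weight(h_val, s_val, h0, h1, s0, s1, dh, ds):
    """Skin color weight in the (H, S) plane (claim 5 sketch).

    Inside the rectangle [h0, h1] x [s0, s1] the weight ramps from 0 to 1
    over transition bands of width dh (H direction) and ds (S direction);
    a linear ramp is assumed here since the claims elide the formula.
    """
    if not (h0 <= h_val <= h1 and s0 <= s_val <= s1):
        return 0.0  # non-skin-color point
    wh = min((h_val - h0) / dh, (h1 - h_val) / dh, 1.0)
    ws = min((s_val - s0) / ds, (s1 - s_val) / ds, 1.0)
    return max(0.0, min(wh, ws))

def face_weight(i, j, faces):
    """W_face per claims 6/8: 1 if (i, j) lies in any face rectangle F(x, y, w, h)."""
    for x, y, w, h in faces:
        if x <= j <= x + w and y <= i <= y + h:
            return 1.0
    return 0.0

def enhance_pixel(y_in, e2a, e2b, w_skin, w_face):
    """Claim 7 blend and output; W = W_skin * W_face is an assumed
    composition, since the claims elide the combining formula."""
    w = w_skin * w_face
    e = e2a * (1.0 - w) + e2b * w
    return y_in + e  # Y_out = Y_in + E
```

A pixel outside every face rectangle gets W = 0 and is enhanced purely with the non-face edge value E_2a, matching the claim that face skin color weights are cleared outside face regions.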
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/171,452 US20230206458A1 (en) | 2020-09-08 | 2023-02-20 | Image edge enhancement processing method and application thereof |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010931595.8 | 2020-09-08 | ||
CN202010931595.8A CN111798401B (zh) | 2020-09-08 | Image edge enhancement processing method and application
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/171,452 Continuation US20230206458A1 (en) | 2020-09-08 | 2023-02-20 | Image edge enhancement processing method and application thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022052862A1 (zh) | 2022-03-17 |
Family
ID=72834286
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/116307 WO2022052862A1 (zh) | 2021-09-02 | Image edge enhancement processing method and application
Country Status (3)
Country | Link |
---|---|
US (1) | US20230206458A1 (zh) |
CN (1) | CN111798401B (zh) |
WO (1) | WO2022052862A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024069322A1 (en) | 2022-09-27 | 2024-04-04 | Pixelgen Technologies Ab | Method for fixing primary antibodies to a biological sample |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111798401B (zh) * | 2020-09-08 | 2020-12-04 | MouXin Technology (Shanghai) Co., Ltd. | Image edge enhancement processing method and application |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101742338A (zh) * | 2008-11-05 | 2010-06-16 | MagnaChip Semiconductor, Ltd. | Sharpness enhancement apparatus and method |
CN101841642A (zh) * | 2010-04-22 | 2010-09-22 | Nanjing University of Aeronautics and Astronautics | Edge detection method based on fractional-order signal processing |
CN110070502A (zh) * | 2019-03-25 | 2019-07-30 | Chengdu Pinguo Technology Co., Ltd. | Method, apparatus and storage medium for face image skin smoothing |
CN111798401A (zh) * | 2020-09-08 | 2020-10-20 | MouXin Technology (Shanghai) Co., Ltd. | Image edge enhancement processing method and application |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10055821B2 (en) * | 2016-01-30 | 2018-08-21 | John W. Glotzbach | Device for and method of enhancing quality of an image |
- 2020-09-08: CN application CN202010931595.8A, patent CN111798401B (Active)
- 2021-09-02: WO application PCT/CN2021/116307, publication WO2022052862A1 (Application Filing)
- 2023-02-20: US application US18/171,452, publication US20230206458A1 (Pending)
Also Published As
Publication number | Publication date |
---|---|
US20230206458A1 (en) | 2023-06-29 |
CN111798401B (zh) | 2020-12-04 |
CN111798401A (zh) | 2020-10-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109639982B (zh) | Image noise reduction method, apparatus, storage medium and terminal | |
EP3479346B1 (en) | Method and electronic device for producing composite image | |
Huang et al. | Efficient contrast enhancement using adaptive gamma correction with weighting distribution | |
US8520089B2 (en) | Eye beautification | |
US9007480B2 (en) | Automatic face and skin beautification using face detection | |
WO2022052862A1 (zh) | Image edge enhancement processing method and application | |
CN109272459A (zh) | Image processing method and apparatus, storage medium and electronic device | |
WO2022161009A1 (zh) | Image processing method and apparatus, storage medium, and terminal | |
TWI511559B (zh) | Image processing method | |
JP2003230160A (ja) | Apparatus and method for adjusting saturation of color video | |
CN109727215A (zh) | Image processing method, apparatus, terminal device and storage medium | |
CN108876742A (zh) | Image color enhancement method and apparatus | |
WO2023056950A1 (zh) | Image processing method and electronic device | |
WO2018165023A1 (en) | Method of decaying chrominance in images | |
JP2002281327A (ja) | Apparatus, method and program for image processing | |
JP4752912B2 (ja) | Image processing apparatus, image processing program, image processing method, and electronic camera for correcting image texture | |
CN110012277B (zh) | Automatic white balance method and apparatus for portrait images | |
WO2012153661A1 (ja) | Image correction device, image correction display device, image correction method, program, and recording medium | |
JP5327766B2 (ja) | Correction of memory colors in digital images | |
JP2009010636A (ja) | Adaptive histogram equalization method and adaptive histogram equalization apparatus | |
WO2023010796A1 (zh) | Image processing method and related apparatus | |
CN112686800B (zh) | Image processing method, apparatus, electronic device and storage medium | |
JP6590047B2 (ja) | Image processing apparatus, imaging apparatus, image processing method, and program | |
Arora et al. | Enhancement of overexposed color images | |
CN111047533A (zh) | Face image beautification method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21865916 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21865916 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.09.2023) |