WO2022052862A1 - Image edge enhancement processing method and application - Google Patents

Image edge enhancement processing method and application (图像的边缘增强处理方法及应用)

Info

Publication number
WO2022052862A1
WO2022052862A1 (application PCT/CN2021/116307)
Authority
WO
WIPO (PCT)
Prior art keywords
skin color
edge
face
value
image
Application number
PCT/CN2021/116307
Other languages
English (en)
French (fr)
Inventor
何珊
孙德印
Original Assignee
眸芯科技(上海)有限公司
Application filed by 眸芯科技(上海)有限公司
Publication of WO2022052862A1
Priority to US18/171,452 (published as US20230206458A1)

Classifications

    • G06T5/73
    • G06T5/70
    • G06T7/13 Edge detection
    • G06T7/90 Determination of colour characteristics
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/60 Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G06V40/161 Human faces: Detection; Localisation; Normalisation
    • G06T2207/10024 Color image
    • G06T2207/20192 Edge enhancement; Edge preservation
    • G06T2207/30201 Face

Definitions

  • the invention relates to the technical field of digital image processing, in particular to an image edge enhancement processing method and application.
  • the skin area of a human face often has many details, such as fine lines, acne marks, freckles and shadow boundaries. These details generally have relatively weak contrast; this part generally does not need to be enhanced too much, and its edges should not be made too wide, otherwise the face will look unnatural. At the same time, for the non-face parts of the image, such as scenery and buildings, the edges of details with relatively weak contrast are often the focus of enhancement in order to make the details more obvious. If these two parts of the image use uniform enhancement parameters, the final effect cannot achieve a good balance between the two.
  • the prior art also provides face enhancement schemes that distinguish skin color points from non-skin color points. Taking the published Chinese patent application CN102542538A as an example, it provides an edge enhancement method: color detection is used to distinguish skin color points from non-skin color points, and the enhancement strength of skin color points is weakened to improve the enhancement effect on human faces.
  • however, the color detection method can only distinguish skin color points from non-skin color points; it cannot accurately locate the skin color points of the face, and the false detection rate is also high.
  • for example, the color of a common indoor beige textured floor is also within the skin color range. If it is treated as a skin color point and the edge enhancement strength is weakened, the floor texture that should be enhanced more will not be effectively enhanced, which degrades the overall enhancement of the image.
  • the purpose of the present invention is to overcome the deficiencies of the prior art and to provide an image edge enhancement processing method and application.
  • the invention exploits the characteristics of face images by using independent edge enhancement parameters for face skin points, improving the edge enhancement effect of face skin points without affecting the edge enhancement effect of non-face skin color points.
  • the present invention provides the following technical solutions:
  • An image edge enhancement processing method comprising the following steps:
  • the luminance signal corresponds to the input luminance image
  • the chrominance signal corresponds to the input chrominance image
  • the first edge value is obtained by processing the input luminance image through the first parameter group applicable to the non-face skin color points
  • the second edge value is obtained by processing the input luminance image through the second parameter group applicable to the human face skin color point
  • perform skin color detection on the input chrominance image to obtain the skin color weight value of each pixel point, and obtain the face skin color weight value of each pixel point according to the face area information of the input image;
  • wherein the face skin color weight value of a skin color point in the face area is equal to the skin color weight value of that point, and the face skin color weight value of all points outside the face area is cleared to zero;
  • the first edge value and the second edge value are mixed according to the aforementioned face skin color weight value, and the edge value obtained by mixing is combined with the input luminance value to perform edge enhancement.
  • the first parameter group is a parameter group a including an edge detection operator, a noise suppression parameter and an intensity adjustment parameter, and the parameter group a is used to sequentially perform edge detection, noise suppression, and intensity adjustment processing on the input luminance image to obtain an edge value E 2a.
  • the second parameter group is a parameter group b including an edge detection operator, a noise suppression parameter and an intensity adjustment parameter, and the parameter group b is used to sequentially perform edge detection, noise suppression, and intensity adjustment processing on the input luminance image to obtain an edge value E 2b .
  • for a pixel located in the i-th row and j-th column of the image, denoted (i,j), the edge value is calculated according to the formula E 0(i,j) = ∑(m=-L..L) ∑(n=-L..L) C Q(m,n) × Y in(i+m, j+n);
  • during noise suppression, noise in the edge value is removed or attenuated by thresholding: E 1(i,j) = E 0(i,j) when |E 0(i,j)| > T 0, and E 1(i,j) = 0 otherwise, where T 0 is the noise threshold, T 0 > 0;
  • during intensity adjustment, the intensity of positive and negative edges is adjusted separately: E 2(i,j) = Gain p × E 1(i,j) for positive edges and E 2(i,j) = Gain n × E 1(i,j) for negative edges;
  • the parameter Gain p is the adjustment gain of the positive edge, and the parameter Gain n is the negative edge gain. When the gain is greater than 1, it means to increase the edge strength; when the gain is less than 1, it means to weaken the edge strength.
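  • as an illustrative sketch (not the patent's reference implementation), the noise suppression and gain adjustment stages described above can be written as follows; the threshold and gain values are arbitrary example settings:

```python
def suppress_and_gain(e0, t0=4.0, gain_p=1.5, gain_n=1.2):
    """Noise suppression followed by per-sign gain for one raw edge value E0.

    t0:     noise threshold T0 (> 0); responses with |E0| <= T0 are zeroed.
    gain_p: adjustment gain for positive edges.
    gain_n: adjustment gain for negative edges.
    """
    # Noise suppression: zero out weak edge responses, |E0| <= T0
    e1 = 0.0 if abs(e0) <= t0 else e0
    # Intensity adjustment: separate gains for positive and negative edges
    return gain_p * e1 if e1 > 0 else gain_n * e1
```

With the example settings, a weak response of 2.0 is suppressed to 0, while strong responses of 10.0 and -10.0 become 15.0 and -12.0 respectively.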
  • when skin color detection is performed on the input chrominance image, the skin color is detected in the (H, S) coordinate system, where H represents the chromaticity of the pixel and S represents the saturation of the pixel;
  • Cr(i,j) is the red difference at point (i,j), -128 ⁇ Cr(i,j) ⁇ 127;
  • Cb(i,j) is the blue difference at (i,j), -128 ≤ Cb(i,j) ≤ 127.
  • in the (H, S) coordinate system, a rectangular area determined by the preset parameters H 0, H 1, S 0 and S 1 is taken as the skin color area; if the H value and S value of the pixel point (i,j) fall within the rectangular area in the (H, S) coordinate system, the pixel is considered to be a skin color point, otherwise it is a non-skin color point;
  • for a skin color point, the skin color weight value W skin(i,j) of the point is calculated so that 0 ≤ W skin(i,j) ≤ 1, with the weight ramping from 0 at the boundary of the rectangular area to 1 in its interior over the transition intervals D h and D s;
  • H 0, H 1, S 0, S 1, D s, D h are preset parameter values, and satisfy H 0+2×D h < H 1, S 0+2×D s < S 1;
  • D s is used to adjust the width of the weight transition interval in the S direction, and D h is used to adjust the width of the weight transition interval in the H direction;
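  • the trapezoidal weighting implied by the transition-interval parameters can be sketched as follows; since the published formula images are not reproduced in this text, the exact ramp shape is an assumption consistent with the stated constraints (H 0+2×D h < H 1, S 0+2×D s < S 1):

```python
def skin_weight(h, s, h0, h1, s0, s1, dh, ds):
    """Skin color weight W_skin in [0, 1] for a pixel with hue h, saturation s.

    Points outside the [h0, h1] x [s0, s1] rectangle are non-skin (weight 0);
    inside it the weight ramps from 0 to 1 over transition widths dh and ds.
    """
    if not (h0 <= h <= h1 and s0 <= s <= s1):
        return 0.0  # non-skin-color point
    wh = min((h - h0) / dh, (h1 - h) / dh, 1.0)  # ramp in the H direction
    ws = min((s - s0) / ds, (s1 - s) / ds, 1.0)  # ramp in the S direction
    return min(wh, ws)
```

For example, with (H 0, H 1, S 0, S 1) = (10, 50, 20, 60) and D h = D s = 5, a point at (30, 40) sits in the plateau and gets weight 1, while a point at (12, 40) sits in the H transition band and gets weight 0.4.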
  • Step 1 perform face detection on the input image through the face detection module, and obtain the face area information of the input image.
  • for the N faces detected in the input image, the face area information of the k-th (0 ≤ k < N) face is F(x k, y k, w k, h k), indicating that the coordinates of the upper left corner of the face area are (x k, y k), the width of the area is w k, and the height is h k;
  • Step 2 for each pixel point (i,j), from the 0th face to the (N-1)-th face, determine whether the pixel falls in any of the above face areas; if so, its face weight W face(i,j) = 1, otherwise W face(i,j) = 0;
  • Step 3 according to the skin color weight W skin(i,j) and the face weight W face(i,j) of each pixel point (i,j), calculate the face skin color weight of each pixel as W(i,j) = W skin(i,j) × W face(i,j),
  • E(i,j) = E 2a(i,j)×(1-W(i,j)) + E 2b(i,j)×W(i,j),
  • the enhanced luminance Y out(i,j) = Y in(i,j) + E(i,j) is computed, and the processed image is obtained as an output luminance image for output.
  • step 2 includes,
  • Step 21 initialize, setting k = 0;
  • Step 22 obtain the face area information F(x k, y k, w k, h k) of the k-th face;
  • Step 23 for the pixel point (i,j), judge whether x k ≤ j ≤ x k+w k and y k ≤ i ≤ y k+h k is true; when it is judged to be true, the point falls within the k-th face area, jump to step 27; otherwise, continue to step 24;
  • Step 24 set k = k+1;
  • Step 25 judge whether k < N is true; when it is judged to be true, jump to step 22; otherwise, continue to step 26;
  • Step 26 set W face(i,j) = 0, ending the judgment of the current pixel;
  • Step 27 set W face(i,j) = 1, ending the judgment of the current pixel.
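  • the per-face scan described by steps 21 through 27 amounts to a loop over the detected face boxes for each pixel; a minimal sketch (the helper name and the list-of-tuples layout are assumptions for illustration):

```python
def face_weight(i, j, faces):
    """Return W_face(i, j): 1 if pixel (i, j) lies in any detected face box.

    faces: list of (x_k, y_k, w_k, h_k) tuples, one per detected face,
    where (x_k, y_k) is the upper-left corner of face k.
    """
    for (x, y, w, h) in faces:          # from face 0 to face N-1
        if x <= j <= x + w and y <= i <= y + h:
            return 1                    # inside the k-th face area (step 27)
    return 0                            # outside every face area (step 26)
```

Note that j is compared against the horizontal extent and i against the vertical extent, matching the (column, row) convention of the patent's coordinates.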
  • the present invention also provides an image edge enhancement processing device, comprising:
  • memory for storing processor executable instructions and parameters
  • the processor includes an edge analysis unit, a skin color analysis unit and an enhancement processing unit,
  • the edge analysis unit is used to receive an input image separated into a luminance signal and a chrominance signal, the luminance signal corresponds to the input luminance image, and the chrominance signal corresponds to the input chrominance image;
  • and is further configured to process the input luminance image through a first parameter group suitable for non-face skin color points to obtain a first edge value, and to process the input luminance image through a second parameter group suitable for face skin color points to obtain a second edge value;
  • the skin color analysis unit is used to perform skin color detection on the input chrominance image to obtain the skin color weight value of each pixel point, and to obtain the face skin color weight value of each pixel point according to the face area information of the input image;
  • wherein the face skin color weight value of a skin color point in the face area is equal to the skin color weight value of that point, and the face skin color weight value of all points outside the face area is cleared to zero;
  • the enhancement processing unit is configured to mix the first edge value and the second edge value according to the aforementioned face skin color weight value, and combine the mixed edge value with the input luminance value to perform edge enhancement.
  • the present invention also provides an image edge enhancement processing system, comprising an edge detection module and an edge enhancement module, and a face area modulation module arranged between the edge detection module and the edge enhancement module, the face area modulation module is connected to skin color detection module;
  • the skin color detection module is configured to: perform skin color detection on the input chrominance image to obtain the skin color weight value of each pixel point, and obtain the face skin color weight value of each pixel point according to the face area information of the input image;
  • wherein the face skin color weight value of a skin color point in the face area is equal to the skin color weight value of that point, and the face skin color weight value of all points outside the face area is cleared to zero;
  • the face area modulation module is configured to: process the input luminance image through a first parameter group applicable to non-face skin color points to obtain a first edge value, and process the input luminance image through a second parameter group applicable to face skin color points to obtain a second edge value; and mix the first edge value and the second edge value according to the aforementioned face skin color weight value, and transmit the mixed edge value to the edge enhancement module for edge enhancement processing.
  • owing to the adoption of the above technical solutions, the present invention has, by way of example, the following advantages and positive effects: the characteristics of the face image are exploited by using independent edge enhancement parameters for the skin points of the face, improving the edge enhancement effect of face skin points without affecting the edge enhancement effect of non-face skin color points.
  • the scheme provided in the present invention combines face detection and skin color detection to accurately locate the skin points of the face, and can significantly reduce the false detection rate by excluding the skin color points that do not belong to a face.
  • by using different edge enhancement parameters for face skin points and non-face skin points, preferably including edge detection parameters, noise parameters, and intensity parameters, the present invention can more finely distinguish the enhancement effects of skin color points and non-skin color points, and it is also convenient for the user to flexibly adjust the enhancement effect according to the characteristics and preferences of the human face, with wide applicability and strong flexibility.
  • FIG. 1 is a flowchart of an image edge enhancement processing method provided by the present invention.
  • FIG. 2 is an information processing flowchart of an image edge enhancement processing method provided by an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of the skin color region in the (H, S) coordinate system provided by the present invention.
  • FIG. 4 is a schematic diagram of the weight transition area in the (H, S) coordinate system provided by the present invention.
  • FIG. 5 is an example diagram of a detected face region provided by the present invention.
  • FIG. 6 is a flowchart of detecting whether a pixel falls into a face area provided by the present invention.
  • this embodiment provides an image edge enhancement processing method.
  • the method includes the following steps:
  • S100 Receive an input image separated into a luminance signal and a chrominance signal, where the luminance signal corresponds to the input luminance image, and the chrominance signal corresponds to the input chrominance image.
  • the image data separated into a luminance signal and a chrominance signal may be one of YCbCr image data, HSV image data, and HSI image data.
  • the luminance signal is a Y signal
  • the chrominance signal is a chrominance (C) signal.
  • the luminance signal refers to an electrical signal representing the luminance of a picture in a video system.
  • the signal representing the chrominance information is usually superimposed with the luminance signal to save the frequency bandwidth of the transmitted signal.
  • a signal representing luminance information is referred to as a Y signal
  • a signal component representing chrominance information is referred to as a C signal.
  • the input image is separated into an input luminance image corresponding to the luminance signal (ie, Y in ), and an input chrominance image corresponding to the chrominance signal (ie, Cr and Cb).
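  • for context, a luminance/chrominance separation of this kind can be obtained from RGB with a standard conversion; the BT.601 full-range matrix below is one common choice, used here only as an illustrative assumption, not something the patent prescribes:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one RGB pixel to (Y, Cb, Cr) using BT.601 full-range coefficients.

    Y is the luminance signal; Cb and Cr are the blue and red chrominance
    differences, here centered on 0 (roughly -128..127 for 8-bit input).
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b
    cr = 0.5 * r - 0.4187 * g - 0.0813 * b
    return y, cb, cr
```

A neutral white pixel (255, 255, 255) maps to full luminance with zero chrominance, which is why skin color detection operates only on the Cb/Cr channels.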
  • the first parameter group is a parameter group a including an edge detection operator, a noise suppression parameter and an intensity adjustment parameter, and the parameter group a is used to sequentially perform edge detection, noise suppression, and intensity adjustment on the input luminance image process, and obtain the edge value E 2a .
  • the second parameter group is a parameter group b including an edge detection operator, a noise suppression parameter and an intensity adjustment parameter, and the parameter group b is used to sequentially perform edge detection, noise suppression, and intensity adjustment processing on the input luminance image to obtain an edge value E 2b .
  • the skin color weight of each pixel is calculated, and the larger the weight value, the greater the possibility of skin color.
  • the skin color points in non-face areas are excluded; that is, only the skin color weights of points in the face areas are retained, and the weights of all other skin color points outside the face areas are cleared to zero.
  • S300 Mix the first edge value and the second edge value according to the aforementioned face skin color weight value, and combine the mixed edge value with the input luminance value to perform edge enhancement.
  • the obtained edge values E 2a and E 2b are mixed according to the face skin color weight value to obtain the final edge value.
  • the final edge value is then applied to the input luminance value to obtain an edge-enhanced luminance value to generate an enhanced edge.
  • for a pixel located in the i-th row and j-th column of the image, denoted (i,j), the edge value is calculated according to the following formula: E 0(i,j) = ∑(m=-L..L) ∑(n=-L..L) C Q(m,n) × Y in(i+m, j+n), where
  • E 0(i,j) is the edge value of point (i,j);
  • Y in(i,j) is the luminance value of the input image at point (i,j), 0 ≤ Y in(i,j) ≤ 255;
  • C Q is an edge detection operator of size Q, Q = 2×L+1, with L a set parameter value; m, n, L are integers, -L ≤ m ≤ L, -L ≤ n ≤ L.
  • C 3 and C 5 are not unique, and those skilled in the art can select and adjust the calculation matrix of the edge detection operator C Q as required.
  • the user can preset the calculation matrix corresponding to the Q edge detection operator C Q in the memory, and call the corresponding calculation matrix according to the Q value (calculated by the L value) when needed.
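  • the operator-based edge detection with a C Q of size Q = 2L+1 can be sketched as below; the 3×3 Laplacian-like matrix in the usage note is only an illustrative choice of C 3, and zero padding at the borders is an added assumption:

```python
def edge_detect(y_in, c_q):
    """E0(i, j) = sum over m, n of C_Q(m, n) * Y_in(i+m, j+n).

    y_in: 2D list of luminance values; c_q: (2L+1) x (2L+1) operator matrix.
    Out-of-image samples are treated as 0 (zero padding at the borders).
    """
    rows, cols = len(y_in), len(y_in[0])
    L = len(c_q) // 2
    e0 = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = 0.0
            for m in range(-L, L + 1):
                for n in range(-L, L + 1):
                    if 0 <= i + m < rows and 0 <= j + n < cols:
                        acc += c_q[m + L][n + L] * y_in[i + m][j + n]
            e0[i][j] = acc
    return e0
```

With the example operator C 3 = [[-1,-1,-1],[-1,8,-1],[-1,-1,-1]], a flat luminance region yields E 0 = 0 away from the image borders, as expected of a high-pass edge detector.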
  • during noise suppression, noise in the edge value is removed or attenuated by thresholding: E 1(i,j) = E 0(i,j) when |E 0(i,j)| > T 0, and E 1(i,j) = 0 otherwise, where T 0 is the noise threshold, T 0 > 0.
  • during intensity adjustment, the intensity of positive and negative edges can be adjusted separately: E 2(i,j) = Gain p × E 1(i,j) for positive edges and E 2(i,j) = Gain n × E 1(i,j) for negative edges;
  • the parameter Gain p is the adjustment gain of the positive edge, and the parameter Gain n is the gain of the negative edge.
  • skin color is detected in the (H, S) coordinate system, where H is used to describe the chromaticity of a pixel and S is used to describe the saturation of a pixel.
  • Cr(i,j) is the red difference at point (i,j), -128 ⁇ Cr(i,j) ⁇ 127;
  • Cb(i,j) is the blue difference at (i,j), -128 ≤ Cb(i,j) ≤ 127.
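  • the published formula images mapping (Cr, Cb) to (H, S) are not reproduced in this text; a common hue/saturation parameterization of the (Cb, Cr) plane, used here only as an assumed stand-in, takes the angle as H and the radius as S:

```python
import math

def cr_cb_to_hs(cr, cb):
    """Map chrominance differences (Cr, Cb) to an assumed (H, S) pair.

    H: angle in the (Cb, Cr) plane in degrees [0, 360);
    S: distance from the origin of that plane.
    """
    h = math.degrees(math.atan2(cr, cb)) % 360.0
    s = math.hypot(cr, cb)
    return h, s
```

Under this parameterization a pure blue difference (Cr = 0, Cb > 0) has H = 0, and a pure red difference (Cr > 0, Cb = 0) has H = 90.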
  • a rectangular area may be determined as a skin color area by preset parameters H 0 , H 1 , S 0 and S 1 , as shown in FIG. 3 .
  • the pixel is considered to be a skin color point, otherwise it is a non-skin color point.
  • the skin color weight value W skin (i, j) of the point can be calculated by the following formula:
  • H 0 , H 1 , S 0 , S 1 , D s , and D h are all preset parameter values, and satisfy H 0 +2 ⁇ D h ⁇ H 1 and S 0 +2 ⁇ D s ⁇ S 1 .
  • the parameter D s is used to adjust the width of the weight transition interval in the S direction
  • the parameter D h is used to adjust the width of the weight transition interval in the H direction, as shown in FIG. 4 .
  • the user can personalize the parameter values of H 0, H 1, S 0, S 1, D s and D h as required, or have the system set these parameter values adaptively.
  • the face detection result can come from a dedicated face detection module.
  • a face detection algorithm module is usually set.
  • the face area information can be obtained, including the face position (usually represented by coordinate values) and size information (usually represented by width and height).
  • the steps of obtaining the weight value of the face skin color of each pixel point according to the face area information of the input image are as follows:
  • Step 1 perform face detection on the input image through the face detection module, and obtain the face area information of the input image.
  • N faces are detected in the image to be processed, which are face 0, face 1, face 2, . . . , face N-1.
  • the face area information of the k-th (0 ≤ k < N) face is represented as F(x k, y k, w k, h k), indicating that the coordinates of the upper left corner of the face area are (x k, y k), the region width is w k, and the height is h k.
  • Step 2 for each pixel point (i,j) in the image, from the 0th face to the (N-1)-th face, it is determined whether the pixel falls in any of the above face areas; if it does, its face weight W face(i,j) = 1, otherwise W face(i,j) = 0.
  • Step 21 Initialize, setting k = 0.
  • Step 22 Obtain the face region information F(x k, y k, w k, h k) of the k-th face.
  • Step 23 For the pixel point (i,j), judge whether x k ≤ j ≤ x k+w k and y k ≤ i ≤ y k+h k holds; when it is judged to be true, the point falls within the k-th face area, skip to step 27; otherwise, continue to step 24.
  • Step 24 Set k = k+1.
  • Step 25 Determine whether k < N holds; if it is true, skip to step 22; otherwise, continue to step 26.
  • Step 26 Set W face(i,j) = 0 and end the judgment for the current pixel.
  • Step 27 Set W face(i,j) = 1 and end the judgment for the current pixel.
  • Step 3 Finally, according to the skin color weight W skin(i,j) and the face weight W face(i,j) of each pixel point (i,j), calculate the face skin color weight of each pixel as W(i,j) = W skin(i,j) × W face(i,j).
  • the final edge value is obtained by mixing E 2a(i,j) and E 2b(i,j) with weight W(i,j):
  • E(i,j) = E 2a(i,j)×(1-W(i,j)) + E 2b(i,j)×W(i,j),
  • the first edge value E 2a and the second edge value E 2b are mixed according to the weight value of the skin color of the face, so as to obtain the final edge value E(i,j) of each pixel point.
  • the final edge value E(i,j) of each pixel is summed with the input luminance value for edge enhancement processing.
  • the calculation formula is as follows: Y out(i,j) = Y in(i,j) + E(i,j).
  • the processed image can then be obtained as an output luminance image for output.
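  • putting the per-pixel pieces together, the blend-and-add step can be sketched as follows; the clamp to the 8-bit range is an added safeguard, not something stated in the text:

```python
def enhance_pixel(y_in, e2a, e2b, w):
    """Blend the two edge values by the face skin color weight W,
    then add the result to the input luminance: Yout = Yin + E.
    """
    e = e2a * (1.0 - w) + e2b * w          # E(i,j)
    y_out = y_in + e                       # Yout(i,j) = Yin(i,j) + E(i,j)
    return max(0.0, min(255.0, y_out))     # keep within the 8-bit range
```

With W = 0 the pixel receives the non-face enhancement E 2a unchanged; with W = 1 it receives the gentler face enhancement E 2b; intermediate weights fade smoothly between the two.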
  • the above technical solution provided by the present invention combines face detection and skin color detection to accurately locate the skin points of the human face, and can significantly reduce the false detection rate by excluding the skin color points that are not of the human face. Further, by using different edge enhancement parameters for human face skin points and non-human face skin points, preferably including edge detection parameters, noise parameters, and intensity parameters, the present invention can more finely distinguish the enhancement effect of skin color points and non-skin color points. At the same time, it is also convenient for users to flexibly adjust the enhancement effect according to the characteristics and preferences of the human face, with wide applicability and strong flexibility.
  • the apparatus includes a processor and a memory for storing processor-executable instructions and parameters.
  • the processor includes an edge analysis unit, a skin color analysis unit and an enhancement processing unit.
  • the edge analysis unit is used to receive an input image separated into a luminance signal and a chrominance signal, the luminance signal corresponds to the input luminance image, and the chrominance signal corresponds to the input chrominance image;
  • and is further configured to process the input luminance image through a first parameter group suitable for non-face skin color points to obtain a first edge value, and to process the input luminance image through a second parameter group suitable for face skin color points to obtain a second edge value.
  • the skin color analysis unit is used to perform skin color detection on the input chrominance image to obtain the skin color weight value of each pixel point, and to obtain the face skin color weight value of each pixel point according to the face area information of the input image;
  • wherein the face skin color weight value of a skin color point in the face area is equal to the skin color weight value of that point, and the face skin color weight value of all points outside the face area is cleared to zero.
  • the enhancement processing unit is configured to mix the first edge value and the second edge value according to the aforementioned face skin color weight value, and combine the mixed edge value with the input luminance value to perform edge enhancement.
  • the edge analysis unit may further include an edge detection subunit, a noise suppression subunit, and an intensity adjustment subunit.
  • the edge detection subunit is configured to: for a certain pixel point located in the i-th row and the j-th column in the image, denoted as (i,j), calculate the edge value of the point according to the following formula: E 0(i,j) = ∑(m=-L..L) ∑(n=-L..L) C Q(m,n) × Y in(i+m, j+n), where
  • E 0(i,j) is the edge value of point (i,j);
  • Y in(i,j) is the luminance value of the input image at point (i,j), 0 ≤ Y in(i,j) ≤ 255;
  • C Q is an edge detection operator of size Q, Q = 2×L+1; m, n, L are integers, -L ≤ m ≤ L, -L ≤ n ≤ L.
  • the noise suppression subunit is configured to remove or attenuate noise in edge values by thresholding: E 1(i,j) = E 0(i,j) when |E 0(i,j)| > T 0, and E 1(i,j) = 0 otherwise, where T 0 is the noise threshold, T 0 > 0.
  • the intensity adjustment subunit is configured to adjust the intensity of positive and negative edges respectively: E 2(i,j) = Gain p × E 1(i,j) for positive edges and E 2(i,j) = Gain n × E 1(i,j) for negative edges;
  • the parameter Gain p is the adjustment gain of the positive edge, and the parameter Gain n is the gain of the negative edge.
  • the skin color analysis unit may further include a skin color detection subunit and a face skin color detection subunit.
  • the skin color detection subunit is configured to detect skin color in the (H, S) coordinate system; wherein, for a certain pixel point (i, j) located in the i-th row and the j-th column in the image, the corresponding H and S value are calculated according to the following formula:
  • Cr(i,j) is the red difference at point (i,j), -128 ⁇ Cr(i,j) ⁇ 127;
  • Cb(i,j) is the blue difference at (i,j), -128 ≤ Cb(i,j) ≤ 127.
  • the skin color detection subunit is also configured to: judge whether the H value and the S value of the current pixel point (i,j) fall within the skin color area in the (H, S) coordinate system; when they fall within the skin color area, the pixel point is considered a skin color point, otherwise it is a non-skin color point;
  • for a skin color point, the skin color weight value W skin(i,j) of the point is calculated as in the skin color detection described above; H 0, H 1, S 0, S 1, D s, D h are all preset parameter values, and satisfy H 0+2×D h < H 1, S 0+2×D s < S 1;
  • the face skin color detection subunit is configured to: perform face detection on the input image through the face detection module and obtain the face area information of the input image, wherein the face area information of the k-th (0 ≤ k < N) face is expressed as F(x k, y k, w k, h k), which means that the coordinates of the upper left corner of the face area are (x k, y k), the width of the area is w k, and the height is h k;
  • the enhancement processing unit includes an edge synthesis subunit and an enhanced edge generation subunit.
  • the edge synthesis subunit is configured as: according to the following formula
  • E(i,j) = E 2a(i,j)×(1-W(i,j)) + E 2b(i,j)×W(i,j),
  • the first edge value E 2a and the second edge value E 2b are mixed according to the weight value of the skin color of the face, so as to obtain the final edge value E(i,j) of each pixel point.
  • the enhanced edge generation subunit is configured to: according to the formula Y out(i,j) = Y in(i,j) + E(i,j),
  • sum the final edge value E(i,j) of each pixel with the input luminance value to perform edge enhancement processing.
  • Another embodiment of the present invention also provides an image edge enhancement processing system.
  • the system includes an edge detection module and an edge enhancement module, and a face region modulation module arranged between the edge detection module and the edge enhancement module, and the face region modulation module is connected to the skin color detection module.
  • the skin color detection module is configured to: perform skin color detection on the input chromaticity image to obtain the skin color weight value of each pixel point, and obtain the skin color weight value of each pixel point according to the face area information of the input image;
  • the skin color weight value of the skin color point in the face area is equal to the skin color weight value of the point, and the face skin color weight value of all points outside the face area is cleared.
  • the face area modulation module is configured to: process the input luminance image through a first parameter group applicable to non-face skin color points to obtain a first edge value, and process the input luminance image through a second parameter group applicable to face skin color points to obtain a second edge value; and mix the first edge value and the second edge value according to the aforementioned face skin color weight value, and transmit the mixed edge value to the edge enhancement module for edge enhancement processing.
  • each module of the system may be configured to include a plurality of sub-modules to perform the information processing process described in the previous embodiments, which will not be repeated here.

Abstract

The present invention discloses an image edge enhancement processing method and application, relating to the technical field of digital image processing. An image edge enhancement processing method includes the steps of: receiving an input image separated into a luminance signal and a chrominance signal; processing the input luminance image through a first parameter group applicable to non-face skin color points to obtain a first edge value, and processing the input luminance image through a second parameter group applicable to face skin color points to obtain a second edge value; performing skin color detection on the input chrominance image to obtain the skin color weight value of each pixel point, and obtaining the face skin color weight value of each pixel point according to the face area of the input image; mixing the first edge value and the second edge value according to the face skin color weight value, and combining the mixed edge value with the input luminance value for edge enhancement. The present invention improves the edge enhancement effect of face skin points without affecting the edge enhancement effect of non-face skin color points.

Description

图像的边缘增强处理方法及应用 技术领域
本发明涉及数字图像处理技术领域,尤其涉及一种图像的边缘增强处理方法及应用。
背景技术
随着数字图像处理技术的发展,各种改善画质的方法被应用于视频处理器以为用户提供高品质的视频图像。其中,肤色检测与处理技术是视频图像处理技术的一个重要分支。由于拍摄时的光、电和热环境的干扰,人体肤色可能出现与人类视觉习惯不一致的情况,导致人眼感官的不适应。因此需要对人体肤色进行检测、校正处理以便使其看上去更自然、健康,从而符合人眼视觉习惯。然而,现有的肤色检测和处理通常基于单一的彩色空间,这些处理方法虽然易于硬件实现,但当图像中包含大量的类肤色噪声和背景像素时,容易将背景中的类肤色像素误检为人体肤色像素,导致误检率较高。另一方面,人脸的皮肤区域往往具有非常多的细节,作为举例,比如细纹、痘印、雀斑以及阴影边界等,这些细节一般对比度相对较弱,这部分一般不需要增强太多,边缘也不能太宽,否则人脸看起来会不自然;同时,对于图像中非人脸的部分,比如景物、建筑等,为了让细节更明显,对比度相对弱的细节的边缘往往是增强的重点。如果以上两部分图像使用统一的增强参数,最终效果无法在二者之间获得很好的平衡。
目前,虽然现有技术中也提供了区分肤色点和非肤色点的人脸增强方案,以公开的中国专利申请CN102542538A为例,其提供了一种边缘增强方法:使用色彩检测的方法区分肤色点和非肤色点,减弱肤色点的增强强度,以改善人脸的增强效果。然而,上述方法中,色彩检测的方法仅能区分肤色点和非肤色点,无法精确定位到人脸肤色点,误检率也很高。作为举例,比如室内常见的米黄色本纹地板的颜色也在肤色范围内,若被当成肤色点处理,减弱边缘增强强度,那么本应该被增强更多的地板纹路将得不到有效增强,影响了图像的整体增强效果。
基于上述现有技术,如何在不影响非人脸肤色点边缘增强效果的前提下,提高人脸肤色点的边缘增强效果,是当前亟需解决的技术问题。
发明内容
本发明的目的在于:克服现有技术的不足,提供了一种图像的边缘增强处 理方法及应用。本发明利用人脸图像特性对人脸皮肤点使用独立的边缘增强参数,在不影响非人脸肤色点边缘增强效果的前提下,提高人脸皮肤点的边缘增强效果。
为实现上述目标,本发明提供了如下技术方案:
一种图像的边缘增强处理方法,包括如下步骤:
接收被分离成亮度信号和色度信号的输入图像,亮度信号对应输入亮度图像,色度信号对应输入色度图像;
通过适用于非人脸肤色点的第一参数组对输入亮度图像进行处理获得第一边缘值,通过适用于人脸肤色点的第二参数组对输入亮度图像进行处理获得第二边缘值;以及,对输入色度图像进行肤色检测获得每一个像素点的肤色权重值,根据输入图像的人脸区域信息获取每个像素点的人脸肤色权重值;其中,人脸区域中的肤色点的人脸肤色权重值等于该点的肤色权重值,人脸区域外的所有点的人脸肤色权重值被清零;
将第一边缘值与第二边缘值按前述人脸肤色权重值进行混合,将混合得到的边缘值结合到输入亮度值上进行边缘增强。
进一步,所述第一参数组为包括边缘检测算子、噪声抑制参数和强度调整参数的参数组a,使用参数组a对输入亮度图像依次进行边缘检测、噪声抑制、强度调整处理,得到边缘值E 2a
所述第二参数组为包括边缘检测算子、噪声抑制参数和强度调整参数的参数组b,使用参数组b对输入亮度图像依次进行边缘检测、噪声抑制、强度调整处理,得到边缘值E 2b
进一步,对于图像中某一位于第i行、第j列的像素点,记为(i,j),按以下公式计算边缘值:
Figure PCTCN2021116307-appb-000001
where
E_0(i,j) is the edge value at point (i,j); Y_in(i,j) is the luminance value of the input image at point (i,j), 0 ≤ Y_in(i,j) ≤ 255; C_Q is an edge-detection operator of size Q, Q = 2×L+1, where L is a preset parameter value; m, n, L are integers, −L ≤ m ≤ L, −L ≤ n ≤ L;
when performing noise suppression, the following formula is used to remove or attenuate noise in the edge value:
E_1(i,j) = 0, if |E_0(i,j)| ≤ T_0
E_1(i,j) = E_0(i,j) − T_0, if E_0(i,j) > T_0
E_1(i,j) = E_0(i,j) + T_0, if E_0(i,j) < −T_0
where
T_0 is the noise threshold, T_0 > 0;
when performing strength adjustment, the following formula adjusts the strengths of positive and negative edges separately:
E_2(i,j) = Gain_p × E_1(i,j), if E_1(i,j) > 0
E_2(i,j) = Gain_n × E_1(i,j), if E_1(i,j) ≤ 0
where
the parameter Gain_p is the adjustment gain for positive edges and the parameter Gain_n is the gain for negative edges; a gain greater than 1 strengthens the edge, and a gain less than 1 weakens it.
Further, when performing skin-color detection on the input chrominance image, skin color is detected in the (H,S) coordinate system, where H denotes the hue of a pixel and S its saturation;
for the pixel (i,j) in row i, column j of the image,
H(i,j) = arctan( Cr(i,j) / Cb(i,j) )
S(i,j) = √( Cr(i,j)² + Cb(i,j)² )
where
Cr(i,j) is the red chroma difference at point (i,j), −128 ≤ Cr(i,j) ≤ 127; Cb(i,j) is the blue chroma difference at point (i,j), −128 ≤ Cb(i,j) ≤ 127.
Further, in the (H,S) coordinate system, a rectangular region determined by preset parameters H_0, H_1, S_0 and S_1 is taken as the skin-color region; if the H and S values of pixel (i,j) fall inside the rectangular region in the (H,S) coordinate system, the pixel is considered a skin-color point, otherwise a non-skin-color point;
for a skin-color point, its skin-color weight W_skin(i,j) is computed by the following formulas:
W_h(i,j) = min( 1, (H(i,j) − H_0) / D_h, (H_1 − H(i,j)) / D_h )
W_s(i,j) = min( 1, (S(i,j) − S_0) / D_s, (S_1 − S(i,j)) / D_s )
W_skin(i,j) = W_h(i,j) × W_s(i,j)
where
0 ≤ W_skin(i,j) ≤ 1, and a larger value indicates a higher likelihood that the point is a skin-color point; H_0, H_1, S_0, S_1, D_s, D_h are preset parameter values satisfying H_0 + 2×D_h < H_1 and S_0 + 2×D_s < S_1; D_s adjusts the width of the weight transition interval in the S direction, and D_h adjusts the width of the weight transition interval in the H direction;
for a non-skin-color point, the corresponding skin-color weight W_skin(i,j) = 0.
Further, the steps of obtaining the face skin-color weight of each pixel from the face region information of the input image are as follows:
Step 1: perform face detection on the input image with a face-detection module to obtain the face region information of the input image; for the N faces detected in the input image, the face region information F(x_k, y_k, w_k, h_k) of the k-th face (0 ≤ k < N) indicates that the top-left corner of the face region is at (x_k, y_k), with region width w_k and height h_k;
Step 2: for each pixel (i,j), from face 0 through face N−1, determine whether the pixel falls within any of the above face regions; if it does, the pixel's face weight W_face(i,j) = 1; if it does not, W_face(i,j) = 0;
Step 3: from each pixel's skin-color weight W_skin(i,j) and face weight W_face(i,j), compute the face skin-color weight W(i,j) of each pixel by the following formula:
W(i,j) = W_skin(i,j) × W_face(i,j)
Further, according to the formula
E(i,j) = E_2a(i,j) × (1 − W(i,j)) + E_2b(i,j) × W(i,j),
the first edge value E_2a and the second edge value E_2b are blended according to the aforementioned face skin-color weights to obtain the final edge value E(i,j) of each pixel;
and the final edge value E(i,j) of each pixel is summed with the input luminance value to perform edge enhancement, computed as
Y_out(i,j) = Y_in(i,j) + E(i,j),
and the processed image is obtained and output as the output luminance image.
Further, step 2 comprises:
step 21: initialize, let k = 0;
step 22: obtain the face region information F(x_k, y_k, w_k, h_k) of the k-th face;
step 23: for pixel (i,j), determine whether x_k ≤ j ≤ x_k + w_k and y_k ≤ i ≤ y_k + h_k; if true, the point falls within the k-th face region, jump to step 27; otherwise continue to step 24;
step 24: let k = k + 1;
step 25: determine whether k < N; if true, jump to step 22; otherwise continue to step 26;
step 26: let W_face(i,j) = 0 and end the decision for the current pixel;
step 27: let W_face(i,j) = 1 and end the decision for the current pixel.
The present invention further provides an image edge enhancement processing apparatus, comprising:
a processor; and
a memory for storing processor-executable instructions and parameters;
the processor comprises an edge analysis unit, a skin-color analysis unit and an enhancement processing unit,
the edge analysis unit being configured to receive an input image separated into a luminance signal and a chrominance signal, the luminance signal corresponding to an input luminance image and the chrominance signal corresponding to an input chrominance image; and to process the input luminance image with a first parameter group suited to non-face skin-color points to obtain a first edge value, and process the input luminance image with a second parameter group suited to face skin-color points to obtain a second edge value;
the skin-color analysis unit being configured to perform skin-color detection on the input chrominance image to obtain a skin-color weight for each pixel, and to obtain a face skin-color weight for each pixel from the face region information of the input image, wherein the face skin-color weight of a skin-color point inside a face region equals that point's skin-color weight, and the face skin-color weights of all points outside the face regions are set to zero;
the enhancement processing unit being configured to blend the first edge value and the second edge value according to the aforementioned face skin-color weights, and to apply the blended edge value to the input luminance value to perform edge enhancement.
The present invention further provides an image edge enhancement processing system comprising an edge detection module, an edge enhancement module, and a face-region modulation module arranged between the edge detection module and the edge enhancement module, the face-region modulation module being connected to a skin-color detection module;
the skin-color detection module is configured to perform skin-color detection on the input chrominance image to obtain a skin-color weight for each pixel, and to obtain a face skin-color weight for each pixel from the face region information of the input image, wherein the face skin-color weight of a skin-color point inside a face region equals that point's skin-color weight, and the face skin-color weights of all points outside the face regions are set to zero;
the face-region modulation module is configured to process the input luminance image with a first parameter group suited to non-face skin-color points to obtain a first edge value, and process the input luminance image with a second parameter group suited to face skin-color points to obtain a second edge value; to blend the first edge value and the second edge value according to the aforementioned face skin-color weights; and to transmit the blended edge value to the edge enhancement module for edge enhancement processing.
By adopting the above technical solutions, the present invention, compared with the prior art, has by way of example the following advantages and positive effects: it exploits the characteristics of face images to use independent edge-enhancement parameters for face skin points, improving the edge-enhancement effect on face skin points without affecting the edge-enhancement effect on non-face skin-color points.
Compared with existing methods that distinguish skin-color points from non-skin-color points by color detection alone (where every point falling within the skin-color range is treated as a skin-color point, giving a high false-detection rate), the solution provided by the present invention combines face detection and skin-color detection to precisely locate face skin points; by excluding non-face skin-color points, the false-detection rate can be significantly reduced. Furthermore, by using different edge-enhancement parameters for face skin points and non-face skin points, preferably including edge-detection, noise and strength parameters, the present invention can differentiate the enhancement effects of skin-color and non-skin-color points more finely, while also allowing users to flexibly adjust the enhancement effect according to the characteristics of face skin and their preferences, offering wide applicability and high flexibility.
Brief description of the drawings
Fig. 1 is a flowchart of the image edge enhancement processing method provided by the present invention.
Fig. 2 is an information-processing flowchart of the image edge enhancement processing method provided by an embodiment of the present invention.
Fig. 3 is a schematic diagram of the skin-color region in the (H,S) coordinate system provided by the present invention.
Fig. 4 is a schematic diagram of the weight transition zones in the (H,S) coordinate system provided by the present invention.
Fig. 5 is an example diagram of detected face regions provided by the present invention.
Fig. 6 is a flowchart, provided by the present invention, for checking whether a pixel falls within a face region.
Detailed description of the embodiments
The image edge enhancement processing method and application disclosed by the present invention are described in further detail below with reference to the accompanying drawings and specific embodiments. It should be noted that the technical features, or combinations of technical features, described in the following embodiments should not be regarded as isolated; they may be combined with one another to achieve better technical effects. In the drawings of the embodiments below, identical reference numerals appearing in different drawings denote identical features or components and may be applied in different embodiments; therefore, once an item is defined in one drawing, it need not be discussed further in subsequent drawings.
It should be noted that the structures, proportions, sizes and the like depicted in the drawings of this specification are intended only to accompany the content disclosed in the specification, for the understanding and reading of those familiar with this technology, and are not intended to limit the conditions under which the invention may be implemented; any structural modification, change of proportional relationship or adjustment of size shall fall within the scope of the technical content disclosed by the invention, provided it does not affect the effects the invention can produce and the objects it can achieve. The scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order described or discussed, including in a substantially simultaneous manner or in reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
Techniques, methods and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods and devices should be regarded as part of the granted specification. In all examples shown and discussed here, any specific value should be interpreted as merely illustrative rather than limiting; other examples of the exemplary embodiments may therefore have different values.
Embodiment
Referring to Fig. 1, this embodiment provides an image edge enhancement processing method. The method comprises the following steps:
S100: receive an input image separated into a luminance signal and a chrominance signal, the luminance signal corresponding to an input luminance image and the chrominance signal corresponding to an input chrominance image.
In this embodiment, the image data separated into a luminance signal and a chrominance signal may be one of YCbCr image data, HSV image data and HSI image data. For example, in the case of YCbCr image data, the luminance signal is the Y signal and the chrominance signal is the chroma (C) signal. The luminance signal refers to the electrical signal representing picture brightness in a video system. When signals are transmitted in a video system, the signal representing chroma information is usually superimposed on the luminance signal to save transmission bandwidth. In this case, the signal representing luminance information is called the Y signal, and the signal component representing chroma information is called the C signal.
In the following, YCbCr image data is used as an example for ease of description.
Referring to Fig. 2, the input image is separated into an input luminance image corresponding to the luminance signal (i.e., Y_in) and an input chrominance image corresponding to the chrominance signals (i.e., Cr and Cb).
S200: process the input luminance image with a first parameter group suited to non-face skin-color points to obtain a first edge value, and process the input luminance image with a second parameter group suited to face skin-color points to obtain a second edge value; and perform skin-color detection on the input chrominance image to obtain a skin-color weight for each pixel, and obtain a face skin-color weight for each pixel from the face region information of the input image. The face skin-color weight of a skin-color point inside a face region equals that point's skin-color weight, and the face skin-color weights of all points outside the face regions are set to zero.
In a preferred embodiment, the first parameter group is a parameter group a comprising an edge-detection operator, a noise-suppression parameter and a strength-adjustment parameter; parameter group a is used to perform edge detection, noise suppression and strength adjustment on the input luminance image in sequence, yielding an edge value E_2a.
The second parameter group is a parameter group b comprising an edge-detection operator, a noise-suppression parameter and a strength-adjustment parameter; parameter group b is used to perform edge detection, noise suppression and strength adjustment on the input luminance image in sequence, yielding an edge value E_2b.
When performing skin-color detection on the input chrominance image, a skin-color weight is computed for each pixel; the larger the weight, the more likely the pixel is skin colored.
Then, based on the face region information in the input image, including face position and size information, skin-color points outside the face regions are excluded; that is, only the skin-color weights of points inside the face regions are retained, and the weights of all other skin-color points outside the face regions are set to zero.
S300: blend the first edge value and the second edge value according to the aforementioned face skin-color weights, and apply the blended edge value to the input luminance value to perform edge enhancement.
After the face skin-color weight of each pixel is obtained, the previously obtained edge values E_2a and E_2b are blended according to the face skin-color weights to obtain the final edge value. The final edge value is then applied to the input luminance value to obtain the edge-enhanced luminance value and generate the enhanced edge.
The technical solution provided by this embodiment is described in detail below with reference to Figs. 2 to 6.
1) Edge detection
For the pixel in row i, column j of the image, denoted (i,j), the edge value is computed by the following formula:
E_0(i,j) = Σ_{m=-L}^{L} Σ_{n=-L}^{L} C_Q(m,n) × Y_in(i+m, j+n)
where
E_0(i,j) is the edge value at point (i,j);
Y_in(i,j) is the luminance value of the input image at point (i,j), 0 ≤ Y_in(i,j) ≤ 255;
C_Q is an edge-detection operator of size Q, Q = 2×L+1, where L is a preset parameter value;
m, n, L are integers, −L ≤ m ≤ L, −L ≤ n ≤ L.
By way of example and not limitation, the cases L = 1 and L = 2 are used below to illustrate edge-detection operators of size 3 and 5.
When L = 1, Q = 3; that is, a 3×3 edge-detection operator C_3 is used to compute the edge value E_0(i,j) of each point. By way of example, a typical value of C_3 may be as follows:
Figure PCTCN2021116307-appb-000011
When L = 2, Q = 5; that is, a 5×5 edge-detection operator C_5 is used to compute the edge value E_0(i,j) of each point. By way of example, a typical value of C_5 may be as follows:
Figure PCTCN2021116307-appb-000012
It should be noted that the above C_3 and C_5 are not unique; those skilled in the art may select and adjust the computation matrix of the edge-detection operator C_Q as needed. The user may preset the computation matrices corresponding to edge-detection operators C_Q of size Q in the memory, and call the corresponding matrix according to the value of Q (computed from L) when needed.
2) Edge noise suppression
When performing noise suppression, the following formula can be used to remove or attenuate noise in the edge value:
E_1(i,j) = 0, if |E_0(i,j)| ≤ T_0
E_1(i,j) = E_0(i,j) − T_0, if E_0(i,j) > T_0
E_1(i,j) = E_0(i,j) + T_0, if E_0(i,j) < −T_0
where
T_0 is the noise threshold, T_0 > 0.
3) Edge strength adjustment
When performing strength adjustment, the following formula can be used to adjust the strengths of positive and negative edges separately:
E_2(i,j) = Gain_p × E_1(i,j), if E_1(i,j) > 0
E_2(i,j) = Gain_n × E_1(i,j), if E_1(i,j) ≤ 0
where
the parameter Gain_p is the adjustment gain for positive edges and the parameter Gain_n is the gain for negative edges.
A gain greater than 1 strengthens the edge; a gain less than 1 weakens it.
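By way of example and not limitation, the three processing steps above (edge detection, noise suppression, strength adjustment) can be sketched in Python/NumPy as follows. The patent gives the operator matrices, the exact coring rule and the border handling only as figures, so a replicate-padded convolution, a linear soft-coring rule and separate positive/negative gains are assumed here:

```python
import numpy as np

def detect_edges(y_in, kernel):
    """E_0(i,j) = sum_m sum_n C_Q(m,n) * Y_in(i+m, j+n), with replicate padding at the borders."""
    L = kernel.shape[0] // 2
    p = np.pad(y_in.astype(np.float64), L, mode="edge")
    h, w = y_in.shape
    e0 = np.zeros((h, w))
    for m in range(-L, L + 1):
        for n in range(-L, L + 1):
            e0 += kernel[m + L, n + L] * p[L + m:L + m + h, L + n:L + n + w]
    return e0

def suppress_noise(e0, t0):
    """Soft coring: edges with |E_0| <= T_0 are zeroed, the rest are shrunk toward zero by T_0."""
    return np.where(np.abs(e0) <= t0, 0.0, e0 - np.sign(e0) * t0)

def adjust_strength(e1, gain_p, gain_n):
    """Scale positive edges by Gain_p and negative edges by Gain_n."""
    return np.where(e1 > 0, gain_p * e1, gain_n * e1)
```

Parameter groups a and b then correspond to two runs of this pipeline with different kernel, T_0 and gain values, yielding E_2a and E_2b for the same luminance image.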
4) Skin-color detection
In this embodiment, when performing skin-color detection on the input chrominance image, skin color is preferably detected in the (H,S) coordinate system, where H describes the hue of a pixel and S describes its saturation.
Specifically, for the pixel (i,j) in row i, column j of the image, the corresponding H and S values are computed by the following formulas:
H(i,j) = arctan( Cr(i,j) / Cb(i,j) )
S(i,j) = √( Cr(i,j)² + Cb(i,j)² )
where
Cr(i,j) is the red chroma difference at point (i,j), −128 ≤ Cr(i,j) ≤ 127; Cb(i,j) is the blue chroma difference at point (i,j), −128 ≤ Cb(i,j) ≤ 127.
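By way of example, the conversion from (Cr, Cb) to (H, S) can be sketched as follows. Since the patent gives the exact formulas only as figures, the usual polar decomposition of the chroma plane is assumed: H as the angle of the chroma vector and S as its magnitude.

```python
import math

def chroma_to_hs(cr, cb):
    """Map a (Cr, Cb) chroma pair to hue/saturation-like coordinates (an assumed decomposition)."""
    h = math.degrees(math.atan2(cr, cb)) % 360.0  # hue angle of the chroma vector, in degrees
    s = math.hypot(cr, cb)                        # saturation = chroma magnitude
    return h, s
```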
In a specific implementation, a rectangular region in the (H,S) coordinate system determined by preset parameters H_0, H_1, S_0 and S_1 may be taken as the skin-color region, as shown in Fig. 3.
If the H and S values of the current pixel (i,j) fall inside the rectangular region in the (H,S) coordinate system, the pixel is considered a skin-color point, otherwise a non-skin-color point.
That is, when H_0 ≤ H(i,j) ≤ H_1 and S_0 ≤ S(i,j) ≤ S_1, the current pixel can be judged to fall within the skin-color region, i.e. it is a skin-color point. In this case, its skin-color weight W_skin(i,j) can be computed by the following formulas:
W_h(i,j) = min( 1, (H(i,j) − H_0) / D_h, (H_1 − H(i,j)) / D_h )
W_s(i,j) = min( 1, (S(i,j) − S_0) / D_s, (S_1 − S(i,j)) / D_s )
W_skin(i,j) = W_h(i,j) × W_s(i,j)
where
0 ≤ W_skin(i,j) ≤ 1, and a larger value indicates a higher likelihood that the point is a skin-color point;
H_0, H_1, S_0, S_1, D_s, D_h are all preset parameter values satisfying H_0 + 2×D_h < H_1 and S_0 + 2×D_s < S_1.
The parameter D_s adjusts the width of the weight transition interval in the S direction, and the parameter D_h adjusts the width of the weight transition interval in the H direction, as shown in Fig. 4. The user may personalize the values of H_0, H_1, S_0, S_1, D_s and D_h as needed, or the system may set them adaptively.
For pixels not falling within the above skin-color region (i.e., non-skin-color points), the corresponding skin-color weight W_skin(i,j) = 0; that is, the skin-color weights of non-skin-color points are uniformly set to zero.
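By way of example, the skin-color weight computation can be sketched as follows, assuming linear ramps of width D_h and D_s inside the rectangle borders (the exact transition curve appears only in Fig. 4):

```python
def skin_weight(h, s, h0, h1, s0, s1, dh, ds):
    """Trapezoidal skin-color weight in the (H, S) plane.

    Points inside the rectangle [h0, h1] x [s0, s1] get a weight in (0, 1]
    that ramps up linearly over dh / ds near the borders (an assumed ramp);
    points outside the rectangle get weight 0.
    """
    if not (h0 <= h <= h1 and s0 <= s <= s1):
        return 0.0  # non-skin-color point
    wh = min(1.0, (h - h0) / dh, (h1 - h) / dh)
    ws = min(1.0, (s - s0) / ds, (s1 - s) / ds)
    return wh * ws
```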
5) Face skin-color detection
The face-detection result may come from a dedicated face-detection module; for example, face-detection algorithm modules are commonly provided in mobile-phone and surveillance application scenarios. Face region information can be obtained from the face-detection result, including face position (usually expressed as coordinates) and size information (usually expressed as width and height).
In this embodiment, the steps of obtaining the face skin-color weight of each pixel from the face region information of the input image are as follows:
Step 1: perform face detection on the input image with the face-detection module to obtain the face region information of the input image. By way of example and not limitation, referring to Fig. 5, suppose N faces are detected in the image to be processed: face 0, face 1, face 2, ..., face N−1. The face region information of the k-th face (0 ≤ k < N) is expressed as F(x_k, y_k, w_k, h_k), indicating that the top-left corner of the face region is at (x_k, y_k), with region width w_k and height h_k.
Step 2: for each pixel (i,j) in the image, from face 0 through face N−1, determine whether the pixel falls within any of the above face regions. If it does, the pixel's face weight W_face(i,j) = 1. If it does not, W_face(i,j) = 0.
The specific decision steps are described with reference to Fig. 6.
Step 21: initialize, let k = 0.
Step 22: obtain the face region information F(x_k, y_k, w_k, h_k) of the k-th face.
Step 23: for pixel (i,j), determine whether x_k ≤ j ≤ x_k + w_k and y_k ≤ i ≤ y_k + h_k; if true, the point falls within the k-th face region, jump to step 27; otherwise continue to step 24.
Step 24: let k = k + 1.
Step 25: determine whether k < N; if true, jump to step 22; otherwise continue to step 26.
Step 26: let W_face(i,j) = 0 and end the decision for the current pixel.
Step 27: let W_face(i,j) = 1 and end the decision for the current pixel.
Step 3: finally, from each pixel's skin-color weight W_skin(i,j) and face weight W_face(i,j), compute the face skin-color weight W(i,j) of each pixel by the following formula:
W(i,j) = W_skin(i,j) × W_face(i,j)
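By way of example, step 2 (the face-region test of steps 21 to 27) and step 3 (the weight combination) can be sketched as:

```python
def face_weight(i, j, faces):
    """W_face: 1 if pixel (i, j) falls in any detected face rectangle, else 0.

    Each face is a tuple (x, y, w, h) with (x, y) the top-left corner;
    j is the column (x direction) and i the row (y direction).
    """
    for (x, y, w, h) in faces:
        if x <= j <= x + w and y <= i <= y + h:
            return 1  # step 27: inside this face region
    return 0          # step 26: no face region contains the pixel

def face_skin_weight(i, j, faces, w_skin):
    """W(i,j) = W_face(i,j) * W_skin(i,j): the skin weight is kept only inside face regions."""
    return face_weight(i, j, faces) * w_skin
```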
6) Edge blending
The final edge value is obtained by blending E_2a(i,j) and E_2b(i,j) according to the weight W(i,j).
Specifically, according to the formula
E(i,j) = E_2a(i,j) × (1 − W(i,j)) + E_2b(i,j) × W(i,j),
the first edge value E_2a and the second edge value E_2b are blended according to the face skin-color weights, yielding the final edge value E(i,j) of each pixel.
7) Generating the enhanced edge
The final edge value E(i,j) of each pixel is summed with the input luminance value to perform edge enhancement, computed as
Y_out(i,j) = Y_in(i,j) + E(i,j).
The processed image can then be obtained and output as the output luminance image.
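By way of example, the edge blending and enhanced-edge generation can be sketched as follows; clipping the result to the luma range is an assumption, as the patent does not state how overflow is handled:

```python
import numpy as np

def enhance(y_in, e2a, e2b, w, y_max=255):
    """E = E_2a*(1-W) + E_2b*W ; Y_out = Y_in + E, clipped to [0, y_max] (clipping assumed)."""
    e = e2a * (1.0 - w) + e2b * w
    return np.clip(y_in + e, 0, y_max)
```

Pixels with W = 0 (non-face skin-color points and background) receive the full non-face edge E_2a, while pixels with W = 1 (confident face skin points) receive only the gentler face edge E_2b.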
The above technical solution provided by the present invention combines face detection and skin-color detection to precisely locate face skin points; by excluding non-face skin-color points, the false-detection rate can be significantly reduced. Further, by using different edge-enhancement parameters for face skin points and non-face skin points, preferably including edge-detection, noise and strength parameters, the present invention can differentiate the enhancement effects of skin-color and non-skin-color points more finely, while also allowing users to flexibly adjust the enhancement effect according to the characteristics of face skin and their preferences, offering wide applicability and high flexibility.
Another embodiment of the present invention further provides an image edge enhancement processing apparatus. The apparatus comprises a processor and a memory for storing processor-executable instructions and parameters.
The processor comprises an edge analysis unit, a skin-color analysis unit and an enhancement processing unit.
The edge analysis unit is configured to receive an input image separated into a luminance signal and a chrominance signal, the luminance signal corresponding to an input luminance image and the chrominance signal corresponding to an input chrominance image; and to process the input luminance image with a first parameter group suited to non-face skin-color points to obtain a first edge value, and process the input luminance image with a second parameter group suited to face skin-color points to obtain a second edge value.
The skin-color analysis unit is configured to perform skin-color detection on the input chrominance image to obtain a skin-color weight for each pixel, and to obtain a face skin-color weight for each pixel from the face region information of the input image, wherein the face skin-color weight of a skin-color point inside a face region equals that point's skin-color weight, and the face skin-color weights of all points outside the face regions are set to zero.
The enhancement processing unit is configured to blend the first edge value and the second edge value according to the aforementioned face skin-color weights, and to apply the blended edge value to the input luminance value to perform edge enhancement.
In this embodiment, the edge analysis unit may in turn comprise an edge-detection subunit, a noise-suppression subunit and a strength-adjustment subunit.
The edge-detection subunit is configured to compute, for the pixel in row i, column j of the image, denoted (i,j), the edge value of that point by the following formula:
E_0(i,j) = Σ_{m=-L}^{L} Σ_{n=-L}^{L} C_Q(m,n) × Y_in(i+m, j+n)
where
E_0(i,j) is the edge value at point (i,j);
Y_in(i,j) is the luminance value of the input image at point (i,j), 0 ≤ Y_in(i,j) ≤ 255;
C_Q is an edge-detection operator of size Q, Q = 2×L+1, where L is a preset parameter value;
m, n, L are integers, −L ≤ m ≤ L, −L ≤ n ≤ L.
The noise-suppression subunit is configured to remove or attenuate noise in the edge value with the following formula:
E_1(i,j) = 0, if |E_0(i,j)| ≤ T_0
E_1(i,j) = E_0(i,j) − T_0, if E_0(i,j) > T_0
E_1(i,j) = E_0(i,j) + T_0, if E_0(i,j) < −T_0
where
T_0 is the noise threshold, T_0 > 0.
The strength-adjustment subunit is configured to adjust the strengths of positive and negative edges separately with the following formula:
E_2(i,j) = Gain_p × E_1(i,j), if E_1(i,j) > 0
E_2(i,j) = Gain_n × E_1(i,j), if E_1(i,j) ≤ 0
where
the parameter Gain_p is the adjustment gain for positive edges and the parameter Gain_n is the gain for negative edges.
A gain greater than 1 strengthens the edge; a gain less than 1 weakens it.
The skin-color analysis unit may in turn comprise a skin-color detection subunit and a face skin-color detection subunit.
The skin-color detection subunit is configured to detect skin color in the (H,S) coordinate system, where for the pixel (i,j) in row i, column j of the image, the corresponding H and S values are computed by the following formulas:
H(i,j) = arctan( Cr(i,j) / Cb(i,j) )
S(i,j) = √( Cr(i,j)² + Cb(i,j)² )
where Cr(i,j) is the red chroma difference at point (i,j), −128 ≤ Cr(i,j) ≤ 127, and Cb(i,j) is the blue chroma difference at point (i,j), −128 ≤ Cb(i,j) ≤ 127.
The skin-color detection subunit is further configured to determine whether the H and S values of the current pixel (i,j) fall within the skin-color region in the (H,S) coordinate system; if they do, the pixel is considered a skin-color point, otherwise a non-skin-color point.
For a skin-color point, its skin-color weight W_skin(i,j) is computed by the following formulas:
W_h(i,j) = min( 1, (H(i,j) − H_0) / D_h, (H_1 − H(i,j)) / D_h )
W_s(i,j) = min( 1, (S(i,j) − S_0) / D_s, (S_1 − S(i,j)) / D_s )
W_skin(i,j) = W_h(i,j) × W_s(i,j)
where 0 ≤ W_skin(i,j) ≤ 1, and a larger value indicates a higher likelihood that the point is a skin-color point; H_0, H_1, S_0, S_1, D_s, D_h are all preset parameter values satisfying H_0 + 2×D_h < H_1 and S_0 + 2×D_s < S_1.
For a non-skin-color point, the corresponding skin-color weight W_skin(i,j) = 0.
The face skin-color detection subunit is configured to: perform face detection on the input image with a face-detection module to obtain the face region information of the input image, wherein the face region information of the k-th face (0 ≤ k < N) is expressed as F(x_k, y_k, w_k, h_k), indicating that the top-left corner of the face region is at (x_k, y_k), with region width w_k and height h_k; for each pixel (i,j) in the image, from face 0 through face N−1, determine whether the pixel falls within any of the above face regions, setting the pixel's face weight W_face(i,j) = 1 if it does and W_face(i,j) = 0 if it does not; and, from each pixel's skin-color weight W_skin(i,j) and face weight W_face(i,j), compute the face skin-color weight W(i,j) of each pixel by the following formula:
W(i,j) = W_skin(i,j) × W_face(i,j)
The enhancement processing unit comprises an edge-blending subunit and an enhanced-edge generation subunit.
The edge-blending subunit is configured to blend, according to the formula
E(i,j) = E_2a(i,j) × (1 − W(i,j)) + E_2b(i,j) × W(i,j),
the first edge value E_2a and the second edge value E_2b according to the face skin-color weights, yielding the final edge value E(i,j) of each pixel.
The enhanced-edge generation subunit is configured to sum, according to the formula
Y_out(i,j) = Y_in(i,j) + E(i,j),
the final edge value E(i,j) of each pixel with the input luminance value to perform edge enhancement.
Another embodiment of the present invention further provides an image edge enhancement processing system. The system comprises an edge detection module, an edge enhancement module, and a face-region modulation module arranged between the edge detection module and the edge enhancement module, the face-region modulation module being connected to a skin-color detection module.
The skin-color detection module is configured to perform skin-color detection on the input chrominance image to obtain a skin-color weight for each pixel, and to obtain a face skin-color weight for each pixel from the face region information of the input image, wherein the face skin-color weight of a skin-color point inside a face region equals that point's skin-color weight, and the face skin-color weights of all points outside the face regions are set to zero.
The face-region modulation module is configured to process the input luminance image with a first parameter group suited to non-face skin-color points to obtain a first edge value, and process the input luminance image with a second parameter group suited to face skin-color points to obtain a second edge value; to blend the first edge value and the second edge value according to the aforementioned face skin-color weights; and to transmit the blended edge value to the edge enhancement module for edge enhancement processing.
For the other technical features, see the description of the preceding embodiments; each module of the system may be configured to include a plurality of sub-modules to perform the information processing described in the preceding embodiments, which will not be repeated here.
In the above description, the disclosure of the present invention is not intended to limit itself to these aspects. Rather, within the intended scope of protection of this disclosure, the components may be selectively and operatively combined in any number. In addition, terms like "comprise", "include" and "have" should by default be interpreted as inclusive or open-ended rather than exclusive or closed, unless expressly defined to the contrary. All technical, scientific and other terms carry the meanings understood by those skilled in the art, unless defined to the contrary. Common terms found in dictionaries should not be interpreted too ideally or too impractically in the context of the related technical documents, unless this disclosure expressly defines them so. Any changes or modifications made by those of ordinary skill in the art of the present invention on the basis of the above disclosure fall within the scope of protection of the claims.

Claims (10)

  1. An image edge enhancement processing method, characterized by comprising the following steps:
    receiving an input image separated into a luminance signal and a chrominance signal, the luminance signal corresponding to an input luminance image and the chrominance signal corresponding to an input chrominance image;
    processing the input luminance image with a first parameter group suited to non-face skin-color points to obtain a first edge value, and processing the input luminance image with a second parameter group suited to face skin-color points to obtain a second edge value; and performing skin-color detection on the input chrominance image to obtain a skin-color weight for each pixel, and obtaining a face skin-color weight for each pixel from the face region information of the input image, wherein the face skin-color weight of a skin-color point inside a face region equals that point's skin-color weight, and the face skin-color weights of all points outside the face regions are set to zero;
    blending the first edge value and the second edge value according to the aforementioned face skin-color weights, and applying the blended edge value to the input luminance value to perform edge enhancement.
  2. The method according to claim 1, characterized in that: the first parameter group is a parameter group a comprising an edge-detection operator, a noise-suppression parameter and a strength-adjustment parameter, and parameter group a is used to perform edge detection, noise suppression and strength adjustment on the input luminance image in sequence, yielding an edge value E_2a;
    the second parameter group is a parameter group b comprising an edge-detection operator, a noise-suppression parameter and a strength-adjustment parameter, and parameter group b is used to perform edge detection, noise suppression and strength adjustment on the input luminance image in sequence, yielding an edge value E_2b.
  3. The method according to claim 2, characterized in that: for the pixel in row i, column j of the image, denoted (i,j), the edge value is computed by the following formula:
    E_0(i,j) = Σ_{m=-L}^{L} Σ_{n=-L}^{L} C_Q(m,n) × Y_in(i+m, j+n)
    where
    E_0(i,j) is the edge value at point (i,j); Y_in(i,j) is the luminance value of the input image at point (i,j), 0 ≤ Y_in(i,j) ≤ 255; C_Q is an edge-detection operator of size Q, Q = 2×L+1, where L is a preset parameter value; m, n, L are integers, −L ≤ m ≤ L, −L ≤ n ≤ L;
    when performing noise suppression, the following formula is used to remove or attenuate noise in the edge value:
    E_1(i,j) = 0, if |E_0(i,j)| ≤ T_0
    E_1(i,j) = E_0(i,j) − T_0, if E_0(i,j) > T_0
    E_1(i,j) = E_0(i,j) + T_0, if E_0(i,j) < −T_0
    where
    T_0 is the noise threshold, T_0 > 0;
    when performing strength adjustment, the following formula adjusts the strengths of positive and negative edges separately:
    E_2(i,j) = Gain_p × E_1(i,j), if E_1(i,j) > 0
    E_2(i,j) = Gain_n × E_1(i,j), if E_1(i,j) ≤ 0
    where
    the parameter Gain_p is the adjustment gain for positive edges and the parameter Gain_n is the gain for negative edges; a gain greater than 1 strengthens the edge, and a gain less than 1 weakens it.
  4. The method according to claim 3, characterized in that: when performing skin-color detection on the input chrominance image, skin color is detected in the (H,S) coordinate system, where H denotes the hue of a pixel and S its saturation;
    for the pixel (i,j) in row i, column j of the image,
    H(i,j) = arctan( Cr(i,j) / Cb(i,j) )
    S(i,j) = √( Cr(i,j)² + Cb(i,j)² )
    where
    Cr(i,j) is the red chroma difference at point (i,j), −128 ≤ Cr(i,j) ≤ 127; Cb(i,j) is the blue chroma difference at point (i,j), −128 ≤ Cb(i,j) ≤ 127.
  5. The method according to claim 4, characterized in that: in the (H,S) coordinate system, a rectangular region determined by preset parameters H_0, H_1, S_0 and S_1 is taken as the skin-color region; if the H and S values of pixel (i,j) fall inside the rectangular region in the (H,S) coordinate system, the pixel is considered a skin-color point, otherwise a non-skin-color point;
    for a skin-color point, its skin-color weight W_skin(i,j) is computed by the following formulas:
    W_h(i,j) = min( 1, (H(i,j) − H_0) / D_h, (H_1 − H(i,j)) / D_h )
    W_s(i,j) = min( 1, (S(i,j) − S_0) / D_s, (S_1 − S(i,j)) / D_s )
    W_skin(i,j) = W_h(i,j) × W_s(i,j)
    where
    0 ≤ W_skin(i,j) ≤ 1, and a larger value indicates a higher likelihood that the point is a skin-color point; H_0, H_1, S_0, S_1, D_s, D_h are preset parameter values satisfying H_0 + 2×D_h < H_1 and S_0 + 2×D_s < S_1; D_s adjusts the width of the weight transition interval in the S direction, and D_h adjusts the width of the weight transition interval in the H direction;
    for a non-skin-color point, the corresponding skin-color weight W_skin(i,j) = 0.
  6. The method according to claim 5, characterized in that the steps of obtaining the face skin-color weight of each pixel from the face region information of the input image are as follows:
    step 1: perform face detection on the input image with a face-detection module to obtain the face region information of the input image; for the N faces detected in the input image, the face region information F(x_k, y_k, w_k, h_k) of the k-th face (0 ≤ k < N) indicates that the top-left corner of the face region is at (x_k, y_k), with region width w_k and height h_k;
    step 2: for each pixel (i,j), from face 0 through face N−1, determine whether the pixel falls within any of the above face regions; if it does, the pixel's face weight W_face(i,j) = 1; if it does not, W_face(i,j) = 0;
    step 3: from each pixel's skin-color weight W_skin(i,j) and face weight W_face(i,j), compute the face skin-color weight W(i,j) of each pixel by the following formula:
    W(i,j) = W_skin(i,j) × W_face(i,j)
  7. The method according to claim 6, characterized in that: according to the formula
    E(i,j) = E_2a(i,j) × (1 − W(i,j)) + E_2b(i,j) × W(i,j),
    the first edge value E_2a and the second edge value E_2b are blended according to the aforementioned face skin-color weights to obtain the final edge value E(i,j) of each pixel;
    and the final edge value E(i,j) of each pixel is summed with the input luminance value to perform edge enhancement, computed as
    Y_out(i,j) = Y_in(i,j) + E(i,j),
    and the processed image is obtained and output as the output luminance image.
  8. The method according to claim 6, characterized in that step 2 comprises:
    step 21: initialize, let k = 0;
    step 22: obtain the face region information F(x_k, y_k, w_k, h_k) of the k-th face;
    step 23: for pixel (i,j), determine whether x_k ≤ j ≤ x_k + w_k and y_k ≤ i ≤ y_k + h_k; if true, the point falls within the k-th face region, jump to step 27; otherwise continue to step 24;
    step 24: let k = k + 1;
    step 25: determine whether k < N; if true, jump to step 22; otherwise continue to step 26;
    step 26: let W_face(i,j) = 0 and end the decision for the current pixel;
    step 27: let W_face(i,j) = 1 and end the decision for the current pixel.
  9. An image edge enhancement processing apparatus, characterized by comprising:
    a processor; and
    a memory for storing processor-executable instructions and parameters;
    the processor comprises an edge analysis unit, a skin-color analysis unit and an enhancement processing unit,
    the edge analysis unit being configured to receive an input image separated into a luminance signal and a chrominance signal, the luminance signal corresponding to an input luminance image and the chrominance signal corresponding to an input chrominance image; and to process the input luminance image with a first parameter group suited to non-face skin-color points to obtain a first edge value, and process the input luminance image with a second parameter group suited to face skin-color points to obtain a second edge value;
    the skin-color analysis unit being configured to perform skin-color detection on the input chrominance image to obtain a skin-color weight for each pixel, and to obtain a face skin-color weight for each pixel from the face region information of the input image, wherein the face skin-color weight of a skin-color point inside a face region equals that point's skin-color weight, and the face skin-color weights of all points outside the face regions are set to zero;
    the enhancement processing unit being configured to blend the first edge value and the second edge value according to the aforementioned face skin-color weights, and to apply the blended edge value to the input luminance value to perform edge enhancement.
  10. An image edge enhancement processing system comprising an edge detection module and an edge enhancement module, characterized in that: it further comprises a face-region modulation module arranged between the edge detection module and the edge enhancement module, the face-region modulation module being connected to a skin-color detection module; the skin-color detection module is configured to perform skin-color detection on the input chrominance image to obtain a skin-color weight for each pixel, and to obtain a face skin-color weight for each pixel from the face region information of the input image, wherein the face skin-color weight of a skin-color point inside a face region equals that point's skin-color weight, and the face skin-color weights of all points outside the face regions are set to zero;
    the face-region modulation module is configured to process the input luminance image with a first parameter group suited to non-face skin-color points to obtain a first edge value, and process the input luminance image with a second parameter group suited to face skin-color points to obtain a second edge value; to blend the first edge value and the second edge value according to the aforementioned face skin-color weights; and to transmit the blended edge value to the edge enhancement module for edge enhancement processing.
PCT/CN2021/116307 2020-09-08 2021-09-02 Image edge enhancement processing method and application WO2022052862A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/171,452 US20230206458A1 (en) 2020-09-08 2023-02-20 Image edge enhancement processing method and application thereof

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010931595.8 2020-09-08
CN202010931595.8A CN111798401B (zh) 2020-09-08 2020-09-08 Image edge enhancement processing method and application

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/171,452 Continuation US20230206458A1 (en) 2020-09-08 2023-02-20 Image edge enhancement processing method and application thereof

Publications (1)

Publication Number Publication Date
WO2022052862A1 true WO2022052862A1 (zh) 2022-03-17

Family

ID=72834286

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/116307 WO2022052862A1 (zh) 2020-09-08 2021-09-02 图像的边缘增强处理方法及应用

Country Status (3)

Country Link
US (1) US20230206458A1 (zh)
CN (1) CN111798401B (zh)
WO (1) WO2022052862A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024069322A1 (en) 2022-09-27 2024-04-04 Pixelgen Technologies Ab Method for fixing primary antibodies to a biological sample

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN111798401B (zh) 2020-09-08 2020-12-04 眸芯科技(上海)有限公司 Image edge enhancement processing method and application

Citations (4)

Publication number Priority date Publication date Assignee Title
CN101742338A (zh) * 2008-11-05 2010-06-16 美格纳半导体有限会社 Sharpness enhancement apparatus and method
CN101841642A (zh) * 2010-04-22 2010-09-22 南京航空航天大学 Edge detection method based on fractional-order signal processing
CN110070502A (zh) * 2019-03-25 2019-07-30 成都品果科技有限公司 Method, apparatus and storage medium for face-image skin smoothing
CN111798401A (zh) * 2020-09-08 2020-10-20 眸芯科技(上海)有限公司 Image edge enhancement processing method and application

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10055821B2 (en) * 2016-01-30 2018-08-21 John W. Glotzbach Device for and method of enhancing quality of an image


Also Published As

Publication number Publication date
US20230206458A1 (en) 2023-06-29
CN111798401B (zh) 2020-12-04
CN111798401A (zh) 2020-10-20

Similar Documents

Publication Publication Date Title
CN109639982B (zh) 一种图像降噪方法、装置、存储介质及终端
EP3479346B1 (en) Method and electronic device for producing composite image
Huang et al. Efficient contrast enhancement using adaptive gamma correction with weighting distribution
US8520089B2 (en) Eye beautification
US9007480B2 (en) Automatic face and skin beautification using face detection
WO2022052862A1 (zh) 图像的边缘增强处理方法及应用
CN109272459A (zh) 图像处理方法、装置、存储介质及电子设备
WO2022161009A1 (zh) 图像处理方法及装置、存储介质、终端
TWI511559B (zh) 影像處理方法
JP2003230160A (ja) カラー映像の彩度調節装置及び方法
CN109727215A (zh) 图像处理方法、装置、终端设备及存储介质
CN108876742A (zh) 图像色彩增强方法和装置
WO2023056950A1 (zh) 图像处理方法和电子设备
WO2018165023A1 (en) Method of decaying chrominance in images
JP2002281327A (ja) 画像処理のための装置、方法及びプログラム
JP4752912B2 (ja) 画像の質感を補正する画像処理装置、画像処理プログラム、画像処理方法、および電子カメラ
CN110012277B (zh) 一种针对人像图像的自动白平衡方法及装置
WO2012153661A1 (ja) 画像補正装置、画像補正表示装置、画像補正方法、プログラム、及び、記録媒体
JP5327766B2 (ja) デジタル画像における記憶色の修正
JP2009010636A (ja) 適応ヒストグラム等化方法及び適応ヒストグラム等化装置
WO2023010796A1 (zh) 图像处理方法及相关装置
CN112686800B (zh) 图像处理方法、装置、电子设备及存储介质
JP6590047B2 (ja) 画像処理装置、撮像装置、画像処理方法、及びプログラム
Arora et al. Enhancement of overexposed color images
CN111047533A (zh) 一种人脸图像的美化方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21865916

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21865916

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.09.2023)