CN113379778A - Image purple boundary detection method based on content self-adaptive threshold - Google Patents

Image purple boundary detection method based on content self-adaptive threshold Download PDF

Info

Publication number
CN113379778A
CN113379778A (application CN202110626336.9A)
Authority
CN
China
Prior art keywords
image
gradient
pixel
value
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110626336.9A
Other languages
Chinese (zh)
Inventor
陈荣 (Chen Rong)
陈慧 (Chen Hui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University
Priority to CN202110626336.9A
Publication of CN113379778A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G06T5/92
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation

Abstract

The invention discloses an image purple-fringing detection method based on a content-adaptive threshold, which comprises the following steps: subtracting the local-minimum-filtered image from the local-maximum-filtered image to obtain a gray-level image, and obtaining a mask binary image by an adaptive thresholding method; converting the RGB image into a YCbCr image, obtaining the first-quadrant chroma pixels, performing a Gaussian-weighted convolution on the YCbCr luminance component Yn image, and calculating the gradient magnitude at each point of the Yn image; setting an adaptive threshold according to the mean gradient value and its standard deviation, and selecting the high-contrast pixel values; calculating a norm ratio from the chroma values of the high-contrast pixels; selecting a binary region according to the constrained decision domain; and performing an AND operation on the obtained mask binary image and the binary region to obtain a refined purple-fringing detection region.

Description

Image purple boundary detection method based on content self-adaptive threshold
Technical Field
The invention relates to an image detection technology, in particular to an image purple boundary detection method based on a content self-adaptive threshold value.
Background
When a camera shoots into backlight, purple fringes appear at the edges of objects in the picture: during image acquisition, some cameras leave purple streaks near high-contrast edges. To detect purple-fringed regions, purple fringing aberration (PFA) detection is the most commonly used approach to this chromatic aberration. A PFA method first identifies high-contrast neighborhoods and then applies coloring constraints to detect the purple-streak pixels. Existing purple-fringing detection techniques include the following. A coloring constraint is simply a comparison of the R, G and B channels of a single pixel; since the position of purple is not clearly defined at the time of streak detection, the definition of purple is blurred, which can produce false-positive results. Such coloring constraints can be deployed in any color space and need not be limited to RGB. To further mitigate the effect of variations in the composition of the light source, a coloring constraint can be placed in CIE xy space; however, this brightness normalization does not work completely. Converting RGB into YCbCr space achieves complete independence from luminance, where the chrominance channels carry the color-cast information exclusively; this opens up the idea of specifying a region of chrominance space in which the purple concentration is largest. Methods that use angle constraints to correct edge aberrations do not use gradient constraints to detect edge pixels. Moreover, in existing PFA detection methods, the intermediate threshold that separates PFA regions is absolute; given the diversity of camera devices and their defects, such methods are unlikely to cover all forms of purple fringing.
In summary, the problems of current purple-fringing detection are as follows: 1. the detection result is accurate, but the detection range is narrow and there are many missed detections; 2. the detection result is coarse and the detection range is wide, but there are many false detections; 3. a problem with RGB space is the drift associated with intensity variations in high-contrast areas, which causes a shift in the fringe-rendering pattern.
Disclosure of Invention
According to the problems in the prior art, the invention discloses an image purple-fringing detection method based on a content-adaptive threshold, which comprises the following specific steps:
subtracting the local-minimum-filtered image from the local-maximum-filtered image to obtain a gray-level image, and obtaining a mask binary image by an adaptive thresholding method;
converting the RGB image into a YCbCr image, obtaining the first-quadrant chroma pixels, performing a Gaussian-weighted convolution on the YCbCr luminance component Yn image, and calculating the gradient magnitude at each point of the Yn image;
setting an adaptive threshold according to the mean gradient value and its standard deviation, and selecting the high-contrast pixel values;
calculating a norm ratio from the chroma values of the high-contrast pixels;
selecting a binary region according to the constrained decision domain;
and performing an AND operation on the obtained mask binary image and the binary region to obtain a refined purple-fringing detection region.
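The first step above (local-maximum filtering minus local-minimum filtering, then adaptive thresholding) can be sketched as follows. The window size and the mean-plus-half-standard-deviation rule are illustrative assumptions; the patent text does not fix either:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def local_contrast_mask(gray, win=5):
    """Local-max filtering minus local-min filtering, followed by an
    adaptive threshold, yielding the mask binary image of step 1.
    `win` and the mean + 0.5*std rule are assumptions for illustration:
    the patent text does not fix the window size or the exact rule."""
    g = gray.astype(np.float64)
    contrast = maximum_filter(g, size=win) - minimum_filter(g, size=win)
    thresh = contrast.mean() + 0.5 * contrast.std()  # assumed adaptive rule
    return contrast > thresh

# Usage on a synthetic vertical step edge: the mask fires near the edge only.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
mask = local_contrast_mask(img)
```

On the step image, only the columns whose window straddles the edge have nonzero local contrast, so the mask is confined to a band around the edge.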
Further, when converting the RGB image into the YCbCr image, the RGB values are converted into normalized YCbCr components Yn, Cbn and Crn using the following formulas:
Yn = 0.299·Rn + 0.587·Gn + 0.114·Bn
Cbn = -0.169·Rn - 0.331·Gn + 0.5·Bn
Crn = 0.5·Rn - 0.419·Gn - 0.081·Bn
where Rn, Gn, Bn ∈ [0,1]; Yn ∈ [0,1], Cbn ∈ [-0.5,0.5], Crn ∈ [-0.5,0.5]. High-contrast regions are detected using the gradient information, and Gaussian-weighted X and Y gradients are calculated at each pixel of Yn.
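A minimal sketch of the normalized conversion, assuming the standard BT.601 analog matrix (an assumption consistent with the stated ranges Yn ∈ [0,1] and Cbn, Crn ∈ [-0.5, 0.5]; the exact coefficients appear only as a formula image in the source publication):

```python
import numpy as np

def rgb_to_norm_ycbcr(rgb):
    """Normalized YCbCr, assuming the standard BT.601 analog form:
    Yn in [0,1], Cbn and Crn in [-0.5, 0.5], for Rn, Gn, Bn in [0,1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    yn = 0.299 * r + 0.587 * g + 0.114 * b
    cbn = -0.169 * r - 0.331 * g + 0.500 * b
    crn = 0.500 * r - 0.419 * g - 0.081 * b
    return yn, cbn, crn

# Usage: white maps to Yn = 1 with zero chroma.
white = np.array([[[1.0, 1.0, 1.0]]])
yn, cbn, crn = rgb_to_norm_ycbcr(white)
```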
Further, when selecting the high-contrast pixel values: assuming the gray-scale image is represented by a function Yn(x, y), the results of convolving with the horizontal and vertical Gaussian-weighted gradient kernels are, respectively:
Yh(x, y) = Yn(x, y) * Gx(x, y)
Yv(x, y) = Yn(x, y) * Gy(x, y)
where Yh(x, y) and Yv(x, y) are the horizontal and vertical Gaussian-weighted gradient-kernel convolutions, respectively. At a point (x_i, y_i), the magnitude of the estimated gradient is calculated as:
M_Y(x_i, y_i) = sqrt(Yh(x_i, y_i)² + Yv(x_i, y_i)²)
The mean gradient magnitude μ_grad and the standard deviation σ_grad are calculated over all pixel points (x_i, y_i); α ∈ [0,3] is a threshold parameter used for selecting the maximum gradient, and the global gradient threshold magnitude is set as
T_grad = μ_grad + α·σ_grad
with α ≥ 0.5, and the high-contrast pixel positions are selected. Because smooth discrete derivatives are computed, the Gaussian weighting ensures that striations and texture patterns do not appear in the gradient selection. Each pixel at a position (x_i, y_i) satisfying
M_Y(x_i, y_i) > T_grad
is considered a high-contrast pixel.
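The gradient-based selection of high-contrast pixels can be sketched in Python as follows. The Gaussian scale sigma and the choice α = 1 are illustrative (the text only requires α ≥ 0.5), and derivative-of-Gaussian filtering stands in for the Gaussian-weighted gradient kernels:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_contrast_pixels(yn, alpha=1.0, sigma=1.0):
    """Gaussian-weighted gradients of Yn via derivative-of-Gaussian
    filters, gradient magnitude M_Y, and the content-adaptive threshold
    T_grad = mu_grad + alpha * sigma_grad. `sigma` and alpha = 1 are
    illustrative choices; the text only requires alpha >= 0.5."""
    yh = gaussian_filter(yn, sigma, order=(0, 1))  # horizontal derivative
    yv = gaussian_filter(yn, sigma, order=(1, 0))  # vertical derivative
    mag = np.hypot(yh, yv)                         # M_Y at every pixel
    t_grad = mag.mean() + alpha * mag.std()        # adaptive threshold
    return mag > t_grad, mag

# Usage on a synthetic step edge: only pixels near the edge survive.
yn_img = np.zeros((16, 16))
yn_img[:, 8:] = 1.0
hc, mag = high_contrast_pixels(yn_img)
```

Because the threshold is mean + α·std of the image's own gradient magnitudes, the selection adapts to image content rather than using a fixed cutoff, which is the point of the patent's "content-adaptive" scheme.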
Further, the high-contrast pixels are denoted (x_h(i), y_h(i)), i = 1, 2, …, N_h, where N_h is the total number of detected high-contrast pixels. A w × w window, with w = 5, is constructed around each high-contrast pixel. For the position (x_h(i), y_h(i)), all pixels in the window are scanned; each pixel is denoted by the coordinates (x_h(i)+p, y_h(i)+q), with p, q ∈ {0, ±1, …, ±(w-1)/2}. The chroma values at all these w² positions are calculated as:
Cb(p, q) = Cbn(x_h(i)+p, y_h(i)+q)
Cr(p, q) = Crn(x_h(i)+p, y_h(i)+q)
These chrominance components form a sequence of w² vectors
X_{p,q} = [Cb(p, q), Cr(p, q)]^T
For each vector X_{p,q}, the norm ratio ρ_{p,q} is calculated.
[The defining formula for ρ_{p,q} and its auxiliary term appear only as equation images in the source publication.]
Decision domain of the constraint: ρ_{p,q} < T_PFA, i.e., the purple-fringed region is selected where the norm ratio falls below the threshold T_PFA.
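The exact norm-ratio formula survives only as an equation image, so the sketch below is a hypothetical reconstruction: it scores how far each chroma vector X_{p,q} = [Cb, Cr] deviates from an assumed purple reference direction, relative to its own norm, so that a small ρ (chroma aligned with purple) passes the ρ < T_PFA test. The reference direction `_PURPLE` is an assumption, not taken from the patent:

```python
import numpy as np

# Assumed purple reference direction in (Cb, Cr) space (first quadrant);
# hypothetical -- the patent does not state this vector.
_PURPLE = np.array([0.5, 0.35])
_PURPLE = _PURPLE / np.linalg.norm(_PURPLE)

def norm_ratio(cb, cr):
    """Hypothetical norm ratio: the component of the chroma vector
    X = [Cb, Cr] orthogonal to the purple reference, divided by ||X||.
    Chroma aligned with purple gives a ratio near 0, so the decision
    rule rho < T_PFA keeps purple-leaning pixels."""
    x = np.stack([np.asarray(cb, dtype=float),
                  np.asarray(cr, dtype=float)], axis=-1)
    n = np.linalg.norm(x, axis=-1)
    proj = x @ _PURPLE                                 # along purple axis
    resid = np.sqrt(np.maximum(n**2 - proj**2, 0.0))   # orthogonal part
    return np.where(n > 1e-12, resid / np.maximum(n, 1e-12), 1.0)

# Usage: chroma on the purple axis scores ~0, an orthogonal vector ~1.
rho_aligned = norm_ratio(0.5, 0.35)
rho_orthogonal = norm_ratio(-0.35, 0.5)
```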
Owing to the adoption of the above technical scheme, the image purple-fringing detection method based on a content-adaptive threshold differs from traditional detection-learning methods: the provided method breaks through the traditional empirical-threshold assumption, i.e. statistical data about the image edges need not be acquired in advance. The detection result is therefore more accurate, missed detections and false detections of purple fringing are avoided, and the purple-fringed region can be identified precisely.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of the method disclosed herein.
Detailed Description
In order to make the technical solutions and advantages of the present invention clearer, the following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the drawings in the embodiments of the present invention:
as shown in fig. 1, the image purple-fringing detection method based on a content-adaptive threshold specifically comprises the following steps:
subtracting the local-minimum-filtered image from the local-maximum-filtered image to obtain a gray-level image, and obtaining a mask binary image by an adaptive thresholding method;
converting the RGB image into a YCbCr image, obtaining the first-quadrant chroma pixels, performing a Gaussian-weighted convolution on the YCbCr luminance component Yn image, and calculating the gradient magnitude at each point of the Yn image;
setting an adaptive threshold according to the mean gradient value and its standard deviation, and selecting the high-contrast pixel values;
calculating a norm ratio from the chroma values of the high-contrast pixels;
selecting a binary region according to the constrained decision domain;
and performing an AND operation on the obtained mask binary image and the binary region to obtain a refined purple-fringing detection region.
Further, when converting the RGB image into the YCbCr image, the RGB values are converted into normalized YCbCr components Yn, Cbn and Crn using the following formulas:
Yn = 0.299·Rn + 0.587·Gn + 0.114·Bn
Cbn = -0.169·Rn - 0.331·Gn + 0.5·Bn
Crn = 0.5·Rn - 0.419·Gn - 0.081·Bn
where Rn, Gn, Bn ∈ [0,1]; Yn ∈ [0,1], Cbn ∈ [-0.5,0.5], Crn ∈ [-0.5,0.5]. High-contrast regions are detected using the gradient information, and Gaussian-weighted X and Y gradients are calculated at each pixel of Yn.
further, assuming that the green-SCALE image is represented by a function Yn (x, y), the result of convolution with horizontal and vertical gaussian-weighted gradient kernels, respectively, is as follows:
Yh(x,y)=Yn(x,y)*Gx(x,y)
Yv(x,y)=Yn(x,y)*Gv(x,y),
Yv(x,y),Yh(X, y) are horizontal and vertical Gaussian weighted gradient kernel convolutions, respectively, at one point (X)i,Yi) The magnitude of the upper estimated gradient is calculated from the following equation:
Figure BDA0003102206360000042
calculating the average gradient size (x) of all pixel pointsi,yi)(μgrad) And standard deviation (σ)grad),α∈[0,3]For setting a threshold parameter for selecting the maximum gradient, the global gradient threshold amplitude being set
Tgrad=μgrad+ασ
Alpha is greater than or equal to 0.5, high contrast pixel positions are selected, and by calculating smooth discrete derivatives, the Gaussian weighting process ensures that striations and texture patterns do not occur in the gradient selection, where (x)i,yi) Each pixel in a location satisfies the following equation:
MY(xi,yi)>Tgrad
are considered to be high contrast pixels.
Further, the high-contrast pixels are denoted (x_h(i), y_h(i)), i = 1, 2, …, N_h, where N_h is the total number of detected high-contrast pixels. A w × w window, with w = 5, is constructed around each high-contrast pixel. For the position (x_h(i), y_h(i)), all pixels in the window are scanned; each pixel is denoted by the coordinates (x_h(i)+p, y_h(i)+q), with p, q ∈ {0, ±1, …, ±(w-1)/2}. The chroma values at all these w² positions are calculated as:
Cb(p, q) = Cbn(x_h(i)+p, y_h(i)+q)
Cr(p, q) = Crn(x_h(i)+p, y_h(i)+q)
These chrominance components form a sequence of w² vectors
X_{p,q} = [Cb(p, q), Cr(p, q)]^T
For each vector X_{p,q}, the norm ratio ρ_{p,q} is calculated.
[The defining formula for ρ_{p,q} and its auxiliary term appear only as equation images in the source publication.]
Decision domain of the constraint: ρ_{p,q} < T_PFA, i.e., the purple-fringed region is selected where the norm ratio falls below the threshold T_PFA.
The above description is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any equivalent substitution or change made by a person skilled in the art within the technical scope disclosed by the present invention, according to the technical solutions of the present invention and its inventive concept, shall be covered within the protection scope of the present invention.

Claims (4)

1. An image purple-fringing detection method based on a content self-adaptive threshold, characterized by comprising the following steps:
subtracting the local-minimum-filtered image from the local-maximum-filtered image to obtain a gray-level image, and obtaining a mask binary image by an adaptive thresholding method;
converting the RGB image into a YCbCr image, obtaining the first-quadrant chroma pixels, performing a Gaussian-weighted convolution on the YCbCr luminance component Yn image, and calculating the gradient magnitude at each point of the Yn image;
setting an adaptive threshold according to the mean gradient value and its standard deviation, and selecting the high-contrast pixel values;
calculating a norm ratio from the chroma values of the high-contrast pixels;
selecting a binary region according to the constrained decision domain;
and performing an AND operation on the obtained mask binary image and the binary region to obtain a refined purple-fringing detection region.
2. The method of claim 1, wherein: when converting the RGB image into the YCbCr image, the RGB values are converted into normalized YCbCr components Yn, Cbn and Crn using the following formulas:
Yn = 0.299·Rn + 0.587·Gn + 0.114·Bn
Cbn = -0.169·Rn - 0.331·Gn + 0.5·Bn
Crn = 0.5·Rn - 0.419·Gn - 0.081·Bn
where Rn, Gn, Bn ∈ [0,1]; Yn ∈ [0,1], Cbn ∈ [-0.5,0.5], Crn ∈ [-0.5,0.5]; high-contrast regions are detected using the gradient information, and Gaussian-weighted X and Y gradients are calculated at each pixel of Yn.
3. The method of claim 1, wherein: when selecting the high-contrast pixel values: assuming the gray-scale image is represented by a function Yn(x, y), the results of convolving with the horizontal and vertical Gaussian-weighted gradient kernels are, respectively:
Yh(x, y) = Yn(x, y) * Gx(x, y)
Yv(x, y) = Yn(x, y) * Gy(x, y)
where Yh(x, y) and Yv(x, y) are the horizontal and vertical Gaussian-weighted gradient-kernel convolutions, respectively. At a point (x_i, y_i), the magnitude of the estimated gradient is calculated as:
M_Y(x_i, y_i) = sqrt(Yh(x_i, y_i)² + Yv(x_i, y_i)²)
The mean gradient magnitude μ_grad and the standard deviation σ_grad are calculated over all pixel points (x_i, y_i); α ∈ [0,3] is a threshold parameter used for selecting the maximum gradient, and the global gradient threshold magnitude is set as
T_grad = μ_grad + α·σ_grad
with α ≥ 0.5, and the high-contrast pixel positions are selected. Because smooth discrete derivatives are computed, the Gaussian weighting ensures that striations and texture patterns do not appear in the gradient selection. Each pixel at a position (x_i, y_i) satisfying
M_Y(x_i, y_i) > T_grad
is considered a high-contrast pixel.
4. The method of claim 1, wherein: the high-contrast pixels are denoted (x_h(i), y_h(i)), i = 1, 2, …, N_h, where N_h is the total number of detected high-contrast pixels. A w × w window, with w = 5, is constructed around each high-contrast pixel. For the position (x_h(i), y_h(i)), all pixels in the window are scanned; each pixel is denoted by the coordinates (x_h(i)+p, y_h(i)+q), with p, q ∈ {0, ±1, …, ±(w-1)/2}. The chroma values at all these w² positions are calculated as:
Cb(p, q) = Cbn(x_h(i)+p, y_h(i)+q)
Cr(p, q) = Crn(x_h(i)+p, y_h(i)+q)
These chrominance components form a sequence of w² vectors
X_{p,q} = [Cb(p, q), Cr(p, q)]^T
For each vector X_{p,q}, the norm ratio ρ_{p,q} is calculated.
[The defining formula for ρ_{p,q} and its auxiliary term appear only as equation images in the source publication.]
Decision domain of the constraint: ρ_{p,q} < T_PFA, i.e., the purple-fringed region is selected where the norm ratio falls below the threshold T_PFA.
CN202110626336.9A 2021-06-04 2021-06-04 Image purple boundary detection method based on content self-adaptive threshold Pending CN113379778A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110626336.9A CN113379778A (en) 2021-06-04 2021-06-04 Image purple boundary detection method based on content self-adaptive threshold

Publications (1)

Publication Number Publication Date
CN113379778A true CN113379778A (en) 2021-09-10

Family

ID=77575883

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110626336.9A Pending CN113379778A (en) 2021-06-04 2021-06-04 Image purple boundary detection method based on content self-adaptive threshold

Country Status (1)

Country Link
CN (1) CN113379778A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105335979A (en) * 2015-10-28 2016-02-17 Nubia Technology Co., Ltd. Image processing method and apparatus
CN112887693A (en) * 2021-01-12 2021-06-01 Zhejiang Dahua Technology Co., Ltd. Image purple border elimination method, equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PARVEEN MALIK, KANNAN KARTHIK: "Iterative content adaptable purple fringe detection", Springer, 18 July 2017 (2017-07-18), pages 181-188, XP036409520, DOI: 10.1007/s11760-017-1144-1 *

Similar Documents

Publication Publication Date Title
CN111563889B (en) Liquid crystal screen Mura defect detection method based on computer vision
US6195467B1 (en) Method and apparatus for sharpening a grayscale image
US7577311B2 (en) Color fringe desaturation for electronic imagers
US7227990B2 (en) Color image processing device and color image processing method
US7746505B2 (en) Image quality improving apparatus and method using detected edges
US7639878B2 (en) Shadow detection in images
CN108985305B (en) Laser etching industrial detonator coded image positioning and correcting method
CA2650180C (en) Image binarization using dynamic sub-image division
CA2477097A1 (en) Detection and correction of red-eye features in digital images
US20100195902A1 (en) System and method for calibration of image colors
WO2015070723A1 (en) Eye image processing method and apparatus
US9361669B2 (en) Image processing apparatus, image processing method, and program for performing a blurring process on an image
CN108389215B (en) Edge detection method and device, computer storage medium and terminal
CN106815587B (en) Image processing method and device
CN109949248B (en) Method, apparatus, device and medium for modifying color of vehicle in image
CN112862832B (en) Dirt detection method based on concentric circle segmentation positioning
CN112200019B (en) Rapid building night scene lighting lamp fault detection method
CN112785534A (en) Ghost-removing multi-exposure image fusion method in dynamic scene
Lynch et al. Colour constancy from both sides of the shadow edge
CN112583999A (en) Lens contamination detection method for camera module
JP4242796B2 (en) Image recognition method and image recognition apparatus
CN107256539B (en) Image sharpening method based on local contrast
CN116012767B (en) Visual detection method for cracks of clutch housing of electrically-controlled silicone oil fan
EP4332879A1 (en) Method and apparatus for processing graphic symbol, and computer-readable storage medium
CN113379778A (en) Image purple boundary detection method based on content self-adaptive threshold

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination