CN113793274A - Highlight image restoration method based on tone - Google Patents
- Publication number: CN113793274A (application number CN202110986703.6A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G06T 5/00 — Image enhancement or restoration
- G06F 18/23 — Pattern recognition; clustering techniques
- G06T 2207/10004 — Still image; photographic image
- G06T 2207/10024 — Color image
- G06T 2207/20081 — Training; learning
Abstract
The invention relates to a highlight image restoration method based on hue. First, based on the observation that hue information in a color image is not easily disturbed by specular reflection, image pixels are clustered using their hue. Then, the distance between each pixel's chromaticity and the illumination chromaticity is calculated to obtain a fusion coefficient of diffuse reflection and specular reflection. To prevent the pixel clustering from being disturbed by noise, bilateral filtering is applied to the fusion coefficient. Finally, a diffuse reflection image with the specular reflection removed is obtained from the fusion coefficient. Experimental results show that the algorithm is effective: it removes specular reflection while preserving the detail and edge information of the image. In qualitative and quantitative comparisons with several existing methods, the proposed method achieves the best peak signal-to-noise ratio and structural similarity, and also produces the best visual results on natural highlight images.
Description
Technical Field
The invention relates to an image restoration method, in particular to a hue-based highlight image restoration method.
Background
Specular reflection often degrades image quality, causing image information to be lost and affecting downstream vision algorithms such as image segmentation, color constancy, object detection and target tracking. Removal of specular reflection regions is therefore necessary. Most current specular reflection removal methods are based on the dichromatic reflection model: a diffuse reflection pixel is first found in an image region, then spread across that region, and the specular reflection component is calculated from it. Because global image information is lacking, the diffuse reflection chromaticity found this way cannot be guaranteed to be accurate, so the specular component is difficult to remove cleanly.
The two-color reflection model is proposed to solve the problem of complicated reflecting surface modeling, and is now widely applied in the field of specular reflection removal. Klinker et al extended the two-color reflectance model by proposing that the color of the object and the color of the illumination conform to a T-shaped distribution, but the acquisition of the T-shaped distribution is easily disturbed by noise. In order to reduce the influence of noise on highlight image restoration, Tan et al propose a specular reflection removal method based on the distribution of diffuse reflection and specular reflection in the maximum chromaticity space. Tan and Lin et al perform specular reflection removal by an image restoration technique that synthetically fills the missing regions with neighboring patterns. Since the highlight region is related to the photographing direction, it is also possible to restore diffuse reflection using a sequence of images from different perspectives. Mallick et al use partial differential equations to recover the diffuse component from the video, but this does not work well for large areas of specular reflection.
Yoon et al. propose a reflection invariant and use it to remove highlights. Shen et al. first select a pixel as the diffuse reflection component and then calculate the specular reflection component with a least-squares method; their idea is to remove reflections iteratively, but the method is time-consuming. Yang and Liu et al. observe that the saturation of specular reflection pixels is lower than that of diffuse pixels, and remove the reflection component by adjusting the saturation of specular pixels; they propose a fast bilateral filter that uses a non-specular image as the distance weighting function. Due to the lack of global information, such methods based on local diffuse reflection diffusion cannot completely remove reflections, and the recovered image is often not smooth. Shen et al. first cluster the color image pixels and then detect and remove diffuse components by calculating the ratio of a pixel's maximum value to its intensity range; their method sometimes destroys image detail because pixels with different diffuse reflection may share the same intensity ratio. Other efforts, such as methods that remove reflections by matrix decomposition, do not handle texture images well. Kim et al. first locate candidate specular regions using the dark channel (the minimum of the three channels), then apply prior assumptions (specular regions are sparse, diffuse regions are smooth) to build an energy function whose solution separates the specular and diffuse components. Their method works well on natural images, but on noisy or textured images it often causes ringing. Akashi and Okatani formulate reflection separation as a sparse non-negative matrix factorization (NMF) problem.
However, current NMF algorithms are sensitive to initial values and can only guarantee a local minimum rather than the global minimum, so the method must be run multiple times to reach a reasonable result. Furthermore, since NMF is typically highly sensitive to outliers, the method may fail in the presence of strong specular reflection or noise. Ren et al. introduced a method that obtains the light-source chromaticity through a color linear constraint based on the dichromatic reflection model to quickly remove highlights; however, during pixel clustering, highlights and noise cause clustering errors, so outlier points appear in the restored image and highlight regions cannot be removed completely. Guo et al. propose a sparse low-rank reflection model in which the diffuse and specular highlight images are estimated simultaneously by optimization; however, the restored diffuse reflection image may contain overly dark pixels in highlight regions.
Disclosure of Invention
In view of the above shortcomings, the object of the present invention is to provide a highlight removal method that effectively removes specular reflection while preserving the detail and edge information of the image, and that is not easily disturbed by noise.
The technical scheme adopted by the invention to solve this problem is a hue-based highlight image restoration method, comprising the following steps:
step 1) estimating the illumination chromaticity of an image;
step 2) obtaining tone information of the image, and using the information to perform clustering operation on image pixels;
and 3) separating diffuse reflection and specular reflection pixel by pixel in all classes according to the distance from each pixel to the illumination chromaticity to obtain a diffuse reflection image with the specular reflection eliminated.
The estimating the illumination chromaticity of the image comprises:
step 1-1) removing specular reflection components by using global diffuse reflection information according to a bicolor reflection model to obtain a chromaticity image;
and 1-2) since pixels with the same diffuse reflection chromaticity gather on the same straight line, an image contains several straight lines with different diffuse reflection chromaticities; the intersection point of these lines is calculated and used as the light-source chromaticity of the image.
The acquiring the chrominance image includes:
a. according to the dichromatic reflection model, the color of a pixel on an object is a linear combination of diffuse reflection and specular reflection:

I(x) = D(x) + S(x) = m_d(x)Λ(x) + m_s(x)Γ(x)    (1)

where m_d(x) and m_s(x) are the diffuse and specular reflection coefficients respectively, which depend on the position of the pixel in the scene and the light-source intensity; Λ(x) is the diffuse reflection chromaticity, determined by the material properties of the object itself; Γ(x) is the specular reflection chromaticity, which is determined by the light-source chromaticity and is generally identified with it;

b. dividing each image pixel value by the sum of its three channel values gives the chromaticity image

Ī(x) = I(x) / Σ_{c∈{r,g,b}} I_c(x)    (2)

Substituting formula (1) into formula (2) yields:

Ī(x) = [m_d(x)Λ(x) + m_s(x)Γ(x)] / Σ_{c∈{r,g,b}} I_c(x)    (3)

c. the reflection chromaticities are normalized to 1, i.e. Σ_{c∈{r,g,b}} Λ_c(x) = 1 and Σ_{c∈{r,g,b}} Γ_c(x) = 1, so Σ_{c∈{r,g,b}} I_c(x) = m_d(x) + m_s(x); equation (3) is then further written as:

Ī(x) = [m_d(x)/(m_d(x)+m_s(x))] Λ(x) + [m_s(x)/(m_d(x)+m_s(x))] Γ(x)    (4)
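As an illustration of the chromaticity normalization in formula (2), the following NumPy sketch (not part of the patent; the sample pixel values are arbitrary) computes the chromaticity image and checks that every chromaticity pixel sums to 1:

```python
import numpy as np

# Hypothetical 2x2 RGB image with float channels; values are illustrative only.
I = np.array([[[0.8, 0.4, 0.2], [0.6, 0.6, 0.6]],
              [[0.1, 0.5, 0.9], [0.3, 0.3, 0.0]]])

# Formula (2): divide each pixel by the sum of its three channel values.
# Under formula (1) this sum equals m_d(x) + m_s(x).
channel_sum = I.sum(axis=-1, keepdims=True)
I_bar = I / np.maximum(channel_sum, 1e-12)   # guard against all-zero pixels

# By construction the chromaticity channels of every pixel sum to 1.
print(np.allclose(I_bar.sum(axis=-1), 1.0))  # True
```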
The performing of the clustering operation comprises:
step 2-1) normalizing the hue H(x) to [0,1];
step 2-2) defining the hue difference ΔH_new between two points and using it to cluster the image pixels into different clusters;
and 2-3) assigning a label to each pixel, calculating the mean value of every cluster, and, taking these mean values as initial values, clustering the image pixels again with a k-nearest-neighbor classification algorithm.
The clustering of the image pixels by the hue difference ΔH_new between two points comprises: since hue is circular, the hue difference is taken as ΔH_new = min(ΔH, 1 − ΔH), where ΔH is the absolute difference between the two normalized hues. If the hue difference ΔH_new between two pixels is smaller than a threshold T, they belong to the same cluster; otherwise they are divided into different clusters.
The separating of the diffuse reflection and the specular reflection comprises:
step 3-1) calculating the distance between the pixel chromaticity and the illumination chromaticity to obtain the fusion coefficient of diffuse reflection and specular reflection;
step 3-2) applying bilateral filtering to the fusion coefficient μ(x) instead of filtering the recovered image;
and 3-3) converting the diffuse reflection component D̄(x) of the chromaticity space back to the RGB color space to obtain the diffuse reflection image without highlight.
The fusion coefficient of the diffuse reflection and the specular reflection is obtained as follows:
In the normalized RGB space, all pixels of the image lie within a sphere centered on the illumination chromaticity Γ(x). The distance r(x) from the pixel chromaticity to the illumination chromaticity is defined as

r(x) = ||Ī(x) − Γ(x)||_2    (8)

where V̄(x) = (Ī(x) − Γ(x)) / ||Ī(x) − Γ(x)||_2 is the direction vector from the illumination chromaticity Γ(x) to the chromaticity image Ī(x). For the same image the illumination chromaticity Γ(x) is fixed, so for a given diffuse reflection chromaticity Λ(x) the distance r(x) is determined only by μ(x); the closer the pixel chromaticity is to the light-source chromaticity, the more likely the pixel belongs to a highlight region, so the size of the distance r(x) determines the pixel's specular reflection contribution:

r(x) = μ(x)||Λ(x) − Γ(x)||_2,  0 ≤ μ(x) ≤ 1    (9)

Because 0 ≤ μ(x) ≤ 1, when μ(x) = 1 the pixel contains only the diffuse reflection chromaticity and corresponds to the farthest distance r_max(x):

r_max(x) = max_{x∈CL} r(x) = ||Λ(x) − Γ(x)||_2    (10)

where the maximum distance is estimated within each cluster class CL. According to (8)–(10), the fusion coefficient μ(x) of each point can be estimated pixel by pixel:

μ(x) = r(x) / r_max(x)    (11)
the bilateral filtering of the fusion coefficient μ (x) comprises:
bilateral filtering is carried out on the fusion coefficient mu (x) according to the formula (12), so that the detail part of the image is prevented from being excessively damaged, and the quality of the image after highlight removal is effectively improved;
wherein S (i, j) refers to a range of (2N +1) sizes centered on (i, j), μ (k, l) represents an input point, w (i, j, k, l) ═ Ws × Wr, and Ws, Wr are a spatial domain kernel and a value domain kernel, respectively; ws is determined by Euclidean distance between the center pixel of the filter and other pixel positions in the filter block, and has a value ofWr is determined by the difference between the value of the center pixel of the filter and the value of the other pixels in the filter block, which isThrough experiments, when the filtering radius N is set to be 2, the domain variance sigma is definedSSet to 5, the value domain variance σrThe filtering effect is best when the value is set to 0.9.
The converting of the diffuse reflection component D̄(x) of the chromaticity space back to the RGB color space comprises:
according to formula (2), D̄(x) is converted back to the RGB color space; the diffuse reflection image after highlight removal is:

D(x) = (Σ_{c∈{r,g,b}} I_c(x)) D̄(x)    (14)

and the specular reflection image is:

S(x) = I(x) − D(x).    (15)
the invention has the following beneficial effects and advantages:
1. Based on the observation that hue information in a color image is not easily disturbed by specular reflection, the method clusters pixels by their hue, which greatly improves the accuracy of pixel clustering.
2. Bilateral filtering of the fusion coefficient suppresses the influence of noise, so more detail information is preserved.
3. The method is superior to the existing algorithm in the aspect of highlight image recovery, and can effectively remove specular reflection and simultaneously retain the details and edge information of the image.
Drawings
FIG. 1 is an overall flow diagram of the method;
FIG. 2 specular reflection invariants: (a) input image; (b) hue H(x); (c) hue conversion angle α(x); (d) azimuth angle θ(x); (e) elevation angle φ(x); (f) distance r(x) from pixel chromaticity to light-source chromaticity;
FIG. 3 clustering results on the Fish image: (a) input image; (b) clustering result of YANG; (c) clustering result of REN; (d) clustering result of the proposed method;
FIG. 4 specular reflection removal results: (a) input images; (b) ground truth; (c) results of YANG; (d) results of REN; (e) results of GUO; (f) results of the proposed method;
FIG. 5 shows the result of removing specular reflection from the natural highlight images Toys, Watermelon and Fish;
Detailed Description
The present invention will be described in further detail with reference to examples. The method steps are explained with reference to the drawings.
The highlight removal algorithm of the method comprises three main steps: 1) estimating the illumination chromaticity; 2) obtaining the hue information of the image and using it to cluster the image pixels; 3) separating diffuse reflection and specular reflection pixel by pixel according to the distance of each pixel from the illumination chromaticity. The whole flow is shown in FIG. 1.
1. Reflection model
The dichromatic reflection model has been widely applied to the understanding of scene reflection. According to this model, the color of a pixel on an object is a linear combination of diffuse reflection and specular reflection:

I(x) = D(x) + S(x) = m_d(x)Λ(x) + m_s(x)Γ(x)    (1)

where m_d(x) and m_s(x) are the diffuse and specular reflection coefficients respectively, which depend on the position of the pixel in the scene and the light-source intensity; Λ(x) is the diffuse reflection chromaticity, determined by the material properties of the object itself; and Γ(x) is the specular reflection chromaticity, commonly called the light-source chromaticity because it is determined by it.

Most existing highlight removal methods are based on the dichromatic reflection model: they divide the image into regions by clustering, pick one pixel in each region as the diffuse reflection component, spread that component over the whole region, and calculate the corresponding specular component. Because global diffuse reflection information is lacking, the diffuse component obtained in a local region is often inaccurate, so the specular component cannot be removed completely. Moreover, for images with complex textures it is difficult to cluster all pixels well enough to obtain accurate regions.

Removing the specular reflection component with global diffuse reflection information is a more effective approach. Global information not only allows the optimal diffuse reflection chromaticity to be found, but also handles texture images and large reflective areas well. The chromaticity image is obtained by dividing the image pixel values by the sum of the three channel values:

Ī(x) = I(x) / Σ_{c∈{r,g,b}} I_c(x)    (2)

Substituting formula (1) into formula (2) gives

Ī(x) = [m_d(x)Λ(x) + m_s(x)Γ(x)] / Σ_{c∈{r,g,b}} I_c(x)    (3)

The reflection chromaticities are normalized to 1, i.e. Σ_{c∈{r,g,b}} Λ_c(x) = 1 and Σ_{c∈{r,g,b}} Γ_c(x) = 1, so Σ_{c∈{r,g,b}} I_c(x) = m_d(x) + m_s(x). Equation (3) can then be further written as

Ī(x) = [m_d(x)/(m_d(x)+m_s(x))] Λ(x) + [m_s(x)/(m_d(x)+m_s(x))] Γ(x)    (4)

According to formula (4), pixels with the same diffuse reflection chromaticity gather on one straight line. For the whole image, different diffuse reflection chromaticities correspond to different straight lines, and the intersection point of all the lines is the light-source chromaticity, which can therefore be obtained by computing the intersection of the chromaticity lines.
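The straight-line property derived from formula (4) can be checked numerically. In this hedged NumPy sketch (the chromaticities `Lam`, `Gam` and the reflection coefficients are made-up values, not from the patent), pixels obeying formula (1) with one shared diffuse chromaticity are synthesized, and their chromaticities from formula (2) are verified to be collinear with the illuminant chromaticity:

```python
import numpy as np

Lam = np.array([0.6, 0.3, 0.1])   # assumed diffuse chromaticity, sums to 1
Gam = np.array([1/3, 1/3, 1/3])   # assumed white illuminant chromaticity

rng = np.random.default_rng(0)
m_d = rng.uniform(0.2, 1.0, size=8)   # diffuse coefficients per pixel
m_s = rng.uniform(0.0, 0.8, size=8)   # specular coefficients per pixel
I = m_d[:, None] * Lam + m_s[:, None] * Gam   # formula (1)

I_bar = I / I.sum(axis=1, keepdims=True)      # formula (2)

# Formula (4) says I_bar = Gamma + mu * (Lambda - Gamma), so every vector
# (I_bar - Gamma) must be parallel to (Lambda - Gamma).
d = I_bar - Gam
cross_norms = np.linalg.norm(np.cross(d, Lam - Gam), axis=1)
print(np.allclose(cross_norms, 0.0))          # True: all pixels on one line
```

With several such lines (one per diffuse chromaticity), the light-source chromaticity is their common intersection point, which is what step 1 estimates.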
2. Pixel clustering
In previous studies, some scholars used the azimuth angle θ(x) and elevation angle φ(x) obtained by converting the image from a rectangular coordinate system to a spherical one. Compared with these, the hue information H(x) of the HSI color space is less susceptible to highlights. For any point in the image, the hue is computed from the R, G, B channels as

θ = arccos{ [(R − G) + (R − B)] / (2 √[(R − G)² + (R − B)(G − B)]) }
H(x) = θ if B ≤ G, otherwise 360° − θ    (5)

and α(x) denotes the hue conversion angle derived from H(x).

As equation (5) shows, the hue calculation is affected by the relative magnitudes of B and G: when the B and G components are close, a very small color difference may produce a large difference in hue value, so the visualization of the originally red area looks very poor, as shown in FIG. 2(b). However, hue is angle information with no distinction between 0° and 360°, so FIG. 2(b) alone is not a valid basis for concluding that hue is easily affected by highlights. To avoid this problem, the hue conversion angle is visualized in FIG. 2(c), where it can be seen that hue is in fact not easily affected by highlight regions; by contrast, the azimuth angle in FIG. 2(d) and the distance in FIG. 2(f) are not sensitive enough to color changes, for example at the fish mouth and fish tail.

Hue information is used to cluster the pixels. When clustering, the hue H(x) is normalized to [0,1] and the hue difference between two points is defined as

ΔH = |H(x₁) − H(x₂)|    (6)
ΔH_new = min(ΔH, 1 − ΔH)    (7)

where ΔH is the difference between the two normalized hues.

The method uses ΔH_new to cluster the image pixels. If the hue difference between two pixels is smaller than the threshold T, they belong to the same cluster; otherwise they are divided into different clusters. After assigning a label to each pixel, the mean of every cluster is calculated. With these means as initial values, the image pixels are re-clustered using a KNN (k-nearest-neighbor classification algorithm) search rule. If T is too small, the number of pixel clusters increases, which may result in incomplete removal of specular reflection; if T is too large, the number of clusters decreases and the specular reflection of the image is over-separated. Through several experiments the threshold is set to 0.05.
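The hue-difference clustering described above can be sketched as follows. This is a hedged, simplified illustration (the greedy single pass and the sample hue values are illustrative, and the subsequent KNN re-clustering with cluster means is omitted); it uses the circular hue difference so that hues near 0 and near 1 are treated as the same tone:

```python
import numpy as np

def hue_diff(h1, h2):
    """Wrapped difference of normalized hues: 0 and 1 are the same angle."""
    dh = np.abs(h1 - h2)
    return np.minimum(dh, 1.0 - dh)

def cluster_by_hue(H, T=0.05):
    """Greedily assign each hue to the first cluster center within T."""
    labels = -np.ones(len(H), dtype=int)
    centers = []
    for i, h in enumerate(H):
        for c, hc in enumerate(centers):
            if hue_diff(h, hc) < T:
                labels[i] = c
                break
        else:                      # no existing cluster is close enough
            centers.append(h)
            labels[i] = len(centers) - 1
    return labels, centers

H = np.array([0.01, 0.99, 0.50, 0.52, 0.48, 0.02])   # 0.99 wraps to ~0.01
labels, centers = cluster_by_hue(H)
print(labels.tolist())   # [0, 0, 1, 1, 1, 0]: two tone clusters
```

Note how the pixel with hue 0.99 joins the cluster near hue 0, which a plain absolute difference would miss.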
FIG. 3 shows clustering results on the Fish image. YANG et al. use a region-growing algorithm to locally diffuse regions with similar diffuse reflection chromaticity; it captures image detail well, but the smooth regions of the image are easily destroyed (see (b)). REN et al. cluster using the angular coordinates of the image; under the influence of highlights it is difficult to distinguish regions with close colors, and noise is easily produced in darker regions (see (c)). The proposed method uses the hue information of the global image and can guarantee clustering smoothness in uniform regions while preserving the image texture (see (d)).
3. Specular reflection separation
In the normalized RGB space, all pixels of the image lie within a sphere centered on the illumination chromaticity Γ(x). The distance r(x) from a pixel chromaticity to the illumination chromaticity is defined as

r(x) = ||Ī(x) − Γ(x)||₂    (8)

where V̄(x) = (Ī(x) − Γ(x)) / ||Ī(x) − Γ(x)||₂ is the direction vector from the illumination chromaticity Γ(x) to the chromaticity image Ī(x). For the same image, the illumination chromaticity Γ(x) is fixed; for a given chromaticity Λ(x), the distance r(x) is determined only by μ(x). The smaller r(x) is, the closer the pixel chromaticity is to the light-source chromaticity and the more likely the pixel belongs to a highlight region, so the size of r(x) determines the pixel's specular reflection contribution. Substituting formula (4) into (8) gives

r(x) = μ(x)||Λ(x) − Γ(x)||₂,  0 ≤ μ(x) ≤ 1    (9)

where μ(x) = m_d(x)/(m_d(x) + m_s(x)). Because 0 ≤ μ(x) ≤ 1, when μ(x) = 1 the pixel contains only the diffuse reflection chromaticity and corresponds to the farthest distance r_max(x):

r_max(x) = max_{x∈CL} r(x) = ||Λ(x) − Γ(x)||₂    (10)

where the maximum distance is estimated for each cluster class CL. According to (8)–(10), the fusion coefficient μ(x) of each point can be estimated pixel by pixel:

μ(x) = r(x) / r_max(x)    (11)
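A hedged NumPy sketch of the fusion-coefficient estimation (the cluster labels and chromaticities below are synthetic; in the method Γ(x) comes from step 1 and the labels from step 2). It computes r(x), the per-cluster maximum distance r_max, and μ(x) = r/r_max, and recovers known mixing values on synthetic pixels whose cluster contains one purely diffuse pixel:

```python
import numpy as np

def fusion_coefficient(I_bar, Gamma, labels):
    """Estimate mu(x) = r(x) / r_max per cluster, as in eqs. (8)-(11)."""
    r = np.linalg.norm(I_bar - Gamma, axis=1)   # distance to illuminant, eq. (8)
    mu = np.zeros_like(r)
    for c in np.unique(labels):
        mask = labels == c
        r_max = r[mask].max()                   # eq. (10), within cluster CL
        if r_max > 0:
            mu[mask] = r[mask] / r_max          # eq. (11)
    return np.clip(mu, 0.0, 1.0)

Gam = np.array([1/3, 1/3, 1/3])                 # assumed illuminant chromaticity
Lam = np.array([0.6, 0.3, 0.1])                 # assumed diffuse chromaticity
mu_true = np.array([1.0, 0.5, 0.25])            # includes one diffuse pixel
I_bar = Gam + mu_true[:, None] * (Lam - Gam)    # chromaticities per eq. (4)
labels = np.zeros(3, dtype=int)                 # a single cluster

mu_est = fusion_coefficient(I_bar, Gam, labels)
print(np.allclose(mu_est, mu_true))             # True
```

Recovery is exact here only because the cluster contains a purely diffuse pixel (μ = 1), which is the assumption behind taking the per-cluster maximum in eq. (10).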
Due to the influence of noise and highlights, the clustering of some points is not accurate enough. To improve the visual effect after highlight removal, bilateral filtering is applied to the fusion coefficient μ(x). Compared with filtering the recovered image, bilateral filtering of the fusion coefficient does not excessively damage the detail of the image and effectively improves the quality of the highlight-removed image:

μ̂(i,j) = Σ_{(k,l)∈S(i,j)} w(i,j,k,l) μ(k,l) / Σ_{(k,l)∈S(i,j)} w(i,j,k,l)    (12)

where S(i,j) is the (2N+1)×(2N+1) window centered on (i,j), μ(k,l) is an input point, w(i,j,k,l) = Ws × Wr, and Ws and Wr are the spatial-domain kernel and the range-domain kernel respectively. Ws is determined by the Euclidean distance between the filter's center pixel and the other pixel positions in the window:

Ws = exp(−[(i − k)² + (j − l)²] / (2σ_s²))

and Wr is determined by the difference between the value of the center pixel and the values of the other pixels in the window:

Wr = exp(−[μ(i,j) − μ(k,l)]² / (2σ_r²))

Experiments show that the filtering effect is best when the filter radius N is set to 2, the spatial-domain variance σ_s to 5, and the range-domain variance σ_r to 0.9.
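The bilateral filtering of μ(x) can be sketched directly from formula (12). This hedged, brute-force implementation (not optimized; the 6x6 test map is synthetic) uses the parameters reported in the text, N = 2, σ_s = 5, σ_r = 0.9, and shows an isolated clustering error being pulled back toward its neighborhood:

```python
import numpy as np

def bilateral_filter(mu, N=2, sigma_s=5.0, sigma_r=0.9):
    """Brute-force bilateral filter over a (2N+1)x(2N+1) window, eq. (12)."""
    Hh, Ww = mu.shape
    pad = np.pad(mu, N, mode='edge')
    out = np.empty_like(mu)
    ii, jj = np.mgrid[-N:N + 1, -N:N + 1]
    Ws = np.exp(-(ii**2 + jj**2) / (2 * sigma_s**2))     # spatial kernel
    for i in range(Hh):
        for j in range(Ww):
            win = pad[i:i + 2 * N + 1, j:j + 2 * N + 1]
            Wr = np.exp(-(win - mu[i, j])**2 / (2 * sigma_r**2))  # range kernel
            w = Ws * Wr
            out[i, j] = (w * win).sum() / w.sum()
    return out

mu = np.full((6, 6), 0.8)
mu[2, 3] = 0.1                         # isolated outlier from a clustering error
smoothed = bilateral_filter(mu)
print(abs(smoothed[2, 3] - 0.8) < abs(0.1 - 0.8))   # True: outlier suppressed
```

With σ_r = 0.9 (large relative to the [0,1] range of μ) the range kernel tolerates the outlier, so the filter here behaves close to a Gaussian blur on the coefficient map; smaller σ_r would preserve sharp coefficient edges more strongly.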
The diffuse chromaticity component is obtained as

D̄(x) = Ī(x) − (1 − μ(x))Γ(x)    (13)

According to formula (2), D̄(x) is converted back to the RGB color space; the diffuse reflection image after highlight removal is

D(x) = (Σ_{c∈{r,g,b}} I_c(x)) D̄(x)    (14)

and the specular reflection image is

S(x) = I(x) − D(x).    (15)
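An end-to-end check of the separation step under the dichromatic model. This is a hedged synthetic sketch (the coefficients and chromaticities are made up, and μ is taken as known rather than estimated and filtered): assuming the diffuse chromaticity component is D̄(x) = Ī(x) − (1 − μ(x))Γ(x), scaling it by the channel sum per formula (2) recovers D(x), and S(x) follows as the residual:

```python
import numpy as np

Gam = np.array([1/3, 1/3, 1/3])        # assumed illuminant chromaticity
Lam = np.array([0.5, 0.3, 0.2])        # assumed diffuse chromaticity
m_d = np.array([0.9, 0.6])             # per-pixel diffuse coefficients
m_s = np.array([0.0, 0.4])             # per-pixel specular coefficients
I = m_d[:, None] * Lam + m_s[:, None] * Gam    # formula (1)

s = I.sum(axis=1, keepdims=True)       # channel sum = m_d + m_s
I_bar = I / s                          # formula (2)
mu = m_d / (m_d + m_s)                 # known here; in the method it is
                                       # estimated per cluster and filtered

D_bar = I_bar - (1 - mu)[:, None] * Gam   # diffuse chromaticity component
D = s * D_bar                             # back to RGB, formula (14)
S = I - D                                 # formula (15)

print(np.allclose(D, m_d[:, None] * Lam))   # True: diffuse part recovered
print(np.allclose(S, m_s[:, None] * Gam))   # True: specular part recovered
```

On this noise-free data the recovered diffuse image equals m_d(x)Λ(x) and the residual equals m_s(x)Γ(x), exactly the two terms of formula (1).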
the overall algorithm flow is given in algorithm 1.
4. Results and analysis of the experiments
FIG. 4 shows the specular reflection removal results for four highlight images: Animals, Cups, Fruit, and Masks. In addition, Tables 1 and 2 compare the peak signal-to-noise ratio and structural similarity of the different methods on the same images. In the four images, the method of Yang et al. produces unsmooth results in the green-boxed areas of Animals, Cups and Fruit, destroys the detail information in the blue-boxed area of Fruit, and leaves the highlights in the red-boxed areas of Cups and Masks incompletely removed. This is because their method eliminates highlights mainly by local diffuse reflection diffusion and lacks global information: it can effectively process images with small highlights, such as Animals, but highlights with too large an area are difficult to remove completely, and unsmoothness easily appears at the edges of the diffused regions. The method of Ren et al. clusters pixels inaccurately, so highlights are hard to remove thoroughly in the red-boxed areas, the detail information of highlight regions is destroyed, and bright spots and unsmoothness appear where pixels are wrongly clustered. Guo et al. separate specular reflection using its sparsity; when the specular area is large the sparsity assumption weakens, so the highlight cannot be completely removed, as shown in the red-boxed area in (e). The quantitative comparison of PSNR and SSIM shows that the proposed method achieves the best results on most of the images. These experiments demonstrate that the method of the invention obtains better quantitative results than the other algorithms.
TABLE 1 PSNR of different methods
TABLE 2 SSIM of different methods
The present invention was also tested on natural images. In FIG. 5, for the highlight image Toys, all methods perform well because the highlights in the image are weak; only the method of Ren et al. produces unsmoothness in the green-boxed area. For the highlight image Watermelon, the methods of Yang et al. and Liu et al. use only local information and cannot effectively handle large-scale highlights, so over-processed areas appear on the surface of the processed watermelon. The method of Yang et al. eliminates highlights by local diffuse reflection diffusion, which removes small highlight areas well but cannot remove larger highlight regions, such as the specular reflection spots on Watermelon. For the highlight image Fish, the methods of Yang et al. and Liu et al. over-process the boxed area, and the methods of Yang et al. and Ren et al. produce unsmoothness in the boxed area. The method of the invention not only removes highlight spots in smooth areas but also preserves image texture better than the other methods.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (9)
1. A highlight image restoration method based on hue, comprising:
step 1) estimating the illumination chromaticity of the image;
step 2) obtaining the tone information of the image, and using this information to cluster the image pixels;
step 3) separating diffuse reflection and specular reflection pixel by pixel within every class according to the distance from each pixel to the illumination chromaticity, to obtain a diffuse reflection image with the specular reflection eliminated.
2. The method of claim 1, wherein estimating the illumination chromaticity of the image comprises:
step 1-1) removing the specular reflection component using global diffuse reflection information according to the dichromatic reflection model, to obtain a chromaticity image;
step 1-2) since pixels with the same diffuse reflection chromaticity gather on the same straight line, an image contains several straight lines with different diffuse reflection chromaticities; the intersection point of these straight lines is calculated and used as the light source chromaticity of the image.
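Step 1-2 amounts to a least-squares intersection of 2-D lines. A minimal numpy sketch, assuming each diffuse-chromaticity line is represented by a point and a direction (the function name and interface are illustrative, not from the patent):

```python
import numpy as np

def intersect_lines(points, directions):
    """Least-squares meeting point of 2-D lines p_i + t*d_i: the point
    minimizing the summed squared distances to all lines, used here as
    the estimate of the light-source chromaticity."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(points, directions):
        d = np.asarray(d, dtype=float)
        d /= np.linalg.norm(d)
        proj = np.eye(2) - np.outer(d, d)  # projector orthogonal to the line
        A += proj
        b += proj @ np.asarray(p, dtype=float)
    return np.linalg.solve(A, b)          # singular if all lines are parallel
```

With noisy real lines the system is overdetermined, which is exactly why a least-squares solution rather than a pairwise intersection is used.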
3. The method according to claim 2, wherein said obtaining a chromaticity image comprises:
a. According to the dichromatic reflection model, the color of a pixel on an object is a linear combination of diffuse reflection and specular reflection:
I(x)=D(x)+S(x)=m_d(x)Λ(x)+m_s(x)Γ(x) (1)
where m_d(x) and m_s(x) are the diffuse reflection coefficient and the specular reflection coefficient respectively, which depend on the position of the pixel in the scene and the light source intensity; Λ(x) denotes the diffuse reflection chromaticity, which is determined by the material properties of the object itself; and Γ(x) denotes the specular reflection chromaticity, which is generally regarded as the light source chromaticity;
b. The image pixel values are divided by the sum of the three channel pixel values to obtain the chromaticity image:
σ(x)=I(x)/∑c∈{r,g,b}Ic(x) (2)
Substituting formula (1) into formula (2) yields:
σ(x)=(m_d(x)Λ(x)+m_s(x)Γ(x))/∑c∈{r,g,b}Ic(x) (3)
c. The reflection chromaticities are normalized to 1, ∑c∈{r,g,b}Λc(x)=1 and ∑c∈{r,g,b}Γc(x)=1, so ∑c∈{r,g,b}Ic(x)=m_d(x)+m_s(x), and equation (3) can be further written as:
σ(x)=(m_d(x)Λ(x)+m_s(x)Γ(x))/(m_d(x)+m_s(x)) (4)
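Steps a–c above reduce to a per-pixel channel normalization. A minimal numpy sketch (the function name and the divide-by-zero guard for black pixels are added assumptions):

```python
import numpy as np

def chromaticity(image: np.ndarray) -> np.ndarray:
    """Equation (2): divide each pixel by the sum of its three channel
    values, giving a chromaticity image whose channels sum to 1."""
    total = image.sum(axis=-1, keepdims=True).astype(np.float64)
    total[total == 0] = 1.0  # guard: leave pure-black pixels at zero chromaticity
    return image / total
```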
4. A method for restoring a highlight image based on hue according to claim 1, characterized in that said performing a clustering operation comprises:
step 2-1) normalizing the hue H(x) to [0,1];
step 2-2) defining the hue difference ΔH_new between two points and using it to cluster the image pixels, thereby dividing them into different clusters;
step 2-3) assigning a label to each pixel in each cluster, calculating the mean value of every cluster, taking these means as initial values, and re-clustering the image pixels with a k-nearest-neighbor classification algorithm.
5. A method for restoring a highlight image based on hue according to claim 4, characterized in that said clustering the image pixels by the hue difference ΔH_new between two points comprises:
ΔH_new=min(ΔH, 1-ΔH)
wherein ΔH is the difference in hue between the two normalized points;
if the hue difference ΔH_new between two pixels is less than a threshold T, they belong to the same cluster; otherwise they are divided into different clusters.
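A greedy sketch of the hue clustering in claims 4–5, assuming the wrapped hue difference min(ΔH, 1-ΔH) on normalized hues; the threshold value T is illustrative, since the claims do not fix it:

```python
import numpy as np

def cluster_by_hue(hues: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    """Assign each pixel hue in [0, 1] to a cluster: a pixel joins the
    first cluster whose seed hue lies within the wrapped hue difference
    threshold T, otherwise it opens a new cluster."""
    centers = []
    labels = np.empty(hues.shape, dtype=int)
    for idx, h in np.ndenumerate(hues):
        for k, c in enumerate(centers):
            d = abs(h - c)
            if min(d, 1.0 - d) < threshold:  # wrapped hue distance: same cluster
                labels[idx] = k
                break
        else:
            centers.append(float(h))         # open a new cluster
            labels[idx] = len(centers) - 1
    return labels
```

Step 2-3 would then refine these labels by re-clustering around the per-cluster mean hues.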
6. The method of claim 1, wherein said separating diffuse reflection and specular reflection comprises:
step 3-1) calculating the distance between the pixel chromaticity and the illumination chromaticity to obtain the fusion coefficient of diffuse reflection and specular reflection;
step 3-2) performing bilateral filtering on the fusion coefficient μ(x) instead of filtering the recovered image;
step 3-3) converting the diffuse reflection component of the chromaticity space back to the RGB color space.
7. The method according to claim 6, wherein said obtaining the fusion coefficient of diffuse reflection and specular reflection comprises:
In the normalized RGB space, all the pixels of the image lie within a sphere centered at the illumination chromaticity Γ(x); the distance from a pixel to the light source chromaticity is defined as
r(x)=||σ(x)-Γ(x)||2 (8)
wherein σ(x)-Γ(x) is the direction vector from the chromaticity image σ(x) to the light source chromaticity Γ(x); for the same image the illumination chromaticity Γ(x) is fixed, and for a given diffuse reflection chromaticity Λ(x) the distance r(x) is determined only by μ(x); the smaller r(x) is, the closer the pixel chromaticity is to the light source chromaticity and the more likely the pixel is to lie in a highlight region, so the magnitude of the distance r(x) determines the specular reflection contribution of the pixel;
r(x)=μ(x)||Λ(x)-Γ(x)||2,0≤μ(x)≤1 (9)
because of the fact thatWhen mu (x) is 1, the image only contains diffuse reflection chroma, and the corresponding pixel is at the distance of 1Pixel r at the furthest awaymax(x):
Where the maximum distance is estimated for each cluster class CL. According to (8) and (9), the fusion coefficient μ (x) of each point can be estimated pixel by pixel:
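Equations (8)–(11) can be sketched in numpy, assuming `sigma` is the chromaticity image, `gamma` the estimated light-source chromaticity, and `labels` the per-pixel cluster labels from the hue clustering (function and variable names are illustrative):

```python
import numpy as np

def fusion_coefficient(sigma: np.ndarray, gamma: np.ndarray,
                       labels: np.ndarray) -> np.ndarray:
    """r(x) is the distance from each pixel chromaticity to the
    light-source chromaticity; mu(x) is r(x) normalized by the largest
    distance found within the pixel's cluster class CL."""
    r = np.linalg.norm(sigma - gamma, axis=-1)  # eq. (8)
    mu = np.zeros_like(r)
    for k in np.unique(labels):
        mask = labels == k
        r_max = r[mask].max()                   # eq. (10), per cluster CL
        if r_max > 0:
            mu[mask] = r[mask] / r_max          # eq. (11)
    return mu
```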
8. The method according to claim 6, wherein said performing bilateral filtering on the fusion coefficient μ(x) comprises:
performing bilateral filtering on the fusion coefficient μ(x) according to formula (12), so that the detail of the image is not excessively damaged and the quality of the image after highlight removal is effectively improved:
μ'(i,j)=∑_{(k,l)∈S(i,j)} w(i,j,k,l)μ(k,l) / ∑_{(k,l)∈S(i,j)} w(i,j,k,l) (12)
wherein S(i,j) is a window of size (2N+1)×(2N+1) centered at (i,j), μ(k,l) is an input point, and w(i,j,k,l)=Ws×Wr, where Ws and Wr are the spatial-domain kernel and the range-domain kernel respectively; Ws is determined by the Euclidean distance between the filter center pixel and the other pixel positions in the window, Ws=exp(-((i-k)²+(j-l)²)/(2σs²)); Wr is determined by the difference between the value at the filter center and the values of the other pixels in the window, Wr=exp(-(μ(i,j)-μ(k,l))²/(2σr²)). Experiments show that the filtering effect is best when the filter radius N is set to 2, the spatial-domain variance σs is set to 5, and the range-domain variance σr is set to 0.9.
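A direct (unoptimized) numpy sketch of this bilateral filtering of μ(x), using the reported parameters N=2, σs=5, σr=0.9:

```python
import numpy as np

def bilateral_filter_mu(mu: np.ndarray, N: int = 2,
                        sigma_s: float = 5.0, sigma_r: float = 0.9) -> np.ndarray:
    """Bilateral filter over a (2N+1)x(2N+1) window: spatial kernel Ws
    (Gaussian in pixel distance) times range kernel Wr (Gaussian in the
    difference of mu values), normalized by the total weight."""
    H, W = mu.shape
    out = np.empty_like(mu, dtype=np.float64)
    for i in range(H):
        for j in range(W):
            k0, k1 = max(0, i - N), min(H, i + N + 1)   # clip window at borders
            l0, l1 = max(0, j - N), min(W, j + N + 1)
            patch = mu[k0:k1, l0:l1]
            kk, ll = np.mgrid[k0:k1, l0:l1]
            ws = np.exp(-((i - kk) ** 2 + (j - ll) ** 2) / (2 * sigma_s ** 2))
            wr = np.exp(-((mu[i, j] - patch) ** 2) / (2 * sigma_r ** 2))
            w = ws * wr
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

Filtering μ(x) rather than the recovered image is the design choice of step 3-2: edges in μ that coincide with real intensity edges are preserved by the range kernel, while noise in the coefficient is smoothed.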
9. The method according to claim 6, wherein said converting the diffuse reflection component of the chromaticity space back to the RGB color space comprises:
converting the diffuse reflection chromaticity back to the RGB color space according to formula (2); the diffuse reflection image after eliminating the highlight is:
D(x)=μ(x)(∑c∈{r,g,b}Ic(x))Λ(x) (14)
the specular reflection image is:
S(x)=I(x)-D(x) (15)
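A sketch of this final separation step. Note that the recovery of Λ(x) from σ(x) = μ(x)Λ(x) + (1-μ(x))Γ(x) is an assumption consistent with the model of claims 3 and 7, not a formula quoted from the patent, and the epsilon guard for μ(x) ≈ 0 is added for numerical safety:

```python
import numpy as np

def separate(image, sigma, gamma, mu, eps=1e-6):
    """Recover the diffuse chromaticity Lambda per pixel, then form the
    diffuse image D(x) = mu(x) * (channel sum) * Lambda(x) and the
    specular image S(x) = I(x) - D(x)."""
    mu3 = mu[..., None]
    lam = gamma + (sigma - gamma) / np.maximum(mu3, eps)  # assumed Lambda recovery
    total = image.sum(axis=-1, keepdims=True)
    D = mu3 * total * lam                                  # diffuse image
    S = image - D                                          # specular image
    return D, S
```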
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110986703.6A CN113793274A (en) | 2021-08-26 | 2021-08-26 | Highlight image restoration method based on tone |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113793274A true CN113793274A (en) | 2021-12-14 |
Family
ID=78876413
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110986703.6A Withdrawn CN113793274A (en) | 2021-08-26 | 2021-08-26 | Highlight image restoration method based on tone |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113793274A (en) |
Non-Patent Citations (1)
Title |
---|
ZHANG Zhen: "Specular Reflection Separation Based on Hue Constraint" (基于色调约束的镜面反射分离), Pattern Recognition and Artificial Intelligence (模式识别与人工智能), vol. 34, no. 8, pages 742-749 *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116152249A (en) * | 2023-04-20 | 2023-05-23 | 济宁立德印务有限公司 | Intelligent digital printing quality detection method |
CN116152249B (en) * | 2023-04-20 | 2023-07-07 | 济宁立德印务有限公司 | Intelligent digital printing quality detection method |
CN116297463A (en) * | 2023-05-16 | 2023-06-23 | 四川省港奇电子有限公司 | Power adapter shell injection molding detection method, system and device |
CN116297463B (en) * | 2023-05-16 | 2023-08-01 | 四川省港奇电子有限公司 | Power adapter shell injection molding detection method, system and device |
CN117474921A (en) * | 2023-12-27 | 2024-01-30 | 中国科学院长春光学精密机械与物理研究所 | Anti-noise light field depth measurement method, system and medium based on specular highlight removal |
CN117474921B (en) * | 2023-12-27 | 2024-05-07 | 中国科学院长春光学精密机械与物理研究所 | Anti-noise light field depth measurement method, system and medium based on specular highlight removal |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113793274A (en) | Highlight image restoration method based on tone | |
CN107833220B (en) | Fabric defect detection method based on deep convolutional neural network and visual saliency | |
JP4746050B2 (en) | Method and system for processing video data | |
US8077969B2 (en) | Contour finding in segmentation of video sequences | |
US8126268B2 (en) | Edge-guided morphological closing in segmentation of video sequences | |
US8565525B2 (en) | Edge comparison in segmentation of video sequences | |
Sidorov | Conditional gans for multi-illuminant color constancy: Revolution or yet another approach? | |
US20090028432A1 (en) | Segmentation of Video Sequences | |
WO2007076891A1 (en) | Average calculation in color space, particularly for segmentation of video sequences | |
Ikonomakis et al. | Color image segmentation for multimedia applications | |
Palus | Color image segmentation: selected techniques | |
Russell et al. | An evaluation of moving shadow detection techniques | |
US20230351582A1 (en) | A line clearance system | |
Yu et al. | Efficient highlight removal of metal surfaces | |
Yarlagadda et al. | A reflectance based method for shadow detection and removal | |
US20220222791A1 (en) | Generating image masks from digital images utilizing color density estimation and deep learning models | |
Wang | Image matting with transductive inference | |
Domislović et al. | Outdoor daytime multi-illuminant color constancy | |
JPH06251147A (en) | Video feature processing method | |
CN114240788B (en) | Complex scene-oriented robustness and adaptive background restoration method | |
Lindsay et al. | Automatic multi-light white balance using illumination gradients and color space projection | |
Zhang et al. | Low-Light Image Enhancement with Color Transfer Based on Local Statistical Feature | |
Hemrit et al. | Revisiting and Optimising a CNN Colour Constancy Method for Multi-Illuminant Estimation | |
Guo et al. | A Novel Low-light Image Enhancement Algorithm Based On Information Assistance | |
Mondal et al. | A Statistical Approach for Multi-frame Shadow Movement Detection and Shadow Removal for Document Capture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 20211214 |