CN113793274A - Highlight image restoration method based on tone - Google Patents

Highlight image restoration method based on tone

Info

Publication number
CN113793274A
CN113793274A (application CN202110986703.6A)
Authority
CN
China
Prior art keywords
image
chromaticity
pixel
diffuse reflection
reflection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110986703.6A
Other languages
Chinese (zh)
Inventor
张箴
田建东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Institute of Automation of CAS
Original Assignee
Shenyang Institute of Automation of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Institute of Automation of CAS filed Critical Shenyang Institute of Automation of CAS
Priority to CN202110986703.6A priority Critical patent/CN113793274A/en
Publication of CN113793274A publication Critical patent/CN113793274A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a hue-based highlight image restoration method. First, based on the observation that hue information in a color image is not easily disturbed by specular reflection, the image is clustered using its hue information; then the distance between pixel chromaticity and illumination chromaticity is computed to obtain the fusion coefficient of diffuse reflection and specular reflection; meanwhile, to keep the pixel clustering from being disturbed by noise, the method applies a bilateral filtering operation to the fusion coefficient. Finally, from the obtained fusion coefficient, a diffuse reflection image with the specular reflection eliminated is obtained. Experimental results show that the algorithm works well: it effectively removes specular reflection while preserving the details and edge information of the image. In qualitative and quantitative comparisons with various methods, the proposed method achieves the best performance in peak signal-to-noise ratio and structural similarity; it also obtains the best visual effect when processing natural highlight pictures.

Description

Highlight image restoration method based on tone
Technical Field
The invention relates to an image recovery method, in particular to a highlight image recovery method based on color tones.
Background
Specular reflection often degrades image quality and causes image information to be lost, affecting subsequent vision algorithms such as image segmentation, color constancy, object detection and target tracking. Removal of specular reflection areas is therefore necessary. Most current specular reflection removal methods are based on the two-color (dichromatic) reflection model: they first find a diffuse reflection pixel in an image region, then propagate it across the region and compute the specular reflection component. Because such methods lack global image information, the diffuse reflection chromaticity they find cannot be guaranteed to be accurate, so the specular component is difficult to remove cleanly.
The two-color reflection model is proposed to solve the problem of complicated reflecting surface modeling, and is now widely applied in the field of specular reflection removal. Klinker et al extended the two-color reflectance model by proposing that the color of the object and the color of the illumination conform to a T-shaped distribution, but the acquisition of the T-shaped distribution is easily disturbed by noise. In order to reduce the influence of noise on highlight image restoration, Tan et al propose a specular reflection removal method based on the distribution of diffuse reflection and specular reflection in the maximum chromaticity space. Tan and Lin et al perform specular reflection removal by an image restoration technique that synthetically fills the missing regions with neighboring patterns. Since the highlight region is related to the photographing direction, it is also possible to restore diffuse reflection using a sequence of images from different perspectives. Mallick et al use partial differential equations to recover the diffuse component from the video, but this does not work well for large areas of specular reflection.
Yoon et al. propose reflection invariants, which are used to remove highlights. Shen et al. first select a pixel as the diffuse reflection component and then calculate the specular reflection component using a least-squares method; their main idea is to achieve reflection removal through an iterative method, but this is time-consuming. Yang and Liu et al. notice that the saturation of specular reflection pixels is lower than that of diffuse reflection pixels, so they remove the reflection component by adjusting the saturation of specular pixels, and propose a fast bilateral filter that uses a non-specular image as the range weighting function. Due to the lack of global information, such methods based on local diffuse reflection diffusion cannot completely remove reflections, and the recovered image is often not smooth. Shen et al. first cluster color image pixels and then detect and remove diffuse components by calculating the ratio of the pixel maximum to its intensity range. Their method sometimes destroys image detail, because pixels with different diffuse reflections may have the same intensity ratio. Other efforts, such as methods that remove reflections by matrix decomposition, do not handle textured images well. Kim et al. first look for candidate specular regions using the dark channel (the minimum of the three channels), then use a priori assumptions such as sparsity of specular regions and smoothness of diffuse regions to build an energy function whose solution separates the specular and diffuse components. Their method works well on natural images, but on noisy and textured images it often causes ringing. Akashi and Okatani formulate reflection separation as a sparse nonnegative matrix factorization (NMF) problem.
However, current NMF algorithms are sensitive to initial values and can only ensure that local minima are found instead of global minima. Thus, the method requires multiple runs to achieve the most reasonable results. Furthermore, since NMF is typically highly sensitive to outliers, this method may fail in the presence of strong specular reflections or noise. Ren et al introduced a method of obtaining light source chromaticity through a color linear constraint condition based on a two-color reflection model to quickly remove highlight, but when performing pixel clustering, a clustering error problem occurs due to the influence of highlight and noise, so that abnormal points appear in a restored image, and highlight areas cannot be removed completely. Guo et al propose a sparse low-order reflection model. In their framework, diffuse and specular highlight images are estimated simultaneously by optimization. However, in the restored diffuse reflection image, excessively dark pixels may be generated in a highlight region.
Disclosure of Invention
In view of the above technical deficiencies, the object of the present invention is to provide a highlight removal method that effectively removes specular reflection while preserving the details and edge information of the image, and that is not easily disturbed by noise.
The technical scheme adopted by the invention to solve the technical problem is as follows: a highlight image restoration method based on hue, comprising:
step 1) estimating the illumination chromaticity of an image;
step 2) obtaining tone information of the image, and using the information to perform clustering operation on image pixels;
and 3) separating diffuse reflection and specular reflection pixel by pixel in all classes according to the distance from each pixel to the illumination chromaticity to obtain a diffuse reflection image with the specular reflection eliminated.
The estimating the illumination chromaticity of the image comprises:
step 1-1) removing specular reflection components by using global diffuse reflection information according to a bicolor reflection model to obtain a chromaticity image;
and 1-2) because the pixels with the same diffuse reflection chromaticity are gathered on the same straight line, a plurality of straight lines with different diffuse reflection chromaticities exist in one image, and the intersection point of the straight lines with different diffuse reflection chromaticities is calculated to be used as the light source chromaticity of the image.
The acquiring the chrominance image includes:
a. According to the two-color reflection model, the color of a pixel on an object is a linear combination of diffuse reflection and specular reflection, as shown in the following formula:

I(x) = D(x) + S(x) = m_d(x)\Lambda(x) + m_s(x)\Gamma(x)    (1)

where m_d(x) and m_s(x) are the diffuse and specular reflection coefficients respectively, which depend on the position of the pixel in the scene and the light-source intensity; Λ(x) denotes the diffuse reflection chromaticity, determined by the material properties of the object itself; and Γ(x) denotes the specular reflection chromaticity, which is determined by the light-source chromaticity and is generally regarded as the light-source chromaticity;
b. The chrominance image \bar{I}(x) is obtained by dividing each pixel value by the sum of its three channel values:

\bar{I}(x) = \frac{I(x)}{\sum_{c \in \{r,g,b\}} I_c(x)}    (2)

Substituting formula (1) into formula (2) yields:

\bar{I}(x) = \frac{m_d(x)\Lambda(x) + m_s(x)\Gamma(x)}{\sum_{c \in \{r,g,b\}} \left( m_d(x)\Lambda_c(x) + m_s(x)\Gamma_c(x) \right)}    (3)

c. The reflection chromaticities are normalized to 1, \sum_{c \in \{r,g,b\}} \Lambda_c(x) = 1 and \sum_{c \in \{r,g,b\}} \Gamma_c(x) = 1, so \sum_{c \in \{r,g,b\}} I_c(x) = m_d(x) + m_s(x). Equation (3) can then be further written as:

\bar{I}(x) = \mu(x)\Lambda(x) + (1 - \mu(x))\Gamma(x)    (4)

where

\mu(x) = \frac{m_d(x)}{m_d(x) + m_s(x)}.
the performing clustering operations includes:
step 2-1) normalizing the hue H (x) to [0,1 ];
step 2-2) defining the hue difference ΔH_new between two points, and clustering the image pixels into different clusters accordingly;
and 2-3) assigning a label to each pixel in each cluster, calculating the average value of all clusters, and, taking these averages as initial values, re-clustering the image pixels with a k-nearest-neighbor classification algorithm.
The defined hue difference ΔH_new between two points for clustering the image pixels is

\Delta H_{new} = \min(\Delta H, 1 - \Delta H)    (6)

where ΔH is the absolute hue difference between the two normalized points. If the hue difference ΔH_new between two pixels is less than a threshold T, they belong to the same cluster; otherwise, they are divided into different clusters.
The separating of diffuse reflection and specular reflection comprises:
step 3-1) calculating the distance between pixel chromaticity and illumination chromaticity to obtain the fusion coefficient of diffuse reflection and specular reflection;
step 3-2) adopting bilateral filtering on the fusion coefficient mu (x) to replace filtering on the recovered image;
step 3-3) converting the diffuse reflection component Λ(x) of the chromaticity space back to the RGB color space to obtain a diffuse reflection image without highlight.
The fusion coefficient of diffuse reflection and specular reflection is obtained as follows:
In the normalized RGB space, all pixels of the image lie within a sphere centered on the illumination chromaticity Γ(x). The distance r(x) is defined as

r(x) = \left\| \bar{I}(x) - \Gamma(x) \right\|_2    (7)

where \bar{I}(x) - \Gamma(x) is the set of direction vectors from the chrominance image \bar{I}(x) to the light-source chromaticity Γ(x). For the same image the illumination chromaticity Γ(x) is fixed, so for a given diffuse reflection chromaticity Λ(x) the distance r(x) is determined only by μ(x); the closer the pixel chromaticity is to the light-source chromaticity, the more likely the pixel lies in a highlight region, so the magnitude of r(x) determines the pixel's specular reflection contribution;
Because

\bar{I}(x) = \mu(x)\Lambda(x) + (1 - \mu(x))\Gamma(x),

the difference from the illumination chromaticity satisfies

\bar{I}(x) - \Gamma(x) = \mu(x)\left( \Lambda(x) - \Gamma(x) \right)    (8)

so for any given pixel cluster, r(x) depends on μ(x):

r(x) = \mu(x)\left\| \Lambda(x) - \Gamma(x) \right\|_2, \quad 0 \le \mu(x) \le 1    (9)

Because 0 ≤ μ(x) ≤ 1, when μ(x) = 1 the pixel contains only diffuse reflection chromaticity, and the corresponding pixel is at the farthest distance r_max(x):

r_{\max}(x) = \max_{x \in CL} r(x) = \left\| \Lambda(x) - \Gamma(x) \right\|_2    (10)

where the maximum distance is estimated for each cluster class CL. According to (9) and (10), the fusion coefficient μ(x) of each point can be estimated pixel by pixel:

\mu(x) = \frac{r(x)}{r_{\max}(x)}    (11)
The bilateral filtering of the fusion coefficient μ(x) comprises:
Bilateral filtering is performed on the fusion coefficient μ(x) according to formula (12), which prevents the detail of the image from being excessively damaged and effectively improves the quality of the image after highlight removal:

\hat{\mu}(i,j) = \frac{\sum_{(k,l) \in S(i,j)} w(i,j,k,l)\,\mu(k,l)}{\sum_{(k,l) \in S(i,j)} w(i,j,k,l)}    (12)

where S(i,j) is the (2N+1)×(2N+1) window centered on (i,j), μ(k,l) is an input point, w(i,j,k,l) = Ws × Wr, and Ws, Wr are the spatial-domain kernel and range-domain kernel, respectively. Ws is determined by the Euclidean distance between the filter's center pixel and the other pixel positions in the filter block:

W_s = \exp\left( -\frac{(i-k)^2 + (j-l)^2}{2\sigma_s^2} \right)

Wr is determined by the difference between the value of the center pixel and the values of the other pixels in the filter block:

W_r = \exp\left( -\frac{\left\| \mu(i,j) - \mu(k,l) \right\|^2}{2\sigma_r^2} \right)

Experiments show the filtering effect is best when the filter radius N is set to 2, the spatial-domain variance σ_s to 5, and the range-domain variance σ_r to 0.9.
The converting of the diffuse reflection component of the chromaticity space back to the RGB color space comprises:
The diffuse reflection component Λ(x) of the chromaticity space is obtained by

\Lambda(x) = \frac{\bar{I}(x) - (1 - \mu(x))\Gamma(x)}{\mu(x)}    (13)

According to formula (2), converting Λ(x) back to the RGB color space, the diffuse reflection image after eliminating the highlight is:

D(x) = m_d(x)\Lambda(x) = \mu(x)\left( \sum_{c \in \{r,g,b\}} I_c(x) \right)\Lambda(x)    (14)
the specular reflection image is:
S(x) = I(x) - D(x)    (15)
the invention has the following beneficial effects and advantages:
1. based on the observation result that the hue information in the color image is not easily interfered by mirror reflection, the method carries out pixel clustering through the hue information, and greatly improves the accuracy of the pixel clustering.
2. The influence of noise is avoided, and more detail information can be reserved.
3. The method is superior to the existing algorithm in the aspect of highlight image recovery, and can effectively remove specular reflection and simultaneously retain the details and edge information of the image.
Drawings
FIG. 1 is an overall flow diagram of the method;
FIG. 2 specular reflection invariants: (a) input image; (b) hue H(x); (c) hue conversion angle α(x); (d) azimuth angle θ(x); (e) elevation angle φ(x); (f) distance r(x) from pixel chromaticity to light-source chromaticity;
FIG. 3 clustering results on the Fish image: (a) input image, (b) clustering result of YANG, (c) clustering result of REN, (d) clustering result of the proposed method;
FIG. 4 specular reflection removal results: (a) input images, (b) ground truth, (c) results of YANG, (d) results of REN, (e) results of GUO, (f) results of the proposed method;
FIG. 5 shows the result of removing specular reflection from the natural highlight images Toys, Watermelon and Fish;
Detailed Description
The present invention will be described in further detail with reference to examples. The method steps are explained with reference to the drawings.
The highlight removal algorithm proposed by the invention mainly comprises three steps: 1) estimating the illumination chromaticity; 2) obtaining the hue information of the image and using it to cluster the image pixels; 3) separating diffuse reflection and specular reflection pixel by pixel according to the distance from each pixel to the illumination chromaticity. The whole flow is shown in FIG. 1.
1. Reflection model
The two-color reflection model has been widely applied to the understanding of scene reflections. According to the two-color reflection model, the color of a certain pixel point on an object is formed by linear combination of diffuse reflection and specular reflection, namely as shown in the following formula:
I(x) = D(x) + S(x) = m_d(x)\Lambda(x) + m_s(x)\Gamma(x)    (1)

where m_d(x) and m_s(x) are the diffuse and specular reflection coefficients respectively, which depend on the position of the pixel in the scene and the light-source intensity; Λ(x) denotes the diffuse reflection chromaticity, determined by the material properties of the object itself; and Γ(x) denotes the specular reflection chromaticity, which is commonly referred to as the light-source chromaticity because it is determined by it.
Most of the existing highlight removal methods are based on two-color reflection models, which divide an image into different areas by clustering, then search for one pixel in each image area as a diffuse reflection component, then spread the obtained diffuse reflection component to the whole image area and calculate the corresponding specular reflection component. Due to the lack of global diffuse reflection information, the resulting diffuse reflection component in the local image area is often inaccurate, and therefore the specular reflection component cannot be completely removed. Furthermore, for images with complex textures, it is difficult to cluster all pixels well to get an accurate image area.
The removal of the specular reflection component by using the global diffuse reflection information is a relatively effective reflection removal method. The global information not only enables finding the optimal diffuse reflection chromaticity value, but also enables efficient processing of texture images. In addition, the processing capacity is good for the light reflecting area with a large area. The chrominance image is obtained by dividing the pixel values of the image by the sum of the three channel pixel values
\bar{I}(x) = \frac{I(x)}{\sum_{c \in \{r,g,b\}} I_c(x)}    (2)

Substituting formula (1) into formula (2) gives

\bar{I}(x) = \frac{m_d(x)\Lambda(x) + m_s(x)\Gamma(x)}{\sum_{c \in \{r,g,b\}} \left( m_d(x)\Lambda_c(x) + m_s(x)\Gamma_c(x) \right)}    (3)

The reflection chromaticities are normalized to 1, \sum_{c \in \{r,g,b\}} \Lambda_c(x) = 1 and \sum_{c \in \{r,g,b\}} \Gamma_c(x) = 1, so \sum_{c \in \{r,g,b\}} I_c(x) = m_d(x) + m_s(x). Equation (3) can then be further written as

\bar{I}(x) = \mu(x)\Lambda(x) + (1 - \mu(x))\Gamma(x)    (4)

where

\mu(x) = \frac{m_d(x)}{m_d(x) + m_s(x)}.
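As a hedged illustration of formulas (2) and (4), the chromaticity computation can be sketched in a few lines of NumPy; the array layout and variable names here are our own assumptions, not the patent's:

```python
import numpy as np

def chromaticity(I):
    """Eq. (2): divide each pixel by the sum of its three channels."""
    s = I.sum(axis=2, keepdims=True)
    return I / np.where(s == 0, 1.0, s)

# Synthetic check of eq. (4): a pixel built from known m_d, m_s,
# diffuse chromaticity Lambda and illumination chromaticity Gamma.
Lam = np.array([0.6, 0.3, 0.1])      # diffuse chromaticity, sums to 1
Gam = np.array([1/3, 1/3, 1/3])      # illumination chromaticity, sums to 1
md, ms = 0.8, 0.5
I = (md * Lam + ms * Gam)[None, None, :]
mu = md / (md + ms)                  # fusion coefficient of eq. (4)
assert np.allclose(chromaticity(I)[0, 0], mu * Lam + (1 - mu) * Gam)
```

The assertion confirms that the chromaticity of a dichromatic pixel is the convex combination of Λ and Γ with weight μ, as formula (4) states.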
according to the formula (4), it can be found that pixels having the same diffuse reflectance chromaticity are gathered on a straight line. For the whole image, different diffuse reflection chromaticities represent different straight lines, and the intersection point of all the straight lines is the chromaticity of the light source. The light source chromaticity of the picture can be obtained by calculating the intersection point of the chromaticity lines.
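The intersection of the chromaticity lines can be posed as a least-squares problem; the sketch below is our own minimal formulation under the stated model, not the patent's implementation, and works on 2-D lines each given by a point and a direction:

```python
import numpy as np

def least_squares_intersection(points, directions):
    """Point minimizing the total squared perpendicular distance to the
    lines {p_i + t * d_i}; a proxy for the light-source chromaticity
    obtained as the common intersection of the diffuse-chromaticity lines."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(points, directions):
        d = np.asarray(d, float) / np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)   # projector onto the line's normal
        A += P
        b += P @ np.asarray(p, float)
    return np.linalg.solve(A, b)

# two chromaticity lines crossing at (0.5, 0.5)
pt = least_squares_intersection([(0, 0), (1, 0)], [(1, 1), (-1, 1)])
```

With noisy lines the least-squares meet degrades gracefully instead of failing, which is why this formulation is a reasonable stand-in for an exact pairwise intersection.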
2. Pixel clustering
In previous studies, some scholars considered the azimuth angle θ(x) and the elevation angle φ(x), obtained by converting the image from a rectangular coordinate system to a spherical one, to be less susceptible to highlight than the hue information H(x) of the image's HSI color space. The hue is

H(x) = \begin{cases} \alpha(x), & B \le G \\ 360^\circ - \alpha(x), & B > G \end{cases}    (5)

where α(x) is the hue conversion angle; for any point in the image,

\alpha(x) = \arccos\left( \frac{(R-G) + (R-B)}{2\sqrt{(R-G)^2 + (R-B)(G-B)}} \right).

As can be seen from equation (5), the calculation of the hue is affected by the magnitudes of B and G: when the B and G components of the image are close, a very small color difference may produce a large difference in hue value, so the visualized hue in an originally red area looks very poor, as shown in fig. 2(b). However, hue is angular information that makes no distinction between 0° and 360°, so concluding from fig. 2(b) that hue is easily affected by highlight would be inaccurate. To avoid this problem, the hue conversion angle is visualized in fig. 2(c), where it can be seen that hue is not easily affected by highlight regions; meanwhile, in fig. 2(d) and 2(f), both the azimuth and the elevation are not sensitive enough to color changes, for example at the fish mouth and fish tail.
Pixels are clustered using hue information. When clustering, the hue H(x) is normalized to [0, 1] and the hue difference between two points is defined as

\Delta H_{new} = \min(\Delta H, 1 - \Delta H)    (6)

where ΔH is the absolute hue difference between the two normalized points.
The method uses ΔH_new to cluster the image pixels. If the hue difference between two pixels is smaller than a threshold T, they belong to the same cluster; otherwise they are divided into different clusters. After assigning a label to each pixel, the average of every cluster is calculated. With these averages as initial values, the image pixels are re-clustered using a KNN (k-nearest-neighbor classification) search rule. If T is too small, the number of pixel clusters increases, which may leave the specular reflection incompletely removed; if T is too large, the number of clusters decreases and the specular reflection of the image is over-separated. Through several experiments we set the threshold to 0.05.
FIG. 3 shows the clustering results on the Fish image. YANG et al. use a region-growing algorithm to locally diffuse regions with similar diffuse reflection chromaticity; it captures image details well, but smooth regions of the image are easily destroyed (see (b)). REN et al. cluster using the angular coordinates of the image; when affected by highlight, it is difficult to distinguish regions with similar colors, and noise is easily generated in darker regions (see (c)). The proposed method uses the hue information of the global image and can guarantee clustering smoothness in uniform regions of the image while preserving image texture (see (d)).
3. Specular reflection separation
In the normalized RGB space, all pixels of the image lie within a sphere centered on the illumination chromaticity Γ(x). The distance r(x) is defined as

r(x) = \left\| \bar{I}(x) - \Gamma(x) \right\|_2    (7)

where \bar{I}(x) - \Gamma(x) is the set of direction vectors from the chrominance image \bar{I}(x) to the light-source chromaticity Γ(x). For the same image the illumination chromaticity Γ(x) is fixed, and for a given chromaticity Λ(x) the distance r(x) is determined only by μ(x): the smaller r(x), the closer the pixel chromaticity is to the light-source chromaticity and the more likely the pixel lies in a highlight region, so the magnitude of r(x) determines the pixel's specular reflection contribution.
Because

\bar{I}(x) = \mu(x)\Lambda(x) + (1 - \mu(x))\Gamma(x),

the difference from the illumination chromaticity satisfies

\bar{I}(x) - \Gamma(x) = \mu(x)\left( \Lambda(x) - \Gamma(x) \right)    (8)

so for a given pixel cluster, r(x) depends on μ(x):

r(x) = \mu(x)\left\| \Lambda(x) - \Gamma(x) \right\|_2, \quad 0 \le \mu(x) \le 1    (9)

Because 0 ≤ μ(x) ≤ 1, when μ(x) = 1 the pixel contains only diffuse reflection chromaticity, and the corresponding pixel is at the farthest distance r_max(x):

r_{\max}(x) = \max_{x \in CL} r(x) = \left\| \Lambda(x) - \Gamma(x) \right\|_2    (10)

where the maximum distance is estimated for each cluster class CL. According to (9) and (10), the fusion coefficient μ(x) of each point can be estimated pixel by pixel:

\mu(x) = \frac{r(x)}{r_{\max}(x)}    (11)
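A sketch of formulas (7) through (11), assuming, as the model requires, that each cluster contains at least one purely diffuse pixel (μ = 1) so that r_max is observed; the names are our own:

```python
import numpy as np

def fusion_coefficient(chroma, labels, gamma):
    """chroma: N x 3 pixel chromaticities, labels: cluster id per pixel,
    gamma: illumination chromaticity (3,). Implements eqs. (7)-(11)."""
    r = np.linalg.norm(chroma - gamma, axis=1)             # eq. (7)
    mu = np.zeros_like(r)
    for c in np.unique(labels):
        mask = labels == c
        rmax = r[mask].max()                               # eq. (10), per cluster
        mu[mask] = r[mask] / rmax if rmax > 0 else 1.0     # eq. (11)
    return mu

# synthetic cluster with a known diffuse chromaticity and known mu values
gamma = np.full(3, 1/3)
Lam = np.array([0.6, 0.3, 0.1])
mu_true = np.array([1.0, 0.5, 0.25])
chroma = mu_true[:, None] * Lam + (1 - mu_true[:, None]) * gamma
mu_est = fusion_coefficient(chroma, np.zeros(3, dtype=int), gamma)
```

Because r(x) is linear in μ(x) along the segment from Γ to Λ, dividing by the per-cluster maximum recovers μ exactly on this synthetic data.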
Because of noise and highlight, the clustering of some points is not accurate enough. To improve the visual effect after highlight removal, bilateral filtering is applied to the fusion coefficient μ(x); compared with filtering the recovered image, bilateral filtering of the fusion coefficient does not excessively damage the detail of the image and can effectively improve the quality of the de-highlighted image.
\hat{\mu}(i,j) = \frac{\sum_{(k,l) \in S(i,j)} w(i,j,k,l)\,\mu(k,l)}{\sum_{(k,l) \in S(i,j)} w(i,j,k,l)}    (12)

The filtering process is shown in the formula above, where S(i,j) is the (2N+1)×(2N+1) window centered on (i,j), μ(k,l) is an input point, w(i,j,k,l) = Ws × Wr, and Ws, Wr are the spatial-domain kernel and the range-domain kernel, respectively. Ws is determined by the Euclidean distance between the filter's center pixel and the other pixel positions in the filter block:

W_s = \exp\left( -\frac{(i-k)^2 + (j-l)^2}{2\sigma_s^2} \right)

Wr is determined by the difference between the value of the center pixel and the values of the other pixels in the filter block:

W_r = \exp\left( -\frac{\left\| \mu(i,j) - \mu(k,l) \right\|^2}{2\sigma_r^2} \right)

Experiments show the filtering effect is best when the filter radius N is set to 2, the spatial-domain variance σ_s to 5, and the range-domain variance σ_r to 0.9.
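The bilateral filter of formula (12) can be sketched as a brute-force loop; this is illustrative only, with our own parameter names, and a real implementation would vectorize or use an existing filter:

```python
import numpy as np

def bilateral_filter(mu, N=2, sigma_s=5.0, sigma_r=0.9):
    """Brute-force bilateral filtering of the fusion-coefficient map,
    eq. (12); N is the filter radius, so the window is (2N+1) x (2N+1)."""
    H, W = mu.shape
    out = np.zeros_like(mu)
    for i in range(H):
        for j in range(W):
            acc = wsum = 0.0
            for k in range(max(0, i - N), min(H, i + N + 1)):
                for l in range(max(0, j - N), min(W, j + N + 1)):
                    ws = np.exp(-((i - k) ** 2 + (j - l) ** 2) / (2 * sigma_s ** 2))
                    wr = np.exp(-((mu[i, j] - mu[k, l]) ** 2) / (2 * sigma_r ** 2))
                    acc += ws * wr * mu[k, l]
                    wsum += ws * wr
            out[i, j] = acc / wsum
    return out

# a constant coefficient map passes through unchanged
flat = bilateral_filter(np.full((5, 5), 0.7))
```

Since the output is a convex combination of window values, the filtered μ always stays within the range of its neighborhood, which is what keeps edges in the coefficient map from being smeared.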
The diffuse reflection component Λ(x) of the chromaticity space can be obtained by

\Lambda(x) = \frac{\bar{I}(x) - (1 - \mu(x))\Gamma(x)}{\mu(x)}    (13)

According to formula (2), converting Λ(x) back to the RGB color space, the diffuse reflection image after eliminating the highlight is:

D(x) = m_d(x)\Lambda(x) = \mu(x)\left( \sum_{c \in \{r,g,b\}} I_c(x) \right)\Lambda(x)    (14)
the specular reflection image is:
S(x) = I(x) - D(x)    (15)
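Formulas (13) through (15) combine into a one-line separation once μ(x) and Γ(x) are known, since D(x) = I(x) − (1 − μ(x))(Σ_c I_c(x))Γ(x) follows from the model; a hedged sketch with our own names:

```python
import numpy as np

def separate(I, mu, gamma):
    """I: H x W x 3 input image, mu: H x W fusion coefficients, gamma:
    illumination chromaticity (3,). Combines eqs. (13)-(15):
    D = m_d * Lambda = I - m_s * Gamma, with m_s = (1 - mu) * sum_c I_c."""
    s = I.sum(axis=2)                              # m_d + m_s per pixel
    D = I - ((1 - mu) * s)[..., None] * gamma      # diffuse image, eq. (14)
    S = I - D                                      # specular image, eq. (15)
    return D, S

# synthetic pixel with known decomposition
Lam = np.array([0.6, 0.3, 0.1])
gamma = np.full(3, 1/3)
md, ms = 0.8, 0.5
I = (md * Lam + ms * gamma)[None, None, :]
mu = np.array([[md / (md + ms)]])
D, S = separate(I, mu, gamma)
```

On this synthetic pixel the recovered diffuse image equals m_d·Λ and the specular image equals m_s·Γ, confirming the algebra.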
the overall algorithm flow is given in algorithm 1.
4. Results and analysis of the experiments
FIG. 4 shows the specular reflection removal results for four highlight images: Animals, Cups, Fruit, and Masks. In addition, Tables 1 and 2 compare the peak signal-to-noise ratio and the structural similarity of the images for the different methods. In the four images, the method of Yang et al. produces unsmooth results in the green-framed areas of Animals, Cups and Fruit, destroys the detail information of the blue-framed area in Fruit, and leaves the highlights in the red-framed areas of Cups and Masks incompletely removed. This is because the method of Yang et al. mainly eliminates highlight by local diffuse reflection diffusion and lacks global information: it can effectively process images with small highlights like Animals, but overly large highlight areas are difficult to remove completely, and unsmooth results easily appear at the edges of the diffusion region. The method of Ren et al. is not accurate enough in pixel clustering, so highlight is hard to remove thoroughly in the red-framed areas, detail information of the highlight region is destroyed, and bright spots and unsmoothness also appear where pixels are clustered incorrectly. Guo et al. use the sparsity of specular reflection to separate it; when the specular area is large the sparsity weakens, so the highlight cannot be completely removed, as seen in the red-framed area in (e). The quantitative comparison of PSNR and SSIM shows that the proposed method achieves the best results on most pictures. These experiments demonstrate that the method of the invention obtains better quantitative results than the other algorithms.
TABLE 1 Peak SNR for different algorithms
Table 1 PSNR of different methods
TABLE 2 structural similarity of different algorithms
Table 2 SSIM of different methods
The present invention also tests the performance of the different methods on natural images. In FIG. 5, for the highlight image Toys, all methods perform well because the highlight in the image is weak; only the method of Ren et al. produces an unsmooth result in the green-framed area. For the highlight image Watermelon, the methods of Yang and Liu et al. use only local information and cannot effectively handle large-scale highlight, so over-processed areas appear on the surface of the processed watermelon. Yang et al. eliminate highlight by local diffuse reflection diffusion, which works for small highlight areas but cannot remove larger ones, such as the specular spots on Watermelon. For the highlight image Fish, the methods of Yang and Liu et al. over-process the framed area, and the methods of Yang et al. and Ren et al. produce unsmooth results in the framed area. The method of the invention not only removes highlight spots in smooth areas but also preserves image texture better than the other methods.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (9)

1. A highlight image restoration method based on hue, comprising:
step 1) estimating the illumination chromaticity of an image;
step 2) obtaining the hue information of the image, and using it to perform a clustering operation on the image pixels;
step 3) in each class, separating diffuse reflection and specular reflection pixel by pixel according to the distance from each pixel to the illumination chromaticity, to obtain a diffuse reflection image with the specular reflection eliminated.
2. The method of claim 1, wherein estimating the illumination chromaticity of the image comprises:
step 1-1) removing the specular reflection component using global diffuse reflection information according to the dichromatic reflection model, to obtain a chromaticity image;
step 1-2) since pixels with the same diffuse reflection chromaticity gather on the same straight line, an image contains several straight lines with different diffuse reflection chromaticities; the intersection point of these straight lines is calculated and taken as the light source chromaticity of the image.
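As a hedged illustration of step 1-2: if each diffuse-chromaticity line is represented by a point on it and a unit direction, their common intersection (the light source chromaticity, in a 2-D chromaticity plane) can be estimated in the least-squares sense. This generic line-intersection sketch is an assumption for illustration, not the patent's exact procedure:

```python
import numpy as np

def intersect_lines(points, directions):
    """Least-squares intersection of 2-D lines given as (point, unit direction)
    pairs: finds the point minimizing the sum of squared perpendicular
    distances to all lines (requires at least two non-parallel lines)."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(points, directions):
        d = np.asarray(d, float) / np.linalg.norm(d)
        # Projector onto the normal space of the line: I - d d^T
        P = np.eye(2) - np.outer(d, d)
        A += P
        b += P @ np.asarray(p, float)
    return np.linalg.solve(A, b)
```

For example, a horizontal line through (0, 1/3) and a vertical line through (1/3, 0) intersect at (1/3, 1/3), which would correspond to a white light source in normalized chromaticity coordinates.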
3. The method according to claim 2, wherein said obtaining a chromaticity image comprises:
a. according to the dichromatic reflection model, the color of a pixel on an object is a linear combination of diffuse reflection and specular reflection:

I(x) = D(x) + S(x) = m_d(x)Λ(x) + m_s(x)Γ(x) (1)

where m_d(x) and m_s(x) are the diffuse and specular reflection coefficients, respectively, which depend on the position of the pixel in the scene and the light source intensity; Λ(x) denotes the diffuse reflection chromaticity, determined by the material properties of the object itself; Γ(x) denotes the specular reflection chromaticity, which is determined by the light source and is generally taken as the light source chromaticity;
b. dividing each image pixel value by the sum of its three channel values gives the chromaticity image σ(x):

σ(x) = I(x) / Σ_{c∈{r,g,b}} I_c(x) (2)

substituting formula (1) into formula (2) yields:

σ(x) = (m_d(x)Λ(x) + m_s(x)Γ(x)) / (m_d(x) + m_s(x)) (3)

c. the reflection chromaticities are normalized to 1, i.e. Σ_{c∈{r,g,b}} Λ_c(x) = 1 and Σ_{c∈{r,g,b}} Γ_c(x) = 1, so Σ_{c∈{r,g,b}} I_c(x) = m_d(x) + m_s(x), and equation (3) can be further written as:

σ(x) = μ(x)Λ(x) + (1 − μ(x))Γ(x) (4)

where μ(x) = m_d(x) / (m_d(x) + m_s(x)).
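A minimal sketch of the chromaticity normalization in formula (2), assuming a float H x W x 3 image:

```python
import numpy as np

def chromaticity_image(img):
    """Per-pixel chromaticity: divide each pixel by the sum of its three
    channels (formula (2)), so the channels of sigma(x) sum to 1."""
    s = img.sum(axis=2, keepdims=True)
    s = np.where(s == 0, 1.0, s)  # guard against division by zero on black pixels
    return img / s
```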
4. A method for restoring a highlight image based on hue according to claim 1, wherein said performing a clustering operation comprises:
step 2-1) normalizing the hue H(x) to [0, 1];
step 2-2) defining the hue difference ΔH_new between two points and using it to cluster the image pixels, so as to divide them into different clusters;
step 2-3) assigning a label to each pixel in a cluster, calculating the mean value of every cluster, and, taking these means as initial values, clustering the image pixels again with a k-nearest-neighbor classification algorithm.
5. A method for restoring a highlight image based on hue according to claim 4, wherein said clustering the image pixels with the hue difference ΔH_new between two points comprises:

ΔH_new = min(ΔH, 1 − ΔH) (5)

where ΔH is the hue difference between the two normalized points; if the hue difference ΔH_new between two pixels is less than a threshold T, they belong to the same cluster; otherwise they are divided into different clusters.
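The threshold clustering of claim 5 could be sketched as follows; the greedy single-pass strategy and the value T = 0.05 are assumptions for illustration, not specified by the patent:

```python
import numpy as np

def hue_clusters(H, T=0.05):
    """Greedy clustering of normalized hues H in [0, 1] using the circular
    hue difference dH_new = min(dH, 1 - dH) of formula (5).
    Returns an integer cluster label per pixel."""
    labels = -np.ones(len(H), dtype=int)
    centers = []
    for i, h in enumerate(H):
        for k, c in enumerate(centers):
            d = abs(h - c)
            if min(d, 1.0 - d) < T:   # circular hue difference below threshold
                labels[i] = k
                break
        else:                          # no existing cluster is close enough
            centers.append(h)
            labels[i] = len(centers) - 1
    return labels
```

Note that hues 0.99 and 0.0 fall in the same cluster, since the circular difference min(0.99, 0.01) = 0.01 is below the threshold.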
6. The method of claim 1, wherein said separating diffuse reflection and specular reflection pixel by pixel comprises:
step 3-1) calculating the distance between the pixel chromaticity and the illumination chromaticity to obtain the fusion coefficient of diffuse reflection and specular reflection;
step 3-2) performing bilateral filtering on the fusion coefficient μ(x), instead of filtering the recovered image;
step 3-3) converting the diffuse reflection component of the chromaticity space back to the RGB color space, to obtain a diffuse reflection image without highlight.
7. The method according to claim 6, wherein obtaining the fusion coefficient of diffuse reflection and specular reflection comprises:
in the normalized RGB space, all pixels of the image lie within a sphere centered on the illumination chromaticity Γ(x); the distance r(x) from a pixel chromaticity to the illumination chromaticity is defined as

r(x) = ||σ(x) − Γ(x)||_2 (8)

where σ(x) − Γ(x) is the direction vector from the chromaticity image σ(x) to the light source chromaticity Γ(x); the illumination chromaticity Γ(x) is fixed for the same image, so for a given diffuse reflection chromaticity Λ(x) the distance r(x) is determined only by μ(x); the smaller r(x), the closer the pixel chromaticity is to the light source chromaticity and the more likely the pixel belongs to a highlight region, so the size of the distance r(x) determines the specular reflection contribution of the pixel;
since, by formula (4), σ(x) − Γ(x) = μ(x)(Λ(x) − Γ(x)), for any given pixel cluster r(x) depends on μ(x):

r(x) = μ(x)||Λ(x) − Γ(x)||_2, 0 ≤ μ(x) ≤ 1 (9)

when μ(x) = 1, the pixel contains only the diffuse reflection chromaticity, and the corresponding pixel lies at the farthest distance r_max(x):

r_max(x) = ||Λ(x) − Γ(x)||_2 (10)

where the maximum distance is estimated within each cluster class CL; according to (8) and (9), the fusion coefficient μ(x) of each point can then be estimated pixel by pixel:

μ(x) = r(x) / r_max(x) (11)
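Formulas (8)-(11) can be sketched as follows, assuming pixels are given as an N x 3 chromaticity array with precomputed cluster labels from the hue clustering step:

```python
import numpy as np

def fusion_coefficient(sigma, gamma, labels):
    """Estimate mu(x) = r(x) / r_max per pixel.
    sigma: N x 3 chromaticity pixels, gamma: illumination chromaticity (3,),
    labels: cluster index per pixel."""
    r = np.linalg.norm(sigma - gamma, axis=1)   # distance to light source, eq. (8)
    mu = np.zeros_like(r)
    for k in np.unique(labels):
        m = labels == k
        r_max = r[m].max()                      # farthest pixel ~ pure diffuse, eq. (10)
        if r_max > 0:
            mu[m] = r[m] / r_max                # eq. (11)
    return np.clip(mu, 0.0, 1.0)
```

A pure diffuse pixel (σ = Λ) gets μ = 1, while an even mixture σ = 0.5Λ + 0.5Γ gets μ = 0.5, consistent with formula (9).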
8. The method according to claim 6, wherein said bilateral filtering of the fusion coefficient μ(x) comprises:
performing bilateral filtering on the fusion coefficient μ(x) according to formula (12), which avoids excessive damage to the detail of the image and effectively improves the quality of the image after highlight removal:

μ'(i, j) = Σ_{(k,l)∈S(i,j)} w(i, j, k, l) μ(k, l) / Σ_{(k,l)∈S(i,j)} w(i, j, k, l) (12)

where S(i, j) is the (2N+1) × (2N+1) window centered on (i, j), μ(k, l) is an input point, w(i, j, k, l) = Ws × Wr, and Ws and Wr are the spatial-domain kernel and the range kernel, respectively; Ws is determined by the Euclidean distance between the filter center pixel and the other pixel positions in the filter window:

Ws = exp(−((i − k)^2 + (j − l)^2) / (2σ_s^2))

Wr is determined by the difference between the value of the filter center pixel and the values of the other pixels in the filter window:

Wr = exp(−(μ(i, j) − μ(k, l))^2 / (2σ_r^2))

experiments show that the filtering effect is best when the filter radius N is set to 2, the spatial-domain variance σ_s is set to 5, and the range variance σ_r is set to 0.9.
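A direct (unoptimized) sketch of the bilateral filtering in formula (12), with the parameter values recommended in claim 8 as defaults; the kernels here interpret σ_s and σ_r as the Gaussian parameters in exp(−d²/2σ²):

```python
import numpy as np

def bilateral_filter_mu(mu, N=2, sigma_s=5.0, sigma_r=0.9):
    """Bilateral filtering of the fusion-coefficient map mu (H x W floats).
    N: filter radius; sigma_s / sigma_r: spatial and range parameters."""
    H, W = mu.shape
    pad = np.pad(mu, N, mode="edge")
    out = np.zeros_like(mu)
    # The spatial kernel Ws depends only on the window offsets, so precompute it.
    yy, xx = np.mgrid[-N:N + 1, -N:N + 1]
    Ws = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_s ** 2))
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * N + 1, j:j + 2 * N + 1]
            # Range kernel Wr: penalize value differences from the center pixel.
            Wr = np.exp(-(patch - mu[i, j]) ** 2 / (2 * sigma_r ** 2))
            w = Ws * Wr
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

On a constant map the filter is an identity, since every weight multiplies the same value; near edges the range kernel Wr suppresses contributions from dissimilar values, which is what preserves detail in μ(x).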
9. The method of claim 6, wherein said converting the diffuse reflection component of the chromaticity space back to the RGB color space comprises:
the diffuse reflection chromaticity component Λ(x) of the chromaticity space is obtained by the following formula:

Λ(x) = (σ(x) − (1 − μ(x))Γ(x)) / μ(x) (13)

according to formula (2), converting Λ(x) back to the RGB color space, the diffuse reflection image after eliminating the highlight is:

D(x) = μ(x)Λ(x) Σ_{c∈{r,g,b}} I_c(x) (14)

and the specular reflection image is:

S(x) = I(x) − D(x). (15)
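Formulas (13)-(15) can be combined into a small separation routine; this is an illustrative sketch assuming μ(x) has already been estimated and filtered:

```python
import numpy as np

def separate_reflections(img, mu, gamma, eps=1e-6):
    """Recover the diffuse image D(x) and specular image S(x).
    img: H x W x 3, mu: H x W fusion coefficients,
    gamma: illumination chromaticity (3,), channels summing to 1."""
    s = img.sum(axis=2, keepdims=True)
    sigma = img / np.where(s == 0, 1.0, s)         # chromaticity image, eq. (2)
    m = np.clip(mu, eps, 1.0)[..., None]           # avoid division by zero
    lam = (sigma - (1.0 - m) * gamma) / m          # diffuse chromaticity, eq. (13)
    D = m * lam * s                                # diffuse image, eq. (14)
    S = img - D                                    # specular image, eq. (15)
    return D, S
```

As a consistency check: building a synthetic pixel I = m_d·Λ + m_s·Γ with m_d = 3, m_s = 1 (so μ = 0.75) recovers D = m_d·Λ and S = m_s·Γ exactly.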
CN202110986703.6A 2021-08-26 2021-08-26 Highlight image restoration method based on tone Withdrawn CN113793274A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110986703.6A CN113793274A (en) 2021-08-26 2021-08-26 Highlight image restoration method based on tone


Publications (1)

Publication Number Publication Date
CN113793274A true CN113793274A (en) 2021-12-14

Family

ID=78876413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110986703.6A Withdrawn CN113793274A (en) 2021-08-26 2021-08-26 Highlight image restoration method based on tone

Country Status (1)

Country Link
CN (1) CN113793274A (en)


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Zhen (张箴): "基于色调约束的镜面反射分离" [Specular Reflection Separation Based on Hue Constraint], 《模式识别与人工智能》 [Pattern Recognition and Artificial Intelligence], vol. 34, no. 8, pages 742-749 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116152249A (en) * 2023-04-20 2023-05-23 济宁立德印务有限公司 Intelligent digital printing quality detection method
CN116152249B (en) * 2023-04-20 2023-07-07 济宁立德印务有限公司 Intelligent digital printing quality detection method
CN116297463A (en) * 2023-05-16 2023-06-23 四川省港奇电子有限公司 Power adapter shell injection molding detection method, system and device
CN116297463B (en) * 2023-05-16 2023-08-01 四川省港奇电子有限公司 Power adapter shell injection molding detection method, system and device
CN117474921A (en) * 2023-12-27 2024-01-30 中国科学院长春光学精密机械与物理研究所 Anti-noise light field depth measurement method, system and medium based on specular highlight removal
CN117474921B (en) * 2023-12-27 2024-05-07 中国科学院长春光学精密机械与物理研究所 Anti-noise light field depth measurement method, system and medium based on specular highlight removal


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20211214