CN113469895B - Image highlight removing method based on color partition - Google Patents


Info

Publication number: CN113469895B (application CN202110519805.7A)
Authority: CN (China)
Prior art keywords: image, pixel, color, highlight
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN113469895A
Inventors: 徐尚龙, 叶鑫龙, 颛孙壮志, 郑师晨
Current and original assignee: University of Electronic Science and Technology of China (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by University of Electronic Science and Technology of China
Priority to CN202110519805.7A

Classifications

    • G06T 5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement (under G06T 5/00 Image enhancement or restoration)
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches (under G06F 18/00 Pattern recognition)
    • G06T 7/11 Region-based segmentation (under G06T 7/00 Image analysis)
    • G06T 7/90 Determination of colour characteristics (under G06T 7/00 Image analysis)
    • G06T 2207/10024 Color image (indexing scheme: image acquisition modality)
    • G06T 2207/20021 Dividing image into blocks, subimages or windows (indexing scheme: special algorithmic details)


Abstract

The invention discloses an image highlight removal method based on color partitioning, belonging to the field of image processing. For highlight images captured under a white light source (or corrected to one), the method partitions the image by color in the HSV color space; within each color partition, it distinguishes diffuse-reflection pixels from pixels containing specular reflection according to the characteristic behaviour of highlight pixels in HSV space; it then separates the specular reflection component in the RGB color space using the diffuse-reflection intensity ratio, and merges the processing results of the color partitions to obtain a highlight-free image. The invention effectively removes highlights without changing the color information of the highlighted regions, while preserving the texture detail of the highlight areas as far as possible.

Description

Image highlight removing method based on color partition
Technical Field
The invention belongs to the field of image processing, and relates to an image highlight removing method based on color partition.
Background
In real life, a captured target image often contains highlight regions, caused by the high surface smoothness of the photographed object, heterogeneous materials, uneven illumination from the light source, and similar factors. A highlight mainly reflects the characteristics of the light source; it weakens or even masks the color and texture features of the object surface and degrades the quality of the target image. Research on how to accurately remove image highlights and restore the image information is therefore of real significance.
Shafer first proposed separating the reflection components of a single image based on the dichromatic reflection model. The diffuse and specular reflection components exhibit the following characteristics: the maximum diffuse chromaticity generally differs between object surfaces of different colors, while within a surface of a single color, pixels differ in gray value and saturation according to their specular reflection weights.
Shen proposed the modified specular-free (MSF) image, an improvement on the specular-free (SF) image; the chromaticity computed from the MSF image is more robust than that computed from the SF image of Tan et al. The specular and diffuse reflection components are then separated by a least-squares method to remove the highlights in the image. Shen et al. further proposed clustering the pixels in the minimum-maximum chromaticity space of the MSF image and computing the specular component from the intensity ratio of the clustered pixels, thereby removing the image highlights. These methods do not distinguish or treat the saturated highlight regions separately, so they perform well only on weakly reflective objects. Saturated highlight regions must instead be handled by image inpainting; the inpainting algorithm based on the fast marching method proposed by Telea is simpler and faster than inpainting methods based on variational equations.
Disclosure of Invention
The invention aims to provide an image highlight removal method based on color partitioning that combines the characteristics of the diffuse and specular reflection components with those of the MSF image in the HSV color space, performs intensity-ratio-based specular component separation within each color partition, and repairs saturated highlights with a fast-marching-based method. The method is widely applicable and largely preserves the detail of images with rich texture or saturated pixels.
The technical scheme adopted by the invention is as follows:
An image highlight removal method based on color partitioning comprises the following processing steps between the input image and the output image:
S1, color partitioning: the input highlight image is converted from RGB color space to HSV color space and partitioned according to the value range of the image hue H component, dividing the image into 12 color regions;
S2, pixel classification: each color-partition image is classified according to the characteristic behaviour of highlight pixels in HSV color space, distinguishing diffuse-reflection pixels from pixels containing specular reflection;
S3, highlight removal: the specular reflection component of each color partition is computed from the diffuse intensity ratio; saturated highlight pixels are identified from the specular component intensity values; the specular component of the remaining pixels is removed with the dichromatic reflection model, and the saturated highlight pixels are repaired with a fast-marching-based method, yielding a highlight-free image for each color partition;
S4, finally, the processed images of the color partitions are merged in RGB space to obtain the overall highlight-free image.
Further: in the S1, an input highlight image is converted from an RGB color space to an HSV color space through a formula (1);
h = 60° × (g - b)/(max - min), if max = r (taken mod 360°)
h = 60° × (b - r)/(max - min) + 120°, if max = g
h = 60° × (r - g)/(max - min) + 240°, if max = b
s = (max - min)/max (s = 0 when max = 0)
v = max (1)
where (r, g, b) are the red, green and blue coordinates of RGB space, each in the range [0, 1]; max and min are the maximum and minimum values over a pixel's three channels (r, g, b). In HSV color space the hue h ranges over [0°, 360°), the saturation s over [0, 1] and the brightness v over [0, 1];
the image is divided into 12 color regions according to the hue range [0°, 360°); the S and V components both take values in the range [0, 1].
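As a concrete illustration, the RGB-to-HSV conversion of formula (1) and the hue-based partitioning can be sketched with the standard-library `colorsys` module. The equal 30° hue sectors used here are an assumption for illustration only, since the patent's own partition table is not reproduced in the text:

```python
import colorsys

def classify_hue(r, g, b, n_partitions=12):
    """Convert one RGB pixel (channels in [0, 1]) to HSV and return
    (hue in degrees, s, v, partition index).  Equal 30-degree hue
    sectors are assumed here for illustration; the patent's own
    partition table is not reproduced in the text."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)   # h in [0, 1)
    h_deg = h * 360.0
    partition = int(h_deg // (360.0 / n_partitions)) % n_partitions
    return h_deg, s, v, partition

# A pure red pixel has hue 0 and falls in partition 0.
print(classify_hue(1.0, 0.0, 0.0))
```

In a full implementation this mapping would be applied per pixel and the pixels of each partition collected into the sub-images processed in S2 and S3.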
Further: the specific method for classifying the pixels in the step S2 is as follows:
In the HSV color space of each color-partition image, pixels satisfying condition 1 are classified as diffuse-reflection pixels; pixels satisfying condition 2 are classified as strong-reflection pixels; the remaining pixels, satisfying neither condition, are classified as weak-reflection pixels, where:
Condition 1: v(1 - s) ≤ Th_1 (2)
Condition 2: formula (3) (the expression is reproduced only as an equation image in the source)
where v is the brightness component and s the saturation component of the HSV color space, and Th_1 is a preset decision threshold with range [0, 1].
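A minimal sketch of this three-way classification, assuming the HSV values are already computed: condition 1 is taken directly from formula (2) (note that v·(1 - s) equals the pixel's minimum RGB channel), while condition 2's expression appears only as an equation image in the source, so it is supplied as a caller-provided predicate rather than guessed:

```python
def classify_pixel(v, s, th1, condition2):
    """Classify one HSV pixel into the three classes of step S2.

    Condition 1 (v * (1 - s) <= th1) marks diffuse-reflection pixels;
    note that v * (1 - s) equals the pixel's minimum RGB channel.
    Condition 2's expression is only an equation image in the source,
    so it is supplied by the caller as a predicate on (v, s) -- an
    assumption of this sketch, not the patent's actual formula.
    """
    if v * (1.0 - s) <= th1:
        return "diffuse"
    if condition2(v, s):
        return "strong_specular"
    return "weak_specular"

# Example: a dark, saturated pixel counts as diffuse under condition 1.
print(classify_pixel(0.5, 0.9, 0.1, lambda v, s: False))
```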
Further: the specific method for removing the highlight in the step S3 is as follows:
S3.1: in the RGB color space of the color-partition image, the light-source color is normalized so that the specular reflection chromaticity of highlight pixels is γ = [1/3, 1/3, 1/3]^T; the specular reflection component I_s(x) of the highlight image is then computed from the diffuse intensity ratio R_d(x) of each color partition by solving the simultaneous equations of formula (4):
I_max(x) = m_d(x)·β_max + m_s(x)·γ
I_min(x) = m_d(x)·β_min + m_s(x)·γ
I_ran(x) = I_max(x) - I_min(x) (4)
where I_ran(x) is the intensity range of a single pixel, I_max(x) and I_min(x) are the maximum and minimum intensity values of the pixel's three channels (r, g, b), m_d(x) is the diffuse-reflection weight coefficient, β_max and β_min are respectively the maximum and minimum diffuse chromaticities in the pixel, m_s(x) is the specular-reflection weight coefficient, and γ is the specular chromaticity.
The specular reflection component of the highlight image is then
I_s(x) = I_max(x) - R̃_d · I_ran(x) (5)
where R̃_d is the median of the diffuse intensity ratios R_d(x) = I_max(x)/I_ran(x) taken over the diffuse-reflection pixels of the same color partition;
S3.2: condition 3 is applied to the specular reflection components of the strong-reflection and weak-reflection pixels, and pixels satisfying condition 3 are marked as saturated highlights, where:
Condition 3: I_s(x) ≤ Th_2 (6)
For pixels that do not satisfy condition 3, the specular reflection component is subtracted from the pixel intensity according to the dichromatic reflection model, giving the diffuse-reflection image of the color partition:
I_d(x) = I(x) - I_s(x) = m_d(x)·β (7)
Pixels that satisfy condition 3 and are marked as saturated highlights are processed with the fast-marching-based image inpainting method.
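The intensity-ratio separation of S3.1 can be sketched as follows, under the assumption (taken from the intensity-ratio literature the patent builds on, not stated explicitly in the text) that the diffuse intensity ratio is I_max/I_ran and the specular estimate is I_max - R̃_d·I_ran:

```python
import statistics

def specular_component(pixels, diffuse_mask):
    """Per-pixel specular estimate via the diffuse intensity ratio.

    `pixels` is a list of (r, g, b) intensities in [0, 1];
    `diffuse_mask` flags the pixels already classified as diffuse in
    S2.  The ratio I_max / I_ran is taken per diffuse pixel, its
    median over the partition serves as R_d, and the specular part of
    every pixel is estimated as I_max - R_d * I_ran, clipped at 0.
    The patent's equations (4)-(5) are equation images, so this exact
    decomposition is an assumption of the sketch.
    """
    ratios = []
    for (r, g, b), is_diffuse in zip(pixels, diffuse_mask):
        i_max, i_min = max(r, g, b), min(r, g, b)
        i_ran = i_max - i_min
        if is_diffuse and i_ran > 0:
            ratios.append(i_max / i_ran)
    r_d = statistics.median(ratios)          # the median ratio R_d
    spec = []
    for r, g, b in pixels:
        i_max, i_min = max(r, g, b), min(r, g, b)
        spec.append(max(0.0, i_max - r_d * (i_max - i_min)))
    return spec
```

Subtracting each pixel's specular estimate from its three channels, as in formula (7), then yields the diffuse image of the partition.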
Further: the fast-travelling-based image restoration method proposed by Telea in S3.2 is specifically as follows:
s3.2.1 the pixel points of the color partitioned image of the RGB color space are divided into three classes:
band, the pixel is the boundary part of the pixel which does not meet the condition 3 in S3 and the marked saturated high-light pixel, and belongs to the boundary of the area to be repaired
Figure GDA0003247777060000041
Is known, its pixel intensity value is known;
inservice, the pixels are marked saturated high-light parts and belong to the boundary of the area to be repaired
Figure GDA0003247777060000042
An internal pixel point whose pixel intensity value is unknown;
known that the type of pixel belongs to the boundary of the region to be repaired
Figure GDA0003247777060000043
An external pixel whose pixel intensity value is known;
S3.2.2, initialization: the T value of Known and Band pixels is set to 0 and that of Inside pixels to 10^6;
S3.2.3, diffusion: the T values of the Band class are sorted and the pixel with the minimum T value, denoted p(i, j), is moved to the Known class; the neighbourhood of p(i, j) is traversed, and each neighbouring point q belonging to the Inside class has its intensity computed by formula (10) for repair, its T value updated by formula (9), and is moved to the Band class; these steps are repeated until the Inside class contains no saturated highlight pixels;
where the T value of each pixel in the region Ω to be repaired is its distance to the initial boundary ∂Ω, obtained from the Eikonal equation:
|∇T| = 1 in Ω, with T = 0 on ∂Ω (8)
max(D^{-x}T, -D^{+x}T, 0)^2 + max(D^{-y}T, -D^{+y}T, 0)^2 = 1 (9)
where, in formula (9),
D^{-x}T(i,j) = T(i,j) - T(i-1,j)
D^{+x}T(i,j) = T(i+1,j) - T(i,j)
D^{-y}T(i,j) = T(i,j) - T(i,j-1)
D^{+y}T(i,j) = T(i,j+1) - T(i,j)
Wherein the method comprises the steps of
T (i, j) =min (T1, T2..tn), t1=solve (f (i-1, j), f (i, j-1), T (i-1, j), T (i, j-1)), and the rest, n is the neighborhood pixel count;
I(q) = Σ_{i∈B_ε(q)} w(q,i) · [I(i) + ∇I(i) · (q - i)] / Σ_{i∈B_ε(q)} w(q,i) (10)
In formula (10), w(q, i) is a weight function and I(i) is the known intensity value of pixel i within the neighbourhood B_ε(q).
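The discrete Eikonal update of formula (9) reduces, for one pair of axis neighbours, to a small quadratic solve. The sketch below follows the usual fast-marching `solve` step with unit grid spacing; it is an illustration, not the patent's exact routine:

```python
import math

def solve_step(t1, t2, big=1e6):
    """One discrete Eikonal update (formula (9)): given the arrival
    times of two axis neighbours, return a candidate T for the centre
    pixel, with grid spacing and propagation speed both equal to 1.
    `big` marks neighbours whose T is still unknown (the Inside
    initialisation value 10**6 of step S3.2.2)."""
    if t1 >= big and t2 >= big:
        return big                       # no usable neighbour yet
    if abs(t1 - t2) >= 1.0:
        return min(t1, t2) + 1.0         # one-sided update
    d = 2.0 - (t1 - t2) ** 2             # discriminant of the quadratic
    return (t1 + t2 + math.sqrt(d)) / 2.0
```

T(i, j) is then the minimum of this candidate over the four neighbour pairs, as in the min(T1, …, Tn) expression above.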
In summary, due to the adoption of the technical scheme, the beneficial effects of the invention are as follows:
the method provided by the invention has simple whole flow and easy implementation, can solve the problem that the saturated highlight treatment is invalid in the traditional highlight removal method, and can sufficiently eliminate highlight components with different degrees. When the texture is rich, the reflection is weaker or the texture is weaker, the color information of the image target can be better kept when the object with stronger reflection is reflected, and the characteristics of the image target are kept.
Drawings
For a clearer description of the technical solutions of embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and should not be considered limiting in scope, and other related drawings can be obtained according to these drawings without inventive effort for a person skilled in the art, wherein:
FIG. 1 is a schematic flow diagram of a highlight removal method implementation;
FIG. 2 is a schematic illustration of image restoration based on a fast-marching method;
FIG. 3 is an image processing intermediate process diagram, wherein (a), (c), (e) are color partition images, and (b), (d), (f) are color partition images after highlight removal;
fig. 4 (a) shows an original image to be input, and (b) shows an output image from which high light is removed.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the particular embodiments described herein are illustrative only and are not intended to limit the invention, i.e., the embodiments described are merely some, but not all, of the embodiments of the invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
It is noted that relational terms such as "first" and "second", and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The features and capabilities of the present invention are described in further detail below in connection with examples.
Example 1
Highlight removal is performed on the input highlight image; the original image is shown in fig. 4 (a) and the overall flow in fig. 1.
S1, color partitioning: the input highlight image is converted from RGB color space to HSV color space and partitioned by the value range of the hue H component into 12 color regions; merging these regions gives the color-partition images of fig. 3 (a), (c) and (e).
The S1 specifically comprises the following steps:
the image is converted from RGB space to HSV color space according to equation (1).
h = 60° × (g - b)/(max - min), if max = r (taken mod 360°)
h = 60° × (b - r)/(max - min) + 120°, if max = g
h = 60° × (r - g)/(max - min) + 240°, if max = b
s = (max - min)/max (s = 0 when max = 0)
v = max (1)
where (r, g, b) are the red, green and blue coordinates of RGB space, each in the range [0, 1]; max and min are the maximum and minimum values over a pixel's three channels (r, g, b). In HSV color space the hue h ranges over [0°, 360°), the saturation s over [0, 1] and the brightness v over [0, 1].
The image is divided into 12 color regions according to the hue range [0°, 360°). The H-component range of each color region is given in a table that appears only as an image in the source; the S and V components take values in [0, 1].
S2, classifying pixels: and classifying each color partition image according to the characteristic trend of the highlight pixels in the HSV color space, and distinguishing diffuse reflection pixels from specular reflection pixels.
The step S2 is specifically as follows:
in the image HSV color space of each color partition, the pixels satisfying the condition 1 are divided into diffuse reflection pixels; the pixels satisfying the condition 2 are divided into strongly reflective pixels; the remaining pixels that do not satisfy the conditions 1, 2 are divided into weakly reflective pixels.
Condition 1:
v(1-s)≤Th 1 (2)
condition 2:
Figure GDA0003247777060000071
where v is the brightness component of the HSV color space, s is the saturation component of the HSV color space, and Th 1 Is a set judgment threshold value, the range of which is [0,1]。
S3, highlight removal: the specular reflection component of each color partition is computed from the diffuse intensity ratio; saturated highlight pixels are identified from the specular component intensity values; the specular component of the remaining pixels is removed with the dichromatic reflection model, and the saturated highlight pixels are repaired with the fast marching method (FMM). This yields the highlight-free image of each color partition, as in fig. 3 (b), (d) and (f).
The step S3 is specifically as follows:
S3.1: in the RGB color space of the color-partition image, the light-source color is normalized so that the specular reflection chromaticity of highlight pixels is γ = [1/3, 1/3, 1/3]^T; the specular reflection component I_s(x) of the highlight image is then computed from the diffuse intensity ratio R_d(x) of each color partition by solving the simultaneous equations of formula (4):
I_max(x) = m_d(x)·β_max + m_s(x)·γ
I_min(x) = m_d(x)·β_min + m_s(x)·γ
I_ran(x) = I_max(x) - I_min(x) (4)
where I_ran(x) is the intensity range of a single pixel, I_max(x) and I_min(x) are the maximum and minimum intensity values of the pixel's three channels (r, g, b), m_d(x) is the diffuse-reflection weight coefficient, β_max and β_min are respectively the maximum and minimum diffuse chromaticities in the pixel, m_s(x) is the specular-reflection weight coefficient, and γ is the specular chromaticity.
The specular reflection component of the highlight image is then
I_s(x) = I_max(x) - R̃_d · I_ran(x) (5)
where R̃_d is the median of the diffuse intensity ratios R_d(x) = I_max(x)/I_ran(x) taken over the diffuse-reflection pixels of the same color partition.
S3.2, judging specular reflection components of the strong reflection pixel and the weak reflection pixel by adopting a condition 3, and marking the pixel conforming to the condition 3 as saturated high light.
Condition 3:
I s (x)≤Th 2 (6)
and for the pixels which do not meet the condition 3, removing specular reflection components from the pixel intensity in the image according to the bicolor reflection model to obtain the image with diffuse emission of the color subareas.
I d (x)=I(x)-I s (x)=m s (x)β (7)
Pixels satisfying condition 3 and marked as saturated highlights form the region to be repaired and are processed with the fast marching method (FMM) based image inpainting, as shown in fig. 2.
The FMM-based image inpainting in S3.2 processes the saturated highlight pixels as follows:
S3.2.1: the pixels of the color-partition image in RGB color space are divided into three classes:
Band: pixels on the boundary ∂Ω of the region to be repaired, i.e. the border between the pixels that do not satisfy condition 3 in S3 and the marked saturated highlight pixels; their intensity values are known.
Inside: the marked saturated highlight pixels, lying inside the boundary ∂Ω of the region to be repaired; their intensity values are unknown.
Known: pixels outside the boundary ∂Ω of the region to be repaired; their intensity values are known.
S3.2.2, initialization: the T value of Known and Band pixels is set to 0 and that of Inside pixels to 10^6.
S3.2.3, diffusion: the T values of the Band class are sorted and the pixel with the minimum T value, denoted p(i, j), is moved to the Known class; the neighbourhood of p(i, j) is traversed, and each neighbouring point q belonging to the Inside class has its intensity computed by formula (10) for repair, its T value updated by formula (9), and is moved to the Band class. These steps are repeated until the Inside class contains no saturated highlight pixels.
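The Band/Known/Inside bookkeeping of S3.2.2-S3.2.3 can be sketched with a priority queue in place of the sorted Band list. For brevity this sketch propagates T with a simple min-neighbour-plus-one update rather than the full quadratic solve of formula (9), so it illustrates the narrow-band mechanics only:

```python
import heapq

def fmm_distances(inside, shape, big=1e6):
    """Narrow-band propagation of the T field over a pixel grid.

    `inside` is the set of (i, j) pixels to repair; every other pixel
    is treated as Known with T = 0.  Returns the arrival time of each
    repaired pixel.  The update is simplified to min neighbour T + 1
    (a stand-in for the Eikonal solve of formula (9))."""
    h, w = shape
    T = {p: big for p in inside}
    band = []
    for (i, j) in inside:                  # seed the Band: pixels touching the exterior
        for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
            if 0 <= ni < h and 0 <= nj < w and (ni, nj) not in inside:
                heapq.heappush(band, (1.0, i, j))
                break
    while band:
        t, i, j = heapq.heappop(band)
        if t >= T[(i, j)]:
            continue                       # stale heap entry
        T[(i, j)] = t                      # p(i, j) becomes Known
        for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
            q = (ni, nj)
            if q in T and T[q] > t + 1.0:  # relax Inside neighbours into the Band
                heapq.heappush(band, (t + 1.0, ni, nj))
    return T
```

In a full inpainter, each pop would also repair the pixel's intensity with formula (10) before moving it to the Known class.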
where the T value of each pixel in the region Ω to be repaired is its distance to the initial boundary ∂Ω, obtained from the Eikonal equation:
|∇T| = 1 in Ω, with T = 0 on ∂Ω (8)
max(D^{-x}T, -D^{+x}T, 0)^2 + max(D^{-y}T, -D^{+y}T, 0)^2 = 1 (9)
where, in formula (9),
D^{-x}T(i,j) = T(i,j) - T(i-1,j)
D^{+x}T(i,j) = T(i+1,j) - T(i,j)
D^{-y}T(i,j) = T(i,j) - T(i,j-1)
D^{+y}T(i,j) = T(i,j+1) - T(i,j)
where
T(i,j) = min(T1, T2, …, Tn), with T1 = solve(f(i-1,j), f(i,j-1), T(i-1,j), T(i,j-1)) and the remaining Tk defined analogously over the other neighbour pairs, n being the neighbourhood pixel count.
I(q) = Σ_{i∈B_ε(q)} w(q,i) · [I(i) + ∇I(i) · (q - i)] / Σ_{i∈B_ε(q)} w(q,i) (10)
In formula (10), w(q, i) is a weight function and I(i) is the known intensity value of pixel i within the neighbourhood B_ε(q).
w(q,i) = dst(q,i) · lev(q,i) · dir(q,i)
dst(q,i) = d0² / ‖q - i‖²
lev(q,i) = T0 / (1 + |T(i) - T(q)|)
dir(q,i) = ((q - i) / ‖q - i‖) · N(q) (11)
In formula (11), dst(q,i) is the distance factor, lev(q,i) the level-set factor and dir(q,i) the direction factor, with d0 and T0 reference scales and N denoting the normal direction of the isochrones of T.
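A sketch of the weight function of formula (11), with the central-difference gradient of T at q standing in for the isochrone normal N (a discretisation choice assumed here, not spelled out in the text):

```python
import math

def telea_weight(q, i, T, d0=1.0, t0=1.0):
    """Weight w(q, i): product of distance, level-set and direction
    factors (formula (11)).  q is the pixel being repaired, i a known
    neighbour, and T maps pixel coordinates to arrival times; T must
    contain q's four axis neighbours so the gradient can be taken.
    d0 and t0 are the reference distance and T scales."""
    dx, dy = q[0] - i[0], q[1] - i[1]
    dist2 = dx * dx + dy * dy
    dst = d0 * d0 / dist2                          # distance factor
    lev = t0 / (1.0 + abs(T[i] - T[q]))            # level-set factor
    # direction factor: alignment of (q - i) with the T gradient at q
    gx = (T[(q[0] + 1, q[1])] - T[(q[0] - 1, q[1])]) / 2.0
    gy = (T[(q[0], q[1] + 1)] - T[(q[0], q[1] - 1)]) / 2.0
    norm = math.hypot(gx, gy) * math.sqrt(dist2)
    dir_f = abs(dx * gx + dy * gy) / norm if norm > 0 else 1.0
    return dst * lev * dir_f
```

Feeding these weights into the sum of formula (10) gives the repaired intensity of each Inside pixel.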
S4: the processed images of the color-partition RGB spaces are merged to obtain the overall image after highlight removal, as shown in fig. 4 (b).
The above description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and modifications within the spirit and principles of the invention will become apparent to those skilled in the art.

Claims (3)

1. An image highlight removal method based on color partitioning, characterized in that the following processing steps are performed between the input image and the output image:
S1, color partitioning: the input highlight image is converted from RGB color space to HSV color space and partitioned according to the value range of the image hue H component, dividing the image into 12 color regions;
S2, pixel classification: each color-partition image is classified according to the characteristic behaviour of highlight pixels in HSV color space, distinguishing diffuse-reflection pixels from pixels containing specular reflection;
S3, highlight removal: the specular reflection component of each color partition is computed from the diffuse intensity ratio; saturated highlight pixels are identified from the specular component intensity values; the specular component of the remaining pixels is removed with the dichromatic reflection model, and the saturated highlight pixels are repaired with a fast-marching-based method, yielding a highlight-free image for each color partition;
S4, finally, the processed images of the color partitions are merged in RGB space to obtain the overall highlight-free image;
the specific method for removing the highlight in the step S3 is as follows:
S3.1: in the RGB color space of the color-partition image, the light-source color is normalized so that the specular reflection chromaticity of highlight pixels is γ = [1/3, 1/3, 1/3]^T; the specular reflection component I_s(x) of the highlight image is then computed from the diffuse intensity ratio R_d(x) of each color partition by solving the simultaneous equations of formula (4):
I_max(x) = m_d(x)·β_max + m_s(x)·γ
I_min(x) = m_d(x)·β_min + m_s(x)·γ
I_ran(x) = I_max(x) - I_min(x) (4)
where I_ran(x) is the intensity range of a single pixel, I_max(x) and I_min(x) are the maximum and minimum intensity values of the pixel's three channels (r, g, b), m_d(x) is the diffuse-reflection weight coefficient, β_max and β_min are respectively the maximum and minimum diffuse chromaticities in the pixel, m_s(x) is the specular-reflection weight coefficient, and γ is the specular chromaticity;
the specular reflection component of the highlight image is then
I_s(x) = I_max(x) - R̃_d · I_ran(x) (5)
where R̃_d is the median of the diffuse intensity ratios R_d(x) = I_max(x)/I_ran(x) taken over the diffuse-reflection pixels of the same color partition;
S3.2: condition 3 is applied to the specular reflection components of the strong-reflection and weak-reflection pixels, and pixels satisfying condition 3 are marked as saturated highlights, where:
Condition 3: I_s(x) ≤ Th_2 (6)
for pixels that do not satisfy condition 3, the specular reflection component is subtracted from the pixel intensity according to the dichromatic reflection model, giving the diffuse-reflection image of the color partition;
I_d(x) = I(x) - I_s(x) = m_d(x)·β (7)
pixels that satisfy condition 3 and are marked as saturated highlights are processed with the fast-marching-based image inpainting method;
The fast-marching-based image inpainting method of Telea in S3.2 is specifically as follows:
S3.2.1: the pixels of the color-partition image in RGB color space are divided into three classes:
Band: pixels on the boundary ∂Ω of the region to be repaired, i.e. the border between the pixels that do not satisfy condition 3 in S3 and the marked saturated highlight pixels; their intensity values are known;
Inside: the marked saturated highlight pixels, lying inside the boundary ∂Ω of the region to be repaired; their intensity values are unknown;
Known: pixels outside the boundary ∂Ω of the region to be repaired; their intensity values are known;
S3.2.2, initialization: the T value of Known and Band pixels is set to 0 and that of Inside pixels to 10^6;
S3.2.3, diffusion: the T values of the Band class are sorted and the pixel with the minimum T value, denoted p(i, j), is moved to the Known class; the neighbourhood of p(i, j) is traversed, and each neighbouring point q belonging to the Inside class has its intensity computed by formula (10) for repair, its T value updated by formula (9), and is moved to the Band class; these steps are repeated until the Inside class contains no saturated highlight pixels;
where the T value of each pixel in the region Ω to be repaired is its distance to the initial boundary ∂Ω, obtained from the Eikonal equation:
|∇T| = 1 in Ω, with T = 0 on ∂Ω (8)
max(D^{-x}T, -D^{+x}T, 0)^2 + max(D^{-y}T, -D^{+y}T, 0)^2 = 1 (9)
where, in formula (9),
D^{-x}T(i,j) = T(i,j) - T(i-1,j)
D^{+x}T(i,j) = T(i+1,j) - T(i,j)
D^{-y}T(i,j) = T(i,j) - T(i,j-1)
D^{+y}T(i,j) = T(i,j+1) - T(i,j)
where
T(i,j) = min(T1, T2, …, Tn), with T1 = solve(f(i-1,j), f(i,j-1), T(i-1,j), T(i,j-1)) and the remaining Tk defined analogously over the other neighbour pairs, n being the neighbourhood pixel count;
I(q) = Σ_{i∈B_ε(q)} w(q,i) · [I(i) + ∇I(i) · (q - i)] / Σ_{i∈B_ε(q)} w(q,i) (10)
In formula (10), w(q, i) is a weight function and I(i) is the known intensity value of pixel i within the neighbourhood B_ε(q).
2. The method for removing image highlights based on color partition as claimed in claim 1, wherein: in the S1, an input highlight image is converted from an RGB color space to an HSV color space through a formula (1);
h = 60° × (g − b)/(max − min), if max = r and g ≥ b
h = 60° × (g − b)/(max − min) + 360°, if max = r and g < b
h = 60° × (b − r)/(max − min) + 120°, if max = g
h = 60° × (r − g)/(max − min) + 240°, if max = b

s = (max − min)/max (s = 0 if max = 0)

v = max (1)

wherein (r, g, b) are the red, green and blue color coordinates of the RGB space, each with range [0,1]; max is the maximum value among the three channels (r, g, b) of a single pixel, and min is the minimum value; the range of hue h in the HSV color space is [0, 360°), the range of saturation s is [0,1], and the range of brightness v is [0,1];
the image is divided into 12 color regions according to the value range [0, 360°) of the hue h; the S component and the V component both take values in the range [0,1].
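The conversion of formula (1) and the 12-way hue partition can be sketched as follows; the function names and the equal-width 30° arcs are assumptions based on the claim text (the claim does not state how the 12 regions are bounded), and greys, whose hue is undefined, are assigned h = 0 here.

```python
def rgb_to_hsv(r, g, b):
    # (r, g, b) in [0, 1] -> (h, s, v) with h in [0, 360), per formula (1)
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:
        h = 0.0                       # hue undefined for greys; use 0
    elif mx == r:
        h = (60.0 * (g - b) / (mx - mn)) % 360.0
    elif mx == g:
        h = 60.0 * (b - r) / (mx - mn) + 120.0
    else:
        h = 60.0 * (r - g) / (mx - mn) + 240.0
    s = 0.0 if mx == 0.0 else (mx - mn) / mx
    return h, s, mx                   # v = max

def hue_partition(h, n=12):
    # index of the colour region: hue circle cut into n equal 30-degree arcs
    return int(h // (360.0 / n)) % n
```

For example, pure red maps to (0°, 1, 1) and lands in partition 0, while pure blue maps to (240°, 1, 1) and lands in partition 8.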
3. The method for removing image highlights based on color partition as claimed in claim 1, wherein the specific method for classifying the pixels in S2 is as follows:
in the HSV color space of each color partition, pixels satisfying condition 1 are classified as diffuse-reflection pixels; pixels satisfying condition 2 are classified as strongly reflective pixels; the remaining pixels, satisfying neither condition 1 nor condition 2, are classified as weakly reflective pixels, wherein:
condition 1: v(1 − s) ≤ Th1 (2)
Condition 2:
Figure FDA0004156616010000041
where v is the brightness component of the HSV color space, s is the saturation component of the HSV color space, and Th1 is a preset judgment threshold with range [0,1].
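The three-way split of claim 3 can be sketched as below. The patent gives condition 2 only as an embedded formula image that is not reproduced in this text, so it is represented here by a caller-supplied predicate `cond2`; the function name and the example threshold values are illustrative assumptions.

```python
def classify_pixel(v, s, th1, cond2):
    # v, s: HSV brightness and saturation of one pixel; th1 in [0, 1].
    if v * (1.0 - s) <= th1:          # condition 1, formula (2): diffuse
        return "diffuse"
    if cond2(v, s):                   # stand-in for the patent's condition 2
        return "strong"
    return "weak"                     # neither condition holds
```

For instance, with th1 = 0.3 a pixel with v = 0.5, s = 0.9 gives v(1 − s) = 0.05 ≤ 0.3 and is classified as diffuse regardless of `cond2`.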
CN202110519805.7A 2021-05-12 2021-05-12 Image highlight removing method based on color partition Active CN113469895B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110519805.7A CN113469895B (en) 2021-05-12 2021-05-12 Image highlight removing method based on color partition

Publications (2)

Publication Number Publication Date
CN113469895A CN113469895A (en) 2021-10-01
CN113469895B true CN113469895B (en) 2023-05-23

Family

ID=77870792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110519805.7A Active CN113469895B (en) 2021-05-12 2021-05-12 Image highlight removing method based on color partition

Country Status (1)

Country Link
CN (1) CN113469895B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108665421A (en) * 2017-03-31 2018-10-16 北京旷视科技有限公司 The high light component removal device of facial image and method, storage medium product
CN110390648A (en) * 2019-06-24 2019-10-29 浙江大学 A kind of image high-intensity region method distinguished based on unsaturation and saturation bloom
CN112508806A (en) * 2020-11-24 2021-03-16 北京航空航天大学 Endoscopic image highlight removal method based on non-convex low-rank matrix decomposition

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7496228B2 (en) * 2003-06-13 2009-02-24 Landwehr Val R Method and system for detecting and classifying objects in images, such as insects and other arthropods

Also Published As

Publication number Publication date
CN113469895A (en) 2021-10-01

Similar Documents

Publication Publication Date Title
US8774503B2 (en) Method for color feature extraction
CN109410126B (en) Tone mapping method of high dynamic range image with detail enhancement and brightness self-adaption
CN114723701A (en) Gear defect detection method and system based on computer vision
KR101277229B1 (en) Apparatus and method for improved foreground/background separation in digitl images
JP4139571B2 (en) Color image segmentation
CN108830800B (en) Brightness improvement and enhancement method for image in dark scene
CN1941923A (en) Automatic white balance method for color digital image
CN108182671B (en) Single image defogging method based on sky area identification
JPH10222669A (en) Aligning device
CN110749598A (en) Silkworm cocoon surface defect detection method integrating color, shape and texture characteristics
CN106815602B (en) runway FOD image detection method and device based on multi-level feature description
CN1224242C (en) Method for fast picking up picture with any image as background in digital image process
US9384561B2 (en) Method of selecting a region of interest
CN113469895B (en) Image highlight removing method based on color partition
JPH05216989A (en) Method and device for extracting area of color picture
CN114878595A (en) Book printing quality detection method
CN110619643A (en) Region growing image segmentation method based on local information
CN1260682C (en) Natural image scratching method in digital image treatment based on HVS precessing
CN105243651B (en) Image edge enhancement method based on approximating variances and dark-coloured block pixels statistics information
CN105184758B (en) A kind of method of image defogging enhancing
JP2002090308A (en) Evaluation system for degree of surface degradation of steel using image processing
CN110298812B (en) Image fusion processing method and device
CN111861899A (en) Image enhancement method and system based on illumination nonuniformity
CN116468689A (en) Flaw identification method based on gray scale characteristics
CN1564198A (en) Natural image digging method based on sensing colour space

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant