CN107146258B - Image salient region detection method - Google Patents


Info

Publication number
CN107146258B
CN107146258B · CN201710283246.8A · CN201710283246A
Authority
CN
China
Prior art keywords
color
image
colors
value
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710283246.8A
Other languages
Chinese (zh)
Other versions
CN107146258A (en)
Inventor
马建设
国立博
刘彤
苏萍
任晓强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huizhou Frant Photoelectric Technology Co ltd
Shenzhen International Graduate School of Tsinghua University
Original Assignee
Shenzhen Graduate School Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Tsinghua University filed Critical Shenzhen Graduate School Tsinghua University
Priority to CN201710283246.8A priority Critical patent/CN107146258B/en
Publication of CN107146258A publication Critical patent/CN107146258A/en
Application granted granted Critical
Publication of CN107146258B publication Critical patent/CN107146258B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/90 — Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Color Image Communication Systems (AREA)

Abstract

The invention discloses a method for detecting a salient region of an image, which comprises the following steps: p1: color space conversion: converting an image to be detected from an RGB color space into a CIELab color space; p2: color classification: dividing the colors in the image to be detected processed in the step P1 into N classes according to the similarity of the colors, and recording the number of pixels covered by each class of colors in the image to be detected; p3: calculating a dispersion factor: respectively calculating dispersion factors of the N types of colors; p5: calculating a significance value: respectively calculating significance values of the N colors according to the dispersion factors obtained in the step P3, wherein the significance values and the dispersion factors are in a negative correlation relationship; p6: normalization treatment: and normalizing the significance values of the N colors calculated in the step P5 to obtain a gray image of the significance values of the image to be detected. The image salient region detection method provided by the invention reduces the calculated amount and ensures higher accuracy.

Description

Image salient region detection method
Technical Field
The invention relates to the field of computer image processing, in particular to a method for detecting an image salient region.
Background
The human visual system (HVS) is capable of rapidly focusing attention on a few prominent visual objects in a complex scene, a capability referred to as visual attention. Image saliency detection applies this visual-attention mechanism to image processing: it extracts, by some method, the regions of an image that attract the most attention. By extracting the salient regions, the resources required for analyzing the image can be allocated purposefully. Saliency images are widely applicable to image segmentation, target recognition, adaptive compression, image editing, image retrieval and other fields.
An image salient region detection method obtains a saliency value for each pixel of the image to be processed and produces a corresponding gray-scale image from those values. Among existing methods, some treat brightness as a single channel when computing saliency values, which costs little computation but yields low precision; others compute a saliency value independently for every pixel, which gives an accurate gray-scale image but at a large computational cost. In addition, many methods account only for the influence of color difference on the saliency value and ignore the fact that color distribution also affects it, so their accuracy in detecting salient regions is low.
The above background disclosure is only for the purpose of assisting understanding of the concept and technical solution of the present invention and does not necessarily belong to the prior art of the present patent application, and should not be used for evaluating the novelty and inventive step of the present application in the case that there is no clear evidence that the above content is disclosed at the filing date of the present patent application.
Disclosure of Invention
In order to solve the technical problem, the invention provides a method for detecting an image salient region, which reduces the calculation amount and ensures higher accuracy.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention discloses a method for detecting a salient region of an image, which comprises the following steps:
p1: color space conversion: converting an image to be detected from an RGB color space into a CIELab color space;
p2: color classification: dividing the colors in the image to be detected processed in the step P1 into N classes according to the similarity of the colors, and recording the number of pixels T_1, T_2, …, T_N covered by each class of colors in the image to be detected, wherein T_i represents the number of pixels covered by the i-th class of color in the image to be detected;
p3: calculating a dispersion factor: respectively calculating dispersion factors G(1), G(2), …, G(N) of the N classes of colors, wherein G(i) represents the distribution of the i-th class of color in the image to be detected;
p5: calculating a significance value: calculating significance values S (1), S (2), … …, S (N) of the N colors according to the dispersion factors obtained in the step P3, wherein the significance values and the dispersion factors are in a negative correlation relationship;
p6: normalization treatment: and normalizing the significance values of the N colors calculated in the step P5 to obtain a gray image of the significance values of the image to be detected.
Preferably, the image salient region detection method further includes the following steps:
p4: calculating a color difference factor: respectively calculating color difference factors D(1), D(2), …, D(N) of the N classes of colors, wherein D(i) represents the difference between the i-th class of color and the colors of all other pixels in the image to be detected;
wherein the step P5 of calculating the significance value further includes calculating the significance values S(1), S(2), …, S(N) of the N classes of colors respectively according to the color difference factors obtained in step P4, wherein the significance values and the color difference factors have a positive correlation.
Preferably, step P4 calculates the color difference factor specifically according to the following formula:
D(i) = Σ_{j=1}^{N} T_j · D(C_i, C_j)
wherein D(C_i, C_j) represents the color distance between the i-th and j-th classes of color in the CIELab color space, and T_j indicates the number of pixels covered by the j-th class of color.
Preferably, D(C_i, C_j) is calculated as:
D(C_i, C_j) = √( (IL_i − IL_j)² + (Ia_i − Ia_j)² + (Ib_i − Ib_j)² )
wherein IL_i, Ia_i, Ib_i respectively represent the values of the i-th class of color in the L, a and b channels of the CIELab color space, and IL_j, Ia_j, Ib_j respectively represent the values of the j-th class of color in the L, a and b channels of the CIELab color space.
Preferably, step P5 calculates the saliency value according to the following formula:
S(i) = D(i) / G(i)
preferably, step P5 calculates the saliency value according to the following formula:
S(i)=D(i)*(M-G(i))
wherein M is a constant and M > max (G (i)).
Preferably, in step P1, the color space conversion is performed specifically according to the following steps:
R, G, B are first converted to intermediate values X, Y, Z:
X=R×0.4124+G×0.3576+B×0.1805
Y=R×0.2126+G×0.7152+B×0.0722
Z=R×0.0193+G×0.1192+B×0.9505
the intermediate values X, Y, Z are then converted to L, a, b:
L = 116 · f(Y/Y_N) − 16
a = 500 · [ f(X/X_N) − f(Y/Y_N) ]
b = 200 · [ f(Y/Y_N) − f(Z/Z_N) ]
where f(t) = t^(1/3) if t > (6/29)³, and f(t) = (1/3) · (29/6)² · t + 4/29 otherwise;
wherein:
R, G, B respectively represent the values of the pixel points of the image to be detected in the R, G, B channels of the RGB color space; L, a, b respectively represent the converted values of the pixel points in the L, a, b channels of the CIELab color space; X_N, Y_N, Z_N take the values 95.047, 100.0 and 108.883 respectively.
Preferably, the color classification in step P2 further includes: dividing each channel of the CIELab color space into n classes according to its value, so that the colors in the image to be detected processed in the step P1 are divided into N = n × n × n classes.
Preferably, N ranges from p/250 to p/100, wherein p is the total number of pixels of the image to be detected.
Preferably, step P3 calculates the dispersion factor specifically according to the following formula:
G(i) = √( (1/T_i) · Σ_{k=1}^{T_i} [ (x_ik − μ_ix)² + (y_ik − μ_iy)² ] )
wherein x_ik and y_ik represent the x and y coordinate values of the k-th pixel among the pixels covered by the i-th class of color; μ_ix and μ_iy represent the means of the x and y coordinate values of all pixels covered by the i-th class of color.
Preferably, step P6 performs the normalization according to the following formula:
S'(i) = 255 × ( S(i) − S_min ) / ( S_max − S_min )
wherein S_max and S_min are respectively the maximum and minimum of the significance values of the N classes of colors, and S'(i) is the significance value of the i-th class of color after normalization.
Compared with the prior art, the invention has the following beneficial effects. The image salient region detection method calculates the saliency value of each pixel of the image to be detected based on color classification and color dispersion. Pixels are classified according to their color characteristics; since pixels in the same class have highly similar colors, their saliency values can reasonably be set equal, so the saliency value is computed per color class. Compared with computing a saliency value independently for every pixel, this markedly reduces the amount of computation while keeping the result accurate. At the same time, the concept of a dispersion factor is introduced to describe the distribution of a color in the image when computing the saliency value, which highlights the salient region and improves accuracy.
In a further scheme, the invention also introduces the concept of a color difference factor to describe how a pixel differs in color from the image as a whole, and uses both the color difference factor and the dispersion factor as parameters for calculating the saliency value; the two factors correct each other, making the salient region more prominent and further improving accuracy.
Drawings
FIG. 1 is a flow chart of a method for detecting salient regions in an image according to a preferred embodiment of the present invention;
FIG. 2a is an original image of an embodiment of the present invention;
FIG. 2b is the ground-truth map of FIG. 2a;
fig. 2c is a final image saliency map of fig. 2a obtained using a preferred embodiment method of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings and preferred embodiments.
As shown in fig. 1, a preferred embodiment of the present invention discloses a method for detecting a salient region of an image, which includes the following steps:
p1: color space conversion: converting an image to be detected from an RGB color space into a CIELab color space;
the RGB color space is a color standard in the industry, and various colors are obtained by changing three color channels of red (R), green (G) and blue (B) and superimposing them on each other. The CIELab color space is a set of color description methods for converting light wavelength into brightness and hue according to the characteristics of human vision by the International Commission on illumination (CIE); wherein L describes the brightness of the color, a describes the range of colors from green to red, b describes the range of colors from blue to yellow; the CIELab color space is device independent.
In some embodiments, the conversion process is as follows:
R, G, B are first converted to intermediate values X, Y, Z:
X=R×0.4124+G×0.3576+B×0.1805
Y=R×0.2126+G×0.7152+B×0.0722
Z=R×0.0193+G×0.1192+B×0.9505
the intermediate values X, Y, Z are then converted to L, a, b:
L = 116 · f(Y/Y_N) − 16
a = 500 · [ f(X/X_N) − f(Y/Y_N) ]
b = 200 · [ f(Y/Y_N) − f(Z/Z_N) ]
wherein:
f(t) = t^(1/3), if t > (6/29)³; otherwise f(t) = (1/3) · (29/6)² · t + 4/29
In the formulas, R, G, B respectively represent the values of the pixel points of the image to be detected in the R, G, B channels of the RGB color space; L, a, b respectively represent the converted values of the pixel points in the L, a, b channels of the CIELab color space. X_N, Y_N, Z_N take the common default empirical values 95.047, 100.0 and 108.883. After the conversion, the value range of the L channel is [0, 100], and the value range of the a and b channels is [−128, 127].
P2: color classification: dividing the colors in the image to be detected processed in the step P1 into N classes according to the similarity of the colors, and recording the number of pixels T_1, T_2, …, T_N covered by each class of colors, wherein T_i represents the number of pixels covered by the i-th class of color; the middle value of each channel interval of a class is taken as the characteristic color of that class.
Each channel of the CIELab color space is equally divided into n classes according to its value, so that N = n × n × n. The value of n is set according to the user's requirements on the computation and precision of the detection method: the larger n is, the more color classes there are and the more accurate the result, but the larger the amount of computation; conversely, the smaller n is, the less accurate the result and the smaller the computation. The color of each pixel in the original image is replaced by its corresponding characteristic color, and the number of pixels covered by each class is counted. In a more preferred embodiment, N ranges from p/250 to p/100, where p is the total number of pixels of the image to be detected, i.e. each color class covers roughly 100 to 250 pixels on average, which gives good results without a large amount of computation.
The image to be detected is subjected to color classification, the classification method is simple and feasible, and the set saliency values of the pixels covered by the same type of color are the same, so that in the process of calculating the saliency value, the calculation is carried out by taking the type of the color as a unit, and compared with the method for independently calculating the saliency value for each pixel, the calculation amount can be obviously reduced.
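The classification step can be sketched as follows. This is a minimal illustration, assuming equal-width bins over the nominal channel ranges L ∈ [0, 100] and a, b ∈ [−128, 127]; the function names and the `Counter`-based tally are our choices, not the patent's:

```python
from collections import Counter

def classify_colors(lab_pixels, n=10):
    """Quantize each Lab channel into n equal bins (N = n*n*n classes)
    and count the pixels T_i covered by each class."""
    def bin_of(v, lo, hi):
        k = int((v - lo) * n / (hi - lo))
        return min(max(k, 0), n - 1)  # clamp so hi falls in the last bin

    labels = []
    for L, a, b in lab_pixels:
        i = bin_of(L, 0.0, 100.0)
        j = bin_of(a, -128.0, 127.0)
        k = bin_of(b, -128.0, 127.0)
        labels.append((i * n + j) * n + k)  # flat class index in [0, n^3)
    return labels, Counter(labels)
```

Pixels whose three channel values fall in the same bins share one class label, so the per-class saliency computed later applies to all of them at once.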
P3: calculating a dispersion factor: respectively calculating dispersion factors G(1), G(2), …, G(N) of the N classes of colors, wherein G(i) represents the distribution of the i-th class of color in the image to be detected;
firstly, the spatial coordinates of the pixels covered by the i-th class of color are obtained:
{ (x_ik, y_ik) }, k = 1, 2, …, T_i
wherein T_i indicates the number of pixels covered by the i-th class of color.
The formula for calculating the dispersion factor of the ith color is as follows:
G(i) = √( (1/T_i) · Σ_{k=1}^{T_i} [ (x_ik − μ_ix)² + (y_ik − μ_iy)² ] )
wherein x_ik and y_ik represent the x and y coordinate values of the k-th pixel among the pixels covered by the i-th class of color; μ_ix and μ_iy represent the means of the x and y coordinate values of all pixels covered by the i-th class of color.
The dispersion factor calculated by the above formula represents the distribution of the color in the image to be detected. The dispersion factor and the saliency value are negatively correlated: the higher the dispersion, the larger the range over which the color is distributed, the more likely it belongs to the background, and the smaller its saliency value; conversely, the lower the dispersion, the more concentrated the color, the more likely its covered pixels form a salient region, and the larger its saliency value.
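Under one plausible reading of the dispersion formula (the RMS distance of a class's pixels from their spatial centroid; the exact normalization in the patent's equation image is not recoverable), the factor can be sketched as:

```python
import math

def dispersion(coords):
    """Dispersion factor of one color class: RMS distance of its pixels
    from their spatial centroid. Lower values mean a more compact
    (hence more salient) class."""
    t = len(coords)
    mu_x = sum(x for x, _ in coords) / t
    mu_y = sum(y for _, y in coords) / t
    return math.sqrt(sum((x - mu_x) ** 2 + (y - mu_y) ** 2
                         for x, y in coords) / t)
```

A tight cluster of pixels yields a small G(i), while the same number of pixels spread over the whole image yields a large one, which is the behavior the negative correlation above relies on.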
P4: calculating a color difference factor: respectively calculating color difference factors D(1), D(2), …, D(N) of the N classes of colors, wherein D(i) represents the difference between the i-th class of color and the colors of all other pixels in the image to be detected;
the color difference factor represents the difference between the pixel color and all other pixel colors, i.e. represents the uniqueness of the pixel color. The calculation formula of the color difference factor of the ith color is as follows:
Figure GDA0002279894850000072
wherein C_i represents the i-th class of color, C_j represents the j-th class of color, and T_j represents the number of pixels covered by the j-th class of color, i.e. the frequency with which the j-th class of color appears in the image to be detected; D(C_i, C_j) represents the color distance between the i-th and j-th classes of color in the CIELab color space, calculated as:
D(C_i, C_j) = √( (IL_i − IL_j)² + (Ia_i − Ia_j)² + (Ib_i − Ib_j)² )
wherein IL_i, Ia_i, Ib_i respectively represent the values of the i-th class of color in the L, a and b channels of the CIELab color space, and IL_j, Ia_j, Ib_j respectively represent the values of the j-th class of color in the L, a and b channels of the CIELab color space.
In the preferred embodiment, the color difference factor is calculated by this formula using the three-channel color values of each pixel after color classification, rather than reducing color to a single brightness value or simply using a pixel average. This retains enough information to characterize the uniqueness of a pixel and lays the foundation for the accuracy of the subsequently obtained saliency values.
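A sketch of the color difference factor under the formula above, with feature colors and pixel counts kept in plain dicts keyed by class label (the container layout and function names are ours; the Euclidean Lab distance is the one defined in the text):

```python
import math

def color_distance(c1, c2):
    # Euclidean distance between two Lab feature colors
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(c1, c2)))

def color_difference(i, feature_colors, counts):
    """D(i): distance from class i to every class j, weighted by the
    number of pixels T_j that class j covers (d(C_i, C_i) = 0, so
    including j = i is harmless)."""
    return sum(counts[j] * color_distance(feature_colors[i], feature_colors[j])
               for j in feature_colors)
```

A color class that is both rare and far (in Lab space) from the heavily populated classes receives a large D(i), which is exactly the uniqueness the text describes.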
P5: calculating a significance value: calculating significance values S(1), S(2), …, S(N) of the N classes of colors according to the dispersion factors obtained in the step P3 and the color difference factors obtained in the step P4;
the significance value of the pixel is in positive correlation with the chromatic aberration factor, and the greater the chromatic aberration is, the higher the significance value of the pixel is; the significance value of the pixel is in negative correlation with the dispersion factor, and the smaller the dispersion factor is, the higher the significance value of the pixel is.
In one embodiment, the formula for calculating the significance value is:
S(i) = D(i) / G(i)
The saliency value of the i-th class of color is calculated by this formula, and all pixels covered by that class share the same saliency value; the formula is very simple to compute while still meeting the precision requirement.
In another embodiment, the significance value is calculated as: S(i) = D(i) × (M − G(i)), where M is a constant and M > max(G(i)); this formula is also simple to calculate and has higher accuracy.
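The first variant S(i) = D(i)/G(i) is a one-liner; the sketch below adds a small epsilon to guard against a zero dispersion factor (the guard is our addition, not in the patent):

```python
def saliency(diff, disp, eps=1e-9):
    """S(i) = D(i) / G(i): high color uniqueness combined with a
    compact spatial distribution raises the saliency of a class.
    The alternative variant is D(i) * (M - G(i)) with M > max G(i)."""
    return diff / (disp + eps)
```

With the numbers of the worked example below (D1 ≈ 2.17 × 10⁹, G1 = 118), this reproduces S1 ≈ 1.84 × 10⁷.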
P6: normalization treatment: and normalizing the significance values of the N colors calculated in the step P5 to obtain a gray image of the significance values of the image to be detected.
In order to conveniently represent the saliency values of an image by a grayscale image, a normalization process is required, that is, each saliency value is converted into a range of grayscale values [0, 255 ]. The conversion formula is:
S'(i) = 255 × ( S(i) − S_min ) / ( S_max − S_min )
wherein S_max and S_min are respectively the maximum and minimum of the saliency values of the N classes of colors, and S'(i) is the saliency value of the i-th class of color after normalization. Through normalization, the saliency values are finally converted into gray values in the range [0, 255], which make up the gray-scale image of the final saliency values.
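The normalization step maps the class saliency values linearly onto gray levels; a minimal sketch (rounding to integer gray values is our choice):

```python
def normalize(saliency_values):
    """Map the N class saliency values linearly onto [0, 255] gray levels."""
    s_min, s_max = min(saliency_values), max(saliency_values)
    if s_max == s_min:
        return [0] * len(saliency_values)  # degenerate: one flat class
    return [round(255 * (s - s_min) / (s_max - s_min))
            for s in saliency_values]
```

The minimum class always maps to gray level 0 and the maximum to 255, so the full dynamic range of the gray-scale image is used.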
The following describes the image salient region detection method of a preferred embodiment of the present invention with reference to a specific example. An actual natural image is taken as the original image: fig. 2a and fig. 2b are respectively an original image and its ground-truth map from the MSRA1000 image data set, with a size of 368 × 400 pixels. The method for detecting the salient region of the image shown in fig. 2a includes the following steps:
P1: color space conversion: three pixel points are selected in the original image: pixel 1 at coordinates (50, 50), pixel 2 at (200, 200) and pixel 3 at (350, 350). The value range of the three color channels (R, G, B) in the RGB color space is [0, 255]; the values of the three points in these channels are pixel 1 (175, 39, 41), pixel 2 (202, 206, 209) and pixel 3 (22, 57, 61). The RGB color space is converted into the CIELab color space, in which the value ranges of the three channels (L, a, b) are [0, 100], [−128, 127] and [−128, 127]; the values of the three points in the (L, a, b) channels are pixel 1 (40, 54, 34), pixel 2 (84, −1, −2) and pixel 3 (22, −12, −6).
P2: color classification: each of the three channels (L, a, b) is divided into 10 classes (i.e. n = 10, N = 1000). Suppose pixel 1 belongs to class 1; the characteristic color of class 1 is (35, 63, 37), and the original image contains 32594 pixels of class 1 in total. Similarly, the characteristic color of class 2 (containing pixel 2) is (85, −14, −14), with 14301 pixels in total, and the characteristic color of class 3 (containing pixel 3) is (25, −14, −14), with 17506 pixels in total. The spatial coordinates of all the pixels in each color class are also obtained.
P3: calculating a dispersion factor: from the dispersion factor formula and the spatial coordinates of all pixels in class 1, the dispersion factor of class 1 is G1 = 118; in the same way, the dispersion factors of classes 2 and 3 are G2 = 63 and G3 = 65.
P4: calculating a color difference factor: from the color difference factor formula and the characteristic colors of all the color classes, the color difference factor of class 1 is D1 ≈ 2.17 × 10⁹; in the same way, the color difference factors of classes 2 and 3 are D2 ≈ 1.75 × 10⁹ and D3 ≈ 4.84 × 10⁸.
P5: generating a significance value: using G1 and D1, the saliency value of class 1 is S1 = D1/G1 ≈ 1.84 × 10⁷; in the same way, the saliency values of classes 2 and 3 are S2 = D2/G2 ≈ 2.78 × 10⁷ and S3 = D3/G3 ≈ 7.45 × 10⁶.
P6: normalization treatment: s is obtained by calculation, wherein the least significant value of all color classes ismin=5.39×105The maximum significant value is Smax=3.82×107. Controlling the significant value of all pixels to be 0, 255 through a normalization formula]Within range, the final saliency map (i.e. grey scale image) is obtained, as shown in fig. 2 c. The final saliency value S ' 1 of the pixel class 1 is 101, and the final saliency values S ' 2 and S ' 3 of the pixel class 2 and the pixel class 3 are 174 and 16, respectively.
The final saliency value of a pixel is its gray value in the saliency map and represents the probability that the pixel belongs to the target region: the higher the saliency value, the more likely the pixel belongs to the target region, so in image segmentation the pixels with higher saliency values should be segmented into the target region. In image compression, compression ratios can be set according to the saliency values of different regions, improving compression efficiency and precision. Saliency is evaluated mainly by comparison with the ground-truth map: the higher the saliency inside the target region and the lower it is outside, the better. Comparing fig. 2b with fig. 2c clearly shows that the image salient region detection method of the preferred embodiment of the present invention obtains high saliency values within the target region, i.e. saliency values of high accuracy.
The foregoing describes the invention in further detail with reference to specific preferred embodiments, but the invention is not to be construed as limited to these specific details. For those skilled in the art to which the invention pertains, several equivalent substitutions or obvious modifications can be made without departing from the spirit of the invention, and all such variations are considered to be within the scope of the invention.

Claims (10)

1. A method for detecting a salient region of an image is characterized by comprising the following steps:
p1: color space conversion: converting an image to be detected from an RGB color space into a CIELab color space;
p2: color classification: dividing the colors in the image to be detected processed in the step P1 into N classes according to the similarity of the colors, and recording the number of pixels T_1, T_2, …, T_i, …, T_N covered by each class of colors in the image to be detected, wherein T_i represents the number of pixels covered by the i-th class of color in the image to be detected;
p3: calculating a dispersion factor: respectively calculating dispersion factors G(1), G(2), …, G(i), …, G(N) of the N classes of colors, wherein G(i) represents the distribution of the i-th class of color in the image to be detected; firstly, the spatial coordinates of the pixels covered by the i-th class of color are obtained:
{ (x_ik, y_ik) }, k = 1, 2, …, T_i
wherein T_i represents the number of pixels covered by the i-th class of color;
the formula for calculating the dispersion factor of the i-th class of color is as follows:
G(i) = √( (1/T_i) · Σ_{k=1}^{T_i} [ (x_ik − μ_ix)² + (y_ik − μ_iy)² ] )
wherein x_ik and y_ik represent the x and y coordinate values of the k-th pixel among the pixels covered by the i-th class of color; μ_ix and μ_iy represent the means of the x and y coordinate values of all pixels covered by the i-th class of color;
p5: calculating a significance value: calculating significance values S(1), S(2), …, S(i), …, S(N) of the N classes of colors according to the dispersion factors obtained in the step P3, wherein S(i) represents the significance value of the i-th class of color, and the significance values and the dispersion factors are in a negative correlation relationship;
p6: normalization treatment: and normalizing the significance values of the N colors calculated in the step P5 to obtain a gray image of the significance values of the image to be detected.
2. The image salient region detection method according to claim 1, characterized by further comprising the steps of:
p4: calculating a color difference factor: respectively calculating color difference factors D(1), D(2), …, D(i), …, D(N) of the N classes of colors, wherein D(i) represents the difference between the i-th class of color and the colors of all other pixels in the image to be detected;
wherein the step P5 of calculating the significance value further includes calculating the significance values S(1), S(2), …, S(i), …, S(N) of the N classes of colors respectively according to the color difference factors obtained in step P4, wherein the significance values and the color difference factors have a positive correlation.
3. The image salient region detection method according to claim 2, wherein the step P4 is specifically used for calculating the color difference factor according to the following formula:
D(i) = Σ_{j=1}^{N} T_j · D(C_i, C_j)
wherein D(C_i, C_j) represents the color distance between the i-th and j-th classes of color in the CIELab color space, and T_j indicates the number of pixels covered by the j-th class of color.
4. The image salient region detection method according to claim 3, wherein D(C_i, C_j) is calculated as:
D(C_i, C_j) = √( (IL_i − IL_j)² + (Ia_i − Ia_j)² + (Ib_i − Ib_j)² )
wherein IL_i, Ia_i, Ib_i respectively represent the values of the i-th class of color in the L, a and b channels of the CIELab color space, and IL_j, Ia_j, Ib_j respectively represent the values of the j-th class of color in the L, a and b channels of the CIELab color space.
5. The image salient region detection method according to claim 2, wherein the step P5 specifically calculates the saliency value according to the following formula:
S(i) = D(i) / G(i)
6. the image salient region detection method according to claim 2, wherein the step P5 is specifically used for calculating the salient value according to the following formula:
S(i)=D(i)*(M-G(i))
wherein M is a constant and M > max (G (i)).
7. The image salient region detection method according to any one of claims 1 to 6, wherein in step P1, the color space conversion is specifically performed according to the following steps:
R, G, B are first converted to intermediate values X, Y, Z:
X=R×0.4124+G×0.3576+B×0.1805
Y=R×0.2126+G×0.7152+B×0.0722
Z=R×0.0193+G×0.1192+B×0.9505
the intermediate values X, Y, Z are then converted to L, a, b:
L = 116 · f(Y/Y_N) − 16
a = 500 · [ f(X/X_N) − f(Y/Y_N) ]
b = 200 · [ f(Y/Y_N) − f(Z/Z_N) ]
wherein:
f(t) = t^(1/3), if t > (6/29)³; otherwise f(t) = (1/3) · (29/6)² · t + 4/29
R, G, B respectively represent the values of the pixel points of the image to be detected in the R, G, B channels of the RGB color space; L, a, b respectively represent the converted values of the pixel points in the L, a, b channels of the CIELab color space; X_N, Y_N, Z_N take the values 95.047, 100.0 and 108.883 respectively.
8. The image salient region detection method according to any one of claims 1 to 6, wherein the color classification in step P2 specifically comprises: dividing each channel of the CIELab color space into n classes according to its value, so that the colors in the image to be detected processed in the step P1 are divided into N = n × n × n classes.
9. The image salient region detection method according to any one of claims 1 to 6, wherein the value of N ranges from p/250 to p/100, wherein p is the total number of pixels of the image to be detected.
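The uniform quantization of claims 8 and 9 can be sketched as follows. The channel ranges (L in [0, 100], a and b in [-128, 127]) are our assumption — the claims do not state them — and the flat class index is one possible encoding of the n × n × n classes:

```python
def quantize_lab(colors, n):
    """Quantize CIELab colors into N = n*n*n classes (claims 8-9 sketch).

    colors: iterable of (L, a, b) tuples.
    n: number of levels per channel (assumed ranges: L 0..100, a/b -128..127).
    Returns one class index in 0..n**3 - 1 per color.
    """
    ranges = [(0.0, 100.0), (-128.0, 127.0), (-128.0, 127.0)]

    def level(v, lo, hi):
        # map v into a level 0..n-1, clamping to the channel range
        i = int((v - lo) / (hi - lo) * n)
        return min(max(i, 0), n - 1)

    classes = []
    for (L, a, b) in colors:
        iL = level(L, *ranges[0])
        ia = level(a, *ranges[1])
        ib = level(b, *ranges[2])
        classes.append(iL * n * n + ia * n + ib)
    return classes
```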
10. The image salient region detection method according to any one of claims 1 to 6, wherein the normalization in step P6 is specifically performed according to the following formula:

S'(i) = (S(i) − Smin) / (Smax − Smin)

wherein Smax and Smin respectively represent the maximum and minimum of the significance values, and S'(i) is the significance value of the i-th color after normalization processing.
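The min-max normalization of claim 10 in a minimal sketch; the guard for a constant input list is our addition, since the formula is undefined when Smax = Smin:

```python
def normalize(S):
    """Min-max normalization S'(i) = (S(i) - Smin) / (Smax - Smin), claim 10."""
    s_min, s_max = min(S), max(S)
    if s_max == s_min:               # flat input: avoid division by zero
        return [0.0 for _ in S]
    return [(s - s_min) / (s_max - s_min) for s in S]
```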
CN201710283246.8A 2017-04-26 2017-04-26 Image salient region detection method Expired - Fee Related CN107146258B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710283246.8A CN107146258B (en) 2017-04-26 2017-04-26 Image salient region detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710283246.8A CN107146258B (en) 2017-04-26 2017-04-26 Image salient region detection method

Publications (2)

Publication Number Publication Date
CN107146258A CN107146258A (en) 2017-09-08
CN107146258B true CN107146258B (en) 2020-01-10

Family

ID=59774457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710283246.8A Expired - Fee Related CN107146258B (en) 2017-04-26 2017-04-26 Image salient region detection method

Country Status (1)

Country Link
CN (1) CN107146258B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443858A (en) * 2018-05-04 2019-11-12 财团法人石材暨资源产业研究发展中心 Color quantization method for the analysis of stone material pigment figure
CN111080722B (en) * 2019-12-11 2023-04-21 中山大学 Color migration method and system based on significance detection
CN111311697B (en) * 2020-03-19 2023-11-17 北京搜狐新媒体信息技术有限公司 Picture color richness detection method and related device
CN113379747B (en) * 2021-08-16 2021-12-21 深圳市信润富联数字科技有限公司 Wood defect detection method, electronic device, device and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7940985B2 (en) * 2007-06-06 2011-05-10 Microsoft Corporation Salient object detection
CN102509099B (en) * 2011-10-21 2013-02-27 清华大学深圳研究生院 Detection method for image salient region
CN102693426B (en) * 2012-05-21 2014-01-08 清华大学深圳研究生院 Method for detecting image salient regions
CN102722891B (en) * 2012-06-12 2014-08-27 大连理工大学 Method for detecting image significance
CN103679173B (en) * 2013-12-04 2017-04-26 清华大学深圳研究生院 Method for detecting image salient region
CN104463870A (en) * 2014-12-05 2015-03-25 中国科学院大学 Image salient region detection method
CN105701813A (en) * 2016-01-11 2016-06-22 深圳市未来媒体技术研究院 Significance detection method of light field image

Also Published As

Publication number Publication date
CN107146258A (en) 2017-09-08

Similar Documents

Publication Publication Date Title
CN107146258B (en) Image salient region detection method
Liu et al. A robust skin color based face detection algorithm
Kalist et al. Possiblistic-fuzzy C-means clustering approach for the segmentation of satellite images in HSL color space
CN112614060A (en) Method and device for rendering human face image hair, electronic equipment and medium
CN103914699A (en) Automatic lip gloss image enhancement method based on color space
CN108154160A (en) Color recognizing for vehicle id method and system
CN111476849B (en) Object color recognition method, device, electronic equipment and storage medium
CN109740572A (en) A kind of human face in-vivo detection method based on partial color textural characteristics
CN110648284B (en) Image processing method and device with uneven illumination
CN113963041A (en) Image texture recognition method and system
CN112215795A (en) Intelligent server component detection method based on deep learning
CN105184771A (en) Adaptive moving target detection system and detection method
CN104504722A (en) Method for correcting image colors through gray points
CN111784703B (en) Image segmentation method and device, electronic equipment and storage medium
CN108711160A (en) A kind of Target Segmentation method based on HSI enhancement models
Zhengming et al. Skin detection in color images
CN106485639A (en) The method and apparatus differentiating forged certificate picture
CN104636728B (en) A kind of image processing method
Azad et al. A robust and adaptable method for face detection based on color probabilistic estimation technique
CN104766068A (en) Random walk tongue image extraction method based on multi-rule fusion
Wang et al. Crop disease leaf image segmentation method based on color features
CN111161291A (en) Contour detection method based on target depth of field information
CN112396638A (en) Image processing method, terminal and computer readable storage medium
Hu et al. General regression neural network utilized for color transformation between images on RGB color space
SOETEDJO et al. A new approach on red color thresholding for traffic sign recognition system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 518000 Guangdong city in Shenzhen Province, Nanshan District City Xili street Shenzhen University Tsinghua Campus A building two floor

Patentee after: Tsinghua Shenzhen International Graduate School

Address before: 518055 Guangdong city of Shenzhen province Nanshan District Xili of Tsinghua

Patentee before: GRADUATE SCHOOL AT SHENZHEN, TSINGHUA University

TR01 Transfer of patent right

Effective date of registration: 20200628

Address after: Room 2101-2108, 21 / F, Kerong Chuangye building, No. 666, Zhongkai Avenue (Huihuan section), Zhongkai high tech Zone, Huizhou City, Guangdong Province

Patentee after: Huizhou Frant Photoelectric Technology Co.,Ltd.

Address before: 518000 Guangdong city in Shenzhen Province, Nanshan District City Xili street Shenzhen University Tsinghua Campus A building two floor

Patentee before: Tsinghua Shenzhen International Graduate School

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200110
