CN114359244A - Image significance detection method based on super-pixel segmentation and multiple color features - Google Patents


Info

Publication number
CN114359244A
CN114359244A (application CN202210020811.2A)
Authority
CN
China
Prior art keywords
super-pixel
color
image
real-time image
Prior art date
Legal status
Pending
Application number
CN202210020811.2A
Other languages
Chinese (zh)
Inventor
聂勇
江佳诚
黄方昊
闫拓羽
陈正
唐建中
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority to CN202210020811.2A
Publication of CN114359244A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an image saliency detection method based on superpixel segmentation and multiple color features. The method performs superpixel segmentation on a real-time image with the SLIC method to obtain a superpixel set and a corresponding label map; processes the superpixel set based on the label map to obtain the pixel set, color features, and position information of each superpixel; calculates a contrast-based saliency value for each superpixel and assigns gray values to the corresponding superpixels accordingly to obtain a saliency map; processes the edge pixels of the current real-time image with a dominant color description method to obtain the image edge dominant colors, from which a background prior map is calculated; and finally linearly fuses the saliency map and the background prior map to obtain the final saliency map of the current real-time image. The invention improves the overall efficiency of the algorithm and the accuracy of saliency detection in complex environments.

Description

Image significance detection method based on super-pixel segmentation and multiple color features
Technical Field
The invention belongs to the field of computer vision, specifically image saliency detection, and particularly relates to an image saliency detection method based on superpixel segmentation and multiple color features.
Background
As an important information carrier, the image has always been a focus of computer vision research, and saliency detection is one of the key problems in computer-vision image analysis. Saliency detection simulates the visual attention mechanism of the human visual system through models and algorithms to find the regions of an image most likely to contain targets of interest, so that limited computing resources can be concentrated on those regions, computational efficiency is improved, and the results of image processing come closer to the recognition effect of human vision.
Based on the visual attention mechanism, visual saliency detection algorithms are mainly divided into bottom-up and top-down approaches. Bottom-up methods rely mainly on low-level visual features and image contrast; top-down methods are mainly realized through machine learning or deep learning and, compared with bottom-up methods, require manually labeled maps and large training sets. Although both kinds of saliency detection technology work well in general, certain problems remain: bottom-up methods depend mainly on image color and contrast and detect poorly in complex scenes where the background and the salient target are similar in color; top-down methods require a great deal of time for model training and demand high computational power from the device during detection.
Disclosure of Invention
In order to solve the problems identified in the background art, the present invention provides an image saliency detection method based on superpixel segmentation and multiple color features.
In order to achieve the purpose, the technical scheme of the invention comprises the following specific contents:
the invention comprises the following steps:
the first step is as follows: perform superpixel segmentation on the input real-time image through the Simple Linear Iterative Clustering (SLIC) algorithm to obtain the superpixel set and the corresponding label map of the current real-time image;
the second step is as follows: process the superpixel set of the current real-time image based on the label map to obtain the pixel set, color features, and position information of each superpixel in the superpixel set of the current real-time image; the color features of each superpixel of the current real-time image are the average values of its color components in the CIELab color space and its average color in the quantized HSV color space;
the third step: based on the pixel set of each superpixel of the current real-time image, the average values of the color components in the CIELab color space, and the position information, calculate the contrast-based saliency value of each superpixel, and then obtain the saliency map of the current real-time image from the contrast-based saliency value of each superpixel;
the fourth step: based on the average color of each superpixel of the current real-time image in the quantized HSV color space, process the edge pixels of the current real-time image with a dominant color description method based on the quantized HSV color space to obtain the image edge dominant colors of the current real-time image, and calculate the background prior map of the current real-time image from the image edge dominant colors;
the fifth step: respectively normalizing the saliency map and the background prior map of the current real-time image and then performing linear fusion to obtain the final saliency map of the current real-time image.
The third step is specifically as follows:
3.1) The position information of each superpixel is its geometric center; calculate the contrast between each superpixel and every other superpixel from the pixel sets, the average CIELab color components, and the geometric centers of the superpixels;
3.2) Calculate the contrast-based saliency value of the current superpixel from its contrast;
3.3) Repeat step 3.2), traversing the remaining superpixels and calculating their contrast-based saliency values;
3.4) Assign gray values to the corresponding superpixels according to their contrast-based saliency values to obtain the saliency map of the current real-time image.
The fourth step is specifically as follows:
4.1) Take the superpixels containing edge pixels p_e of the real-time image as the edge superpixel set E, and calculate the color histogram of E based on the quantized HSV color space;
4.2) For the current quantized edge color L_E in the color histogram of the edge superpixel set E, with L_E ∈ [0, 71], take the sum of the percentages of its neighborhood colors {L_E − 1, L_E, L_E + 1} in the color histogram of E as the percentage of the current quantized edge color L_E in the color histogram of E;
4.3) Repeat step 4.2), traversing all quantized edge colors L_E and calculating their percentages in the color histogram of E;
4.4) Take the quantized edge colors whose percentage in the color histogram of E is greater than or equal to 20% as the image edge dominant colors of the current real-time image;
4.5) Based on the average color in the quantized HSV color space among the color features of each superpixel, combined with the image edge dominant colors of the current real-time image, calculate the saliency value of each superpixel relative to the background information;
4.6) Assign gray values to the corresponding superpixels based on their saliency values relative to the background information to obtain the background prior map of the current real-time image.
In the fifth step, the calculation formula of linear fusion is as follows:
Sal(S_k) = λ_Sal1 · Sal'_E(S_k) + λ_Sal2 · Sal'_C(S_k)

where Sal(S_k) is the saliency value of superpixel S_k in the final saliency map of the current real-time image; Sal'_E(S_k) is the normalized saliency value of S_k relative to the background information, taken from the background prior map; Sal'_C(S_k) is the normalized contrast-based saliency value of S_k, taken from the saliency map; λ_Sal1 is a first weight coefficient and λ_Sal2 is a second weight coefficient, satisfying λ_Sal1 + λ_Sal2 = 1.
The contrast-based saliency value of the current superpixel is calculated as follows:

Sal_C(S_k) = Σ_{i=1, i≠k}^{K} w(S_k, S_i) · d_c(S_k, S_i)

w(S_k, S_i) = #{P_i} · d_pos(S_k, S_i)

d_pos(S_k, S_i) = exp(−‖(x_k, y_k) − (x_i, y_i)‖² / λ_pos)

d_c(S_k, S_i) = √((l_k − l_i)² + (a_k − a_i)² + (b_k − b_i)²)

where Sal_C(S_k) is the contrast-based saliency value of superpixel S_k; K is the total number of superpixels; w(S_k, S_i) is the weight coefficient between superpixels S_k and S_i; d_pos(S_k, S_i) is the spatial distance coefficient between S_k and S_i; d_c(S_k, S_i) is the color contrast between S_k and S_i in the CIELab color space; #{·} denotes the number of elements in a set, here the pixel count of superpixel S_i; λ_pos is the coefficient adjusting the influence of spatial distance on the saliency value calculation; (l_k, a_k, b_k) and (l_i, a_i, b_i) are the average luminance and the average values of the first and second color channels of S_k and S_i in the CIELab color space; (x_k, y_k) and (x_i, y_i) are the geometric centers of S_k and S_i in the real-time image; and i is the superpixel index within the superpixel set.
The saliency value of each superpixel relative to the background information is calculated as follows:

Sal_E(S_k) = min_{n=1,…,N} | c_k − D_EDC^n |

where Sal_E(S_k) is the saliency value of superpixel S_k relative to the background information; c_k is the average color of superpixel S_k in the quantized HSV color space; D_EDC^n is the n-th quantized edge color among the image edge dominant colors; |·| denotes the absolute value operation; min denotes the minimum value operation; and N is the number of colors in the image edge dominant colors.
Compared with the prior art, the invention has the following beneficial effects:
1. A saliency detection method based on superpixel segmentation and multiple color-space features improves the detection effect in complex environments where background and target colors are similar.
2. Image information is described regionally via superpixel segmentation, and the regional saliency values are computed from the pixel set of each superpixel, the average CIELab color components, the average color in the quantized HSV color space, and the geometric centers, improving overall computational efficiency and accuracy.
3. The HSV color space, which performs better for color segmentation, is used to describe the background prior information, improving saliency detection accuracy in complex environments.
Drawings
FIG. 1 is a flow chart of a saliency detection algorithm based on superpixel segmentation and multiple color features of the present invention.
FIG. 2 is an original image used for saliency detection by the present invention.
FIG. 3 is the saliency detection result image of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The invention will now be further described with reference to the following examples and drawings:
the implementation technical scheme of the invention is as follows:
as shown in fig. 1, the present invention comprises the steps of:
the first step is as follows: perform superpixel segmentation on the input real-time image through the Simple Linear Iterative Clustering (SLIC) algorithm to obtain the superpixel set and the corresponding label map of the current real-time image. This decomposes the real-time image into a number of local regions, reducing the amount of subsequent computation and improving the overall efficiency of the algorithm. The input real-time image is shown in FIG. 2.
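As an illustrative sketch (not part of the patent disclosure), the first step can be reproduced with scikit-image's SLIC implementation; the input file name and the compactness value are assumptions, while the superpixel count K = 150 follows the embodiment described below.

```python
# Minimal sketch of step 1: SLIC superpixel segmentation with scikit-image.
# "frame.png" and compactness=10 are assumptions; n_segments=150 follows the
# embodiment (K = 150).
import numpy as np
from skimage.io import imread
from skimage.segmentation import slic

image = imread("frame.png")              # real-time input image, H x W x 3 (RGB)
labels = slic(image, n_segments=150,     # label map: labels[y, x] = superpixel index
              compactness=10, start_label=0)
K = labels.max() + 1                     # number of superpixels actually produced
```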
The second step is that: processing a super-pixel set of the current real-time image based on the label graph to obtain a pixel set, color characteristics and position information of each super-pixel in the super-pixel set of the current real-time image; the color characteristics of each super pixel of the current real-time image are the average value of each color component of each super pixel in a CIELab color space and the average color in a quantized HSV color space;
the third step: based on the pixel set of each superpixel of the current real-time image, the average values of the color components in the CIELab color space, and the position information, calculate the contrast-based saliency value of each superpixel, and then obtain the saliency map of the current real-time image from the contrast-based saliency value of each superpixel;
the third step is specifically as follows:
3.1) The position information of each superpixel is its geometric center; calculate the contrast between each superpixel and every other superpixel from the pixel sets, the average CIELab color components, and the geometric centers of the superpixels;
3.2) Calculate the contrast-based saliency value of the current superpixel by the following formulas (a code sketch is given after step 3.4 below):

Sal_C(S_k) = Σ_{i=1, i≠k}^{K} w(S_k, S_i) · d_c(S_k, S_i)

w(S_k, S_i) = #{P_i} · d_pos(S_k, S_i)

d_pos(S_k, S_i) = exp(−‖(x_k, y_k) − (x_i, y_i)‖² / λ_pos)

d_c(S_k, S_i) = √((l_k − l_i)² + (a_k − a_i)² + (b_k − b_i)²)

where Sal_C(S_k) is the contrast-based saliency value of superpixel S_k; K is the total number of superpixels (K = 150 in this embodiment); w(S_k, S_i) is the weight coefficient between superpixels S_k and S_i; d_pos(S_k, S_i) is the spatial distance coefficient between S_k and S_i; d_c(S_k, S_i) is the color contrast between S_k and S_i in the CIELab color space; #{·} denotes the number of elements in a set, here the pixel count of superpixel S_i; λ_pos is the coefficient adjusting the influence of spatial distance on the saliency value calculation (λ_pos = 10 in this embodiment); (l_k, a_k, b_k) and (l_i, a_i, b_i) are the average luminance and the average values of the first and second color channels of S_k and S_i in the CIELab color space, the first color channel ranging from dark green (low values) through gray (medium values) to bright pink-red (high values) and the second from bright blue (low values) through gray (medium values) to yellow (high values); (x_k, y_k) and (x_i, y_i) are the geometric centers of S_k and S_i in the real-time image; and i is the superpixel index within the superpixel set;
3.3) Repeat step 3.2), traversing the remaining superpixels and calculating their contrast-based saliency values;
3.4) Assign gray values to the corresponding superpixels according to their contrast-based saliency values to obtain the saliency map of the current real-time image.
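As a continuation of the sketches above (again not part of the patent disclosure), steps 3.1)–3.4) could look as follows; the normalization of the geometric centers by the image diagonal and the [0, 255] gray assignment are assumptions, while λ_pos = 10 follows this embodiment.

```python
# Sketch of steps 3.1)-3.4): contrast-based saliency and gray assignment.
# Continues the variables of the previous sketches (labels, K, mean_lab,
# centers, sizes). Scaling centers by the image diagonal is an assumption.
lam_pos = 10.0
h, w = labels.shape
norm_centers = centers / np.hypot(h, w)                   # centers scaled to ~[0, 1]

sal_c = np.zeros(K)
for k in range(K):
    for i in range(K):
        if i == k:
            continue
        d_c = np.linalg.norm(mean_lab[k] - mean_lab[i])   # CIELab color contrast
        d2 = np.sum((norm_centers[k] - norm_centers[i]) ** 2)
        d_pos = np.exp(-d2 / lam_pos)                     # spatial distance coefficient
        sal_c[k] += sizes[i] * d_pos * d_c                # w(S_k, S_i) = #{P_i} * d_pos

sal_c_n = (sal_c - sal_c.min()) / (sal_c.max() - sal_c.min() + 1e-12)
saliency_map = (sal_c_n[labels] * 255).astype(np.uint8)   # gray assignment (step 3.4)
```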
The fourth step: based on the average color of each superpixel of the current real-time image in the quantized HSV color space, process the edge pixels of the current real-time image with a dominant color description method based on the quantized HSV color space to obtain the image edge dominant colors of the current real-time image, which describe the background information of the real-time image; then calculate the background prior map of the current real-time image from the image edge dominant colors;
the fourth step is specifically as follows:
4.1) Take the superpixels containing edge pixels p_e of the real-time image as the edge superpixel set E, which satisfies:

E = { S_j | P_j ∩ p_e ≠ ∅ }

where S_j is the j-th superpixel in the edge superpixel set E and P_j is the pixel set of S_j. Calculate the color histogram of the edge superpixel set E based on the quantized HSV color space;
4.2) For the current quantized edge color L_E in the color histogram of the edge superpixel set E, with L_E ∈ [0, 71], take the sum of the percentages of its neighborhood colors {L_E − 1, L_E, L_E + 1} in the color histogram of E as the percentage of the current quantized edge color L_E in the color histogram of E;
4.3) Repeat step 4.2), traversing all quantized edge colors L_E and calculating their percentages in the color histogram of E;
4.4) Take the quantized edge colors whose percentage in the color histogram of the edge superpixel set E is greater than or equal to 20% as the image edge dominant colors D_EDC of the current real-time image, which describe the background information of the real-time image:

D_EDC = { D_EDC^n | n = 1, …, N }

where N is the number of colors in the image edge dominant colors, satisfying N ∈ {0, 1, 2, 3, 4, 5}, and D_EDC^n is the n-th quantized edge color among the image edge dominant colors;
4.5) Based on the average color in the quantized HSV color space among the color features of each superpixel of the current real-time image, combined with the image edge dominant colors of the current real-time image, calculate the saliency value of each superpixel relative to the background information by the following formula (a code sketch is given after step 4.6 below):

Sal_E(S_k) = min_{n=1,…,N} | c_k − D_EDC^n |

where Sal_E(S_k) is the saliency value of superpixel S_k relative to the background information; c_k is the average color of superpixel S_k in the quantized HSV color space; D_EDC^n is the n-th quantized edge color among the image edge dominant colors; |·| denotes the absolute value operation; min denotes the minimum value operation; and N is the number of colors in the image edge dominant colors;
4.6) Assign gray values to the corresponding superpixels based on their saliency values relative to the background information to obtain the background prior map of the current real-time image.
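Continuing the same sketch, steps 4.1)–4.6) could be realized as follows; treating the one-pixel image border as the edge pixels p_e and truncating the neighborhood at colors 0 and 71 are assumptions, while the 72-bin histogram, the neighborhood smoothing, and the 20% threshold follow the text above.

```python
# Sketch of steps 4.1)-4.6): edge dominant colors and background prior map.
# Continues the variables of the previous sketches (labels, K, q, hsv_q).
edge = np.zeros_like(labels, dtype=bool)
edge[0, :] = edge[-1, :] = edge[:, 0] = edge[:, -1] = True  # edge pixels p_e (assumed border)
E = np.unique(labels[edge])                                 # edge superpixel set E

hist = np.bincount(q[np.isin(labels, E)], minlength=72).astype(float)
hist /= hist.sum()                                          # color histogram of E
smoothed = np.array([hist[max(c - 1, 0):min(c + 2, 72)].sum()
                     for c in range(72)])                   # neighborhood sum per L_E
D_EDC = np.flatnonzero(smoothed >= 0.20)                    # image edge dominant colors

sal_e = np.array([np.abs(D_EDC - hsv_q[k]).min() if D_EDC.size else 0.0
                  for k in range(K)])                       # min_n |c_k - D_EDC^n|
sal_e_n = (sal_e - sal_e.min()) / (sal_e.max() - sal_e.min() + 1e-12)
background_prior_map = (sal_e_n[labels] * 255).astype(np.uint8)
```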
The fifth step: the saliency map and the background prior map of the current real-time image are normalized respectively and then subjected to linear fusion to obtain a final saliency map of the current real-time image, as shown in fig. 3.
In the fifth step, the calculation formula of linear fusion is as follows:
Sal(S_k) = λ_Sal1 · Sal'_E(S_k) + λ_Sal2 · Sal'_C(S_k)

where Sal(S_k) is the saliency value of superpixel S_k in the final saliency map of the current real-time image; Sal'_E(S_k) is the normalized saliency value of S_k relative to the background information, taken from the background prior map; Sal'_C(S_k) is the normalized contrast-based saliency value of S_k, taken from the saliency map; λ_Sal1 is a first weight coefficient and λ_Sal2 is a second weight coefficient, satisfying λ_Sal1 + λ_Sal2 = 1.
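A final sketch of the fifth step; the equal weights are an assumption, since the text only requires λ_Sal1 + λ_Sal2 = 1.

```python
# Sketch of step 5: normalized linear fusion of the two maps.
# Continues the previous sketches (sal_e_n, sal_c_n, labels).
lam1, lam2 = 0.5, 0.5                                # assumed weights, lam1 + lam2 = 1
sal_final = lam1 * sal_e_n + lam2 * sal_c_n          # Sal(S_k) per superpixel
final_saliency_map = (sal_final[labels] * 255).astype(np.uint8)
```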
The above is only the technical idea of the present invention and does not limit the protection scope of the present invention; any modification made on the basis of the technical idea proposed by the present invention falls within the protection scope of the claims of the present invention.

Claims (6)

1. An image saliency detection method based on superpixel segmentation and multiple color features is characterized by comprising the following steps:
the first step is as follows: perform superpixel segmentation on the input real-time image through a simple linear iterative clustering algorithm to obtain the superpixel set and the corresponding label map of the current real-time image;
the second step is as follows: process the superpixel set of the current real-time image based on the label map to obtain the pixel set, color features, and position information of each superpixel in the superpixel set of the current real-time image; the color features of each superpixel of the current real-time image are the average values of its color components in the CIELab color space and its average color in the quantized HSV color space;
the third step: based on the pixel set of each superpixel of the current real-time image, the average values of the color components in the CIELab color space, and the position information, calculate the contrast-based saliency value of each superpixel, and then obtain the saliency map of the current real-time image from the contrast-based saliency value of each superpixel;
the fourth step: based on the average color of each superpixel of the current real-time image in the quantized HSV color space, process the edge pixels of the current real-time image with a dominant color description method based on the quantized HSV color space to obtain the image edge dominant colors of the current real-time image, and calculate the background prior map of the current real-time image from the image edge dominant colors;
the fifth step: respectively normalizing the saliency map and the background prior map of the current real-time image and then performing linear fusion to obtain the final saliency map of the current real-time image.
2. The image saliency detection method based on superpixel segmentation and multiple color features according to claim 1, characterized in that said third step specifically is:
3.1) The position information of each superpixel is its geometric center; calculate the contrast between each superpixel and every other superpixel from the pixel sets, the average CIELab color components, and the geometric centers of the superpixels;
3.2) Calculate the contrast-based saliency value of the current superpixel from its contrast;
3.3) Repeat step 3.2), traversing the remaining superpixels and calculating their contrast-based saliency values;
3.4) Assign gray values to the corresponding superpixels according to their contrast-based saliency values to obtain the saliency map of the current real-time image.
3. The image saliency detection method based on superpixel segmentation and multiple color features according to claim 1, characterized in that said fourth step specifically is:
4.1) Take the superpixels containing edge pixels p_e of the real-time image as the edge superpixel set E, and calculate the color histogram of E based on the quantized HSV color space;
4.2) For the current quantized edge color L_E in the color histogram of the edge superpixel set E, with L_E ∈ [0, 71], take the sum of the percentages of its neighborhood colors {L_E − 1, L_E, L_E + 1} in the color histogram of E as the percentage of the current quantized edge color L_E in the color histogram of E;
4.3) Repeat step 4.2), traversing all quantized edge colors L_E and calculating their percentages in the color histogram of E;
4.4) Take the quantized edge colors whose percentage in the color histogram of E is greater than or equal to 20% as the image edge dominant colors of the current real-time image;
4.5) Based on the average color in the quantized HSV color space among the color features of each superpixel, combined with the image edge dominant colors of the current real-time image, calculate the saliency value of each superpixel relative to the background information;
4.6) Assign gray values to the corresponding superpixels based on their saliency values relative to the background information to obtain the background prior map of the current real-time image.
4. The method for detecting image saliency based on super-pixel segmentation and multiple color features according to claim 1, wherein in the fifth step, the calculation formula of linear fusion is as follows:
Sal(S_k) = λ_Sal1 · Sal'_E(S_k) + λ_Sal2 · Sal'_C(S_k)

where Sal(S_k) is the saliency value of superpixel S_k in the final saliency map of the current real-time image; Sal'_E(S_k) is the normalized saliency value of S_k relative to the background information, taken from the background prior map; Sal'_C(S_k) is the normalized contrast-based saliency value of S_k, taken from the saliency map; λ_Sal1 is a first weight coefficient and λ_Sal2 is a second weight coefficient, satisfying λ_Sal1 + λ_Sal2 = 1.
5. The method according to claim 2, wherein the calculation formula of the saliency value based on contrast of the current super pixel is as follows:
Sal_C(S_k) = Σ_{i=1, i≠k}^{K} w(S_k, S_i) · d_c(S_k, S_i)

w(S_k, S_i) = #{P_i} · d_pos(S_k, S_i)

d_pos(S_k, S_i) = exp(−‖(x_k, y_k) − (x_i, y_i)‖² / λ_pos)

d_c(S_k, S_i) = √((l_k − l_i)² + (a_k − a_i)² + (b_k − b_i)²)

where Sal_C(S_k) is the contrast-based saliency value of superpixel S_k; K is the total number of superpixels; w(S_k, S_i) is the weight coefficient between superpixels S_k and S_i; d_pos(S_k, S_i) is the spatial distance coefficient between S_k and S_i; d_c(S_k, S_i) is the color contrast between S_k and S_i in the CIELab color space; #{·} denotes the number of elements in a set, here the pixel count of superpixel S_i; λ_pos is the coefficient adjusting the influence of spatial distance on the saliency value calculation; (l_k, a_k, b_k) and (l_i, a_i, b_i) are the average luminance and the average values of the first and second color channels of S_k and S_i in the CIELab color space; (x_k, y_k) and (x_i, y_i) are the geometric centers of S_k and S_i in the real-time image; and i is the superpixel index within the superpixel set.
6. The method according to claim 3, wherein the calculation formula of the saliency value of each super pixel with respect to the background information is as follows:
Sal_E(S_k) = min_{n=1,…,N} | c_k − D_EDC^n |

where Sal_E(S_k) is the saliency value of superpixel S_k relative to the background information; c_k is the average color of superpixel S_k in the quantized HSV color space; D_EDC^n is the n-th quantized edge color among the image edge dominant colors; |·| denotes the absolute value operation; min denotes the minimum value operation; and N is the number of colors in the image edge dominant colors.
CN202210020811.2A 2022-01-10 2022-01-10 Image significance detection method based on super-pixel segmentation and multiple color features Pending CN114359244A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210020811.2A CN114359244A (en) 2022-01-10 2022-01-10 Image significance detection method based on super-pixel segmentation and multiple color features


Publications (1)

Publication Number Publication Date
CN114359244A true CN114359244A (en) 2022-04-15

Family

ID=81107619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210020811.2A Pending CN114359244A (en) 2022-01-10 2022-01-10 Image significance detection method based on super-pixel segmentation and multiple color features

Country Status (1)

Country Link
CN (1) CN114359244A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170083762A1 (en) * 2015-06-22 2017-03-23 Photomyne Ltd. System and Method for Detecting Objects in an Image
CN107025672A (en) * 2017-03-30 2017-08-08 上海理工大学 A kind of conspicuousness detection method based on improvement convex closure
CN107578451A (en) * 2017-09-20 2018-01-12 太原工业学院 A kind of adaptive key color extraction method towards natural image
CN108549891A (en) * 2018-03-23 2018-09-18 河海大学 Multi-scale diffusion well-marked target detection method based on background Yu target priori
CN109242854A (en) * 2018-07-14 2019-01-18 西北工业大学 A kind of image significance detection method based on FLIC super-pixel segmentation
CN112037230A (en) * 2019-06-04 2020-12-04 北京林业大学 Forest region image segmentation algorithm based on super-pixel and super-metric contour map
CN113705579A (en) * 2021-08-27 2021-11-26 河海大学 Automatic image annotation method driven by visual saliency

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
JIWHAN KIM et al.: "Salient Region Detection via High-Dimensional Color Transform", IEEE Transactions on Image Processing, 26 October 2015
JIWHAN KIM et al.: "Salient Region Detection via High-Dimensional Color Transform", 2014 IEEE Conference on Computer Vision and Pattern Recognition, 25 September 2014
司马海峰; 米爱中; 王志衡; 杜守恒: "Dominant color clustering segmentation algorithm with salient feature fusion" (显著特征融合的主颜色聚类分割算法), Pattern Recognition and Artificial Intelligence, no. 06, 15 June 2016
孙赫赫; 尚晓清; 王冲: "Image saliency detection method based on regional contrast" (基于区域对比的图像显著性检测方法), Computer Engineering and Applications, no. 10, 19 April 2017
肖春霞; 聂勇伟; 黄先锋; 赵勇; 彭群生: "Texture synthesis upsampling algorithm based on joint bilateral filtering" (基于联合双边滤波的纹理合成上采样算法), Chinese Journal of Computers, no. 02, 15 February 2009
邵凯旋; 余映; 钱俊; 吴青龙; 杨鉴: "Image saliency detection algorithm based on edge information combined with spatial weights" (基于边缘信息结合空间权重的图像显著性检测算法研究), Journal of Yunnan University (Natural Sciences Edition), no. 03, 10 May 2020

Similar Documents

Publication Publication Date Title
CN109829443B (en) Video behavior identification method based on image enhancement and 3D convolution neural network
WO2022199583A1 (en) Image processing method and apparatus, computer device, and storage medium
CN108629783B (en) Image segmentation method, system and medium based on image feature density peak search
CN114359323B (en) Image target area detection method based on visual attention mechanism
CN112614060A (en) Method and device for rendering human face image hair, electronic equipment and medium
CN111008632B (en) License plate character segmentation method based on deep learning
CN107292933B (en) Vehicle color identification method based on BP neural network
CN102306307B (en) Positioning method of fixed point noise in color microscopic image sequence
CN112435191A (en) Low-illumination image enhancement method based on fusion of multiple neural network structures
CN111583279A (en) Super-pixel image segmentation method based on PCBA
CN115131375B (en) Automatic ore cutting method
CN108711160B (en) Target segmentation method based on HSI (high speed input/output) enhanced model
CN112464731A (en) Traffic sign detection and identification method based on image processing
CN114821452B (en) Colored drawing train number identification method, system and medium
CN113947732B (en) Aerial visual angle crowd counting method based on reinforcement learning image brightness adjustment
CN107146258B (en) Image salient region detection method
CN110415816B (en) Skin disease clinical image multi-classification method based on transfer learning
CN109766860B (en) Face detection method based on improved Adaboost algorithm
CN114359244A (en) Image significance detection method based on super-pixel segmentation and multiple color features
CN111797694A (en) License plate detection method and device
CN109165659B (en) Vehicle color identification method based on superpixel segmentation
CN108154188B (en) FCM-based artificial text extraction method under complex background
CN115761459A (en) Multi-scene self-adaption method for bridge and tunnel apparent disease identification
CN115496816A (en) Sequence clothing image theme color self-adaptive extraction method
CN112487927B (en) Method and system for realizing indoor scene recognition based on object associated attention

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 316021 Zhoushan campus of Zhejiang University, No.1 Zheda Road, Dinghai District, Zhoushan City, Zhejiang Province
Applicant after: ZHEJIANG University
Country or region after: China
Address before: 310058 Yuhang Tang Road, Xihu District, Hangzhou, Zhejiang 866
Applicant before: ZHEJIANG University
Country or region before: China