CN106327507A - Color image significance detection method based on background and foreground information - Google Patents

Color image significance detection method based on background and foreground information

Info

Publication number
CN106327507A
CN106327507A (application CN201610654316.1A)
Authority
CN
China
Prior art keywords
background
superpixel block
significance
foreground
Prior art date
Legal status
Granted
Application number
CN201610654316.1A
Other languages: Chinese (zh)
Other versions: CN106327507B (en)
Inventor
王正兵
徐贵力
程月华
朱春省
曾大为
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201610654316.1A
Publication of CN106327507A
Application granted
Publication of CN106327507B
Legal status: Expired - Fee Related

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G06F18/232: Non-hierarchical techniques
    • G06F18/2321: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions, with fixed number of clusters, e.g. K-means clustering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a color image saliency detection method based on background and foreground information, comprising the following steps: over-segmenting an input color image to obtain a series of superpixel blocks; selecting background seeds and obtaining a coarse saliency through feature contrast between each superpixel block and the background seeds; defining a background weight for each superpixel block based on the feature distribution of the background seeds, and refining the coarse saliency with the background weights to obtain a background-based saliency; segmenting the background-based saliency map formed in the previous step, selecting a compact foreground region from all segmentation results, extracting the features of the foreground region, and obtaining a foreground-based saliency through feature contrast; and integrating the background- and foreground-based saliencies obtained in the previous two steps and smoothing the result to obtain the optimized saliency of all superpixel blocks. The method highlights the foreground target in the image more consistently and effectively suppresses background noise in the image.

Description

A color image saliency detection method based on background and foreground information
Technical field
The invention belongs to the technical field of saliency detection for image scenes, and specifically relates to a color image saliency detection method based on background and foreground information.
Background technology
Visual saliency is an important research topic in visual cognition and scene understanding, involving multiple disciplines such as cognitive psychology, cognitive neuroscience, and computer vision. Because the foreground target in a scene differs in features from its background, the human visual system tends to quickly locate the foreground region in a scene and process the information in that region first. To mimic this highly efficient way of processing information, saliency detection has attracted extensive attention from scholars in related fields in recent years.
Research on saliency detection can be traced back to the feature integration theory proposed by Treisman et al. (Anne Treisman and Garry Gelade (1980). "A feature-integration theory of attention." Cognitive Psychology, Vol. 12, No. 1, pp. 97–136). On this basis, Itti, Koch et al. proposed the earliest computational model of saliency detection (L. Itti, C. Koch, E. Niebur, "A model of saliency-based visual attention for rapid scene analysis", IEEE Trans. Pattern Anal. Mach. Intell. 20 (11) (1998) 1254–1259), the well-known IT model. Early saliency detection algorithms focused on fixation prediction; they could not consistently highlight the foreground target region, and the saliency maps they produced contained a large amount of background noise. These problems significantly limited the application of saliency detection algorithms.
With the development of computer vision, especially over the past decade or so, scholars have proposed a large number of saliency detection algorithms, whose main idea remains to highlight the foreground target through feature contrast. Saliency detection algorithms based on local feature contrast use a center-surround comparison: they highlight the target object by contrasting the features of a central region with those of its surrounding neighborhood. Such methods often highlight only the edges of the foreground target and cannot consistently highlight the whole target. Saliency detection algorithms based on global feature contrast typically choose some background information, for example the features of the image border region, and highlight the foreground target by contrast against it. These methods consider only the features of the selected background region, treating them directly as background features for contrast, and ignore the spatial distribution of those features; the extracted background features may therefore contain some foreground information, which adversely affects the subsequent saliency detection. Methods based on feature contrast can thus detect salient targets in simple scenes, but for complex scenes whose foreground or background contains diverse coexisting features, their detection results remain unsatisfactory.
In recent years, scholars have gradually recognized the inspiration that cognitive sciences such as cognitive neuroscience and cognitive psychology offer for saliency detection. For example, Wei et al. (Geodesic saliency using background priors) found that when observing an image, the human eye tends to treat the border of the image as background by default; they therefore introduced a background prior and formed a global saliency detection through feature contrast. That method uses only the features of the background component to highlight the foreground region and does not consider their spatial distribution. To effectively suppress the background noise in the saliency map, the team of Huchuan Lu at Dalian University of Technology (J. Wang, H. Lu, X. Li, N. Tong, W. Liu, Saliency detection via background and foreground seed selection, Neurocomputing 152 (2015) 359–368) introduced foreground information into saliency detection: the convex hull formed by corner points in the image, and the region produced by adaptive thresholding of the background-based saliency map, are successively regarded as the foreground region. However, the convex hull of corner points does not take the contour of the target object into account, and adaptive thresholding does not consider the compactness of the target object; the foreground information introduced in this way may therefore itself contain a large amount of background noise, so that the subsequent noise suppression is poor.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art by providing a color image saliency detection method based on background and foreground information, which solves the technical problem that the prior art cannot consistently highlight the foreground target while effectively suppressing the background noise in the saliency map.
To solve the above technical problem, the invention provides a color image saliency detection method based on background and foreground information, characterized by comprising the following steps:
Step 1, image preprocessing: over-segment the input color image to obtain a series of superpixel blocks, and take the superpixel block as the minimal processing unit;
Step 2, background-based saliency detection: select background seeds and obtain a coarse saliency through feature contrast between each superpixel block and the background seeds; define a background weight for each superpixel block based on the feature distribution of the background seeds, and refine the coarse saliency with the background weights to obtain the background-based saliency;
Step 3, foreground-based saliency detection: segment the background-based saliency map formed in the previous step, choose a compact foreground region from all segmentation results, extract the features of the foreground region, and obtain the foreground-based saliency through feature contrast;
Step 4, saliency integration: integrate the background- and foreground-based saliencies obtained in the previous two steps into a fused saliency, and apply a smoothing operation to the fused saliency to obtain the optimized saliency of all superpixel blocks.
Further, in step 1, the over-segmentation uses the SLIC superpixel segmentation method.
Further, in step 2, the detailed process of obtaining the background-based saliency is:
11) choose the superpixel blocks at the image border as background seeds, and obtain the coarse saliency of each superpixel block by feature contrast between each superpixel block in the image and the background seeds;
12) apply K-means clustering to the selected background seeds and determine the probability that each cluster belongs to the background according to its spatial distribution; for the k-th cluster, the background weight of its background seeds is defined as:
P_k = 1 - exp(-α(L_s + L_o)),  k = 1, 2, ..., K
where L_s is the length of the shortest superpixel chain containing all superpixels of the k-th cluster, L_o is the number of superpixels on this chain that belong to other clusters, the parameter α ranges over 0.01–0.08, and K is the chosen number of cluster centers;
13) for every other superpixel block in the image, first compute the geodesic distances between the superpixel block and all background seeds, and obtain the background seed with the minimum geodesic distance to this superpixel block:
s_j* = argmin_{s_j ∈ BG} d_geo(s_i, s_j),  (s_i ∉ BG)
where BG is the set of background seeds and d_geo(s_i, s_j) is the geodesic distance between two superpixel blocks; the background probability of this seed is known from step 12) and is denoted p_{s_j*}, and the geodesic distance between this superpixel block and this seed is denoted d_geo*; the background weight of this superpixel block is then:
p_{s_i} = p_{s_j*} / d_geo*,  (s_i ∉ BG)
compute the background weight of each superpixel block in turn in this way;
14) define the background-based saliency of a superpixel block as:
S_i^b = S_i^c · (1 - p_{s_i})
where S_i^b is the background-based saliency of superpixel block s_i, and S_i^c is the coarse saliency of s_i computed in step 11).
Further, the parameter α is preferably 0.05.
Further, in step 3, the detailed process of obtaining the foreground-based saliency is:
21) use the parametric maxflow method to segment the background-based saliency map obtained in the previous step, obtaining a series of compact foreground regions; the maxflow segmentation result is:
x^f = min_x Σ_{i=1}^{N} (-ln S_i^b + λA_i) x_i + Σ_{1≤i<j≤N} e_{ij} x_i x_j
where N is the number of superpixels in the image, A_i is the area of superpixel block s_i, x_i ∈ {1, 0} indicates whether superpixel block s_i belongs to the foreground region, e_{ij} is the similarity between neighboring superpixel blocks, and x^f is the obtained foreground segmentation result;
22) among all segmentation results, select the optimal one as the foreground region according to the following formula:
x* = argmin_{x^f} Σ_{i=1}^{N} (x_i^f - S_i^b) + V(x^f)
where x^f ranges over the multiple segmentation results, N is the number of superpixel blocks, S_i^b is the background-based saliency of superpixel block s_i, and V(x^f) is the variance of the spatial coordinates of segmentation result x^f;
23) extract the features of the selected foreground region and determine the foreground-based saliency of each superpixel block in the image by feature contrast:
S_i^f = Σ_{s_j ∈ FG} 1 / (d_c(s_i, s_j) + d_l(s_i, s_j))
where FG is the set of obtained foreground superpixel blocks, and d_c(s_i, s_j) and d_l(s_i, s_j) are respectively the color distance and spatial distance between superpixel blocks.
Further, in step 4, the detailed process is:
31) integrate the obtained background- and foreground-based saliencies with the following formula:
S_i^u = S_i^b · (1 - exp(-β · S_i^f))
where S_i^u is the fused saliency of superpixel block s_i, S_i^b denotes the background-based saliency of s_i, S_i^f denotes the foreground-based saliency of s_i, and the parameter β ranges over 2.5–8;
32) further optimize the fused saliency; the optimization function is:
S^r = argmin_S [ Σ_{i,j=1}^{N} w_c(s_i, s_j)(S_i - S_j)^2 + Σ_{i=1}^{N} p_{s_i}(1 - x_i^*)(S_i - S_i^u)^2 + Σ_{i=1}^{N} (1 - p_{s_i}) x_i^* (S_i - S_i^u)^2 ]
where w_c(s_i, s_j) denotes the color similarity of two adjacent superpixel blocks, S_i and S_j denote the saliencies to be optimized of two adjacent superpixel blocks, p_{s_i} and x_i^* are respectively the background weight and the foreground label of superpixel block s_i, S_i^u is the fused saliency of superpixel block s_i, and N is the number of superpixels in the image; this optimization function is a global optimization function, and solving it yields the optimized saliency of all superpixel blocks.
Further, the parameter β is preferably 4.
Compared with the prior art, the beneficial effects of the invention are: the invention uses background weights based on the feature distribution of the image border region, improving saliency detection based on background feature contrast; the foreground region extracted with the maxflow method takes into account both the edge information of the foreground target and the compactness of the target object, and can accurately describe the foreground target in the scene; the two saliency maps obtained are integrated according to the different roles of background and foreground information in saliency detection, and the integrated saliency map is further optimized, the optimized map being smoother inside both background and foreground regions. The invention highlights the foreground target in the image more consistently and effectively suppresses background noise in the image.
Brief description of the drawings
Fig. 1 is a flow diagram of the color image saliency detection of an embodiment of the invention.
Fig. 2 shows how the background weights improve saliency detection in an embodiment of the invention, where (a) is the input image, (b) is the salient-target ground truth, (c) is the coarse saliency map, (d) is the background weight, and (e) is the background-based saliency map.
Fig. 3 is a schematic diagram of the background weight computation of an embodiment of the invention, where (a) is the input image, (b) is the superpixel segmentation result, (c) is the background seed clustering result, (d) is the background weight of the selected seeds, and (e) is the background weight of all superpixel blocks.
Fig. 4 shows the foreground region extraction and noise suppression of an embodiment of the invention, where (a) is the input image, (b) is the background-based saliency map, (c) is the extracted foreground region, (d) is the foreground-based saliency map, (e) is the integrated saliency map, and (f) is the optimized saliency map.
Fig. 5 compares the saliency detection result of an embodiment of the invention with the detection results of the prior art, where (a) is the input image, (b) is the salient-target ground truth, (c) is the detection result of the present application, (d) is the detection result of the IT model, (e) is the detection result of the XIE model, and (f) is the detection result of the BFS model.
Detailed description of the invention
The invention is further described below in conjunction with the drawings. The following examples are only intended to illustrate the technical scheme of the invention more clearly and do not limit the scope of the invention.
Fig. 1 is a flow diagram of the color image saliency detection of an embodiment of the invention. As shown in Fig. 1, the method is a color image saliency detection method based on background and foreground information, characterized by comprising the following steps:
Step 1, image preprocessing: over-segment the input color image to obtain a series of superpixel blocks, and take the superpixel block as the minimal processing unit.
The input color image is over-segmented into many superpixel blocks using the SLIC superpixel segmentation method, and the superpixel block is taken as the minimal processing unit of all subsequent operations.
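As a concrete illustration of the over-segmentation step, the sketch below builds a regular-grid label map, which is the seeding stage SLIC itself starts from. It is a simplified stand-in only: real SLIC then iterates a localized k-means in combined color and position space. The function name and parameters are illustrative, not from the patent.

```python
import math

def grid_superpixels(height, width, n_segments):
    """Partition an image grid into roughly square blocks.

    A simplified stand-in for SLIC seeding: each pixel (y, x) is
    assigned the label of the grid cell of side `step` it falls in,
    where `step` is chosen so the cell count approximates n_segments.
    """
    step = max(1, int(math.sqrt(height * width / n_segments)))
    cols = (width + step - 1) // step  # number of cells per row
    labels = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            labels[y][x] = (y // step) * cols + (x // step)
    return labels
```

Every later step of the method only needs the label map (which pixels form which block), so any superpixel method with a similar output could be substituted here.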
Step 2, background-based saliency detection:
When observing an image, the human eye tends to attend to the center of the image (the target is generally assumed to appear near the image center) and to ignore its border region (the image border is assumed to be background). The superpixel blocks at the image border are therefore chosen as background seeds, and the coarse saliency of all superpixel blocks is obtained by feature contrast between each superpixel block in the image and the background seeds, forming the coarse saliency map shown in Fig. 2(c). To exclude interference from foreground information, the feature distribution of the selected background seeds is considered further: the background seeds are clustered by feature, the probability that each seed belongs to the background is determined from the spatial distribution of each cluster, and the background weight of each superpixel block is defined accordingly, as shown in Fig. 2(d). The background weights are used to refine the coarse saliency, obtaining the background-based saliency and forming the background-based saliency map shown in Fig. 2(e).
The detailed process of obtaining the background-based saliency map is:
11) choose the superpixel blocks at the image border as background seeds, and obtain the coarse saliency of each superpixel block by feature contrast between each superpixel block in the image and the background seeds, forming the coarse saliency map; this process follows the prior art;
12) apply K-means clustering to the selected background seeds, as shown in Fig. 3(c), and determine the probability that each cluster belongs to the background according to its spatial distribution; for the k-th cluster, the background weight of its background seeds is defined as:
P_k = 1 - exp(-α(L_s + L_o)),  k = 1, 2, ..., K
where L_s is the length of the shortest superpixel chain containing all superpixels of the k-th cluster, L_o is the number of superpixels on this chain that belong to other clusters, the parameter α is a constant that can be set to 0.01–0.08 (actual tests show that α = 0.05 gives the best detection results), and K is the chosen number of cluster centers. The larger the background weight of a superpixel block, the more likely it belongs to the background; conversely, the smaller its value, the more likely it belongs to the foreground.
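The cluster weight formula above translates directly into code. A minimal sketch, assuming L_s and L_o have already been measured along the border chain; the function name is illustrative:

```python
import math

def cluster_background_weight(chain_length, other_count, alpha=0.05):
    """P_k = 1 - exp(-alpha * (L_s + L_o)) for one K-means cluster of
    border seeds.

    chain_length (L_s): length of the shortest superpixel chain along
    the image border containing every member of the cluster.
    other_count (L_o): how many superpixels on that chain belong to
    other clusters. alpha = 0.05 is the value the patent prefers.
    """
    return 1.0 - math.exp(-alpha * (chain_length + other_count))
```

A cluster spread over a long stretch of the border and interleaved with other clusters (large L_s + L_o) gets a weight near 1 (likely true background), while a compact, isolated cluster, which may be a foreground object touching the border, gets a small weight.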
13) for every other superpixel block in the image, its background weight is computed from its connectivity to the selected background seeds. First, compute the geodesic distances between the superpixel block and all background seeds, and obtain the background seed with the minimum geodesic distance to this superpixel block:
s_j* = argmin_{s_j ∈ BG} d_geo(s_i, s_j),  (s_i ∉ BG)
where BG is the set of background seeds and d_geo(s_i, s_j) is the geodesic distance between two superpixel blocks. The background probability of this seed is known from step 12) and is denoted p_{s_j*}, and the geodesic distance between this superpixel block and this seed is denoted d_geo*; the background weight of this superpixel block is then:
p_{s_i} = p_{s_j*} / d_geo*,  (s_i ∉ BG)
The computed effect is shown in Fig. 3(e);
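Step 13) amounts to a multi-source shortest-path problem on the superpixel adjacency graph, where edge costs are feature distances and the geodesic distance is the minimum accumulated cost. The sketch below propagates seed probabilities with Dijkstra's algorithm; the graph encoding, the choice to let seeds keep their own probability as weight (the patent's formula only covers non-seeds), and all names are assumptions for illustration:

```python
import heapq

def geodesic_background_weights(edges, seed_prob, n):
    """p_i = P_{j*} / d_geo(i, j*) for the nearest seed j* of each block.

    edges: {(i, j): feature_distance} on the superpixel adjacency graph.
    seed_prob: {seed_index: background probability from step 12)}.
    n: total number of superpixel blocks, indexed 0..n-1.
    """
    adj = {i: [] for i in range(n)}
    for (a, b), w in edges.items():
        adj[a].append((b, w))
        adj[b].append((a, w))
    inf = float("inf")
    dist = {i: inf for i in range(n)}
    origin = {}                      # nearest seed reached from
    heap = []
    for s in seed_prob:              # multi-source Dijkstra
        dist[s] = 0.0
        origin[s] = s
        heapq.heappush(heap, (0.0, s))
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                origin[v] = origin[u]
                heapq.heappush(heap, (dist[v], v))
    weights = {}
    for i in range(n):
        if i in seed_prob:
            weights[i] = seed_prob[i]
        elif dist[i] < inf:
            weights[i] = seed_prob[origin[i]] / dist[i]
        else:
            weights[i] = 0.0         # disconnected from every seed
    return weights
```

Running Dijkstra from all seeds at once finds, for every inner block, both the minimizing seed s_j* and the distance d_geo* in a single pass.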
14) define the background-based saliency of a superpixel block as:
S_i^b = S_i^c · (1 - p_{s_i})
where S_i^b is the background-based saliency of superpixel block s_i, and S_i^c is the coarse saliency of s_i computed in step 11); the background-based saliency map, shown in Fig. 2(e), is obtained from the background-based saliency of each superpixel block.
Step 3, foreground-based saliency detection: segment the background-based saliency map obtained in the previous step, choose a compact foreground region from all segmentation results, extract the features of the foreground target, obtain the foreground-based saliency through feature contrast, and form the foreground-based saliency map.
The detailed process of obtaining the foreground-based saliency map is:
21) use the parametric maxflow method to segment the background-based saliency obtained in the previous step, obtaining a series of compact foreground regions:
x^f = min_x Σ_{i=1}^{N} (-ln S_i^b + λA_i) x_i + Σ_{1≤i<j≤N} e_{ij} x_i x_j
where N is the number of superpixels in the image, A_i is the area of superpixel block s_i, x_i ∈ {1, 0} indicates whether superpixel block s_i belongs to the foreground region, e_{ij} is the similarity between neighboring superpixel blocks, and x^f is the obtained foreground segmentation result. Compared with the OTSU method, the foreground regions obtained by this segmentation are consistent with the compact, connected character of the salient target and describe the salient target in the image better, as shown in Fig. 4(c);
22) according to the consistency of each segmentation result with the background-based saliency map and the spatial compactness of the salient target, choose the most suitable segmentation result as the foreground region according to the following formula:
x* = argmin_{x^f} Σ_{i=1}^{N} (x_i^f - S_i^b) + V(x^f)
where x^f ranges over the multiple segmentation results, N is the number of superpixel blocks, S_i^b is the background-based saliency of superpixel block s_i, and V(x^f) is the variance of the spatial coordinates of segmentation result x^f. As shown in Fig. 4(c), this foreground extraction takes into account both the edge information of the foreground target and the compactness of the target object, and the extracted foreground region reflects the foreground target's features well;
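Step 22)'s selection rule can be sketched as follows. One assumption to flag: the data term is taken here as an absolute difference |x_i^f - S_i^b|, since a signed sum would simply reward deselecting high-saliency blocks; the patent's formula is written without the absolute value. Names and the centroid-variance measure of compactness are illustrative.

```python
def select_foreground(candidates, s_bg, positions):
    """Pick the candidate segmentation that best matches the
    background-based saliency map and is spatially most compact.

    candidates: list of 0/1 label vectors x^f, one per segmentation.
    s_bg: background-based saliency S_i^b per superpixel block.
    positions: centroid coordinates of each superpixel block.
    Score = sum_i |x_i - S_i^b| + variance of selected centroids.
    """
    best, best_score = None, float("inf")
    for x in candidates:
        data = sum(abs(xi - s) for xi, s in zip(x, s_bg))
        pts = [p for xi, p in zip(x, positions) if xi]
        var = 0.0
        if pts:
            for d in range(len(pts[0])):          # per coordinate axis
                mean = sum(p[d] for p in pts) / len(pts)
                var += sum((p[d] - mean) ** 2 for p in pts) / len(pts)
        if data + var < best_score:
            best, best_score = x, data + var
    return best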
23) extract the foreground region features and determine the foreground-based saliency of each superpixel block in the image by feature contrast:
S_i^f = Σ_{s_j ∈ FG} 1 / (d_c(s_i, s_j) + d_l(s_i, s_j))
where FG is the set of obtained foreground superpixel blocks, and d_c(s_i, s_j) and d_l(s_i, s_j) are respectively the color distance and spatial distance between superpixel blocks. The foreground-based saliency map, shown in Fig. 4(d), is obtained from the foreground-based saliency of each superpixel block; because background and foreground regions usually differ both in color and in space, the foreground-based saliency map computed this way effectively suppresses background noise.
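The foreground contrast formula above, sketched with plain Euclidean distances standing in for d_c and d_l (the patent does not fix the metric). The self term for blocks inside FG is skipped to avoid a zero denominator, which is likewise an assumption:

```python
import math

def foreground_saliency(features, positions, fg_indices):
    """S_i^f = sum over s_j in FG of 1 / (d_c(i, j) + d_l(i, j)).

    features: color feature vector per superpixel block.
    positions: centroid coordinates per superpixel block.
    fg_indices: indices of the blocks in the extracted foreground FG.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    saliency = []
    for i in range(len(features)):
        total = 0.0
        for j in fg_indices:
            if j == i:
                continue  # skip self term (zero distance)
            total += 1.0 / (dist(features[i], features[j])
                            + dist(positions[i], positions[j]))
        saliency.append(total)
    return saliency
```

Blocks close to the foreground set in both color and position receive large values; distant background blocks receive values near zero, which is exactly the noise-suppression effect the description claims.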
Step 4, integration and optimization of the saliency maps:
Background and foreground information play different roles in saliency detection: background information is used to highlight the foreground target, while foreground information is used to suppress background noise. The background- and foreground-based saliency maps obtained in the previous two steps are therefore integrated. The integration formula is:
S_i^u = S_i^b · (1 - exp(-β · S_i^f))
where S_i^u is the fused saliency of the i-th superpixel block, S_i^b denotes its background-based saliency, and S_i^f denotes its foreground-based saliency; the parameter β is a constant that can range over 2.5–8, and actual tests show that β = 4 gives the best detection results. The integrated saliency map both highlights the foreground target and effectively suppresses background noise.
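The fusion formula is a one-liner per block; β = 4 follows the value the description reports as optimal:

```python
import math

def fuse_saliency(s_bg, s_fg, beta=4.0):
    """S_i^u = S_i^b * (1 - exp(-beta * S_i^f)), elementwise.

    The background term carries the highlighting; the exponential
    foreground factor goes to 0 where S_i^f is small, damping blocks
    the foreground contrast marked as background noise.
    """
    return [b * (1.0 - math.exp(-beta * f)) for b, f in zip(s_bg, s_fg)]
```

A block with zero foreground saliency is fully suppressed no matter how large its background-based saliency is, while a block with strong foreground support keeps essentially its background-based value.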
To obtain a smoother saliency map, the integrated saliency map is further optimized; the optimization function is:
S^r = argmin_S [ Σ_{i,j=1}^{N} w_c(s_i, s_j)(S_i - S_j)^2 + Σ_{i=1}^{N} p_{s_i}(1 - x_i^*)(S_i - S_i^u)^2 + Σ_{i=1}^{N} (1 - p_{s_i}) x_i^* (S_i - S_i^u)^2 ]
where w_c(s_i, s_j) denotes the color similarity of two adjacent superpixel blocks, S_i and S_j denote the saliencies to be optimized of two adjacent superpixel blocks, p_{s_i} and x_i^* are respectively the background weight and the foreground label of superpixel block s_i, S_i^u is the fused saliency of superpixel block s_i, and N is the number of superpixels in the image. This optimization function is a global optimization function; it is solved following the prior art, and solving it yields at once the optimized saliency of all superpixel blocks, forming the final saliency map based on background and foreground information.
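The patent defers the solver for this quadratic objective to prior art. One possible choice is Gauss-Seidel: since the objective is quadratic in S, each S_i has a closed-form update given its neighbors, and sweeping until convergence minimizes it. The relative weighting of the smoothness and data terms below is illustrative, and both data terms are merged into one coefficient, which matches the formula since both pull toward S_i^u:

```python
def optimize_saliency(w, s_u, p_bg, x_fg, iters=200):
    """Minimize  sum_ij w_ij (S_i - S_j)^2
              + sum_i [p_i (1 - x_i) + (1 - p_i) x_i] (S_i - S_i^u)^2
    by Gauss-Seidel sweeps.

    w: symmetric n x n color-similarity matrix (0 for non-neighbors).
    s_u: fused saliency S^u; p_bg: background weights; x_fg: 0/1
    foreground labels x^*.
    """
    n = len(s_u)
    s = list(s_u)  # warm-start at the fused saliency
    for _ in range(iters):
        for i in range(n):
            c = p_bg[i] * (1 - x_fg[i]) + (1 - p_bg[i]) * x_fg[i]
            num, den = c * s_u[i], c
            for j in range(n):
                if w[i][j] > 0:
                    num += w[i][j] * s[j]
                    den += w[i][j]
            if den > 0:
                s[i] = num / den   # closed-form minimizer for S_i
    return s
```

Each sweep moves every block toward a similarity-weighted average of its neighbors while the data term anchors it to its fused value, which is precisely the "smoother inside background and foreground regions" behavior claimed for the optimized map.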
Fig. 5 compares the saliency detection result of an embodiment of the invention with the detection results of the prior art. Fig. 5(c) is the detection result of the present application; Fig. 5(d) is the detection result of the IT model (L. Itti, C. Koch, E. Niebur, A model of saliency-based visual attention for rapid scene analysis, IEEE Trans. Pattern Anal. Mach. Intell. 20 (11) (1998) 1254–1259); Fig. 5(e) is the detection result of the XIE model (Y. Xie, H. Lu, M.-H. Yang, Bayesian saliency via low and mid level cues, IEEE Trans. Image Processing 22 (5) (2013) 1689–1698); Fig. 5(f) is the detection result of the BFS model (J. Wang, H. Lu, X. Li, N. Tong, W. Liu, Saliency detection via background and foreground seed selection, Neurocomputing 152 (2015) 359–368). The IT model is a fixation prediction model and cannot consistently highlight the whole salient target. The XIE model and the BFS model introduce foreground information through the convex hull of corner points and through a region produced by adaptive thresholding, respectively; the regions they produce often contain background parts and cannot accurately reflect the features of the foreground target.
The above is only a preferred embodiment of the invention. It should be noted that those of ordinary skill in the art can make several improvements and modifications without departing from the technical principle of the invention, and these improvements and modifications should also be regarded as falling within the scope of protection of the invention.

Claims (7)

1. A color image saliency detection method based on background and foreground information, characterized by comprising the following steps:
Step 1, image preprocessing: over-segment the input color image to obtain a series of superpixel blocks, and take the superpixel block as the minimal processing unit;
Step 2, background-based saliency detection: select background seeds and obtain a coarse saliency through feature contrast between each superpixel block and the background seeds; define a background weight for each superpixel block based on the feature distribution of the background seeds, and refine the coarse saliency with the background weights to obtain the background-based saliency;
Step 3, foreground-based saliency detection: segment the background-based saliency map formed in the previous step, choose a compact foreground region from all segmentation results, extract the features of the foreground region, and obtain the foreground-based saliency through feature contrast;
Step 4, saliency integration: integrate the background- and foreground-based saliencies obtained in the previous two steps into a fused saliency, and apply a smoothing operation to the fused saliency to obtain the optimized saliency of all superpixel blocks.
2. The color image saliency detection method based on background and foreground information according to claim 1, characterized in that, in said Step 1, the over-segmentation is performed with the SLIC super-pixel segmentation method.
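Claim 2 names SLIC for the over-segmentation step. A library such as scikit-image provides it directly (`skimage.segmentation.slic`); the pure-NumPy sketch below instead uses a plain k-means on joint (color, position) features as a SLIC-like stand-in, so the function name, segment count, and `compactness` value are illustrative assumptions, not values from the patent.

```python
import numpy as np

def simple_superpixels(image, n_segments=16, n_iter=5, compactness=0.5):
    """SLIC-like over-segmentation: k-means on (r, g, b, x, y) features."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.column_stack([
        image.reshape(-1, 3),
        compactness * xs.ravel() / w,   # normalised position, weighted
        compactness * ys.ravel() / h,   # by the compactness factor
    ])
    # Initialise cluster centres on a regular grid of pixels, as SLIC does.
    idx = np.linspace(0, h * w - 1, n_segments).astype(int)
    centres = feats[idx]
    for _ in range(n_iter):
        d = ((feats[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)                       # assign pixels to centres
        for k in range(n_segments):
            if (labels == k).any():
                centres[k] = feats[labels == k].mean(0)
    return labels.reshape(h, w)

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))      # stand-in for the input colour image
labels = simple_superpixels(img)   # each label is one super-pixel block
```

Each connected group of equal labels then serves as one of the minimal processing units of the method.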
3. The color image saliency detection method based on background and foreground information according to claim 1, characterized in that, in said Step 2, the detailed process of obtaining the saliency based on background information is:
11) selecting the super-pixel blocks on the image border as background seeds, and obtaining the coarse saliency of each super-pixel block in the image by feature contrast between that super-pixel block and the background seeds;
12) applying K-means clustering to the selected background seeds, and determining the probability that each cluster belongs to the background according to the spatial distribution of each cluster; the background weight of the background seeds in the k-th cluster is defined as:
P_k = 1 - exp(-α(L_s + L_o)), k = 1, 2, …, K
where L_s is the length of the shortest super-pixel chain containing all super-pixels of the k-th cluster, L_o is the number of super-pixel blocks on that chain belonging to other clusters, the parameter α ranges from 0.01 to 0.08, and K is the chosen number of cluster centers;
13) for every other super-pixel block in the image: first, computing the geodesic distances between the super-pixel block and all background seeds, and finding the background seed with the minimum geodesic distance to this super-pixel block:
s_j* = argmin_{s_j ∈ BG} d_geo(s_i, s_j), (s_i ∉ BG)
where BG is the set of background seeds and d_geo(s_i, s_j) is the geodesic distance between two super-pixel blocks; the background probability of this background seed is known from step 12); denoting the background probability of this seed as p_{s_j*} and the geodesic distance between this super-pixel block and this seed as d*_geo, the background weight of this super-pixel block is:
p_{s_i} = p_{s_j*} / d*_geo, (s_i ∉ BG)
computing the background weight of each super-pixel block in turn in this way;
14) defining the saliency based on background information of a super-pixel block as:
S_i^b = S_i^c · (1 - p_{s_i})
where S_i^b is the background-based saliency of super-pixel block s_i, and S_i^c is the coarse saliency of s_i computed in step 11).
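Steps 13) and 14) of claim 3 can be sketched as a multi-source shortest-path search over the super-pixel adjacency graph: every block gets its geodesic distance to the nearest background seed, the background weight is read as that seed's background probability divided by the distance, and the background-based saliency follows as S_i^b = S_i^c · (1 - p_i). This is a toy example, not the patent's implementation; the graph, edge weights, seed probability, and coarse saliency values are all invented.

```python
import heapq

def geodesic_to_seeds(graph, seeds):
    """Multi-source Dijkstra: geodesic distance from every node to its nearest seed."""
    dist = {v: float("inf") for v in graph}
    nearest = {}
    heap = []
    for s in seeds:
        dist[s] = 0.0
        nearest[s] = s
        heapq.heappush(heap, (0.0, s))
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale queue entry
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                nearest[v] = nearest[u]   # propagate the seed identity
                heapq.heappush(heap, (d + w, v))
    return dist, nearest

# node -> [(neighbour, colour-difference edge weight), ...]  (toy graph)
graph = {0: [(1, 2.0)], 1: [(0, 2.0), (2, 0.9)], 2: [(1, 0.9)]}
seed_prob = {0: 0.8}       # background probability of seed 0, from step 12)
coarse = {1: 0.4, 2: 0.7}  # coarse saliency S_i^c, from step 11)

dist, nearest = geodesic_to_seeds(graph, set(seed_prob))
s_b = {}
for i, s_c in coarse.items():
    p_i = seed_prob[nearest[i]] / dist[i]  # background weight, step 13)
    s_b[i] = s_c * (1.0 - p_i)             # background-based saliency, step 14)
```

Blocks far (in geodesic terms) from all background seeds receive a small background weight and so retain most of their coarse saliency.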
4. The color image saliency detection method based on background and foreground information according to claim 3, characterized in that the parameter α is 0.05.
5. The color image saliency detection method based on background and foreground information according to claim 3, characterized in that, in said Step 3, the detailed process of obtaining the saliency based on foreground information is:
21) segmenting the background-based saliency map obtained in the previous step with the parametric maxflow method to obtain a series of compact foreground regions; the maxflow segmentation result is:
x^f = argmin_x Σ_{i=1}^{N} (-ln S_i^b + λA_i) x_i + Σ_{1≤i<j≤N} e_ij x_i x_j
where N is the number of super-pixels in the image, A_i is the area of super-pixel block s_i, x_i ∈ {1, 0} indicates whether super-pixel block s_i belongs to the foreground region, e_ij is the similarity between neighboring super-pixel blocks, and x^f is the obtained foreground-region segmentation result;
22) among all segmentation results, selecting the one with the optimum value of the following formula as the foreground region:
x* = argmin_{x^f} Σ_{i=1}^{N} (x_i^f - S_i^b) + V(x^f)
where x^f ranges over the multiple segmentation results produced by the segmentation, N is the number of super-pixel blocks, S_i^b is the background-based saliency of super-pixel block s_i, and V(x^f) is the variance of the spatial coordinates of segmentation result x^f;
23) extracting the features of the selected foreground region, and determining the foreground-based saliency of each super-pixel block in the image by feature contrast:
S_i^f = Σ_{s_j ∈ FG} 1 / (d_c(s_i, s_j) + d_l(s_i, s_j))
where FG is the set of obtained foreground super-pixel blocks, and d_c(s_i, s_j) and d_l(s_i, s_j) are respectively the color distance and the spatial distance between the super-pixel blocks.
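The contrast formula of step 23) in claim 5 can be illustrated with a toy example: the foreground-based saliency of a block is the sum, over all foreground blocks, of 1 / (color distance + spatial distance). Only the formula comes from the claim; the color vectors, block centers, and foreground set FG below are invented for the example.

```python
import math

def foreground_saliency(colors, centers, fg):
    """S_i^f = sum over s_j in FG of 1 / (d_c(s_i, s_j) + d_l(s_i, s_j))."""
    sal = {}
    for i in colors:
        total = 0.0
        for j in fg:
            if j == i:
                continue  # skip the zero self-distance
            d_c = math.dist(colors[i], colors[j])    # colour distance
            d_l = math.dist(centers[i], centers[j])  # spatial distance
            total += 1.0 / (d_c + d_l)
        sal[i] = total
    return sal

# Three toy super-pixel blocks: 0 and 1 are reddish and near each other,
# 2 is bluish and far away; FG = {0, 1} plays the selected foreground.
colors = {0: (0.9, 0.1, 0.1), 1: (0.8, 0.2, 0.1), 2: (0.1, 0.1, 0.9)}
centers = {0: (0.3, 0.5), 1: (0.4, 0.5), 2: (0.9, 0.1)}
sal = foreground_saliency(colors, centers, fg={0, 1})
```

Blocks similar and close to the foreground region score high, while dissimilar, distant blocks are suppressed.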
6. The color image saliency detection method based on background and foreground information according to claim 5, characterized in that the detailed process of said Step 4 is:
31) integrating the obtained background-based and foreground-based saliency; the integration formula is:
S_i^u = S_i^b · (1 - exp(-β · S_i^f))
where S_i^u is the integrated saliency of super-pixel block s_i, S_i^b is the background-based saliency of s_i, S_i^f is the foreground-based saliency of s_i, and the parameter β ranges from 2.5 to 8;
32) further jointly optimizing the saliency; the optimization function is:
S^r = argmin_S [ Σ_{i,j=1}^{N} w_c(s_i, s_j)(S_i - S_j)^2 + Σ_{i=1}^{N} p_{s_i}(1 - x_i*)(S_i - S_i^u)^2 + Σ_{i=1}^{N} (1 - p_{s_i}) x_i* (S_i - S_i^u)^2 ]
where w_c(s_i, s_j) is the color similarity of two adjacent super-pixel blocks, S_i and S_j are the saliency values of the two adjacent super-pixel blocks to be optimized, p_{s_i} and x_i* are respectively the background weight and the foreground label of super-pixel block s_i, S_i^u is the integrated saliency of s_i, and N is the number of super-pixels in the image; this optimization function is a global optimization function, and solving it yields the optimized saliency of all super-pixel blocks.
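The fusion of step 31) in claim 6 can be sketched numerically: each block's background-based saliency is damped by 1 - exp(-β·S_i^f), with β = 4 as in claim 7. The per-block saliency values below are toy numbers, not outputs of the method.

```python
import math

def integrate_saliency(s_b, s_f, beta=4.0):
    """Step 31): S_i^u = S_i^b * (1 - exp(-beta * S_i^f))."""
    return [b * (1.0 - math.exp(-beta * f)) for b, f in zip(s_b, s_f)]

s_b = [0.9, 0.6, 0.1]    # background-based saliency per super-pixel (toy)
s_f = [1.2, 0.3, 0.05]   # foreground-based saliency per super-pixel (toy)
s_u = integrate_saliency(s_b, s_f)
```

Because 1 - exp(-β·S_i^f) lies in [0, 1), the fused value never exceeds the background-based saliency: a block must score well on both cues to remain salient, which is the point of the integration.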
7. The color image saliency detection method based on background and foreground information according to claim 6, characterized in that the parameter β is 4.
CN201610654316.1A 2016-08-10 2016-08-10 A kind of color image conspicuousness detection method based on background and foreground information Expired - Fee Related CN106327507B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610654316.1A CN106327507B (en) 2016-08-10 2016-08-10 A kind of color image conspicuousness detection method based on background and foreground information


Publications (2)

Publication Number Publication Date
CN106327507A true CN106327507A (en) 2017-01-11
CN106327507B CN106327507B (en) 2019-02-22

Family

ID=57740141

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610654316.1A Expired - Fee Related CN106327507B (en) 2016-08-10 2016-08-10 A kind of color image conspicuousness detection method based on background and foreground information

Country Status (1)

Country Link
CN (1) CN106327507B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106709474A (en) * 2017-01-23 2017-05-24 无锡职业技术学院 Handwritten telephone number identification, verification and information sending system
CN107016682A (en) * 2017-04-11 2017-08-04 四川大学 A kind of notable object self-adapting division method of natural image
CN107194870A (en) * 2017-05-24 2017-09-22 北京大学深圳研究生院 A kind of image scene reconstructing method based on conspicuousness object detection
CN107452013A (en) * 2017-05-27 2017-12-08 深圳市美好幸福生活安全系统有限公司 Conspicuousness detection method based on Harris Corner Detections and Sugeno fuzzy integrals
CN108198172A (en) * 2017-12-28 2018-06-22 北京大学深圳研究生院 Image significance detection method and device
CN108965739A (en) * 2018-06-22 2018-12-07 北京华捷艾米科技有限公司 video keying method and machine readable storage medium
CN109166106A (en) * 2018-08-02 2019-01-08 山东大学 A kind of target detection aligning method and apparatus based on sliding window
CN109472259A (en) * 2018-10-30 2019-03-15 河北工业大学 Conspicuousness detection method is cooperateed with based on energy-optimised image
CN110310263A (en) * 2019-06-24 2019-10-08 北京师范大学 A kind of SAR image residential block detection method based on significance analysis and background priori
CN110866896A (en) * 2019-10-29 2020-03-06 中国地质大学(武汉) Image saliency target detection method based on k-means and level set super-pixel segmentation
CN112183556A (en) * 2020-09-27 2021-01-05 长光卫星技术有限公司 Port ore heap contour extraction method based on spatial clustering and watershed transformation
CN112861858A (en) * 2021-02-19 2021-05-28 首都师范大学 Significance truth diagram generation method and significance detection model training method
CN113378873A (en) * 2021-01-13 2021-09-10 杭州小创科技有限公司 Algorithm for determining attribution or classification of target object
CN117745563A (en) * 2024-02-21 2024-03-22 深圳市格瑞邦科技有限公司 Dual-camera combined tablet personal computer enhanced display method
CN112861858B (en) * 2021-02-19 2024-06-07 北京龙翼风科技有限公司 Method for generating saliency truth value diagram and method for training saliency detection model

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150235374A1 (en) * 2014-02-20 2015-08-20 Nokia Corporation Method, apparatus and computer program product for image segmentation
CN105513070A (en) * 2015-12-07 2016-04-20 天津大学 RGB-D salient object detection method based on foreground and background optimization


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JIANPENG WANG, ET AL.: "Saliency detection via background and foreground seed selection", 《NEUROCOMPUTING》 *
LAURENT ITTI, ET AL.: "A Model of Saliency-Based Visual Attention for Rapid Scene Analysis", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 *
HAN Shoudong, et al.: "Fast Graph Cuts image segmentation method based on Gaussian superpixels", Acta Automatica Sinica *


Also Published As

Publication number Publication date
CN106327507B (en) 2019-02-22

Similar Documents

Publication Publication Date Title
CN106327507A (en) Color image significance detection method based on background and foreground information
WO2022160771A1 (en) Method for classifying hyperspectral images on basis of adaptive multi-scale feature extraction model
EP3696726A1 (en) Ship detection method and system based on multidimensional scene characteristics
CN110472676A (en) Stomach morning cancerous tissue image classification system based on deep neural network
CN110120040A (en) Sectioning image processing method, device, computer equipment and storage medium
CN109697460A (en) Object detection model training method, target object detection method
CN107123123A (en) Image segmentation quality evaluating method based on convolutional neural networks
CN108154159B (en) A kind of method for tracking target with automatic recovery ability based on Multistage Detector
Dong et al. A multiscale self-attention deep clustering for change detection in SAR images
CN107730515A (en) Panoramic picture conspicuousness detection method with eye movement model is increased based on region
CN109447998A (en) Based on the automatic division method under PCANet deep learning model
CN108664838A (en) Based on the monitoring scene pedestrian detection method end to end for improving RPN depth networks
CN109363698A (en) A kind of method and device of breast image sign identification
Li et al. Breaking the resolution barrier: A low-to-high network for large-scale high-resolution land-cover mapping using low-resolution labels
CN108665483B (en) Cancer cell tracking method based on multi-feature fusion
CN110852330A (en) Behavior identification method based on single stage
CN110533100A (en) A method of CME detection and tracking is carried out based on machine learning
Shu et al. Center-point-guided proposal generation for detection of small and dense buildings in aerial imagery
Wang et al. Segmentation of corn leaf disease based on fully convolution neural network
CN116403121A (en) Remote sensing image water area segmentation method, system and equipment for multi-path fusion of water index and polarization information
CN106529441A (en) Fuzzy boundary fragmentation-based depth motion map human body action recognition method
Li et al. Region focus network for joint optic disc and cup segmentation
CN110033448B (en) AI-assisted male baldness Hamilton grading prediction analysis method for AGA clinical image
Li et al. Domain adaptive box-supervised instance segmentation network for mitosis detection
Sohail et al. Deep object detection based mitosis analysis in breast cancer histopathological images

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190222

Termination date: 20210810