CN110111353A - Image saliency detection method based on Markov background and foreground absorbing chains - Google Patents
Image saliency detection method based on Markov background and foreground absorbing chains Download PDF Info
- Publication number
- Publication: CN110111353A; Application: CN201910353130.6A
- Authority
- CN
- China
- Prior art keywords
- superpixel
- background
- node
- target image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 32
- 238000010521 absorption reaction Methods 0.000 claims abstract description 39
- 238000000034 method Methods 0.000 claims abstract description 11
- 230000007246 mechanism Effects 0.000 claims abstract description 4
- 239000011159 matrix material Substances 0.000 claims description 26
- 230000002745 absorbent Effects 0.000 claims description 14
- 238000012546 transfer Methods 0.000 claims description 7
- 238000012545 processing Methods 0.000 claims description 3
- 230000011218 segmentation Effects 0.000 claims 1
- 238000005457 optimization Methods 0.000 abstract description 3
- 238000004422 calculation algorithm Methods 0.000 description 4
- 238000005295 random walk Methods 0.000 description 4
- 238000010586 diagram Methods 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 230000000644 propagated effect Effects 0.000 description 3
- 230000009286 beneficial effect Effects 0.000 description 2
- 239000012141 concentrate Substances 0.000 description 2
- 238000013461 design Methods 0.000 description 2
- 230000008569 process Effects 0.000 description 2
- 230000005540 biological transmission Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000001413 cellular effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 230000006854 communication Effects 0.000 description 1
- 238000009792 diffusion process Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000004927 fusion Effects 0.000 description 1
- 238000009499 grossing Methods 0.000 description 1
- 238000010606 normalization Methods 0.000 description 1
- 238000012216 screening Methods 0.000 description 1
- 230000007480 spreading Effects 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20164—Salient point detection; Corner detection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to an image saliency detection method based on Markov background and foreground absorbing chains. First, according to boundary connectivity, the superpixels with lower background probability values are deleted from the candidate boundary background set B0, obtaining the boundary background set B1. Then, combining the saliency value of each node in the boundary-prior saliency map S_bg1, background seeds are added in the region outside the convex hull H and the boundary background set B1, and the background seed set B is obtained by updating; a background absorbing Markov chain is constructed to generate the background absorption saliency map S_bg2, and S_bg1 and S_bg2 are fused into the first-stage background-based saliency map S_bg. Next, within the convex hull H, the foreground seed set F is selected according to S_bg, and a foreground absorbing Markov chain is constructed, yielding the second-stage foreground absorption saliency map S_fg. Finally, the two-stage saliency maps S_bg and S_fg are fused into a combined saliency map S, which is optimized by a smoothing mechanism to obtain the final saliency map S*. Compared with traditional methods, the performance of the present invention is significantly improved, and salient targets can be detected more accurately.
Description
Technical field
The present invention relates to an image saliency detection method based on Markov background and foreground absorbing chains, and belongs to the field of image detection technology.
Background technique
Image saliency detection computes the saliency of each part of an image (the degree to which it attracts human visual attention), so that the most salient (most attention-attracting) region can be searched out.
Graph-based saliency propagation has become one of the common strategies in the field of saliency detection. Based on graph theory, researchers establish a graph model for an image: the image is divided into multiple regions, each region corresponds to a node in the graph, and the edges between nodes are defined accordingly. According to some prior knowledge of the image, part of the nodes can be labeled as seed nodes; a propagation model is then designed to spread the saliency of the seed nodes, and through propagation and diffusion each node in the graph is assigned a corresponding saliency value. Common priors include the background prior, foreground prior, center prior, shape prior, and color prior; common propagation models include the Markov model, the manifold ranking model, the cellular automaton model, and the random walk model.
In " Saliency Detection via Absorbing Markov Chain " paper that Jiang etc. is delivered, mention
The image significance detection method (abbreviation MC method) based on absorbing Marcov chain is gone out, this method is using boundary node as background
Subset, is copied as virtual absorption node, and all nodes are as transfering node in image, from any one transfer section
Point sets out, and carries out random walk, is absorbed time, the conspicuousness of Lai Hengliang transfering node according to arrival absorption node.
The disadvantages of the MC method are: first, the selection of boundary background seeds is not accurate enough, since a salient target may appear on one or two boundaries of some images; second, the boundary background seeds cover only part of the background seed sample space, which affects the propagation efficiency to a certain extent; third, for some special images, part of the background may not be well suppressed during propagation, the saliency values of individual singular points can be very large, and the salient region is not effectively highlighted; fourth, only absorption propagation based on background seeds is considered, so the propagation mode is uniform; fifth, the foreground and background in the saliency map are not even enough, and further optimization is still needed.
Summary of the invention
The technical problem to be solved by the present invention is to provide an image saliency detection method based on Markov background and foreground absorbing chains, which integrates multiple techniques, approaches the problem from the two dimensions of background and foreground, and can significantly improve the efficiency of image saliency detection.
In order to solve the above technical problem, the present invention adopts the following technical scheme: the present invention designs an image saliency detection method based on Markov background and foreground absorbing chains, for realizing the detection of salient targets in a target image, comprising the following steps:
Step A. Obtain the salient feature points in the target image and construct a convex hull H, then enter step B;
Step B. Perform superpixel segmentation on the target image to obtain the superpixels, then obtain the similarity between each pair of superpixels, and enter step C;
Step C. Define the region along the edge of the target image, one circuit wide with width d, as the border region. Following the rule that a superpixel belongs to the candidate boundary background set B0 if it contains pixels located in the border region, construct the candidate boundary background set B0, then enter step D;
Step D. According to the pairwise similarity between superpixels in the target image, for each superpixel in the candidate boundary background set B0, obtain the remaining superpixels whose similarity with that superpixel is not less than a preset similarity threshold, and together with that superpixel form its corresponding similar region; thus the similar region corresponding to each superpixel in the candidate boundary background set B0 is obtained, then enter step E;
Step E. For the similar region corresponding to each superpixel in the candidate boundary background set B0, obtain the boundary connectivity of the similar region, and from it obtain the probability that the superpixel belongs to the background. Then delete from the candidate boundary background set B0 the superpixels whose probability is less than a preset background probability threshold, updating it to the boundary background set B1, then enter step F;
Step F. Obtain the saliency value of each superpixel in the target image relative to the boundary background set B1, and take the average as the mean relative-background saliency. Then select the superpixels in the target image that lie in the region between the border region and the convex hull H and whose saliency is less than both a preset background saliency threshold and the mean relative-background saliency, add them to the boundary background set B1, update it to the background seed set B, and enter step G;
Step G. Construct the undirected graph G corresponding to all superpixels in the target image, then construct the adjacency matrix W corresponding to G according to the weight of each edge in G, and enter step H;
Step H. Construct the background absorbing Markov chain graph according to the undirected graph G and the background seed set B; then, combined with the adjacency matrix W corresponding to G, obtain the incidence matrix A of the background absorbing Markov chain graph, and from A obtain the corresponding transition probability matrix P. Finally, according to the absorbed time, obtain the saliency value of each superpixel in the target image relative to the background seed set B, and enter step I;
Step I. Fuse the saliency value of each superpixel in the target image relative to the boundary background set B1 with its saliency value relative to the background seed set B, obtaining the relative-background saliency value of each superpixel. Then, according to the relative-background saliency value of each superpixel, combined with a preset relative-background saliency threshold, divide each superpixel in the target image into foreground and background, realizing the division of foreground and background in the target image.
As a preferred technical solution of the present invention, in step I, after the relative-background saliency value of each superpixel in the target image is obtained, proceed to the following steps:
Step J. Select a preset multiple of the maximum relative-background saliency value among the superpixels in the target image as the foreground saliency threshold, then select the superpixels within the convex hull H whose relative-background saliency value is not less than the foreground saliency threshold to form the foreground seed set F, then enter step K;
Step K. Following the method of step H, based on an absorbing Markov chain, obtain the saliency value of each superpixel in the target image relative to the foreground seed set F, i.e., the relative-foreground saliency value of each superpixel, and enter step L;
Step L. According to a preset weight, fuse the relative-background saliency value and the relative-foreground saliency value of each superpixel in the target image to obtain the saliency value of each superpixel. Then, according to the saliency value of each superpixel, combined with a preset saliency threshold, divide each superpixel in the target image into foreground and background, realizing the division of foreground and background in the target image.
As a preferred technical solution of the present invention, in step L, after the saliency value of each superpixel in the target image is obtained, the saliency values of the superpixels are smoothed and updated by a smoothing mechanism; then, according to the saliency value of each superpixel, combined with the preset saliency threshold, each superpixel in the target image is divided into foreground and background, realizing the division of foreground and background in the target image.
As a preferred technical solution of the present invention, in step B, the similarity is computed as follows:

sim_ij = exp(−d_color(V_i, V_j) / σ²)

obtaining the similarity sim_ij between each pair of superpixels, where V_i, V_j are any two superpixels in the target image, d_color(V_i, V_j) denotes the Euclidean distance between superpixels V_i and V_j in the CIELAB color space, and σ² denotes a preset balance parameter, σ² = 0.1.
As a preferred technical solution of the present invention, in step E, for the similar region corresponding to each superpixel in the candidate boundary background set B0, the computation follows:

BC(R_i^0) = [Σ_j sim(v_i^0, r_ij^0) · δ(r_ij^0)] / sqrt(Σ_j sim(v_i^0, r_ij^0))

obtaining the boundary connectivity BC(R_i^0) of the similar region corresponding to the superpixel, where BC(R_i^0) denotes the boundary connectivity of the similar region corresponding to the i-th superpixel in B0; δ(·) denotes the indicator function, with δ(r_ij^0) = 1 when the superpixel r_ij^0 belongs to B0, and δ(r_ij^0) = 0 otherwise; R_i^0 denotes the similar region corresponding to the i-th superpixel in B0; r_ij^0 denotes the j-th superpixel in that similar region; and sim(v_i^0, r_ij^0) denotes the similarity between the i-th superpixel in B0 and the j-th superpixel in its similar region.
As a preferred technical solution of the present invention, in step E, according to the boundary connectivity of the similar region, the following formula is used:

P_i^bg = 1 − exp(−BC(R_i^0)² / (2σ_b²))

obtaining the probability P_i^bg that the superpixel belongs to the background, where P_i^bg denotes the probability that the i-th superpixel in the candidate boundary background set B0 belongs to the background, BC(R_i^0) denotes the boundary connectivity of the similar region corresponding to the i-th superpixel in B0, and σ_b is a preset parameter.
As a preferred technical solution of the present invention, in step F, the computation follows:

S_bg1(i) = s_color(i) · w_dis(i)

obtaining the saliency value S_bg1(i) of each superpixel in the target image relative to the boundary background set B1, where S_bg1(i) denotes the saliency value of the i-th superpixel relative to B1; s_color(i) = (1/n_b) Σ_j d_color(v_i, b_j) denotes the color difference between the i-th superpixel and B1, in which b_j denotes the j-th superpixel in B1, n_b denotes the total number of superpixels in B1, and d_color(v_i, b_j) denotes the Euclidean distance between the i-th superpixel and b_j in the CIELAB color space; w_dis(i) = (1/n_b) Σ_j exp(−d_dis(v_i, b_j)² / (2σ_s²)) denotes the spatial weight of the i-th superpixel with respect to B1, in which d_dis(v_i, b_j) denotes the Euclidean distance between the center coordinates of the i-th superpixel and those of b_j, and σ_s is a preset parameter.
As a preferred technical solution of the present invention, in step G, all superpixels in the target image serve as nodes, and the edges between nodes are defined according to the following rules to constitute the undirected graph G: each node is connected with its spatial neighbor nodes and with the neighbor nodes of each of its spatial neighbors, and all nodes in the boundary background set B1 are pairwise interconnected.
Then the computation follows:

w_i'j' = exp(−d_color(V_i', V_j') / σ²)

obtaining the weight w_i'j' of each edge in G and constructing the adjacency matrix W corresponding to G, where V_i', V_j' denote two superpixels joined by an edge in G, w_i'j' denotes the weight of the edge between the i'-th and j'-th superpixels, d_color(V_i', V_j') denotes the Euclidean distance in the CIELAB color space between the two superpixels joined by an edge, and σ² denotes the preset balance parameter.
As a preferred technical solution of the present invention, in step H, all superpixels in the target image serve as transient nodes, and the superpixels in the background seed set B are duplicated as virtual absorbing nodes. Based on the undirected graph G, with the transient nodes and absorbing nodes, the edges between nodes are defined according to the following rules, constituting the background absorbing Markov chain graph:
1) transient node self-relation: each transient node has a self-loop, i.e., a transient node may be connected to itself, with edge weight 1;
2) relations between transient nodes: the relations between transient nodes are consistent with those between the corresponding nodes in the undirected graph G;
3) relations between transient nodes and virtual absorbing nodes: if a transient node has a virtual absorbing node duplicated from it, then the transient node points to its corresponding virtual absorbing node with a unidirectional connection; meanwhile, the other transient nodes connected with that transient node also point to its virtual absorbing node, each with a unidirectional connection;
4) relations between virtual absorbing nodes: the virtual absorbing nodes are not connected with one another;
5) virtual absorbing node self-relation: each absorbing node has a self-loop, with edge weight 1.
The nodes of the background absorbing Markov chain graph are rearranged so that the transient nodes come first and the absorbing nodes come after. In this order, the incidence matrix A of the background absorbing Markov chain graph is obtained as:

a_ij = 1, if i = j;
a_ij = w_ij, if v_j ∈ N(v_i);
a_ij = 1, if v_j is the replica of v_i;
a_ij = w_ij', if v_j is the replica of v_j' and v_j' ∈ N(v_i);
a_ij = 0, otherwise;

where n denotes the total number of nodes in the background absorbing Markov chain graph, k denotes the total number of superpixels in the target image, v_i denotes the i-th node, v_j denotes the j-th node, N(v_i) denotes the index set of the nodes connected with the i-th node, and v_j' denotes the node whose replica is v_j.
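The absorbed-time quantity that step H builds toward can be sketched numerically. This is a minimal illustration of the standard absorbing-chain computation y = (I − Q)⁻¹·1 on a duplicated-seed graph, not the patent's exact matrix construction; the function name, the weight-1 edge from a seed to its own replica, and the uniform example weights are assumptions:

```python
import numpy as np

def absorbed_time(W, seed_idx):
    """Duplicate the seed superpixels as virtual absorbing nodes (transient
    nodes first, absorbing nodes after), row-normalize to get the transition
    matrix P, and solve y = (I - Q)^(-1) * 1 where Q is the transient block.
    Larger absorbed time means farther (in walk terms) from the seeds."""
    k = W.shape[0]
    A = W.copy()
    np.fill_diagonal(A, 1.0)  # rule 1: self-loops on transient nodes, weight 1
    # rule 3: a seed and its graph neighbors point to the seed's absorbing replica
    absorb = np.zeros((k, len(seed_idx)))
    for a, s in enumerate(seed_idx):
        linked = W[s] > 0
        linked[s] = True  # the seed itself, replica edge weight 1 (assumed)
        absorb[linked, a] = np.where(W[:, s] > 0, W[:, s], 1.0)[linked]
    full = np.hstack([A, absorb])
    P = full / full.sum(axis=1, keepdims=True)  # row-stochastic
    Q = P[:, :k]                                # transient-to-transient block
    return np.linalg.solve(np.eye(k) - Q, np.ones(k))
```

On a three-node chain with the seed at one end, the absorbed time grows monotonically with graph distance from the seed, which is what makes it usable as a (background-relative) saliency score.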
As a preferred technical solution of the present invention, in step I, the computation follows:

S_bg(i) = θ·S_bg1(i) + (1 − θ)·S_bg2(i)

The saliency value of each superpixel in the target image relative to the boundary background set B1 is fused with its saliency value relative to the background seed set B, obtaining the relative-background saliency value S_bg(i) of each superpixel, where S_bg1(i) denotes the saliency value of the i-th superpixel relative to B1, S_bg2(i) denotes the saliency value of the i-th superpixel relative to B, and θ is a preset balance factor.
In step L, the computation follows:

S(i) = α·S_bg(i) + (1 − α)·S_fg(i)

The relative-background saliency value S_bg(i) of each superpixel in the target image is fused with its relative-foreground saliency value S_fg(i), obtaining the saliency value S(i) of each superpixel in the target image, where α is a preset weight.
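The two fusion formulas above can be combined into one short routine. The min-max normalization of each input map and the linear form of the θ-fusion are illustrative assumptions, and `fuse_saliency` is a hypothetical name:

```python
import numpy as np

def fuse_saliency(s_bg1, s_bg2, s_fg, theta=0.5, alpha=0.5):
    """Two-stage fusion: S_bg = theta*S_bg1 + (1-theta)*S_bg2, then
    S = alpha*S_bg + (1-alpha)*S_fg, with each map min-max normalized
    to [0, 1] first so the weights are comparable."""
    def norm(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    s_bg = theta * norm(s_bg1) + (1 - theta) * norm(s_bg2)
    return alpha * s_bg + (1 - alpha) * norm(s_fg)
```

With θ = α = 0.5 the result is simply the average of the three normalized maps' contributions, and it stays within [0, 1], which keeps the later thresholding steps well defined.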
Compared with the prior art, the image saliency detection method based on Markov background and foreground absorbing chains of the present invention, adopting the above technical scheme, has the following beneficial effects. First, similar regions are found by superpixel similarity rather than by edge weights, which optimizes the boundary connectivity algorithm; the probability that each boundary superpixel belongs to the background is computed, and the superpixels with lower background probability values are rejected from the candidate boundary background set, yielding an accurate boundary background set. Second, based on spatially weighted color contrast, some background nodes in the region outside the convex hull and the boundary background set are appropriately added as background seeds and merged with the boundary background set into the background seed set, improving the efficiency of the algorithm. Third, to highlight the salient target, the boundary-prior saliency map and the background absorption saliency map are fused, obtaining the first-stage background-based saliency map and laying the foundation for the selection of the second-stage foreground seeds. Fourth, an accurate foreground seed set is screened within the range of the convex hull based on the first-stage saliency map. Fifth, the complementarity of background-based and foreground-based detection methods is fully exploited: propagation is carried out from each side, and the propagation results are reasonably fused and smoothed for optimization.
Brief description of the drawings
Fig. 1 is the flow chart of the image saliency detection method based on Markov background and foreground absorbing chains designed by the present invention;
Fig. 2 is a comparison of the PR (precision-recall) curves of the present invention and other methods on the ECSSD dataset;
Fig. 3 is a comparison of the precision, recall, and F-measure values of the present invention and other methods on the ECSSD dataset;
Fig. 4 is a partial visual-effect comparison of the present invention and other methods on the ECSSD dataset.
Specific embodiment
Specific embodiments of the present invention will be described in further detail with reference to the accompanying drawings of the specification.
The present invention designs an image saliency detection method based on Markov background and foreground absorbing chains, for realizing the detection of salient targets in a target image. In practical application, as shown in Fig. 1, the method specifically comprises the following steps.
Step A. Using the color-boosted Harris operator, i.e., performing Harris corner detection on the original image with color features, obtain the salient feature points in the target image and construct the convex hull H from the detected feature points, then enter step B. The convex hull H roughly determines the range of the salient region in the target image: the area outside H is background and the salient target lies inside H, but H still contains many background areas.
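The hull construction in step A can be sketched independently of the corner detector. A minimal monotone-chain convex hull, assuming the color-boosted Harris keypoints are already available as (x, y) pairs (the function name is illustrative):

```python
def convex_hull(points):
    """Andrew's monotone chain. points: iterable of (x, y) tuples.
    Returns the hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # endpoints shared, drop duplicates
```

Interior keypoints drop out of the hull automatically, which is exactly the property the method relies on: H is the smallest convex region enclosing all salient feature points.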
Step B. Using the SLIC (Simple Linear Iterative Clustering) algorithm, perform superpixel segmentation on the target image to obtain the superpixels, then compute as follows:

sim_ij = exp(−d_color(V_i, V_j) / σ²)

obtaining the similarity sim_ij between each pair of superpixels, and enter step C. Here V_i, V_j are any two superpixels in the target image, and d_color(V_i, V_j) denotes their Euclidean distance in the CIELAB color space, i.e., the distance between c_i and c_j, the CIELAB mean color vectors of superpixels V_i and V_j. σ² denotes a preset balance parameter; in practical application, σ² = 0.1. The closer sim_ij is to 1, the more similar the two superpixels are.
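A minimal sketch of this pairwise similarity, assuming the CIELAB mean color vectors have been rescaled so that σ² = 0.1 is a sensible scale; the function name is illustrative:

```python
import numpy as np

def pairwise_similarity(mean_lab, sigma2=0.1):
    """mean_lab: (k, 3) array of mean CIELAB colors, one row per superpixel.
    Returns the (k, k) matrix sim[i, j] = exp(-||c_i - c_j|| / sigma2);
    values closer to 1 mean more similar superpixels."""
    d = np.linalg.norm(mean_lab[:, None, :] - mean_lab[None, :, :], axis=2)
    return np.exp(-d / sigma2)
```

The matrix is symmetric with a unit diagonal, and it is reused twice later: to grow similar regions in step D and (in the same functional form) as edge weights of the graph in step G.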
In many cases, all four boundaries of an image are background, but in some images a part of the salient target appears on one or two boundaries. If this part of the content were selected as background nodes, it would reduce the accuracy of saliency detection based on the background prior and absorption.
Step C. Define the region along the edge of the target image, one circuit wide with width d, as the border region. Following the rule that a superpixel belongs to the candidate boundary background set B0 if it contains pixels located in the border region, construct the candidate boundary background set B0, then enter step D.
The abscissa range of the superpixels contained in the candidate boundary background set B0 is {x | 0 ≤ x ≤ d ∪ w − d ≤ x ≤ w}, and the ordinate range is {y | 0 ≤ y ≤ d ∪ h − d ≤ y ≤ h}, where w × h is the resolution of the target image.
Step D. According to the pairwise similarity between superpixels in the target image, for each superpixel in the candidate boundary background set B0, obtain the remaining superpixels whose similarity with that superpixel is not less than a preset similarity threshold, and together with that superpixel form its corresponding similar region; thus the similar region corresponding to each superpixel in B0 is obtained, then enter step E. In practical application, the value range of the preset similarity threshold is [0.7, 0.9].
Step E. For the similar region corresponding to each superpixel in the candidate boundary background set B0, compute as follows:

BC(R_i^0) = [Σ_j sim(v_i^0, r_ij^0) · δ(r_ij^0)] / sqrt(Σ_j sim(v_i^0, r_ij^0))

obtaining the boundary connectivity BC(R_i^0) of the similar region corresponding to the superpixel, where BC(R_i^0) denotes the boundary connectivity of the similar region corresponding to the i-th superpixel in B0; δ(·) denotes the indicator function, with δ(r_ij^0) = 1 when the superpixel r_ij^0 belongs to B0, and δ(r_ij^0) = 0 otherwise; R_i^0 denotes the similar region corresponding to the i-th superpixel in B0; r_ij^0 denotes the j-th superpixel in that similar region; and sim(v_i^0, r_ij^0) denotes the similarity between the i-th superpixel in B0 and the j-th superpixel in its similar region.
Then, according to the boundary connectivity BC(R_i^0) of the similar region, compute as follows:

P_i^bg = 1 − exp(−BC(R_i^0)² / (2σ_b²))

obtaining the probability P_i^bg that the superpixel belongs to the background; the larger the boundary connectivity of the similar region, the larger the probability of belonging to the background. Here P_i^bg denotes the probability that the i-th superpixel in the candidate boundary background set B0 belongs to the background, and σ_b is a preset parameter.
Then delete from the candidate boundary background set B0 the superpixels whose probability is less than a preset background probability threshold, updating it to the boundary background set B1, then enter step F. In practical application, the value range of the preset background probability threshold is [0.2, 0.4].
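Steps D and E can be sketched together: grow each boundary superpixel's similar region by thresholding its similarity row, score the region's boundary connectivity, and map the score to a background probability. The connectivity score (similarity-mass ratio) and the Gaussian mapping with σ_b used here are assumptions in the spirit of boundary-connectivity priors, not the patent's verbatim equations:

```python
import numpy as np

def background_probability(sim, boundary_idx, sim_thresh=0.8, sigma_b=1.0):
    """For each boundary superpixel i, the similar region is
    {j : sim[i, j] >= sim_thresh}.  Its boundary connectivity is the
    similarity mass of the region lying on the border, divided by the square
    root of the region's total similarity mass; the probability of belonging
    to the background is 1 - exp(-BC^2 / (2 sigma_b^2))."""
    on_boundary = np.zeros(sim.shape[0], dtype=bool)
    on_boundary[boundary_idx] = True
    probs = {}
    for i in boundary_idx:
        region = np.where(sim[i] >= sim_thresh)[0]   # step D: similar region
        mass = sim[i, region]
        bc = mass[on_boundary[region]].sum() / np.sqrt(mass.sum())
        probs[i] = 1.0 - np.exp(-bc ** 2 / (2 * sigma_b ** 2))
    return probs
```

A boundary superpixel whose similar region stays on the border gets a high background probability; one whose similar region extends into the interior (a salient object touching the border) gets a lower probability and is pruned from B0.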
If a superpixel differs greatly from the boundary background set B1, it is more likely to belong to the salient target; if the difference is very small, it is more likely to belong to the background. In addition, the contribution of B1 to a superpixel's saliency is also related to spatial distance: contrast obtained from nearer superpixels is usually weighted higher.
Step F. According to the following formula:
Sbg1(i) = scolor(i) · wdis(i)
obtain the saliency value Sbg1(i) of each super-pixel in the target image relative to the boundary background collection B1, where Sbg1(i) denotes the saliency value of the i-th super-pixel in the target image relative to the boundary background collection B1; scolor(i) denotes the color difference between the i-th super-pixel and the boundary background collection B1, computed from the Euclidean distance in CIELAB color space between the i-th super-pixel and each j-th super-pixel of B1, with nb the total number of super-pixels in B1; and wdis(i) denotes the spatial difference between the i-th super-pixel and B1, computed from the Euclidean distance between the center coordinates of the i-th super-pixel and those of each super-pixel in B1.
Then the average of the saliency values Sbg1(i) of all super-pixels in the target image relative to B1 is taken as the relative-background saliency mean. Next, among the super-pixels in the target image located in the region between the border region and the convex hull H, those whose saliency is simultaneously smaller than the preset background saliency threshold and the relative-background saliency mean are added to the boundary background collection B1, which is updated to the background seed subset B; then proceed to step G. In practical application, the preset background saliency threshold is 0.05.
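The step F computation can be sketched in a few lines of NumPy. The exact definitions of scolor and wdis are only partly legible in this text, so the forms below — mean CIELAB distance to B1 and mean spatial distance to B1 — are assumptions consistent with the description above; the function name and arguments are likewise hypothetical.

```python
import numpy as np

def boundary_contrast_saliency(lab, pos, bg_idx):
    """Hedged sketch of S_bg1(i) = s_color(i) * w_dis(i).

    lab:    (k, 3) mean CIELAB color of each super-pixel
    pos:    (k, 2) normalized center coordinates of each super-pixel
    bg_idx: indices of the boundary background collection B1
    Assumed forms: s_color is the mean CIELAB distance to B1 and
    w_dis the mean spatial distance to B1 (farther from the
    boundary background -> more salient).
    """
    d_color = np.linalg.norm(lab[:, None, :] - lab[None, bg_idx, :], axis=2)
    s_color = d_color.mean(axis=1)              # mean color contrast to B1
    d_spa = np.linalg.norm(pos[:, None, :] - pos[None, bg_idx, :], axis=2)
    w_dis = d_spa.mean(axis=1)                  # mean spatial distance to B1
    s = s_color * w_dis
    return s / (s.max() + 1e-12)                # normalize to [0, 1]
```

The relative-background saliency mean of step F is then simply `s.mean()` over the returned vector.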
Step G. Taking all super-pixels in the target image as nodes, the edges between nodes are defined in accordance with the following rules:
each node is connected to each of its spatial neighbor nodes; each node is also connected to each neighbor of its spatial neighbor nodes; and all nodes in the boundary background collection B1 are pairwise interconnected.
This constructs the undirected graph G = <V, E> corresponding to all super-pixels in the target image, where V is the node set of G and E is the edge set of G.
Then, according to the weight formula, the weight wi'j' of each edge in the undirected graph G is obtained and the adjacency matrix W corresponding to G is constructed; then proceed to step H.
Here Vi' and Vj' denote any two super-pixels joined by an edge in the undirected graph G, wi'j' denotes the weight of the edge between the i'-th and j'-th super-pixels, dcolor(Vi', Vj') denotes the Euclidean distance in CIELAB color space between the two super-pixels joined by the edge, and σ2 denotes a preset balance parameter.
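A sketch of the adjacency-matrix construction follows. The weight formula itself is not reproduced in this text, so the Gaussian form exp(-dcolor² / σ²) is an assumption (consistent with the σ² balance parameter named above); `edges` is assumed to already encode the neighbor, neighbor-of-neighbor, and boundary-clique rules.

```python
import numpy as np

def build_adjacency(lab, edges, sigma2=0.1):
    """Adjacency matrix W of the undirected graph G (sketch).

    lab:   (k, 3) mean CIELAB color of each super-pixel
    edges: iterable of (i, j) index pairs produced by the connection
           rules (spatial neighbors, neighbors-of-neighbors, and the
           pairwise-connected boundary background collection B1).
    """
    k = lab.shape[0]
    W = np.zeros((k, k))
    for i, j in edges:
        d = np.linalg.norm(lab[i] - lab[j])         # d_color in CIELAB space
        W[i, j] = W[j, i] = np.exp(-d**2 / sigma2)  # assumed Gaussian weight
    return W
```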
Step H. According to the undirected graph G and the background subset B, all super-pixels in the target image are taken as transient nodes, and the super-pixels in the background subset B are duplicated as virtual absorbing nodes. Based on the undirected graph G, with the transient nodes and absorbing nodes, the edges between nodes are defined according to the following rules, constituting the background absorbing Markov chain graph:
1) a transient node and itself: each transient node has a self-loop, i.e. the transient node is connected to itself, with edge weight 1;
2) between transient nodes: the relationships between transient nodes are consistent with those between the corresponding nodes in the undirected graph G;
3) between transient nodes and virtual absorbing nodes: if a transient node has a virtual absorbing node duplicated from it, that transient node points to its corresponding virtual absorbing node with a one-way connection; at the same time, the other transient nodes connected to that transient node each point to that transient node's virtual absorbing node, likewise with one-way connections;
4) between virtual absorbing nodes: the virtual absorbing nodes are not connected to one another;
5) a virtual absorbing node and itself: each absorbing node has a self-loop with edge weight 1.
The nodes of the background absorbing Markov chain graph are then rearranged so that the transient nodes come first and the absorbing nodes come last. In this order, combining the adjacency matrix W corresponding to the undirected graph G, the incidence matrix A of the background absorbing Markov chain graph is obtained according to the following formula, where n denotes the total number of nodes in the chain graph, n = m + k, m denotes the total number of super-pixels in the background subset B, k denotes the total number of super-pixels in the target image, and the remaining symbols denote the i-th and j-th nodes of the chain graph, the set of indices of the nodes connected to a given node, and the duplicated (replica) absorbing node corresponding to a transient node.
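The five rules above can be sketched as follows. The incidence-matrix formula itself is not legible in this text, so the weights on the one-way edges to the duplicates are taken as 1 here as an assumption; the original may instead reuse the corresponding entries of W.

```python
import numpy as np

def build_absorbing_affinity(W, bg_idx):
    """Sketch of the step-H incidence matrix A of the absorbing chain.

    W:      (k, k) adjacency matrix of the undirected graph G
    bg_idx: indices (into 0..k-1) of the m background-subset
            super-pixels duplicated as virtual absorbing nodes.
    Node order: the k transient nodes first, then the m absorbing
    nodes (n = k + m), as described in the text.
    """
    k = W.shape[0]
    m = len(bg_idx)
    A = np.zeros((k + m, k + m))
    A[:k, :k] = W                         # rule 2: copy graph G
    np.fill_diagonal(A[:k, :k], 1.0)      # rule 1: transient self-loops
    for a, i in enumerate(bg_idx):        # absorbing node k+a duplicates node i
        A[i, k + a] = 1.0                 # rule 3: one-way edge to the duplicate
        for j in np.nonzero(W[i])[0]:     # neighbors of i also point to it
            A[j, k + a] = 1.0
        A[k + a, k + a] = 1.0             # rule 5: absorbing self-loop
    return A                              # rule 4: no absorbing-absorbing edges
```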
Then, according to the incidence matrix A, the probability transition matrix P corresponding to the background absorbing Markov chain graph is obtained; finally, according to the absorbed time, the saliency value Sbg2(i) of each super-pixel in the target image relative to the background subset B is obtained. This process is detailed as follows.
On the basis of the incidence matrix A, the degree matrix D is computed: a diagonal matrix whose diagonal elements are the sums of the weights of the edges attached to each node, i.e. D = diag(Σj aij). The probability transition matrix of the background absorbing Markov chain graph is then P = D⁻¹A, which is also sparse; its element pij denotes the probability of transferring from node i to node j.
With the transient nodes ordered first, the probability transition matrix P can be written in the standard canonical form:
P = [ Q  R ]
    [ O  I ]
where the k×k sub-square matrix Q contains the transition probabilities between any two transient nodes; the k×m submatrix R contains the probability of any transient node transferring to any absorbing node; O is the m×k zero submatrix; and I is the m×m identity sub-square matrix, which means that absorbing nodes cannot transfer to one another and each absorbing node transfers to itself with probability 1.
The fundamental matrix of the background absorbing Markov chain graph is M = (I − Q)⁻¹. An element of the fundamental matrix M denotes the expected number of times a random walker, starting from a given transient node of the chain graph, passes through another transient node before reaching some absorbing node.
For a random walker starting from a transient node, the total number of transient nodes passed through before finally arriving at some absorbing node is the absorbed time of that transient node, namely the corresponding row sum of M. Writing the k-dimensional column vector of ones as e = [1, 1, …, 1]^T, the absorbed time is computed by the formula:
T = Me
T is a k-dimensional vector recording the absorbed time of each transient node. The larger the absorbed time of a transient node, the slower it is absorbed into the background seed nodes, and the more likely it belongs to the salient target; conversely, the smaller the absorbed time, the faster it is absorbed into the background seed nodes, and the more likely the node belongs to the background.
The absorbed-time vector T is normalized, and from the normalized vector the saliency value Sbg2(i) of each super-pixel in the target image relative to the background subset B is obtained.
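The absorbed-time computation above can be sketched directly from its definitions (P = D⁻¹A, M = (I − Q)⁻¹, T = Me); the function name and the normalization by the maximum are assumptions.

```python
import numpy as np

def absorption_time(A, m):
    """Normalized absorbed time of each transient node.

    A: (n, n) incidence matrix ordered with the k transient nodes
       first and the m absorbing nodes last; n = k + m.
    Returns a length-k vector: larger values mean slower absorption
    into the background seeds, i.e. the super-pixel is more likely
    to belong to the salient target.
    """
    n = A.shape[0]
    k = n - m
    D = np.diag(A.sum(axis=1))           # degree matrix
    P = np.linalg.inv(D) @ A             # row-stochastic transition matrix
    Q = P[:k, :k]                        # transient-to-transient block
    M = np.linalg.inv(np.eye(k) - Q)     # fundamental matrix M = (I - Q)^-1
    T = M @ np.ones(k)                   # absorbed time T = M e
    return T / T.max()                   # normalize
```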
Finally, proceed to step I.
Step I. The saliency value Sbg1(i) of each super-pixel in the target image relative to the boundary background collection B1 is fused with the saliency value Sbg2(i) of each super-pixel relative to the background subset B, yielding the saliency value Sbg(i) of each super-pixel in the target image relative to the background; then proceed to step J. Here Sbg1(i) denotes the saliency value of the i-th super-pixel in the target image relative to the boundary background collection B1, Sbg2(i) denotes the saliency value of the i-th super-pixel relative to the background subset B, and θ is a preset balance factor; in practical application, θ = 5. To a certain extent, this fusion suppresses background noise in the background-absorption saliency map.
Step J. A preset multiple of the maximum saliency value relative to the background over all super-pixels in the target image is selected as the foreground saliency threshold; in practical application, the multiple is 0.7. Then the super-pixels inside the convex hull whose saliency relative to the background is not less than the foreground saliency threshold are selected to constitute the foreground seed collection F; then proceed to step K.
Step K. Following the method of step H, based on an absorbing Markov chain, the saliency value of each super-pixel in the target image relative to the foreground seed collection F is obtained, i.e. the saliency value Sfg(i) of each super-pixel in the target image relative to the foreground; then proceed to step L.
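The step-J seed selection is a simple thresholding, sketched below; the function name and boolean hull mask are illustrative assumptions.

```python
import numpy as np

def foreground_seeds(s_bg, hull_mask, ratio=0.7):
    """Step-J foreground seed selection (sketch).

    s_bg:      per-super-pixel saliency relative to the background (step I)
    hull_mask: boolean array, True for super-pixels inside the convex hull
    The foreground saliency threshold is `ratio` (0.7 in the text)
    times the maximum background-relative saliency.
    """
    thresh = ratio * s_bg.max()
    return np.where(hull_mask & (s_bg >= thresh))[0]
```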
Step L. Sbg(i) and Sfg(i) are complementary: the former highlights the salient target well, while the latter better suppresses background noise. According to a preset weight α, the following formula:
S(i) = α·Sbg(i) + (1−α)·Sfg(i)
fuses the saliency value Sbg(i) of each super-pixel in the target image relative to the background with the saliency value Sfg(i) of each super-pixel relative to the foreground, obtaining the saliency value S(i) of each super-pixel in the target image. Considering that the contributions of Sbg(i) and Sfg(i) are comparable, α = 0.5 is taken in practical application.
To highlight the salient target and weaken the background region, a smoothing mechanism is used to update the saliency values of the super-pixels in the target image, as follows.
S* = argmin ( μ Σi,j wij (S*(i) − S*(j))² + Σi (S*(i) − S(i))² )
In the above formula, S is the combined saliency map after fusing Sbg(i) and Sfg(i), i.e. S = [S(1), S(2), …, S(k)]^T; S* is the final saliency map after smoothing, S* = [S*(1), S*(2), …, S*(k)]^T.
The first term on the right-hand side of the equation is a smoothness constraint and the second term a fitting constraint; the parameter μ controls the balance between the two. In other words, a good saliency map S* should neither change too much between adjacent super-pixels nor differ too much from the initial saliency map (here, the combined map S).
Setting the derivative of the right-hand objective μ Σi,j wij (S*(i) − S*(j))² + Σi (S*(i) − S(i))² to zero yields the minimizer. After rearrangement, the final saliency map S* is computed by the formula:
S* = λ (D′ − W + λI)⁻¹ S
In the above formula, W is the adjacency matrix of the undirected graph G, W = (wij)k×k; the diagonal matrix D′ = diag(Σj wij); I is the identity matrix; and λ = 1/(2μ), with λ = 0.02 taken in practical application.
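The step-L fusion and the closed-form smoothing update above can be sketched together; the function name is an assumption, while the two formulas follow the text directly.

```python
import numpy as np

def fuse_and_smooth(s_bg, s_fg, W, alpha=0.5, lam=0.02):
    """Step L: linear fusion followed by the closed-form smoothing.

    S(i) = alpha * S_bg(i) + (1 - alpha) * S_fg(i), then
    S* = lam * (D' - W + lam * I)^-1 @ S, with D' = diag(row sums of W).
    """
    S = alpha * s_bg + (1 - alpha) * s_fg
    k = W.shape[0]
    Dp = np.diag(W.sum(axis=1))                           # D' = diag(sum_j w_ij)
    return lam * np.linalg.solve(Dp - W + lam * np.eye(k), S)
```

Note that with no edges (W = 0) the fitting constraint dominates and the update returns S unchanged, as the formula predicts.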
Then, according to the saliency value of each super-pixel, combined with the preset relative saliency threshold, each super-pixel in the target image is divided into foreground or background, realizing the division of foreground and background in the target image.
As shown in Figures 2 to 4, the image saliency detection method based on Markov background and foreground absorbing chains designed by the above technical scheme has the following beneficial effects. First, similar regions are found by super-pixel similarity rather than by edge weights, and the probability that each boundary super-pixel belongs to the background is computed to optimize the boundary-connectivity algorithm; super-pixels with low background probability are removed from the candidate boundary background collection, yielding an accurate boundary background collection. Second, based on spatially weighted color contrast, some background nodes from the region outside the convex hull are appropriately added as background seeds and merged with the boundary background collection to form the background subset, improving the efficiency of the algorithm. Third, to highlight the salient target, the boundary-prior saliency map is fused with the background-absorption saliency map, yielding the first-stage background-based saliency map and laying the foundation for the second-stage foreground seed selection. Fourth, based on the first-stage saliency map, an accurate foreground subset is screened within the range of the convex hull. Fifth, the complementarity of the background-based and foreground-based detection methods is fully exploited: saliency is propagated from each separately, and the propagated results are reasonably fused and smoothed.
The embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the present invention is not limited to the above embodiments; various changes can be made within the knowledge of a person skilled in the art without departing from the purpose of the present invention.
Claims (10)
1. An image saliency detection method based on Markov background and foreground absorbing chains, for realizing the detection of a salient target in a target image, characterized by comprising the following steps:
Step A. Obtain the salient feature points in the target image and construct a convex hull; then proceed to step B;
Step B. Perform super-pixel segmentation on the target image to obtain the super-pixels, then obtain the similarity between every two super-pixels, and proceed to step C;
Step C. Define the region of width d running one circuit along the target image edge as the border region; according to the rule that a super-pixel containing pixels located in the border region belongs to the candidate boundary background collection B0, construct the candidate boundary background collection B0; then proceed to step D;
Step D. According to the similarity between every two super-pixels in the target image, for each super-pixel in the candidate boundary background collection B0, obtain the remaining super-pixels whose similarity with that super-pixel is not less than a preset similarity threshold; together with that super-pixel, they form the similar region corresponding to that super-pixel, thereby obtaining the similar region corresponding to each super-pixel in B0; then proceed to step E;
Step E. For the similar region corresponding to each super-pixel in the candidate boundary background collection B0, obtain the boundary connectivity of that similar region, and then, according to the boundary connectivity, obtain the probability that the super-pixel belongs to the background; delete from B0 the super-pixels whose probability is less than a preset background probability threshold, updating it to the boundary background collection B1; then proceed to step F;
Step F. Obtain the saliency value of each super-pixel in the target image relative to the boundary background collection B1, and take the average as the relative-background saliency mean; then, among the super-pixels in the target image located in the region between the border region and the convex hull, select those simultaneously smaller than the preset background saliency threshold and the relative-background saliency mean, add them to the boundary background collection B1, update it to the background subset B, and proceed to step G;
Step G. Construct the undirected graph G corresponding to all super-pixels in the target image, then construct the adjacency matrix W corresponding to G according to the weight of each edge in G, and proceed to step H;
Step H. Construct the background absorbing Markov chain graph according to the undirected graph G and the background subset B; then, combining the adjacency matrix W corresponding to G, obtain the incidence matrix A of the chain graph; according to A, obtain the corresponding probability transition matrix P; finally, according to the absorbed time, obtain the saliency value of each super-pixel in the target image relative to the background subset B, and proceed to step I;
Step I. Fuse the saliency value of each super-pixel in the target image relative to the boundary background collection B1 with its saliency value relative to the background subset B to obtain the saliency value of each super-pixel relative to the background; then, according to the saliency value of each super-pixel relative to the background, combined with a preset relative background saliency threshold, divide each super-pixel in the target image into foreground or background, realizing the division of foreground and background in the target image.
2. The image saliency detection method based on Markov background and foreground absorbing chains according to claim 1, characterized in that in step I, after the saliency value of each super-pixel in the target image relative to the background is obtained, the following steps are performed:
Step J. Select a preset multiple of the maximum saliency value relative to the background over all super-pixels in the target image as the foreground saliency threshold; then select the super-pixels inside the convex hull whose saliency relative to the background is not less than the foreground saliency threshold to constitute the foreground seed collection F; then proceed to step K;
Step K. Following the method of step H, based on an absorbing Markov chain, obtain the saliency value of each super-pixel in the target image relative to the foreground seed collection F, i.e. the saliency value of each super-pixel relative to the foreground, and proceed to step L;
Step L. According to a preset weight, fuse the saliency value of each super-pixel in the target image relative to the background with its saliency value relative to the foreground to obtain the saliency value of each super-pixel in the target image; then, according to the saliency value of each super-pixel, combined with the preset relative saliency threshold, divide each super-pixel in the target image into foreground or background, realizing the division of foreground and background in the target image.
3. The image saliency detection method based on Markov background and foreground absorbing chains according to claim 2, characterized in that in step L, after the saliency value of each super-pixel in the target image is obtained, a smoothing mechanism is used to update the saliency values of the super-pixels in the target image; then, according to the saliency value of each super-pixel, combined with the preset relative saliency threshold, each super-pixel in the target image is divided into foreground or background, realizing the division of foreground and background in the target image.
4. The image saliency detection method based on Markov background and foreground absorbing chains according to claim 3, characterized in that in step B, the similarity simij between every two super-pixels is obtained according to the similarity formula, where Vi and Vj are any two super-pixels in the target image, dcolor(Vi, Vj) denotes the Euclidean distance between super-pixel Vi and super-pixel Vj in CIELAB color space, and σ2 denotes a preset balance parameter, σ2 = 0.1.
5. The image saliency detection method based on Markov background and foreground absorbing chains according to claim 3, characterized in that in step E, for the similar region corresponding to each super-pixel in the candidate boundary background collection B0, the boundary connectivity of that similar region is obtained according to the boundary-connectivity formula, whose terms denote: the boundary connectivity of the similar region corresponding to the i-th super-pixel in B0; an indicator function δ(·), equal to 1 when the super-pixel satisfies the stated boundary condition and 0 otherwise; the similar region corresponding to the i-th super-pixel in B0; the j-th super-pixel within that similar region; and the similarity between the i-th super-pixel in B0 and the j-th super-pixel within the similar region corresponding to the i-th super-pixel.
6. The image saliency detection method based on Markov background and foreground absorbing chains according to claim 3, characterized in that in step E, according to the boundary connectivity of the similar region, the probability that the super-pixel belongs to the background is obtained according to the corresponding formula, whose terms denote the probability that the i-th super-pixel in the candidate boundary background collection B0 belongs to the background and the boundary connectivity of the similar region corresponding to the i-th super-pixel in B0.
7. The image saliency detection method based on Markov background and foreground absorbing chains according to claim 3, characterized in that in step F, according to the following formula:
Sbg1(i) = scolor(i) · wdis(i)
the saliency value Sbg1(i) of each super-pixel in the target image relative to the boundary background collection B1 is obtained, where Sbg1(i) denotes the saliency value of the i-th super-pixel in the target image relative to B1; scolor(i) denotes the color difference between the i-th super-pixel and B1, computed from the Euclidean distance in CIELAB color space between the i-th super-pixel and each j-th super-pixel of B1, with nb the total number of super-pixels in B1; and wdis(i) denotes the spatial difference between the i-th super-pixel and B1, computed from the Euclidean distance between the center coordinates of the i-th super-pixel and those of each super-pixel in B1.
8. The image saliency detection method based on Markov background and foreground absorbing chains according to claim 3, characterized in that in step G, all super-pixels in the target image are taken as nodes, and the edges between nodes are defined according to the following rules, constituting the undirected graph G:
each node is connected to each of its spatial neighbor nodes; each node is also connected to each neighbor of its spatial neighbor nodes; and all nodes in the boundary background collection B1 are pairwise interconnected;
then the weight wi'j' of each edge in the undirected graph G is obtained according to the weight formula, and the adjacency matrix W corresponding to G is constructed, where Vi' and Vj' denote any two super-pixels joined by an edge in G, wi'j' denotes the weight of the edge between the i'-th and j'-th super-pixels, dcolor(Vi', Vj') denotes the Euclidean distance in CIELAB color space between the two super-pixels joined by the edge, and σ2 denotes a preset balance parameter.
9. The image saliency detection method based on Markov background and foreground absorbing chains according to claim 8, characterized in that in step H, all super-pixels in the target image are taken as transient nodes, and the super-pixels in the background subset B are duplicated as virtual absorbing nodes; based on the undirected graph G, with the transient nodes and absorbing nodes, the edges between nodes are defined according to the following rules, constituting the background absorbing Markov chain graph:
1) a transient node and itself: each transient node has a self-loop, i.e. the transient node is connected to itself, with edge weight 1;
2) between transient nodes: the relationships between transient nodes are consistent with those between the corresponding nodes in the undirected graph G;
3) between transient nodes and virtual absorbing nodes: if a transient node has a virtual absorbing node duplicated from it, that transient node points to its corresponding virtual absorbing node with a one-way connection; at the same time, the other transient nodes connected to that transient node each point to that transient node's virtual absorbing node, likewise with one-way connections;
4) between virtual absorbing nodes: the virtual absorbing nodes are not connected to one another;
5) a virtual absorbing node and itself: each absorbing node has a self-loop with edge weight 1;
the nodes of the background absorbing Markov chain graph are rearranged so that the transient nodes come first and the absorbing nodes come last; in this order, the incidence matrix A of the chain graph is obtained according to the incidence-matrix formula, where n denotes the total number of nodes in the chain graph, k denotes the total number of super-pixels in the target image, and the remaining symbols denote the i-th and j-th nodes of the chain graph, the set of indices of the nodes connected to a given node, and the duplicated (replica) absorbing node corresponding to a transient node.
10. The image saliency detection method based on Markov background and foreground absorbing chains according to claim 3, characterized in that in step I, according to the fusion formula, the saliency value of each super-pixel in the target image relative to the boundary background collection B1 is fused with its saliency value relative to the background subset B, obtaining the saliency value Sbg(i) of each super-pixel in the target image relative to the background, where Sbg1(i) denotes the saliency value of the i-th super-pixel relative to B1, Sbg2(i) denotes the saliency value of the i-th super-pixel relative to the background subset B, and θ is a preset balance factor;
and in step L, according to the following formula:
S(i) = α·Sbg(i) + (1−α)·Sfg(i)
the saliency value Sbg(i) of each super-pixel in the target image relative to the background is fused with the saliency value Sfg(i) of each super-pixel relative to the foreground, obtaining the saliency value S(i) of each super-pixel in the target image, where α is a preset weight.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910353130.6A CN110111353B (en) | 2019-04-29 | 2019-04-29 | Image significance detection method based on Markov background and foreground absorption chain |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110111353A true CN110111353A (en) | 2019-08-09 |
CN110111353B CN110111353B (en) | 2020-01-24 |
Family
ID=67487344
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910353130.6A Expired - Fee Related CN110111353B (en) | 2019-04-29 | 2019-04-29 | Image significance detection method based on Markov background and foreground absorption chain |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110111353B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111539420A (en) * | 2020-03-12 | 2020-08-14 | 上海交通大学 | Panoramic image saliency prediction method and system based on attention perception features |
CN111815582A (en) * | 2020-06-28 | 2020-10-23 | 江苏科技大学 | Two-dimensional code area detection method for improving background prior and foreground prior |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107609552A (en) * | 2017-08-23 | 2018-01-19 | 西安电子科技大学 | Salient region detection method based on markov absorbing model |
CN107862702A (en) * | 2017-11-24 | 2018-03-30 | 大连理工大学 | A kind of conspicuousness detection method of combination boundary connected and local contrast |
US20180307935A1 (en) * | 2015-03-24 | 2018-10-25 | Hrl Laboratories, Llc | System for detecting salient objects in images |
CN108846404A (en) * | 2018-06-25 | 2018-11-20 | 安徽大学 | A kind of image significance detection method and device based on the sequence of related constraint figure |
CN108921833A (en) * | 2018-06-26 | 2018-11-30 | 中国科学院合肥物质科学研究院 | A kind of the markov conspicuousness object detection method and device of two-way absorption |
CN109598735A (en) * | 2017-10-03 | 2019-04-09 | 斯特拉德视觉公司 | Method using the target object in Markov D-chain trace and segmented image and the equipment using this method |
US20190114780A1 (en) * | 2016-06-09 | 2019-04-18 | The Penn State Research Foundation | Systems and methods for detection of significant and attractive components in digital images |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180307935A1 (en) * | 2015-03-24 | 2018-10-25 | Hrl Laboratories, Llc | System for detecting salient objects in images |
US20190114780A1 (en) * | 2016-06-09 | 2019-04-18 | The Penn State Research Foundation | Systems and methods for detection of significant and attractive components in digital images |
CN107609552A (en) * | 2017-08-23 | 2018-01-19 | 西安电子科技大学 | Salient region detection method based on markov absorbing model |
CN109598735A (en) * | 2017-10-03 | 2019-04-09 | 斯特拉德视觉公司 | Method using the target object in Markov D-chain trace and segmented image and the equipment using this method |
CN107862702A (en) * | 2017-11-24 | 2018-03-30 | 大连理工大学 | A kind of conspicuousness detection method of combination boundary connected and local contrast |
CN108846404A (en) * | 2018-06-25 | 2018-11-20 | 安徽大学 | A kind of image significance detection method and device based on the sequence of related constraint figure |
CN108921833A (en) * | 2018-06-26 | 2018-11-30 | 中国科学院合肥物质科学研究院 | A kind of the markov conspicuousness object detection method and device of two-way absorption |
Non-Patent Citations (3)
Title |
---|
FENGLING JIANG: ""Saliency Detection via Bidirectional Absorbing Markov Chain"", 《ARXIV》 * |
GAO Z: ""SAMM: Surroundedness and Absorption Markov Model Based Visual Saliency Detection in Images"", 《IEEE ACCESS》 * |
WANG J: ""Saliency detection via background and foreground seed selection"", 《NEUROCOMPUTING》 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111539420A (en) * | 2020-03-12 | 2020-08-14 | 上海交通大学 | Panoramic image saliency prediction method and system based on attention perception features |
CN111539420B (en) * | 2020-03-12 | 2022-07-12 | 上海交通大学 | Panoramic image saliency prediction method and system based on attention perception features |
CN111815582A (en) * | 2020-06-28 | 2020-10-23 | 江苏科技大学 | Two-dimensional code area detection method for improving background prior and foreground prior |
CN111815582B (en) * | 2020-06-28 | 2024-01-26 | 江苏科技大学 | Two-dimensional code region detection method for improving background priori and foreground priori |
Also Published As
Publication number | Publication date |
---|---|
CN110111353B (en) | 2020-01-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102521814B (en) | | Wireless sensor network image fusion method based on multi-focus fusion and image splicing |
CN108319972A (en) | | End-to-end difference online learning method for image semantic segmentation |
CN109543754A (en) | | Parallel target detection and semantic segmentation method based on end-to-end deep learning |
CN108090960A (en) | | Object reconstruction method based on geometric constraints |
CN110111353A (en) | | Image saliency detection method based on Markov background and foreground absorbing chains |
CN111209918B (en) | | Image salient object detection method |
CN101951511B (en) | | Method for layering video scenes by depth analysis |
CN108665491A (en) | | Fast point cloud registration method based on local references |
CN108230338A (en) | | Stereo image segmentation method based on convolutional neural networks |
CN104616026B (en) | | Monitoring scene type discrimination method for intelligent video surveillance |
CN107833224B (en) | | Image segmentation method based on multi-layer sub-region synthesis |
CN105451244B (en) | | Coverage probability estimation method for small base station cooperation |
CN107194949B (en) | | Interactive video segmentation method and system based on block matching and enhanced OneCut |
CN113610905B (en) | | Deep learning remote sensing image registration method based on sub-image matching, and application thereof |
CN112528879B (en) | | Multi-branch pedestrian re-identification method based on improved GhostNet |
CN111626927A (en) | | Binocular image super-resolution method, system and device with parallax constraint |
CN103761736B (en) | | Image segmentation method based on Bayesian harmony degree |
CN110111357A (en) | | Image saliency detection method |
CN106023212A (en) | | Super-pixel segmentation method based on pyramid layer-by-layer propagation clustering |
CN107452013A (en) | | Saliency detection method based on Harris corner detection and Sugeno fuzzy integrals |
Mukherjee et al. | | Markov random field processing for color demosaicing |
CN109146792A (en) | | Chip image super-resolution reconstruction method based on deep learning |
CN109636809A (en) | | Image segmentation hierarchy selection method based on scale perception |
CN108960281A (en) | | Melanoma classification method based on non-random obfuscated data augmentation |
CN108009549A (en) | | Iterative co-saliency detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20200124 |