CN104899875A - Fast image co-salient region detection method based on integral matching - Google Patents

Fast image co-salient region detection method based on integral matching Download PDF

Info

Publication number
CN104899875A
CN104899875A CN201510258792.7A
Authority
CN
China
Prior art keywords
sigma
pixel
tau
integration
marking area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510258792.7A
Other languages
Chinese (zh)
Inventor
冯伟
尹雪飞
陈冬冬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201510258792.7A priority Critical patent/CN104899875A/en
Publication of CN104899875A publication Critical patent/CN104899875A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fast image co-salient region detection method based on integral matching. The method comprises the steps of generating saliency maps; establishing punishment maps on the basis of the saliency maps; building a robust color model and obtaining, via this color model, the cosine similarity of two regions of an image pair; and constructing a high-dimensional integral matching method with which the maximum co-salient region is detected. The method is faster and more precise than existing co-saliency detection methods.

Description

A fast image co-salient region detection method based on integral matching
Technical field
The present invention relates to the fields of computer vision and image processing, and in particular to a fast image co-salient region detection method based on integral matching. The method is realized by applying a high-dimensional integral matching algorithm on a regular superpixel grid.
Background technology
The background technology involved in the present invention includes the following. Saliency detection methods (Saliency Detection Method), also called single-image saliency detection, aim to extract a salient foreground target or region from a single image. In recent years, saliency detection models have been widely applied to image segmentation, object detection and recognition, image stitching, and so on. Common methods include SF [1], FT [2], RC [3] and HC [3].
The basic idea of SF is to regard saliency as a filter and to accelerate the computation with filtering methods. Perazzi mainly analyzes formulations of two kinds of salient features, local and global, and proposes a method that can be computed in linear time.
FT: Achanta et al., starting from the frequency domain, were the first to propose a salient-region detection method based on global contrast. The method takes the Euclidean distance between each pixel value of the Gaussian low-pass filtered image and the mean pixel value of the whole image as the saliency value of that pixel. The method is very simple, its time cost is very low, and its experimental results on the precision-recall curve are also excellent. However, FT fails in the following two cases:
1. The color of the salient region occupies the majority of the image; after the computation of method [4], the background obtains high saliency values;
2. The background contains a small amount of prominent color; the saliency values of such background colors are then also very high.
HC: a histogram-contrast based method in which the saliency value of each pixel is determined by its color difference from all other pixels in the image, yielding a full-resolution saliency image;
RC: a local-contrast based method that first segments the image into small regions using graph-based segmentation. The basic segmentation idea is to take each pixel as a vertex of an undirected graph and the dissimilarity between two pixels as the weight of the connecting edge; it is required that the maximum weight of the edges connecting two vertices within the same region be smaller than the minimum weight of the edges connecting vertices of different regions, and vertices are grouped and regions are merged in an iterative process. The saliency value of each region is determined by its spatial distance to all other regions and by the color differences weighted by the number of pixels of those regions; the spatial distance is the Euclidean distance between two region centroids, and farther regions are assigned smaller weights.
The above four methods only detect the salient object of a single image. In most cases, however, one tends to detect salient objects from a video sequence; with the above four methods, detection would have to be carried out frame by frame while ignoring the correlation between consecutive video frames, and accuracy would then inevitably drop.
Co-saliency detection methods (Co-Saliency Detection Method) aim to detect, from an image pair or from multiple images, the salient regions that these images have in common, while ignoring salient targets exclusive to any single image. Common methods include PC [5], MS [6] and CC [7]. The present invention also belongs to this class.
PC: detects co-salient objects under a preattentive scheme.
MS: a linear combination of the single-image saliency map SISM (single image saliency map) and the multi-image saliency map MISM (multi-image saliency map).
CC: detects co-salient objects by using contrast, spatial and corresponding cues to compute the saliency of image clusters.
The above three methods have high time cost and low accuracy.
Superpixel Grid (Superpixel Grid): after superpixel segmentation, the resulting superpixel structure is irregular, and this irregularity makes subsequent image processing inefficient. To regularize an arbitrary irregular superpixel structure, researchers in the image processing field have in recent years proposed the concept of a superpixel grid. Using the superpixel segmentation result of the target image, each superpixel block is dynamically placed, according to its coordinates, into a virtual rectangular grid; this grid should preserve as far as possible the overall coherence of the original superpixel structure, the connectivity between superpixels, and the local and global characteristics, and virtual nodes are introduced to keep the grid regular and to transfer energy. Common methods for generating superpixel grids in recent years include SuperLattice [8], LatticeCut [9], TurboPixel [10], SEEDS [11] and SP-Grid [12].
Integral Image (Integral Image): proposed in [13] in order to quickly compute features over any region of an image. In the integral image, the value of any pixel (x, y) can be expressed as:
S(x,y) = \sum_{x'=1}^{x} \sum_{y'=1}^{y} i(x',y')
where S(x, y) denotes the value of point (x, y) in the integral image and i(x', y') denotes the value of pixel (x', y') in the original image. Using the formulas:
r(x,y) = r(x,y-1) + i(x,y)
S(x,y) = S(x-1,y) + r(x,y)
the integral image of the original image can be obtained iteratively.
Here r(x, y) denotes the cumulative sum of row x from its first element to its y-th element. The iteration lower bounds are defined as r(x, 0) = 0 and S(0, y) = 0 (i.e., the case y = 1 for r(x, y-1) and the case x = 1 for S(x-1, y)); the definitions of r(x, y-1) and S(x-1, y) are otherwise analogous to those of r(x, y) and S(x, y).
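The iterative construction above can be illustrated with a short sketch. The following is a minimal Python example (the function name and array layout are illustrative choices, not part of the patent text); it builds S with the two recurrences r(x,y) = r(x,y-1) + i(x,y) and S(x,y) = S(x-1,y) + r(x,y):

    def integral_image(img):
        # Build the integral image S of a 2-D array img (list of lists of numbers)
        # using the two recurrences from the text:
        #   r(x, y) = r(x, y-1) + i(x, y)      (running sum of row x)
        #   S(x, y) = S(x-1, y) + r(x, y)
        h, w = len(img), len(img[0])
        S = [[0.0] * w for _ in range(h)]
        for x in range(h):
            r = 0.0                                    # r(x, 0) = 0
            for y in range(w):
                r += img[x][y]                         # r(x, y) = r(x, y-1) + i(x, y)
                above = S[x - 1][y] if x > 0 else 0.0  # S(0, y) = 0
                S[x][y] = above + r                    # S(x, y) = S(x-1, y) + r(x, y)
        return S

    # The bottom-right entry of S equals the sum of all pixels.
    img = [[1, 2, 3],
           [4, 5, 6]]
    assert integral_image(img)[-1][-1] == 21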
Summary of the invention
The invention provides a fast image co-salient region detection method based on integral matching. The present invention establishes a distinct punishment map, a robust color model and a high-dimensional integral matching method, and uses this method to extract the maximum co-salient object from multiple images, as described below:
A fast image co-salient region detection method based on integral matching, characterized in that the method comprises the following steps:
generating a saliency map, and establishing a distinct punishment map on the basis of the saliency map;
establishing a robust color model, and obtaining the cosine similarity of two regions of an image pair via the color model;
constructing a high-dimensional integral matching method, and detecting the maximum co-salient region with the high-dimensional integral matching method.
Wherein, the step of establishing the robust color model and obtaining the cosine similarity of two regions of an image pair via the color model is specifically:
i' = (i_r/\Theta, i_g/\Theta, i_b/\Theta), where \Theta = \max(i_r, i_g, i_b)
where i' is a pixel in the robust color model and i_r, i_g, i_b are the three channel values of the pixel;
the cosine similarity of the two regions is:
E_{co}(h_1, h_2) = \frac{h_1^T \cdot h_2}{\|h_1\| \cdot \|h_2\|}
where the histograms corresponding to the two regions are h_1 and h_2 respectively.
Wherein, the step of constructing the high-dimensional integral matching method and detecting the maximum co-salient region with the high-dimensional integral matching method is specifically:
1) fast construction of the two-dimensional integral image and fast computation of rectangular region values;
2) on the basis of the two-dimensional integral image, fast construction of the high-dimensional integral table and fast computation of high-dimensional rectangular region values;
3) computation of the similarity of two high-dimensional regions, realization of the integral matching of high-dimensional regions, and obtaining of the maximum co-salient region.
The beneficial effect of the technical scheme provided by the invention is that, compared with other co-saliency detection methods (Co-saliency Detection Method), the co-saliency detection method proposed by the invention is faster and more accurate.
Accompanying drawing explanation
Fig. 1 is the flow chart of the fast image co-salient region detection method based on integral matching;
Fig. 2 is a comparison of experimental results of the standard (R, G, B) color model and the robust color model;
Fig. 3 illustrates S(x, y) in the two-dimensional integral image;
Fig. 4 illustrates the value R(p_0, p_1) of an arbitrary rectangular region (p_0, p_1) in the original image;
Fig. 5 is a comparison of detection results on the public data set [6];
Fig. 6 is a performance comparison on the public data set [6];
Fig. 7 is a comparison of detection results on the Cornell University data set (CMU-Cornell iCoseg data set [14]);
Fig. 8 is a performance comparison on the Cornell University data set (CMU-Cornell iCoseg data set [14]).
Embodiment
To make the objects, technical solutions and advantages of the present invention clearer, embodiments of the present invention are described below in further detail.
The present invention aims to extract the common salient object from multiple images quickly and accurately. First, for every image, the detection method obtains a corresponding distinct punishment map ±M, in which a positive value indicates that the pixel is salient and should be encouraged, while a negative value indicates that the pixel is non-salient and should be punished. Second, the method proposes a robust color model that removes or reduces the effect of illumination; within this color model, the cosine similarity of two regions of an image pair can also be obtained simply and efficiently. Finally, by applying the high-dimensional integral matching algorithm on a regular superpixel grid, the maximum co-salient region is finally extracted from the image pair.
Embodiment 1
101: generate a saliency map, and establish a distinct punishment map on the basis of the saliency map;
102: establish a robust color model, and obtain the cosine similarity of two regions of an image pair via the color model;
103: construct a high-dimensional integral matching method, and detect the maximum co-salient region with the high-dimensional integral matching method.
Wherein, the step of establishing the robust color model and obtaining the cosine similarity of two regions of an image pair via the color model is specifically:
i' = (i_r/\Theta, i_g/\Theta, i_b/\Theta), where \Theta = \max(i_r, i_g, i_b)
where i' is a pixel in the robust color model and i_r, i_g, i_b are the three channel values of the pixel;
the cosine similarity of the two regions is:
E_{co}(h_1, h_2) = \frac{h_1^T \cdot h_2}{\|h_1\| \cdot \|h_2\|}
where the histograms corresponding to the two regions are h_1 and h_2 respectively.
Wherein, the step of constructing the high-dimensional integral matching method and detecting the maximum co-salient region with it is specifically:
1) fast construction of the two-dimensional integral image and fast computation of rectangular region values;
2) on the basis of the two-dimensional integral image, fast construction of the high-dimensional integral table and fast computation of high-dimensional rectangular region values;
3) computation of the similarity of two high-dimensional regions, realization of the integral matching of high-dimensional regions, and obtaining of the maximum co-salient region.
Embodiment 2
201: establish the distinct punishment map (Distinct Punishment Map);
In the distinct punishment map ±M, a positive value indicates that the pixel is salient and should be encouraged, while a negative value indicates that the pixel is non-salient and should be punished. The steps for establishing the distinct punishment map are:
1) generate the saliency map (Saliency Map);
For an h_1 × h_2 image I, the saliency maps HC-Map (histogram-based saliency map) and RC-Map (region-based saliency map) are obtained with the Histogram-Based Contrast Method and the Region-Based Contrast Method respectively; for convenience of notation they are denoted S_1 and S_2, and the Summed Saliency Map (SSM, the fused saliency map) is obtained:
SSM = \omega_1 \cdot S_1 + \omega_2 \cdot S_2 = \{ s_{i,j} \mid 1 \le i \le h_1, 1 \le j \le h_2 \}   (1)
where S_1 and S_2 are the saliency maps obtained in the first step, \omega_1 and \omega_2 are the weight factors corresponding to S_1 and S_2 with \omega_1 + \omega_2 = 1, i and j are indices, h_1 and h_2 denote the width and height of the saliency map respectively, and s_{i,j} is the element in row i and column j of SSM.
2) establish the distinct punishment map ±M on the basis of the saliency map.
On the basis of the SSM obtained in the first step, each element s_{i,j} of SSM is modified as follows to obtain the new element s'_{i,j}:
s'_{i,j} = s_{i,j} - \frac{\mu}{h_1 \cdot h_2} \sum_{i=1}^{h_1} \sum_{j=1}^{h_2} s_{i,j}, where \mu \in (0,1]   (2)
The saliency map formed with the elements s'_{i,j} is denoted SSM'.
Then, the value of element \rho_{i,j} of the punishment map ±M is computed:
\rho_{i,j} = \begin{cases} s'_{i,j} / V^{+} & \text{if } s'_{i,j} > 0 \\ -s'_{i,j} / V^{-} & \text{if } s'_{i,j} \le 0 \end{cases}
where V^{+} and V^{-} are intermediate variables, defined as follows:
V^{+} = \sum_{p=1}^{h_1} \sum_{q=1}^{h_2} s^{+}_{p,q}, \quad s^{+}_{p,q} = \begin{cases} s'_{p,q} & \text{if } s'_{p,q} > 0 \\ 0 & \text{if } s'_{p,q} \le 0 \end{cases}
V^{-} = \sum_{p=1}^{h_1} \sum_{q=1}^{h_2} s^{-}_{p,q}, \quad s^{-}_{p,q} = \begin{cases} s'_{p,q} & \text{if } s'_{p,q} \le 0 \\ 0 & \text{if } s'_{p,q} > 0 \end{cases}
For example, for any region P_{i,j}, its saliency is
E_{\pm M}(P_{i,j}) = \sum_{(x,y) \in E_{i,j}} \rho_{x,y}
where E_{i,j} is an intermediate variable defined as E_{i,j} = \{ (x, y) \mid i(x, y) \in P_{i,j} \}, i.e. the set of pixel coordinates belonging to P_{i,j}.
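As a concrete illustration of steps 1) and 2), the following Python sketch builds the fused saliency map and the punishment map from two given saliency maps; the function name, the NumPy-based layout and the sign convention chosen for V^- are assumptions made for illustration, not part of the patent text.

    import numpy as np

    def punishment_map(S1, S2, w1=0.9, w2=0.1, mu=1.0):
        # Sketch of the distinct punishment map +/-M from formulas (1)-(2).
        # S1, S2: HC-based and RC-based saliency maps (2-D float arrays of equal size).
        ssm = w1 * S1 + w2 * S2                  # formula (1): fused saliency map SSM
        s_prime = ssm - mu * ssm.mean()          # formula (2): subtract mu times the mean
        v_plus = s_prime[s_prime > 0].sum()      # V+: sum of the positive elements
        v_minus = s_prime[s_prime <= 0].sum()    # V-: sum of the non-positive elements (assumed convention)
        rho = np.zeros_like(s_prime)
        pos = s_prime > 0
        rho[pos] = s_prime[pos] / v_plus         # salient pixels: positive values, encouraged
        rho[~pos] = -s_prime[~pos] / v_minus     # non-salient pixels: negative values, punished
        return rho

    # The saliency of a region is then the sum of rho over the region's pixels.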
202: establish the robust color model (Robust Color Model);
In the standard (R, G, B) color model, a pixel of the image has three values i_r, i_g, i_b corresponding to the R, G and B channels respectively, each lying between 0 and 255. Therefore, for any pixel i = (i_r, i_g, i_b) ∈ I, the invention defines a new pixel value:
i' = (i_r/\Theta, i_g/\Theta, i_b/\Theta), where \Theta = \max(i_r, i_g, i_b)
The robust color model (Robust Color Model) is built on the basis of this new pixel value i'. After the original image is processed with the robust color model, a new image I' is obtained. The value range of each channel is then divided into n equal bins, for example n = 6. For any region, the corresponding histogram h can be obtained by counting. Cosine similarity is usually used to measure the similarity of two regions; for example, for two regions q_1 and q_2 with corresponding histograms h_1 and h_2, the cosine similarity of the two regions is:
E_{co}(h_1, h_2) = \frac{h_1^T \cdot h_2}{\|h_1\| \cdot \|h_2\|}   (3)
In Fig. 2, the two images in the first row on the right are the images under the standard (R, G, B) color model, and the upper chart on the left is the histogram corresponding to the images under the standard (R, G, B) color model; the two images in the second row on the right are the images under the robust color model (Robust Color Model), and the lower chart on the left is the histogram corresponding to them; the two images in the third row on the right are the ground truth of the two images.
In the two histograms on the left, the abscissa represents the values of the R, G and B channels, and the ordinate represents the frequency with which the corresponding values occur. It can be seen that the color distribution of the images under the standard (R, G, B) color model is not concentrated, and illumination is the main cause of this; since the robust color model (Robust Color Model) removes the illumination factor, the color distribution of the images is more concentrated, and at the same time the processed images are closer to the ground truth.
When the cosine similarity of the co-salient region (the duck) is computed, the cosine similarity obtained with the color model proposed by the invention is 0.971, while the result of the standard (R, G, B) model is 0.627; the experimental results fully demonstrate that the robust color model (Robust Color Model) is more accurate than the standard (R, G, B) model.
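The robust color model and the cosine similarity of formula (3) can be sketched in a few lines of Python. The joint n^3-bin histogram below is one plausible reading of the n-way channel quantization described above, and all names are illustrative assumptions:

    import numpy as np

    def robust_color_histogram(img_rgb, n_bins=6):
        # Robust color model: each pixel (i_r, i_g, i_b) is divided by its maximum
        # channel value Theta, then a joint n_bins-per-channel histogram is built.
        rgb = img_rgb.astype(np.float64)
        theta = rgb.max(axis=2, keepdims=True)       # Theta = max(i_r, i_g, i_b) per pixel
        theta[theta == 0] = 1.0                      # avoid division by zero for black pixels
        normalized = rgb / theta                     # i' = (i_r/Theta, i_g/Theta, i_b/Theta)
        bins = np.minimum((normalized * n_bins).astype(int), n_bins - 1)
        idx = bins[..., 0] * n_bins ** 2 + bins[..., 1] * n_bins + bins[..., 2]
        return np.bincount(idx.ravel(), minlength=n_bins ** 3).astype(np.float64)

    def cosine_similarity(h1, h2):
        # Formula (3): cosine similarity of two region histograms.
        return float(h1 @ h2) / (np.linalg.norm(h1) * np.linalg.norm(h2))

    # Usage with two region crops of an image pair (hypothetical arrays region1, region2):
    # E_co = cosine_similarity(robust_color_histogram(region1), robust_color_histogram(region2))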
203: high-dimensional integral matching method (High Dimensional Integral Searching Method)
1) fast construction of the two-dimensional integral image and fast computation of rectangular region values;
As in the construction of the integral image introduced in the Background, the value of any pixel (x, y) in the integral image can be expressed as:
S(x,y) = \sum_{x'=1}^{x} \sum_{y'=1}^{y} i(x',y')   (4)
For the two-dimensional integral image, define R(p_0, p_1) as the value, computed from the integral image, of the rectangular region of the original image from pixel p_0 = (x_0, y_0) to pixel p_1 = (x_1, y_1); then:
R(p_0, p_1) = \sum_{x=x_0}^{x_1} \sum_{y=y_0}^{y_1} i(x, y)   (5)
where i(x, y) denotes the value of pixel (x, y) in the original image.
In Fig. 3, H and W denote the height and width of the original image respectively, point (x, y) denotes a pixel in the original image, and the gray area denotes the value S(x, y) of the rectangular region from pixel (1, 1) to pixel (x, y) of the original image (equivalently, the value of pixel (x, y) in the integral image).
In Fig. 4, H and W denote the height and width of the original image respectively, points p_0 = (x_0, y_0) and p_1 = (x_1, y_1) are two different pixels of the original image, and the gray area denotes the value of the rectangular region from pixel p_0 = (x_0, y_0) to pixel p_1 = (x_1, y_1), denoted R(p_0, p_1).
Formula (5) can be further simplified:
R(p_0, p_1) = \left( \sum_{x=1}^{x_1} - \sum_{x=1}^{x_0-1} \right) \left( \sum_{y=1}^{y_1} - \sum_{y=1}^{y_0-1} \right) i(x,y) = S(x_1, y_1) - S(x_1, y_0-1) - S(x_0-1, y_1) + S(x_0-1, y_0-1)   (6)
where the definitions of S(x_1, y_1), S(x_1, y_0-1), S(x_0-1, y_1) and S(x_0-1, y_0-1) are analogous to that of S(x, y) in formula (4). Once the integral image is available, R(p_0, p_1) can be obtained with one addition and two subtractions, which greatly reduces the time complexity.
If p_0 = p_1 = (x, y), then
R(p_0, p_1) = i(x, y)
so
i(x,y) = R(p_0, p_1) = \sum_{x'=x}^{x} \sum_{y'=y}^{y} i(x', y') = S(x,y) - S(x,y-1) - S(x-1,y) + S(x-1,y-1)
where the definitions of S(x, y-1), S(x-1, y) and S(x-1, y-1) are likewise analogous to that of S(x, y) in formula (4).
Rearranging gives:
S(x,y) = S(x,y-1) + S(x-1,y) - S(x-1,y-1) + i(x,y)   (7)
where S(1,1) = i(1,1), and S(x, y) = 0 when x < 1 or y < 1.
Therefore, the two-dimensional integral image can be constructed quickly using formula (7).
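A compact NumPy sketch of formulas (6) and (7) is given below; the zero padding of one row and one column implements the boundary condition S(x, y) = 0 for x < 1 or y < 1, and the function names are illustrative assumptions:

    import numpy as np

    def build_integral(img):
        # Formula (7): S(x,y) = S(x,y-1) + S(x-1,y) - S(x-1,y-1) + i(x,y).
        # One padded row and column of zeros encode S(x,y) = 0 for x < 1 or y < 1.
        S = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
        S[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
        return S

    def rect_sum(S, x0, y0, x1, y1):
        # Formula (6): sum of the original image over rows x0..x1 and columns y0..y1
        # (1-based, inclusive), using one addition and two subtractions.
        return S[x1, y1] - S[x1, y0 - 1] - S[x0 - 1, y1] + S[x0 - 1, y0 - 1]

    img = np.arange(1, 13, dtype=np.float64).reshape(3, 4)
    S = build_integral(img)
    assert rect_sum(S, 2, 2, 3, 4) == img[1:3, 1:4].sum()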
2) fast construction of the high-dimensional integral table and fast computation of high-dimensional rectangular region values;
The above concepts of the two-dimensional integral image are extended to a high-dimensional integral table. For a known n-dimensional table I_n, suppose the value of an arbitrary element of I_n at p = (x_1, x_2, ..., x_n) is:
S(p) = S(x_1, x_2, \ldots, x_n) = \sum_{x'_1=1}^{x_1} \sum_{x'_2=1}^{x_2} \cdots \sum_{x'_n=1}^{x_n} i(x'_1, x'_2, \ldots, x'_n)   (8)
Analogous to formula (4), S(p) plays the role of S(x, y) and denotes the value of point p in the high-dimensional integral table I_n; i(x'_1, x'_2, ..., x'_n) plays the role of i(x', y') and denotes the value of element (x'_1, x'_2, ..., x'_n) of the original n-dimensional data.
In the high-dimensional integral table I_n, the sum of the elements between p_0 = (x_1^0, x_2^0, ..., x_n^0) and p_1 = (x_1^1, x_2^1, ..., x_n^1) is:
R(p_0, p_1) = \sum_{x_1=x_1^0}^{x_1^1} \sum_{x_2=x_2^0}^{x_2^1} \cdots \sum_{x_n=x_n^0}^{x_n^1} i(x_1, x_2, \ldots, x_n)
Analogous to formula (5), R(p_0, p_1) denotes the value, computed from the integral table, of the high-dimensional region of the original n-dimensional data from element p_0 to element p_1.
The above formula can be further simplified:
R(p_0, p_1) = \left( \sum_{x_1=1}^{x_1^1} - \sum_{x_1=1}^{x_1^0-1} \right) \cdots \left( \sum_{x_n=1}^{x_n^1} - \sum_{x_n=1}^{x_n^0-1} \right) i(x_1, \ldots, x_n) = \sum_{\tau \in \{0,1\}^n} (-1)^{\,n - \|\tau\|_1} \cdot S(p_\tau + \tau - \mathbf{1})   (9)
where S is defined as in formula (8); \tau = (\tau_1, \ldots, \tau_n) \in \{0,1\}^n, \|\tau\|_1 = \sum_i \tau_i, \mathbf{1} = (1, 1, \ldots, 1), and p_\tau = (x_1^{\tau_1}, \ldots, x_n^{\tau_n}) is a vertex of the high-dimensional rectangular region.
Therefore, the value of a high-dimensional region can be computed quickly with formula (9).
Similarly, if p_0 = p_1 = (x_1, x_2, ..., x_n), then R(p_0, p_1) = i(x_1, x_2, ..., x_n),
so
i(x_1, \ldots, x_n) = R(p_0, p_1) = \sum_{\tau \in \{0,1\}^n} (-1)^{\,n - \|\tau\|_1} \cdot S(p_\tau + \tau - \mathbf{1}) = S(p_1) + \sum_{\tau \in T} (-1)^{\,n - \|\tau\|_1} \cdot S(p_\tau + \tau - \mathbf{1})
Rearranging gives:
S(x_1, \ldots, x_n) = S(p_1) = \sum_{\tau \in T} (-1)^{\,n + 1 - \|\tau\|_1} \cdot S(p_\tau + \tau - \mathbf{1}) + i(x_1, \ldots, x_n)   (10)
where T = \{0,1\}^n \setminus \{\mathbf{1}\} with \mathbf{1} = (1, 1, \ldots, 1), and S(p) = 0 whenever any coordinate of p is smaller than 1, as in the two-dimensional case.
The high-dimensional integral table can therefore be constructed quickly using formula (10).
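A short Python sketch of formulas (8)-(10) for an arbitrary number of dimensions follows; the cumulative-sum construction and the function names are illustrative choices, and the zero padding again encodes S(p) = 0 for coordinates smaller than 1:

    import numpy as np
    from itertools import product

    def integral_table(data):
        # Formulas (8)/(10): n-dimensional integral table, built with cumulative sums
        # along each axis; the zero padding encodes S(p) = 0 for coordinates < 1.
        S = np.zeros(tuple(d + 1 for d in data.shape))
        S[(slice(1, None),) * data.ndim] = data
        for axis in range(data.ndim):
            S = np.cumsum(S, axis=axis)
        return S

    def region_sum(S, p0, p1):
        # Formula (9): sum over the box p0..p1 (1-based, inclusive) by
        # inclusion-exclusion over the 2^n corners tau in {0,1}^n.
        n = len(p0)
        total = 0.0
        for tau in product((0, 1), repeat=n):
            corner = tuple(p1[i] if tau[i] else p0[i] - 1 for i in range(n))
            total += (-1) ** (n - sum(tau)) * S[corner]
        return total

    data = np.random.rand(4, 5, 6)
    S = integral_table(data)
    assert np.isclose(region_sum(S, (2, 1, 3), (4, 5, 6)), data[1:4, 0:5, 2:6].sum())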
3) high-dimensional integral matching method.
This part introduces how to detect the common maximum region from two images. Given the high-dimensional integral tables I_n and J_n, let R_1 ∈ I_n and R_2 ∈ J_n (R_1 and R_2 are regions of I_n and J_n respectively), let p_1 = (x_{1,1}, x_{1,2}, ..., x_{1,n}) and p_2 = (x_{2,1}, x_{2,2}, ..., x_{2,n}) be the coordinates of arbitrary elements of R_1 and R_2, let R_1 consist of the elements between p_1^0 and p_1^1 (p_1^0 and p_1^1 denote the starting and ending elements of region R_1), and let R_2 consist of the elements between p_2^0 and p_2^1 (p_2^0 and p_2^1 denote the starting and ending elements of region R_2). The similarity of p_1 and p_2 is defined as:
d_{p_1, p_2} = i(p_1) \cdot i(p_2)
where i(p_1) and i(p_2) are defined analogously to i(x'_1, x'_2, ..., x'_n) in formula (8).
Then the similarity of R_1 and R_2 is:
R(p^0, p^1) = \sum_{x_{1,1}=x_{1,1}^0}^{x_{1,1}^1} \cdots \sum_{x_{1,n}=x_{1,n}^0}^{x_{1,n}^1} \; \sum_{x_{2,1}=x_{2,1}^0}^{x_{2,1}^1} \cdots \sum_{x_{2,n}=x_{2,n}^0}^{x_{2,n}^1} d_{p_1, p_2}
where p^0 = (p_1^0, p_2^0) and p^1 = (p_1^1, p_2^1).
The above formula can be simplified to:
R(p^0, p^1) = \sum_{\tau \in \{0,1\}^{2n}} (-1)^{\,2n - \|\tau\|_1} \cdot S(p_\tau + \tau - \mathbf{1})   (11)
where S is defined analogously to S(x_1, x_2, ..., x_n) in formula (8), but over the 2n coordinates (x_{1,1}, ..., x_{1,n}, x_{2,1}, ..., x_{2,n}) with i replaced by d; \tau_1, \tau_2 \in \{0,1\}^n, \tau = (\tau_1, \tau_2) \in \{0,1\}^{2n}, and p_\tau = (p_{\tau_1}, p_{\tau_2}).
The similarity of two high-dimensional regions can thus be computed with formula (11).
If p_1^0 = p_1^1 = p_1 = (x_{1,1}, ..., x_{1,n}) and p_2^0 = p_2^1 = p_2 = (x_{2,1}, ..., x_{2,n}), i.e. p^0 = p^1, then
R(p^0, p^1) = d_{p_1, p_2}
and it follows that
d_{p_1, p_2} = R(p^0, p^1) = \sum_{\tau \in \{0,1\}^{2n}} (-1)^{\,2n - \|\tau\|_1} \cdot S(p_\tau + \tau - \mathbf{1}) = S(p^1) + \sum_{\tau \in T} (-1)^{\,2n - \|\tau\|_1} \cdot S(p_\tau + \tau - \mathbf{1})
Rearranging gives:
S(x_{1,1}, \ldots, x_{2,n}) = S(p^1) = \sum_{\tau \in T} (-1)^{\,2n + 1 - \|\tau\|_1} \cdot S(p_\tau + \tau - \mathbf{1}) + d_{p_1, p_2}   (12)
where T = \{0,1\}^{2n} \setminus \{\mathbf{1}\}, and S(p) = 0 whenever any coordinate of p is smaller than 1.
The high-dimensional integral table of region similarities can therefore be constructed quickly using formula (12).
204: detection method of the maximum co-salient region based on high-dimensional integral matching.
For the two images of an image pair, the SEEDS algorithm is first used to obtain the corresponding regular superpixel grids G_1 and G_2. Suppose the width and height of G_1 are h_{1,1} and h_{1,2}, the width and height of G_2 are h_{2,1} and h_{2,2}, and the sub-grids satisfy G'_1 ⊂ G_1 and G'_2 ⊂ G_2. G'_1 consists of the grid cells between P_1^0 = (x_{1,1}^0, x_{1,2}^0) and P_1^1 = (x_{1,1}^1, x_{1,2}^1), and G'_2 consists of the grid cells between P_2^0 = (x_{2,1}^0, x_{2,2}^0) and P_2^1 = (x_{2,1}^1, x_{2,2}^1).
In the present invention, the co-saliency function of two sub-grids is defined as:
E(G'_1, G'_2) = \alpha_1 E_{co}(G'_1, G'_2) + \frac{\alpha_2}{2} \left( E_{\pm M_1}(G'_1) + E_{\pm M_2}(G'_2) \right)   (13)
where \alpha_1 and \alpha_2 are weight factors with \alpha_1 + \alpha_2 = 1; E_{co}(G'_1, G'_2) is defined analogously to E_{co}(h_1, h_2) in formula (3) and denotes the cosine similarity of G'_1 and G'_2; E_{\pm M_1}(G'_1) and E_{\pm M_2}(G'_2) are the local saliencies, defined as:
E_{\pm M_1}(G'_1) = \sum_{i=x_{1,1}^0}^{x_{1,1}^1} \sum_{j=x_{1,2}^0}^{x_{1,2}^1} \rho^1_{i,j}
E_{\pm M_2}(G'_2) = \sum_{i=x_{2,1}^0}^{x_{2,1}^1} \sum_{j=x_{2,2}^0}^{x_{2,2}^1} \rho^2_{i,j}
where \rho^1_{i,j} and \rho^2_{i,j} are elements of the punishment maps ±M_1 and ±M_2 respectively.
E_{\pm M_1}(G'_1) and E_{\pm M_2}(G'_2) can be computed quickly with formula (6):
E_{\pm M_1}(G'_1) = S_{\pm M_1}(x_{1,1}^1, x_{1,2}^1) - S_{\pm M_1}(x_{1,1}^1, x_{1,2}^0 - 1) - S_{\pm M_1}(x_{1,1}^0 - 1, x_{1,2}^1) + S_{\pm M_1}(x_{1,1}^0 - 1, x_{1,2}^0 - 1)
E_{\pm M_2}(G'_2) = S_{\pm M_2}(x_{2,1}^1, x_{2,2}^1) - S_{\pm M_2}(x_{2,1}^1, x_{2,2}^0 - 1) - S_{\pm M_2}(x_{2,1}^0 - 1, x_{2,2}^1) + S_{\pm M_2}(x_{2,1}^0 - 1, x_{2,2}^0 - 1)   (14)
where S_{\pm M_1} and S_{\pm M_2} are the integral images of ±M_1 and ±M_2, defined analogously to S(x_1, x_2, ..., x_n) in formula (8).
Once the histograms of the cells of G_1 and G_2 are available, the histogram H of an arbitrary sub-grid can be expressed as the sum of the histograms of its cells:
H = \sum_{k} h_k
(h_k denotes the histogram corresponding to the k-th cell of the sub-grid).
Then the similarity of G'_1 and G'_2 can be expressed as:
E_{co}(G'_1, G'_2) = \frac{H_1^T \cdot H_2}{\sqrt{H_1^T \cdot H_1} \cdot \sqrt{H_2^T \cdot H_2}}   (15)
For any superpixel cells P_1 = (x_{1,1}, x_{1,2}) ∈ G_1 and P_2 = (x_{2,1}, x_{2,2}) ∈ G_2, with histograms h_1 and h_2 corresponding to P_1 and P_2 respectively, the similarity of P_1 and P_2 is defined as d_{P_1, P_2} = h_1^T \cdot h_2.
From formula (12) one obtains:
S(x_{1,1}, x_{1,2}, x_{2,1}, x_{2,2}) = \sum_{\tau \in T} (-1)^{\,5 - \|\tau\|_1} \cdot S(p_\tau + \tau - \mathbf{1}) + d_{P_1, P_2}
where T = \{0,1\}^4 \setminus \{\mathbf{1}\}, and S(p) = 0 whenever any coordinate of p is smaller than 1.
From formula (11) one obtains:
R_1(G'_1, G'_2) = \sum_{x_{1,1}=x_{1,1}^0}^{x_{1,1}^1} \sum_{x_{1,2}=x_{1,2}^0}^{x_{1,2}^1} \sum_{x_{2,1}=x_{2,1}^0}^{x_{2,1}^1} \sum_{x_{2,2}=x_{2,2}^0}^{x_{2,2}^1} d_{P_1, P_2} = \sum_{\tau \in \{0,1\}^4} (-1)^{\,4 - \|\tau\|_1} \cdot S_1(p_\tau + \tau - \mathbf{1})
where S_1(x_{1,1}, x_{1,2}, x_{2,1}, x_{2,2}) is defined analogously to S(x_1, x_2, ..., x_n) in formula (8); \tau_1, \tau_2 \in \{0,1\}^2, \tau = (\tau_1, \tau_2) \in \{0,1\}^4, \|\tau\|_1 = \sum_i \tau_i, p_\tau = (P_{\tau_1}, P_{\tau_2}), p^0 = (P_1^0, P_2^0), p^1 = (P_1^1, P_2^1).
Similarly one obtains:
R_2(G'_1, G'_1) = \sum_{\tau \in \{0,1\}^4} (-1)^{\,4 - \|\tau\|_1} \cdot S_2(p_\tau + \tau - \mathbf{1})
R_3(G'_2, G'_2) = \sum_{\tau \in \{0,1\}^4} (-1)^{\,4 - \|\tau\|_1} \cdot S_3(p_\tau + \tau - \mathbf{1})
So formula (15) can be rewritten as:
E_{co}(G'_1, G'_2) = \frac{R_1(G'_1, G'_2)}{\sqrt{R_2(G'_1, G'_1) \cdot R_3(G'_2, G'_2)}}   (16)
Combining formulas (13), (14) and (16) gives:
E(G'_1, G'_2) = \alpha_1 \frac{R_1(G'_1, G'_2)}{\sqrt{R_2(G'_1, G'_1) \cdot R_3(G'_2, G'_2)}} + \frac{\alpha_2}{2} \left( E_{\pm M_1}(G'_1) + E_{\pm M_2}(G'_2) \right)
where \alpha_1 and \alpha_2 are defined in formula (13). By exhaustively enumerating the sub-grid pairs (G'_1, G'_2) of the image pair, the present invention obtains the maximum co-salient region, as sketched below.
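The whole search of step 204 can be condensed into the following brute-force Python sketch, intended for small grids. The per-cell histograms H1 and H2, the punishment maps rho1 and rho2, and all function names are assumptions made for illustration; the sketch enumerates every sub-grid pair and evaluates formula (13) through the integral tables of formulas (12), (14) and (16):

    import numpy as np
    from itertools import product

    def box_table(D):
        # Zero-padded integral table of an n-D array D (formulas (10)/(12)).
        S = np.zeros(tuple(d + 1 for d in D.shape))
        S[(slice(1, None),) * D.ndim] = D
        for axis in range(D.ndim):
            S = np.cumsum(S, axis=axis)
        return S

    def box_sum(S, p0, p1):
        # Inclusion-exclusion box sum over the 1-based inclusive box p0..p1 (formulas (9)/(11)).
        n = len(p0)
        return sum((-1) ** (n - sum(t)) *
                   S[tuple(p1[i] if t[i] else p0[i] - 1 for i in range(n))]
                   for t in product((0, 1), repeat=n))

    def max_co_salient(H1, H2, rho1, rho2, a1=0.3, a2=0.7):
        # Brute-force search for the sub-grid pair maximising E(G1', G2') of formula (13).
        # H1, H2: per-cell histograms of the two superpixel grids, shape (h, w, K);
        # rho1, rho2: punishment maps +/-M1 and +/-M2 of the same grid sizes.
        S1 = box_table(np.einsum('ijk,pqk->ijpq', H1, H2))   # d(P1, P2) = h1^T.h2, formula (12)
        S2 = box_table(np.einsum('ijk,pqk->ijpq', H1, H1))
        S3 = box_table(np.einsum('ijk,pqk->ijpq', H2, H2))
        M1, M2 = box_table(rho1), box_table(rho2)

        def rects(h, w):  # all sub-grids (i0, j0, i1, j1), 1-based inclusive
            return [(i0, j0, i1, j1)
                    for i0 in range(1, h + 1) for i1 in range(i0, h + 1)
                    for j0 in range(1, w + 1) for j1 in range(j0, w + 1)]

        stats2 = [(r2,
                   box_sum(M2, (r2[0], r2[1]), (r2[2], r2[3])),                              # formula (14)
                   box_sum(S3, (r2[0], r2[1], r2[0], r2[1]), (r2[2], r2[3], r2[2], r2[3])))  # R3(G2', G2')
                  for r2 in rects(*H2.shape[:2])]
        best, best_pair = -np.inf, None
        for r1 in rects(*H1.shape[:2]):
            e1 = box_sum(M1, (r1[0], r1[1]), (r1[2], r1[3]))                                 # formula (14)
            R2 = box_sum(S2, (r1[0], r1[1], r1[0], r1[1]), (r1[2], r1[3], r1[2], r1[3]))     # R2(G1', G1')
            for r2, e2, R3 in stats2:
                R1 = box_sum(S1, (r1[0], r1[1], r2[0], r2[1]), (r1[2], r1[3], r2[2], r2[3])) # R1(G1', G2')
                e = a1 * R1 / (np.sqrt(R2 * R3) + 1e-12) + a2 / 2.0 * (e1 + e2)              # formulas (13), (16)
                if e > best:
                    best, best_pair = e, (r1, r2)
        return best_pair, best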
The feasibility of the method is verified below with concrete experiments, as described below:
The performance of the saliency detection methods (Saliency Detection Method) SF [1], FT [2], RC [3], HC [3], CS [15], SR [16] and PD [17] and of the co-saliency detection methods (Co-Saliency Detection Method) PC [5], MS [6] and CC [7] is tested on the public data set [6] and on the Cornell University data set (CMU-Cornell iCoseg data set [14]) respectively, and the running time and detection results of the above methods are compared with those of the method of the present invention.
1. Experimental hardware and software
MATLAB 2010, Intel Core i7-2600 3.4 GHz, 8 GB memory.
2. Experimental preparation
1) From the public data set [6], 105 images (containing people, flowers, buses, ships, animals, etc.) are taken for testing; the largest of these images does not exceed 128*128. In the ground truth provided by the public data set, the detection target is labeled 1 and the background is labeled 0.
2) From the Cornell University data set (CMU-Cornell iCoseg data set [14]), 46 image pairs (with a variety of contents) are taken; the largest sampled image does not exceed 128*128.
3) Accuracy is expressed with the F value (F-measure), defined as:
F_\beta = \frac{(1 + \beta^2) \cdot (P \cdot R)}{\beta^2 \cdot P + R}
where, following the settings of [1], [6] and [7], the present invention also sets \beta^2 = 0.3; P denotes precision (Precision) and R denotes recall (Recall), defined respectively as:
P = \frac{|identified \cap groundtruth|}{|identified|}, \quad R = \frac{|identified \cap groundtruth|}{|groundtruth|}
In these definitions, identified denotes the region detected by the method under evaluation, and groundtruth denotes the true region of the salient object of the image pair.
When running the method of the present invention, the superpixel grid used is generated by the SEEDS [11] method; meanwhile, \omega_1 = 0.9 and \omega_2 = 0.1 in formula (1), \mu = 1.6 in formula (2), and \alpha_1 = 0.3 and \alpha_2 = 0.7 in formula (13). To obtain the binary saliency map (Binary Saliency Map), the adaptive threshold T_\alpha (Adaptive Threshold) is set as:
T_\alpha = \frac{2}{H \cdot W} \sum_{i=1}^{H} \sum_{j=1}^{W} S_{i,j}
where H and W denote the height and width of the given saliency map respectively, and S_{i,j} is the element in row i and column j of the given saliency map.
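The evaluation protocol above can be summarized in a short Python sketch; the array names and helper functions are illustrative assumptions:

    import numpy as np

    def adaptive_threshold(saliency):
        # T_alpha = 2 / (H * W) * sum of the saliency map, used to binarise it.
        return 2.0 * saliency.mean()

    def precision_recall_f(binary_map, ground_truth, beta2=0.3):
        # Precision, recall and F_beta as defined above (beta^2 = 0.3 as in [1], [6], [7]).
        identified = binary_map.astype(bool)
        gt = ground_truth.astype(bool)
        tp = np.logical_and(identified, gt).sum()
        precision = tp / max(identified.sum(), 1)
        recall = tp / max(gt.sum(), 1)
        f = (1 + beta2) * precision * recall / max(beta2 * precision + recall, 1e-12)
        return precision, recall, f

    # Usage (hypothetical arrays): binary = saliency > adaptive_threshold(saliency)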
3. Experimental content
MATLAB is opened on the laboratory computer, the code of each method (available for download from each author's homepage) is run on the two data sets, and the running times and detection results are obtained.
1) Experimental comparison and conclusions
Fig. 6 is drawn from the experimental results on the public data set; the abscissa represents the various methods and the ordinate represents the accuracy of the corresponding method. As can be seen from Fig. 6, the accuracy and the F value of the present invention are the highest. Fig. 5 shows the detection results of some of the methods: the first two columns are the original image pairs, the third and fourth columns are the ground truth, the fifth and sixth columns are the detection results of RC, the seventh and eighth columns are the detection results of SF, and so on; the last two columns are the detection results of the present invention, from which it is obvious that the detection results of the present invention are closer to the ground truth. Table 1 compares the average running times of the different methods on the public data set; it can be concluded that, among the co-saliency detection (Co-Saliency Detection) methods, the present invention takes the least time, on average only 0.218 seconds.
Table 1: comparison of running times on the public data set
From Fig. 5, Fig. 6 and Table 1 the conclusion can be drawn that the present invention outperforms the other co-saliency detection methods (Co-saliency Detection Method) as well as the saliency detection methods (Saliency Detection Method) both in detection results and in accuracy; at the same time, its average running time is lower than that of the other co-saliency detection methods (Co-saliency Detection Method), and it is also better than some saliency detection methods (Saliency Detection Method) such as CS and PD.
Fig. 7 is drawn from the results obtained on the Cornell University data set (CMU-Cornell iCoseg data set [14]); the abscissa again represents the methods and the ordinate represents the accuracy of the corresponding method. As can be seen from Fig. 7, the recall and the accuracy of the present invention are both higher than those of the other detection methods. Fig. 8 shows the detection results of some of the methods: the first two columns are the original image pairs, the third and fourth columns are the ground truth, the fifth and sixth columns are the detection results of RC, the seventh and eighth columns are the detection results of SF, and so on; the last two columns are the detection results of the present invention, from which it is likewise obvious that the detection results of the present invention are closer to the ground truth. Table 2 compares the average running times of the different methods on the Cornell University data set (CMU-Cornell iCoseg data set [14]); it can be concluded that, among the co-saliency detection (Co-Saliency Detection) methods, the present invention takes the least time, on average only 0.231 seconds.
Table 2: comparison of running times on the Cornell University data set (CMU-Cornell iCoseg data set [14])
From Fig. 7, Fig. 8 and Table 2 the conclusion can be drawn that the present invention outperforms the other co-saliency detection methods (Co-saliency Detection Method) and the saliency detection methods (Saliency Detection Method) both in detection results and in accuracy (F value); at the same time, its average running time is lower than that of the other co-saliency detection methods (Co-saliency Detection Method), and it is also better than some saliency detection methods (Saliency Detection Method) such as CS and PD.
The detection results on the two data sets show that, compared with other co-saliency detection methods (Co-saliency Detection Method), the co-saliency detection method proposed by the invention is faster and more accurate.
References
[1] Perazzi F, Krahenbuhl P, Pritch Y, et al. Saliency filters: Contrast based filtering for salient region detection[C]//Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2012: 733-740.
[2] Achanta R, Hemami S, Estrada F, et al. Frequency-tuned salient region detection[C]//Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009: 1597-1604.
[3] Cheng M M, Zhang G X, Mitra N J, et al. Global contrast based salient region detection[C]//Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. IEEE, 2011: 409-416.
[4] Ma Y F, Zhang H J. Contrast-based image attention analysis by using fuzzy growing[C]//Proceedings of the 11th ACM International Conference on Multimedia. ACM, 2003: 374-381.
[5] Chen H T. Preattentive co-saliency detection[C]//Image Processing (ICIP), 2010 17th IEEE International Conference on. IEEE, 2010: 1117-1120.
[6] Li H, Ngan K N. A co-saliency model of image pairs[J]. Image Processing, IEEE Transactions on, 2011, 20(12): 3365-3375.
[7] Fu H, Cao X, Tu Z. Cluster-based co-saliency detection[J]. Image Processing, IEEE Transactions on, 2013, 22(10): 3766-3778.
[8] Moore A P, Prince S, Warrell J, et al. Superpixel lattices[C]//Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on. IEEE, 2008: 1-8.
[9] Moore A P, Prince S J D, Warrell J. "Lattice Cut" - Constructing superpixels using layer constraints[C]//Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010: 2117-2124.
[10] Levinshtein A, Stere A, Kutulakos K N, et al. Turbopixels: Fast superpixels using geometric flows[J]. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 2009, 31(12): 2290-2297.
[11] Van den Bergh M, Boix X, Roig G, et al. SEEDS: Superpixels extracted via energy-driven sampling[M]//Computer Vision - ECCV 2012. Springer Berlin Heidelberg, 2012: 13-26.
[12] Li L, Feng W, Wan L, et al. Maximum cohesive grid of superpixels for fast object localization[C]//Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on. IEEE, 2013: 3174-3181.
[13] Viola P, Jones M. Rapid object detection using a boosted cascade of simple features[C]//Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on. IEEE, 2001, 1: I-511-I-518 vol. 1.
[14] Batra D, Kowdle A, Parikh D, et al. iCoseg: Interactive co-segmentation with intelligent scribble guidance[C]//Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010: 3169-3176.
[15] Itti L, Koch C, Niebur E. A model of saliency-based visual attention for rapid scene analysis[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(11): 1254-1259.
[16] Hou X, Zhang L. Saliency detection: A spectral residual approach[C]//Computer Vision and Pattern Recognition, 2007. CVPR'07. IEEE Conference on. IEEE, 2007: 1-8.
[17] Margolin R, Tal A, Zelnik-Manor L. What makes a patch distinct?[C]//Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on. IEEE, 2013: 1139-1146.
Those skilled in the art will appreciate that the accompanying drawings are only schematic diagrams of a preferred embodiment, and that the serial numbers of the above embodiments of the present invention are only for description and do not represent the merits of the embodiments.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the present invention; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (3)

1. A fast image co-salient region detection method based on integral matching, characterized in that the method comprises the following steps:
generating a saliency map, and establishing a distinct punishment map on the basis of the saliency map;
establishing a robust color model, and obtaining the cosine similarity of two regions of an image pair via the color model;
constructing a high-dimensional integral matching method, and detecting the maximum co-salient region with the high-dimensional integral matching method.
2. The fast image co-salient region detection method based on integral matching according to claim 1, characterized in that the step of establishing the robust color model and obtaining the cosine similarity of two regions of an image pair via the color model is specifically:
i' = (i_r/\Theta, i_g/\Theta, i_b/\Theta), where \Theta = \max(i_r, i_g, i_b)
where i' is a pixel in the robust color model and i_r, i_g, i_b are the three channel values of the pixel;
the cosine similarity of the two regions is
E_{co}(h_1, h_2) = \frac{h_1^T \cdot h_2}{\|h_1\| \cdot \|h_2\|}
where the histograms corresponding to the two regions are h_1 and h_2 respectively.
3. The fast image co-salient region detection method based on integral matching according to claim 1, characterized in that the step of constructing the high-dimensional integral matching method and detecting the maximum co-salient region with the high-dimensional integral matching method is specifically:
1) fast construction of the two-dimensional integral image and fast computation of rectangular region values;
2) on the basis of the two-dimensional integral image, fast construction of the high-dimensional integral table and fast computation of high-dimensional rectangular region values;
3) computation of the similarity of two high-dimensional regions, realization of the integral matching of high-dimensional regions, and obtaining of the maximum co-salient region.
CN201510258792.7A 2015-05-20 2015-05-20 Fast image co-salient region detection method based on integral matching Pending CN104899875A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510258792.7A CN104899875A (en) 2015-05-20 2015-05-20 Fast image co-salient region detection method based on integral matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510258792.7A CN104899875A (en) 2015-05-20 2015-05-20 Rapid image cooperation salient region monitoring method based on integration matching

Publications (1)

Publication Number Publication Date
CN104899875A true CN104899875A (en) 2015-09-09

Family

ID=54032520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510258792.7A Pending CN104899875A (en) Fast image co-salient region detection method based on integral matching

Country Status (1)

Country Link
CN (1) CN104899875A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106910228A (en) * 2017-03-06 2017-06-30 赛诺威盛科技(北京)有限公司 The connection method of Slab exploded chart pictures
CN107767404A (en) * 2017-06-23 2018-03-06 北京理工大学 A kind of remote sensing images sequence moving target detection method based on improvement ViBe background models
CN109712143A (en) * 2018-12-27 2019-05-03 北京邮电大学世纪学院 A kind of Fast image segmentation method based on super-pixel multiple features fusion

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7366323B1 (en) * 2004-02-19 2008-04-29 Research Foundation Of State University Of New York Hierarchical static shadow detection method
CN103942774A (en) * 2014-01-20 2014-07-23 天津大学 Multi-target collaborative salient-region detection method based on similarity propagation
CN104240256A (en) * 2014-09-25 2014-12-24 西安电子科技大学 Image salient detecting method based on layering sparse modeling

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7366323B1 (en) * 2004-02-19 2008-04-29 Research Foundation Of State University Of New York Hierarchical static shadow detection method
CN103942774A (en) * 2014-01-20 2014-07-23 天津大学 Multi-target collaborative salient-region detection method based on similarity propagation
CN104240256A (en) * 2014-09-25 2014-12-24 西安电子科技大学 Image salient detecting method based on layering sparse modeling

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
尹雪飞: "协同显著目标区域检测及其参数优化方法研究", 《万方数据知识服务平台》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106910228A (en) * 2017-03-06 2017-06-30 赛诺威盛科技(北京)有限公司 The connection method of Slab exploded chart pictures
CN107767404A (en) * 2017-06-23 2018-03-06 北京理工大学 A kind of remote sensing images sequence moving target detection method based on improvement ViBe background models
CN109712143A (en) * 2018-12-27 2019-05-03 北京邮电大学世纪学院 A kind of Fast image segmentation method based on super-pixel multiple features fusion
CN109712143B (en) * 2018-12-27 2021-01-26 北京邮电大学世纪学院 Rapid image segmentation method based on superpixel multi-feature fusion

Similar Documents

Publication Publication Date Title
CN104134234B (en) A kind of full automatic three-dimensional scene construction method based on single image
CN108121991B (en) Deep learning ship target detection method based on edge candidate region extraction
CN108510504B (en) Image segmentation method and device
CN111275696B (en) Medical image processing method, image processing method and device
CN108537239B (en) Method for detecting image saliency target
CN109376611A (en) A kind of saliency detection method based on 3D convolutional neural networks
CN110738207A (en) character detection method for fusing character area edge information in character image
CN111986099A (en) Tillage monitoring method and system based on convolutional neural network with residual error correction fused
CN111161317A (en) Single-target tracking method based on multiple networks
CN104715251B (en) A kind of well-marked target detection method based on histogram linear fit
CN108846404B (en) Image significance detection method and device based on related constraint graph sorting
CN111414954B (en) Rock image retrieval method and system
CN110827312B (en) Learning method based on cooperative visual attention neural network
CN108629783A (en) Image partition method, system and medium based on the search of characteristics of image density peaks
CN103632153B (en) Region-based image saliency map extracting method
CN110222760A (en) A kind of fast image processing method based on winograd algorithm
CN110852327A (en) Image processing method, image processing device, electronic equipment and storage medium
CN107067037A (en) A kind of method that use LLC criterions position display foreground
CN108388901B (en) Collaborative significant target detection method based on space-semantic channel
CN104899875A (en) Fast image co-salient region detection method based on integral matching
CN104715476B (en) A kind of well-marked target detection method based on histogram power function fitting
CN105023264A (en) Infrared image remarkable characteristic detection method combining objectivity and background property
CN104778683A (en) Multi-modal image segmenting method based on functional mapping
CN115115847B (en) Three-dimensional sparse reconstruction method and device and electronic device
CN110796716A (en) Image coloring method based on multiple residual error networks and regularized transfer learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150909

WD01 Invention patent application deemed withdrawn after publication