CN106296632A - Salient object detection method based on amplitude spectrum analysis - Google Patents

Salient object detection method based on amplitude spectrum analysis

Info

Publication number
CN106296632A
CN106296632A (application CN201510271210.9A; granted as CN106296632B)
Authority
CN
China
Prior art keywords
salient object
image
amplitude spectrum
saliency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510271210.9A
Other languages
Chinese (zh)
Other versions
CN106296632B (en)
Inventor
郑海永
朱亚菲
赵红苗
姬光荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ocean University of China
Original Assignee
Ocean University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ocean University of China filed Critical Ocean University of China
Priority to CN201510271210.9A priority Critical patent/CN106296632B/en
Publication of CN106296632A publication Critical patent/CN106296632A/en
Application granted granted Critical
Publication of CN106296632B publication Critical patent/CN106296632B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20004: Adaptive image processing
    • G06T2207/20048: Transform domain processing
    • G06T2207/20056: Discrete and fast Fourier transform [DFT, FFT]
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a salient object detection method based on amplitude spectrum analysis, comprising the following steps: extract the intensity feature I and the color-opponent features RG and BY from the image; transform these three feature maps into the frequency domain by the quaternion Fourier transform to obtain the amplitude spectrum, phase spectrum and eigenaxis spectrum of the image; use the image signature operator to detect the size and center position of each salient object in the image; use the specific relation between the optimal amplitude-spectrum filtering scale and the salient object size to obtain the optimal filtering scale corresponding to each salient object, and apply Gaussian filtering at the different scales to the amplitude spectrum of the image; determine the weight of the optimal saliency map corresponding to each salient object according to a center-biased Gaussian distribution and the salient object position, perform adaptive Gaussian weighted fusion of the different saliency maps, and compute the fused saliency map; apply Gaussian filtering to the fused saliency map; and normalize the saliency values to obtain the final saliency map. The method of the present invention can suppress the background quickly and effectively, highlight salient objects uniformly, and retain more saliency information of the image.

Description

Salient object detection method based on amplitude spectrum analysis
Technical field
The invention belongs to the technical field of image processing, and specifically relates to a salient object detection method based on amplitude spectrum analysis.
Background technology
With the development of computer science and technology, visual attention computation models have increasingly become a focus of interest for researchers in computer vision and image processing, and are more and more widely applied in the computer vision field, for example in image segmentation, object recognition, image retargeting and video compression.
In computer science research, visual attention is referred to as saliency detection. Current research on saliency detection can be roughly divided into two classes: eye fixation prediction and salient object detection. Eye fixation prediction aims to model human fixation points by computing a saliency map. Inspired by the feature integration theory and the neurobiological framework proposed by Koch and Ullman (Koch C, Ullman S. Shifts in selective visual attention: Towards the underlying neural circuitry. Matters of Intelligence. Springer Netherlands, 1987: 115–141.), Itti et al. established the first bottom-up saliency detection computation model (Itti L, Koch C, Niebur E. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20: 1254–1259.). This model extracts features such as color, intensity and orientation with linear filters to obtain multi-scale Gaussian pyramids of feature maps, computes conspicuity maps through center-surround differences and a normalization operator, combines these conspicuity maps into a saliency map by a linear fusion mechanism, and uses a winner-take-all strategy to govern the shift of visual focus. The model simulates the visual attention mechanism fairly completely and is a milestone in the field of computational saliency detection. Fixation prediction models have since been greatly improved and developed, but their predictions tend to highlight texture-dense regions such as edges and corners rather than whole objects, so the applicability of this class of models is limited.
Salient object detection is used to detect, and completely segment, the whole object in a given scene that is salient and attracts human attention. Liu et al. combined local, regional and global salient object features through a conditional random field (Liu T, Sun J, Zheng N, et al. Learning to detect a salient object. IEEE Conference on Computer Vision and Pattern Recognition, 2007. 1–8.); Cheng et al. proposed a saliency detection method based on region contrast (Cheng M, Zhang G, Mitra N J, et al. Global contrast based salient region detection. IEEE Conference on Computer Vision and Pattern Recognition, 2011. 409–416.). More recently, research has focused on making saliency detection results more accurate and more robust, for example the discriminative regional feature integration approach (Jiang H, Wang J, Yuan Z, et al. Salient object detection: A discriminative regional feature integration approach. IEEE Conference on Computer Vision and Pattern Recognition, 2013. 2083–2090.), graph-based manifold ranking (Yang C, Zhang L, Lu H, et al. Saliency detection via graph-based manifold ranking. IEEE Conference on Computer Vision and Pattern Recognition, 2013. 3166–3173.) and hierarchical methods (Yan Q, Xu L, Shi J, et al. Hierarchical saliency detection. IEEE Conference on Computer Vision and Pattern Recognition, 2013. 1155–1162.). Although the precision of salient object detection keeps increasing, more and more features are selected during processing and the algorithms become increasingly complex, so the amount of computation grows and real-time processing becomes difficult.
In order to be simple, fast, effective and independent of category or other prior knowledge, frequency-domain saliency detection has attracted more and more research. Hou and Zhang first brought saliency detection into the frequency domain and proposed the spectral residual saliency detection algorithm (Hou X, Zhang L. Saliency detection: A spectral residual approach. IEEE Conference on Computer Vision and Pattern Recognition, 2007. 1–8.). Guo et al. argued that the amplitude spectrum can be discarded and the phase spectrum alone is sufficient to recover the saliency map of an image, without using the spectral residual (Guo C, Ma Q, Zhang L. Spatio-temporal saliency detection using phase spectrum of quaternion fourier transform. IEEE Conference on Computer Vision and Pattern Recognition, 2008. 1–8.); the results are roughly the same as those of the SR algorithm. Later, frequency-domain saliency models were further extended, for example the pulsed discrete cosine transform model, pulsed principal component analysis, and the frequency-domain divisive normalization model (Bian P, Zhang L. Visual saliency: A biologically plausible contourlet-like frequency domain approach. Cognitive Neurodynamics, 2010, 4: 189–198.). The above methods achieve a certain effect, but they can only detect small regions with complex edges and textures, and the detection effect for larger object regions is unsatisfactory. Recently, Li et al. proposed a saliency detection algorithm based on spectrum scale-space analysis (Li J, Levine M D, An X, et al. Visual saliency based on scale-space analysis in the frequency domain. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35: 996–1010.), which solves this problem to some extent; however, because entropy is used to select a single saliency map, salient regions are highlighted unevenly and much saliency information is lost. In view of this technical deficiency, it is desirable to provide a salient object detection method based on amplitude spectrum analysis that can suppress the background quickly and effectively, highlight salient objects uniformly, and retain more saliency information of the image.
Summary of the invention
The object of the present invention is to overcome the defects of the prior art and to provide a salient object detection method based on amplitude spectrum analysis which can suppress the background quickly and effectively, highlight salient objects uniformly, and retain more saliency information of the image.
To achieve the above object, the present invention provides a salient object detection method based on amplitude spectrum analysis, comprising the following steps:
S1: convert the original image of size M × N into an image of size m × n;
S2: extract three features from the image obtained in step S1: the intensity feature I and the color-opponent features RG and BY;
S3: transform the three feature maps of step S2 into the frequency domain by the quaternion Fourier transform to obtain the amplitude spectrum, phase spectrum and eigenaxis spectrum of the image;
S4: use the image signature operator to detect the size and center position of each salient object in the image;
S5: use the specific relation between the optimal amplitude-spectrum filtering scale and the salient object size to obtain the optimal filtering scale corresponding to each salient object, and apply Gaussian filtering at the different scales to the amplitude spectrum of the image;
S6: apply the inverse quaternion Fourier transform to each amplitude spectrum filtered in step S5 together with the phase spectrum and eigenaxis spectrum of step S3, obtaining the optimal saliency map corresponding to each salient object;
S7: determine the weight of the optimal saliency map corresponding to each salient object according to a center-biased Gaussian distribution and the salient object position, perform adaptive Gaussian weighted fusion of the different saliency maps obtained in S6, and compute the fused saliency map;
S8: apply Gaussian filtering to the fused saliency map;
S9: resize the image obtained in step S8 back to size M × N;
S10: normalize the saliency values of step S9 to obtain the final saliency map.
In step S1, the values of m and n are 256 and 256, respectively.
In step S4, the size and center position of each salient object are obtained as follows:
Step 1: use the image signature operator to reconstruct the image of step S1 and detect the K salient objects;
Step 2: low-pass filter the reconstructed image with a Gaussian filter to eliminate noise;
Step 3: apply adaptive thresholding to the image obtained in Step 2 using Otsu's method (maximum between-class variance) to obtain a binary image;
Step 4: compute the size of the k-th salient object, namely its height h_k and width w_k, together with its center position (m_k, n_k), by the minimum bounding rectangle algorithm.
In step S5, the specific relation between the size of the k-th salient object and the optimal amplitude-spectrum filtering scale determines σ_k, the filter scale of the optimal Gaussian kernel corresponding to the k-th salient object, from H and W, the height and width of the image, from h_k and w_k, the height and width of the k-th salient object, and from an adjustment coefficient α.
In step S5, the amplitude-spectrum filtering Gaussian kernel corresponding to the k-th salient object is g(u, v; σ_k), where u and v denote the coordinate position in the amplitude spectrum of the image.
In step S7, the center-biased Gaussian distribution ω_k of the k-th salient object is defined by the image center (m_c, n_c), the center (m_k, n_k) of the k-th salient object, and a regulation parameter η.
In step S7, the Gaussian weighted fusion combines the optimal saliency maps with the weights ω_k.
α is set to 1.5.
η is set to 16.
The concrete operating steps are as follows:
S1: Convert the original image of size M × N into an image of size m × n, where m and n are 256 and 256, respectively;
S2: Extract three features of the image: the intensity feature I and the color-opponent features RG and BY;
S3: Transform the three feature maps into the frequency domain by the quaternion Fourier transform to obtain the amplitude spectrum, phase spectrum and eigenaxis spectrum of the image;
S4: Use the image signature operator to obtain the size and center position of each salient object in the image;
S5: Use the specific relation between the optimal amplitude-spectrum filtering scale and the salient object size to obtain the optimal filtering scale corresponding to each salient object, and apply Gaussian filtering at the different scales to the amplitude spectrum of the image;
S6: Apply the inverse quaternion Fourier transform to each filtered amplitude spectrum together with the phase spectrum and eigenaxis spectrum of the original image, obtaining the optimal saliency map corresponding to each salient object;
S7: Determine the weight of the optimal saliency map corresponding to each salient object according to the salient object position and a center-biased Gaussian distribution, perform adaptive Gaussian weighted fusion of the optimal saliency maps corresponding to the different salient objects, and compute the fused saliency map;
S8: Apply Gaussian filtering to the fused saliency map;
S9: Resize the image obtained in step S8, converting the processed image back to size M × N;
S10: Normalize the saliency values of the image to obtain the final saliency map.
In step S4, the size and center position of each salient object are obtained as follows:
Step 1: use the image signature operator to reconstruct the image of step S1 and detect the K salient objects;
Step 2: low-pass filter the reconstructed image with a Gaussian filter to eliminate noise;
Step 3: apply adaptive thresholding to the image obtained in Step 2 using Otsu's method (maximum between-class variance) to obtain a binary image;
Step 4: compute the size of the k-th salient object, namely its height h_k and width w_k, together with its center position (m_k, n_k), by the minimum bounding rectangle algorithm. The minimum bounding rectangle is the rectangle of minimum area; the principle of the algorithm is to rotate the boundary of the object within a range of 90 degrees in increments of about 3 degrees, recording at each rotation the minimum and maximum x and y values of the boundary points in the current coordinate system; when the area of the bounding rectangle reaches its minimum after rotation to a certain angle, the parameters of that minimum-area rectangle are taken as the minimum bounding rectangle.
In step S5, the specific relation between the size of the k-th salient object and the optimal amplitude-spectrum filtering scale determines σ_k, the filter scale of the optimal Gaussian kernel corresponding to the k-th salient object, from H and W, the height and width of the image, from h_k and w_k, the height and width of the k-th salient object, and from an adjustment coefficient α, which is set to 1.5. From this, the amplitude-spectrum filtering Gaussian kernel corresponding to the k-th salient object is g(u, v; σ_k), where (u, v) denotes the coordinate position in the amplitude spectrum of the image.
In step S7, the center-biased Gaussian distribution ω_k of the k-th salient object is defined by the image center (m_c, n_c), the center (m_k, n_k) of the k-th salient object, and a parameter η, which is set to 16. The Gaussian weighted fusion combines the optimal saliency maps with the weights ω_k.
The advantages of the present invention are: by using the specific relation between salient object size and optimal amplitude-spectrum filtering scale, the present invention proposes a new adaptive optimal scale selection method and derives an adaptive weighted fusion strategy, so that the salient objects in an image can be detected more quickly and uniformly and more saliency information is retained.
Brief description of the drawings
Fig. 1 shows the algorithmic idea of the image saliency detection method of the specific embodiment of the invention;
Fig. 2 shows the algorithm flow of the image saliency detection method of the specific embodiment of the invention;
Fig. 3 shows the salient object detection results of the specific embodiment of the invention;
wherein: (a) natural images; (b) the corresponding saliency maps computed by the present invention; (c) the ground-truth maps.
Detailed description of the invention
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. The embodiments of the present invention are disclosed as follows, but the invention is not limited to them. It should be pointed out that those skilled in the art can make several variations and improvements without departing from the concept of the invention, and these all fall within the protection scope of the present invention.
The design idea of the present invention is shown in Fig. 1: I is the input original image, which contains two salient objects of different sizes. First, the sizes of the salient objects are obtained by a suitable method; in the example, the size of the large object R1 is a × b and the size of the small object is c × d. The amplitude-spectrum optimal filtering scales K1 and K2 corresponding to the salient objects are obtained from the specific relation between salient object size and optimal amplitude-spectrum filtering scale. After frequency-domain processing and inverse transformation, the saliency maps M1 and M2 are obtained respectively, and the final saliency map M is then obtained through the adaptive weighted fusion mechanism.
As shown in Fig. 2, the flow of the image salient object detection method in this embodiment comprises the following steps:
S1: The original image of size M × N is converted into an image of size m × n with the bilinear interpolation algorithm, where m and n are set to 256 and 256, respectively; this step further improves the processing speed and suppresses noise in the high-frequency regions of the image;
S2: Extract three features of the image: the intensity feature I and the color-opponent features RG and BY;
In this step, the intensity and opponent features are conventional features with mature computation methods; the formulas below are given only for illustration, and the detailed computation of the feature values is not described further:
Intensity I: I = (r + g + b)/3;
Opponent feature RG: RG = |(r - (g + b)/2) - (g - (r + b)/2)|;
Opponent feature BY: BY = |(b - (r + g)/2) - ((r + g)/2 - |r - g|/2 - b)|;
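For illustration, a minimal Python/NumPy sketch of the feature extraction above; the (H, W, 3) RGB array layout and floating-point range are assumptions made for the example:

```python
import numpy as np

def extract_features(img):
    """Intensity I and color-opponent features RG and BY, per the formulas above.
    'img' is assumed to be an RGB array of shape (H, W, 3) with float values."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    I = (r + g + b) / 3.0
    RG = np.abs((r - (g + b) / 2.0) - (g - (r + b) / 2.0))
    BY = np.abs((b - (r + g) / 2.0) - ((r + g) / 2.0 - np.abs(r - g) / 2.0 - b))
    return I, RG, BY
```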
S3: The extracted feature maps are represented as a quaternion image and then transformed into the frequency domain by the quaternion Fourier transform, yielding the amplitude spectrum, phase spectrum and eigenaxis spectrum of the image;
In this step, the quaternion q(x, y) of the image at pixel (x, y) is expressed as q(x, y) = 0 + RG(x, y)μ1 + BY(x, y)μ2 + I(x, y)μ3. It should be noted that the images processed here are still images, so the motion feature in the formula is set to 0. Let f1(x, y) = RG(x, y)μ1 and f2(x, y) = BY(x, y) + I(x, y)μ1, where μ1, μ2 and μ3 are the imaginary units. The quaternion Fourier transform Q[u, v] of an image of size M × N can then be written as Q[u, v] = F1[u, v] + F2[u, v]μ2, where F_i[u, v] = (1/(MN)) Σ_{x=0..M-1} Σ_{y=0..N-1} e^{-j2π(ux/M + vy/N)} f_i(x, y), i = 1, 2, and (u, v) denotes the corresponding pixel position in the frequency domain. The amplitude spectrum of the image is A(u, v) = ||Q[u, v]||, the phase spectrum is P(u, v) = tan^{-1}(||I(Q[u, v])|| / R(Q[u, v])), and the eigenaxis spectrum is I(Q[u, v]) / ||I(Q[u, v])||, where R(Q[u, v]) and I(Q[u, v]) denote the real (scalar) part and the imaginary (vector) part of Q[u, v] respectively, and || · || denotes the modulus.
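For illustration, a NumPy sketch of the quaternion Fourier transform through the symplectic decomposition Q = F1 + F2 μ2 described above; the normalization convention and the polar decomposition details are assumptions of this sketch rather than a verbatim reproduction of the patent formulas:

```python
import numpy as np

def quaternion_fft(I, RG, BY):
    """Quaternion Fourier transform of a still image via the symplectic parts
    f1 = RG*mu1 and f2 = BY + I*mu1 (motion channel set to 0); each part is an
    ordinary complex image, so two 2-D FFTs suffice."""
    F1 = np.fft.fft2(1j * RG) / RG.size        # transform of f1 = 0 + RG*mu1
    F2 = np.fft.fft2(BY + 1j * I) / I.size     # transform of f2 = BY + I*mu1

    # Quaternion components of Q[u,v] = F1 + F2*mu2: scalar part q0 and
    # vector part (q1, q2, q3) along mu1, mu2, mu3.
    q0, q1 = F1.real, F1.imag
    q2, q3 = F2.real, F2.imag

    vec_norm = np.sqrt(q1**2 + q2**2 + q3**2) + 1e-12
    A = np.sqrt(q0**2 + q1**2 + q2**2 + q3**2)   # amplitude spectrum ||Q||
    P = np.arctan2(vec_norm, q0)                 # phase spectrum
    axis = np.stack([q1, q2, q3]) / vec_norm     # eigenaxis spectrum (unit vector part)
    return A, P, axis
```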
S4: Use the image signature operator to obtain the size and position of each salient object;
In this step, it is assumed that the image contains K salient objects. To obtain the salient object sizes, the following steps are carried out:
Step 1: First, reconstruct the image obtained in step S1 with the image signature operator and detect the K salient objects;
The principle of the image signature is to discard the amplitude information of the signal over the whole frequency domain of the image and retain only the signs of its discrete cosine transform coefficients; it is used here to detect the salient objects of the image and is further extended to detect their size and position. Its expression is ImageSignature(x) = sign(DCT(x)), where sign is the sign function and DCT is the discrete cosine transform, and the reconstructed image is given by IDCT(ImageSignature(x)), where IDCT denotes the inverse discrete cosine transform.
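For illustration, a SciPy sketch of the image signature reconstruction IDCT(sign(DCT(x))); using the 2-D orthonormal DCT is an assumption about the transform variant:

```python
import numpy as np
from scipy.fft import dctn, idctn

def image_signature_reconstruction(feature_map):
    """Reconstruct a feature channel from the signs of its DCT coefficients,
    i.e. IDCT(sign(DCT(x))), as described above."""
    signature = np.sign(dctn(feature_map, norm='ortho'))  # keep only the coefficient signs
    return idctn(signature, norm='ortho')                 # reconstructed image
```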
Step 2: Low-pass filter the reconstructed image with a Gaussian filter, with the filter scale set to 11.2, to eliminate the noise of the reconstructed image;
Step 3: Apply adaptive thresholding to the low-pass filtered image using Otsu's method (maximum between-class variance) to obtain a binary image;
The principle of Otsu's method is to use the class variances as the criterion and to choose as the optimal threshold the image gray value that maximizes the between-class variance and minimizes the within-class variance. It can be understood as follows: since the variance measures the uniformity of the gray-level distribution, the larger the between-class variance, the larger the difference between the two parts of the image; if part of the object is misclassified as background, or part of the background is misclassified as object, the difference between the two parts decreases. Therefore, the segmentation that maximizes the between-class variance minimizes the probability of misclassification.
Suppose the gray image has L gray levels, the number of pixels with gray level i is n_i, and the total number of pixels is N. The normalized histogram is P_i = n_i / N, with Σ_{i=0..L-1} P_i = 1.
A threshold t divides the gray levels into two classes: C0 = {0, 1, ..., t} and C1 = {t+1, t+2, ..., L-1}.
The probabilities of the two classes are:
ω_0 = Σ_{i=0..t} P_i = ω(t),  ω_1 = Σ_{i=t+1..L-1} P_i = 1 - ω(t)
The class means are:
μ_0 = Σ_{i=0..t} i P_i / ω_0 = μ(t) / ω(t),  μ_1 = Σ_{i=t+1..L-1} i P_i / ω_1 = (μ_T(t) - μ(t)) / (1 - ω(t))
where μ(t) = Σ_{i=0..t} i P_i and μ_T(t) = Σ_{i=0..L-1} i P_i.
The class variances are:
σ_0² = Σ_{i=0..t} (i - μ_0)² P_i / ω_0,  σ_1² = Σ_{i=t+1..L-1} (i - μ_1)² P_i / ω_1
The within-class variance is σ_ω² = ω_0 σ_0² + ω_1 σ_1².
The between-class variance is σ_B² = ω_0 (μ_0 - μ_T)² + ω_1 (μ_1 - μ_T)² = ω_0 ω_1 (μ_1 - μ_0)².
The total variance is σ_T² = σ_B² + σ_ω².
The value of t is varied, and the value of t that maximizes the between-class variance is the optimal threshold; the gray image is then binarized with this optimal threshold.
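For illustration, a NumPy sketch of Otsu's method following the between-class variance formulas above; it assumes an integer gray image with L = 256 levels:

```python
import numpy as np

def otsu_threshold(gray, levels=256):
    """Return the threshold t maximizing the between-class variance, and the
    resulting binary image. 'gray' is assumed to hold integers in [0, levels)."""
    hist = np.bincount(gray.ravel(), minlength=levels).astype(float)
    P = hist / hist.sum()                        # normalized histogram P_i
    i = np.arange(levels)
    omega = np.cumsum(P)                         # omega(t) for every candidate t
    mu = np.cumsum(i * P)                        # mu(t)
    mu_T = mu[-1]                                # global mean mu_T
    # sigma_B^2 = omega0*omega1*(mu1 - mu0)^2 = (mu_T*omega - mu)^2 / (omega*(1 - omega))
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_B2 = (mu_T * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_B2[~np.isfinite(sigma_B2)] = 0.0
    t = int(np.argmax(sigma_B2))                 # threshold maximizing sigma_B^2
    return t, (gray > t).astype(np.uint8)
```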
Step 4: Compute, with the minimum bounding rectangle algorithm, the size of the k-th salient object, namely its height h_k and width w_k, together with its center position (m_k, n_k);
In this step, the minimum bounding rectangle is the rectangle of minimum area. The principle of the algorithm is: rotate the boundary of the object within a range of 90 degrees in increments of about 3 degrees, and at each rotation record the minimum and maximum x and y values of the boundary points of the bounding rectangle in the current coordinate system; when the area of the bounding rectangle reaches its minimum after rotation to a certain angle, the parameters of that minimum-area bounding rectangle are taken as the minimum bounding rectangle.
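For illustration, a NumPy sketch of the rotation-based minimum-area bounding rectangle described above; the boundary is assumed to be given as an (N, 2) array of (x, y) coordinates:

```python
import numpy as np

def min_area_rect(points, step_deg=3.0):
    """Rotate the object boundary over 0-90 degrees in ~3 degree steps, track the
    axis-aligned bounding box at each angle, and keep the one of minimum area."""
    best = None
    for angle in np.arange(0.0, 90.0, step_deg):
        theta = np.deg2rad(angle)
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        p = points @ rot.T                           # boundary in the rotated frame
        (xmin, ymin), (xmax, ymax) = p.min(0), p.max(0)
        area = (xmax - xmin) * (ymax - ymin)         # bounding-box area at this angle
        if best is None or area < best[0]:
            center = np.array([(xmin + xmax) / 2.0, (ymin + ymax) / 2.0]) @ rot
            best = (area, xmax - xmin, ymax - ymin, center)
    _, w_k, h_k, center = best
    return h_k, w_k, tuple(center)                   # height, width, center of the object
```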
S5: Use the specific relation between the optimal amplitude-spectrum filtering scale and the salient object size to obtain the optimal filtering scale corresponding to each salient object, and apply Gaussian filtering at the different scales to the amplitude spectrum of the image;
In this step, the given specific relation between the salient object size and the optimal filtering scale corresponding to the k-th salient object determines σ_k, the filter scale of the optimal Gaussian kernel corresponding to the k-th salient object, from H and W, the height and width of the image, from h_k and w_k, the height and width of the salient object, and from a regulation parameter α, which is set to 1.5. The amplitude-spectrum filtering Gaussian kernel is then g(u, v; σ_k), where (u, v) denotes the coordinate position in the amplitude spectrum of the image, and the Gaussian filtering of the amplitude spectrum is expressed as Ã_k = g(u, v; σ_k) * A(u, v).
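A sketch of the per-object amplitude-spectrum smoothing Ã_k = g(u, v; σ_k) * A(u, v); since the exact σ_k relation is not reproduced in this text, σ_k is taken as a precomputed input, and the periodic boundary mode is an assumption:

```python
from scipy.ndimage import gaussian_filter

def filter_amplitude_spectrum(A, sigma_k):
    """Smooth the amplitude spectrum A(u, v) with a Gaussian kernel of scale
    sigma_k; sigma_k is computed by the caller from (h_k, w_k, H, W, alpha)
    according to the patent's relation."""
    return gaussian_filter(A, sigma=sigma_k, mode='wrap')  # 'wrap' suits a periodic spectrum
```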
S6: Apply the inverse quaternion Fourier transform to each filtered amplitude spectrum together with the phase spectrum and eigenaxis spectrum of the original image to obtain the optimal saliency map S_k corresponding to each salient object via the inverse quaternion Fourier transform Q^{-1};
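A sketch of this inverse step, consistent with the forward-transform sketch above: the quaternion spectrum is reassembled from the filtered amplitude, the phase and the eigenaxis in polar form, inverse-transformed through its two symplectic parts, and its modulus is taken as the per-object saliency map; the polar reassembly and the use of the plain modulus as the saliency map are assumptions of the sketch:

```python
import numpy as np

def inverse_qft_saliency(A_tilde, P, axis):
    """Rebuild Q = A_tilde * (cos P + axis * sin P), apply the inverse quaternion
    Fourier transform via the symplectic decomposition, and return the modulus
    of the reconstructed quaternion image as the saliency map."""
    q0 = A_tilde * np.cos(P)
    q1, q2, q3 = A_tilde * np.sin(P) * axis       # 'axis' has shape (3, H, W)
    f1 = np.fft.ifft2(q0 + 1j * q1) * q0.size     # undo the 1/(MN) normalization
    f2 = np.fft.ifft2(q2 + 1j * q3) * q0.size
    return np.sqrt(np.abs(f1) ** 2 + np.abs(f2) ** 2)
```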
S7: Determine the weight of the optimal saliency map corresponding to each salient object according to the salient object position and the center-biased Gaussian distribution, perform adaptive Gaussian weighted fusion of the optimal saliency maps corresponding to the different salient objects, and compute the fused saliency map;
In this step, the center-biased Gaussian distribution ω_k of the k-th salient object is defined by the image center (m_c, n_c), the center (m_k, n_k) of the k-th salient object, and a parameter η; in our experiments η = 16. The Gaussian weighted fusion combines, with the weights ω_k, the optimal saliency maps S_k obtained in step S6.
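A sketch of the adaptive fusion; the exact weight formula is not reproduced in this text, so the sketch assumes an isotropic Gaussian of the object-center-to-image-center distance with bandwidth η, and a weight-normalized sum of the per-object maps:

```python
import numpy as np

def fuse_saliency_maps(S_list, centers, image_shape, eta=16.0):
    """Adaptive Gaussian weighted fusion of the per-object optimal saliency maps.
    Assumed weight: omega_k = exp(-((m_k - m_c)^2 + (n_k - n_c)^2) / eta^2)."""
    m_c, n_c = image_shape[0] / 2.0, image_shape[1] / 2.0      # image center
    d2 = [(m_k - m_c) ** 2 + (n_k - n_c) ** 2 for m_k, n_k in centers]
    weights = np.exp(-np.asarray(d2) / eta ** 2)               # center-bias weights omega_k
    fused = sum(w * S for w, S in zip(weights, S_list))
    return fused / (weights.sum() + 1e-12)                     # weight-normalized fusion
```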
S8: Apply Gaussian filtering to the fused saliency map; the filter scale is set to 10.2 in this experiment;
S9: Resize the image: the processed image is converted back to size M × N with the bilinear interpolation algorithm;
S10: Normalize the saliency values obtained in step S9 to obtain the final saliency map.
Let X denote the saliency values of the image, and let X_max and X_min be their maximum and minimum values; the normalization is then X_normal = (X - X_min) / (X_max - X_min).
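For illustration, the min-max normalization above as a small NumPy helper; the epsilon guarding a constant map is an addition of the sketch:

```python
import numpy as np

def normalize_saliency(S):
    """Min-max normalize the saliency map: (X - X_min) / (X_max - X_min)."""
    s_min, s_max = S.min(), S.max()
    return (S - s_min) / (s_max - s_min + 1e-12)
```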
The saliency map obtained in this embodiment suppresses background noise, highlights salient objects uniformly, and retains more saliency information.
The comparative results of the salient object detection examples are shown in Fig. 3:
Fig. 3(a) shows 6 natural images; some contain a large salient object, some contain a small salient object, some contain multiple salient objects, and some have relatively complex backgrounds;
Fig. 3(b) shows the saliency maps obtained by the SR model (Hou X, Zhang L. Saliency detection: A spectral residual approach. IEEE Conference on Computer Vision and Pattern Recognition, 2007. 1–8.);
Fig. 3(c) shows the saliency maps obtained by the HFT model (Li J, Levine M D, An X, et al. Visual saliency based on scale-space analysis in the frequency domain. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35: 996–1010.);
Fig. 3(d) shows the final saliency maps obtained by the present invention through adaptive weighted fusion;
Fig. 3(e) shows the ground-truth maps obtained by manual labeling.
It can be seen from the figure that the salient objects obtained by the present invention are highlighted more uniformly, and the saliency effect is more pronounced.
In summary, by using the specific relation between salient object size and optimal amplitude-spectrum filtering scale, the present invention proposes a new adaptive optimal scale selection method and derives an adaptive weighted fusion strategy, so that the salient objects in an image can be detected more quickly and uniformly and more saliency information is retained.

Claims (9)

1. A salient object detection method based on amplitude spectrum analysis, characterized in that it comprises the following steps:
S1: convert the original image of size M × N into an image of size m × n;
S2: extract three features from the image obtained in step S1: the intensity feature I and the color-opponent features RG and BY;
S3: transform the three feature maps of step S2 into the frequency domain by the quaternion Fourier transform to obtain the amplitude spectrum, phase spectrum and eigenaxis spectrum of the image;
S4: use the image signature operator to detect the size and center position of each salient object in the image;
S5: use the specific relation between the optimal amplitude-spectrum filtering scale and the salient object size to obtain the optimal filtering scale corresponding to each salient object, and apply Gaussian filtering at the different scales to the amplitude spectrum of the image;
S6: apply the inverse quaternion Fourier transform to each amplitude spectrum filtered in step S5 together with the phase spectrum and eigenaxis spectrum of step S3, obtaining the optimal saliency map corresponding to each salient object;
S7: determine the weight of the optimal saliency map corresponding to each salient object according to a center-biased Gaussian distribution and the salient object position, perform adaptive Gaussian weighted fusion of the different saliency maps obtained in S6, and compute the fused saliency map;
S8: apply Gaussian filtering to the fused saliency map;
S9: resize the image obtained in step S8 back to size M × N;
S10: normalize the saliency values of step S9 to obtain the final saliency map.
2. The salient object detection method based on amplitude spectrum analysis according to claim 1, characterized in that in step S1 the values of m and n are 256 and 256, respectively.
3. The salient object detection method based on amplitude spectrum analysis according to claim 1, characterized in that in step S4 the size and center position of each salient object are obtained as follows:
Step 1: use the image signature operator to reconstruct the image of step S1 and detect the K salient objects;
Step 2: low-pass filter the reconstructed image with a Gaussian filter to eliminate noise;
Step 3: apply adaptive thresholding to the image obtained in Step 2 using Otsu's method (maximum between-class variance) to obtain a binary image;
Step 4: compute the size of the k-th salient object, namely its height h_k and width w_k, together with its center position (m_k, n_k), by the minimum bounding rectangle algorithm.
4. The salient object detection method based on amplitude spectrum analysis according to claim 1, characterized in that in step S5 the specific relation between the size of the k-th salient object and the optimal amplitude-spectrum filtering scale determines σ_k, the filter scale of the optimal Gaussian kernel corresponding to the k-th salient object, from H and W, the height and width of the image, from h_k and w_k, the height and width of the k-th salient object, and from an adjustment coefficient α.
5. The salient object detection method based on amplitude spectrum analysis according to claims 1 and 3, characterized in that in step S5 the amplitude-spectrum filtering Gaussian kernel corresponding to the k-th salient object is g(u, v; σ_k), where u and v denote the coordinate position in the amplitude spectrum of the image.
6. The salient object detection method based on amplitude spectrum analysis according to claim 1, characterized in that the center-biased Gaussian distribution ω_k of the k-th salient object in step S7 is defined by the image center (m_c, n_c), the center (m_k, n_k) of the k-th salient object, and a regulation parameter η.
7. The salient object detection method based on amplitude spectrum analysis according to claim 1, characterized in that in step S7 the Gaussian weighted fusion combines the optimal saliency maps with the weights ω_k.
8. The salient object detection method based on amplitude spectrum analysis according to claim 3, characterized in that α is set to 1.5.
9. The salient object detection method based on amplitude spectrum analysis according to claim 5, characterized in that η is set to 16.
CN201510271210.9A 2015-05-25 2015-05-25 Salient object detection method based on amplitude spectrum analysis Active CN106296632B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510271210.9A CN106296632B (en) 2015-05-25 2015-05-25 Salient object detection method based on amplitude spectrum analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510271210.9A CN106296632B (en) 2015-05-25 2015-05-25 Salient object detection method based on amplitude spectrum analysis

Publications (2)

Publication Number Publication Date
CN106296632A true CN106296632A (en) 2017-01-04
CN106296632B CN106296632B (en) 2018-10-19

Family

ID=57635094

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510271210.9A Active CN106296632B (en) 2015-05-25 2015-05-25 Salient object detection method based on amplitude spectrum analysis

Country Status (1)

Country Link
CN (1) CN106296632B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220952A (en) * 2017-06-09 2017-09-29 河南科技大学 A kind of multi-scale image smoothing method based on conspicuousness
CN110633708A (en) * 2019-06-28 2019-12-31 中国人民解放军军事科学院国防科技创新研究院 Deep network significance detection method based on global model and local optimization
CN112907595A (en) * 2021-05-06 2021-06-04 武汉科技大学 Surface defect detection method and device
CN113591708A (en) * 2021-07-30 2021-11-02 金陵科技学院 Meteorological disaster monitoring method based on satellite-borne hyperspectral image
CN113947530A (en) * 2021-10-21 2022-01-18 河北工业大学 Image redirection method based on relative significance detection

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150071532A1 (en) * 2013-09-11 2015-03-12 Omron Corporation Image processing device, computer-readable recording medium, and image processing method
CN104537681A (en) * 2015-01-21 2015-04-22 北京联合大学 Method and system for extracting spectrum-separated visual salient region

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150071532A1 (en) * 2013-09-11 2015-03-12 Omron Corporation Image processing device, computer-readable recording medium, and image processing method
CN104537681A (en) * 2015-01-21 2015-04-22 北京联合大学 Method and system for extracting spectrum-separated visual salient region

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Jian Li et al.: "Visual Saliency Based on Scale-Space Analysis in the Frequency Domain", IEEE Transactions on Pattern Analysis and Machine Intelligence *
黎万义 et al.: "A survey of object tracking methods incorporating visual attention mechanisms", Acta Automatica Sinica (自动化学报) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220952A (en) * 2017-06-09 2017-09-29 河南科技大学 A kind of multi-scale image smoothing method based on conspicuousness
CN107220952B (en) * 2017-06-09 2020-03-31 河南科技大学 Multi-scale image smoothing method based on significance
CN110633708A (en) * 2019-06-28 2019-12-31 中国人民解放军军事科学院国防科技创新研究院 Deep network significance detection method based on global model and local optimization
CN112907595A (en) * 2021-05-06 2021-06-04 武汉科技大学 Surface defect detection method and device
CN113591708A (en) * 2021-07-30 2021-11-02 金陵科技学院 Meteorological disaster monitoring method based on satellite-borne hyperspectral image
CN113591708B (en) * 2021-07-30 2023-06-23 金陵科技学院 Meteorological disaster monitoring method based on satellite-borne hyperspectral image
CN113947530A (en) * 2021-10-21 2022-01-18 河北工业大学 Image redirection method based on relative significance detection
CN113947530B (en) * 2021-10-21 2024-04-30 河北工业大学 Image redirection method based on relative saliency detection

Also Published As

Publication number Publication date
CN106296632B (en) 2018-10-19


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant