CN105354824A - Region extraction-based two-parameter constant false alarm detection method - Google Patents

Region extraction-based two-parameter constant false alarm detection method

Info

Publication number
CN105354824A
CN105354824A, CN201510641963.4A, CN201510641963A
Authority
CN
China
Prior art keywords
region
target
point
collection
sampling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510641963.4A
Other languages
Chinese (zh)
Other versions
CN105354824B (en)
Inventor
杜兰
代慧
王兆成
肖金国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201510641963.4A priority Critical patent/CN105354824B/en
Publication of CN105354824A publication Critical patent/CN105354824A/en
Application granted granted Critical
Publication of CN105354824B publication Critical patent/CN105354824B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/0008 Industrial image inspection checking presence/absence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10044 Radar image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a region-extraction-based two-parameter constant false alarm rate (CFAR) detection method, mainly to solve the problems of slow detection and of missed targets caused by parameter settings in existing SAR image target detection techniques. The implementation steps are: extracting positive and negative sample sets from training images with target annotations and training a template W based on normed gradient features with a linear SVM classifier; selecting an effective size set for the extracted positive sample set on the basis of an initial set of image sizes; then extracting regions at the effective sizes from a test image on the basis of the template W and the effective size set; detecting the extracted regions with two-parameter CFAR to obtain candidate regions; and applying non-maximum suppression (NMS) to the candidate regions to remove the large number of overlapping regions, the finally remaining regions being the final detection result. Compared with conventional two-parameter CFAR detection, the disclosed method has the advantages of a high detection speed and a high detection probability, and is applicable to fast detection of targets in SAR images.

Description

Region extraction-based two-parameter constant false alarm (CFAR) detection method
Technical field
The invention belongs to the field of radar technology and in particular relates to a constant false alarm rate (CFAR) detection method that can be used to detect targets quickly and effectively in synthetic aperture radar (SAR) images.
Background art
Radar imaging technology emerged in the 1950s and has advanced rapidly over the following sixty years; it is now widely used in military, agricultural, geological, oceanographic, disaster-monitoring, mapping and many other fields.
Synthetic aperture radar (SAR) is an active sensor that perceives its surroundings with microwaves. Compared with other sensors such as infrared and optical ones, SAR imaging is not restricted by illumination or weather conditions and can observe targets of interest around the clock and in all weather, so automatic target detection in SAR images has attracted increasingly wide attention.
SAR automatic target recognition (ATR) methods usually follow the three-stage processing flow proposed by the MIT Lincoln Laboratory. This flow adopts a layered attention mechanism and proceeds as follows: first, the whole SAR image is processed by a detector to remove the regions that are obviously not targets and obtain the potential target regions; then target discrimination is applied to the potential target regions to reject natural-clutter false alarms and regions that are obviously larger or smaller than the targets; the detection and discrimination stages yield the target regions of interest (ROIs); finally, classification and recognition are performed on the target ROIs. In this processing scheme the amount of data to be handled decreases stage by stage, which improves the efficiency of the target recognition system.
SAR image target detection is the first step in the design of an ATR pipeline, and its importance is self-evident. How to detect potential target regions quickly and effectively has also been a major research topic in SAR image interpretation in recent years.
Among existing SAR image target detection methods, the two-parameter CFAR detection algorithm is the most widely used. It is a classical SAR target detection method whose application presupposes that the target has a high contrast against the background clutter in the SAR image. The algorithm uses three windows: a target window, a guard window and a background window. The target window is the window that may contain target pixels, the guard window is set to prevent target pixels from leaking into the background clutter, and the background window is the window that contains the background clutter. Two-parameter CFAR assumes that the statistical distribution of the background clutter is Gaussian. A sliding window visits every pixel of the SAR image; at each window position, the mean and variance of all pixels in the background window are computed to estimate the clutter parameters and derive a threshold, and a pixel in the target window is declared a target pixel if it exceeds this threshold, and a clutter pixel otherwise. Because the same processing must be repeated for every pixel of the SAR image, the method takes a long time. It also requires the target, guard and background windows to be set from prior information about the targets; when the target sizes differ too much, unreasonable parameter settings make the clutter parameter estimation inaccurate and cause targets to be missed, and targets that are close to each other may be merged into a single detection region because of an unreasonable clustering distance, which complicates the subsequent processing.
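For comparison with the region-level detector described later, the following is a minimal sketch of the conventional per-pixel two-parameter CFAR described above, assuming a Gaussian clutter model; the window half-sizes, the false-alarm rate and the function name pixel_cfar are illustrative assumptions rather than values taken from the invention.

```python
import numpy as np
from scipy.stats import norm

def pixel_cfar(img, guard_half=13, bg_half=15, pfa=1e-6):
    """Sliding-window two-parameter CFAR: a pixel is a detection if it exceeds
    mu + k*sigma, with mu and sigma estimated from the hollow background ring
    around it and k = Phi^{-1}(1 - pfa) under the Gaussian clutter assumption."""
    k = norm.ppf(1.0 - pfa)                        # threshold multiplier
    h, w = img.shape
    det = np.zeros((h, w), dtype=bool)
    for y in range(bg_half, h - bg_half):
        for x in range(bg_half, w - bg_half):
            ring = img[y - bg_half:y + bg_half + 1,
                       x - bg_half:x + bg_half + 1].astype(float)
            g0 = bg_half - guard_half              # blank the guard + target area
            ring[g0:g0 + 2 * guard_half + 1, g0:g0 + 2 * guard_half + 1] = np.nan
            mu, sigma = np.nanmean(ring), np.nanstd(ring)
            det[y, x] = img[y, x] > mu + k * sigma
    return det
```

The double loop over every pixel is exactly the cost the invention avoids by scoring a small set of candidate regions instead.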
Summary of the invention
The object of the invention is to overcome the above shortcomings of the prior art by proposing a two-parameter CFAR detection method based on region extraction, so as to reduce the detection time and improve the detection accuracy.
To achieve the above object, the technical solution of the invention comprises the following steps:
(1) extracting a positive sample set P and a negative sample set N from an annotated training set Tr;
(2) down-sampling each positive and negative sample of the obtained positive and negative sample sets to the fixed size 8×8, extracting the normed gradient feature g' of each down-sampled sample, forming the normed gradient feature set G from the normed gradient features of all down-sampled samples, and training a linear SVM classifier to obtain an 8×8 template W;
(3) building an effective size set AS:
(3a) initializing 36 different image sizes to form the set S = {(W_1×H_1), ..., (W_l×H_l), ..., (W_36×H_36)}, where W_l and H_l are respectively the width and height of the l-th image size, 1 ≤ l ≤ 36 and l is an integer; the sizes are powers of 2 whose exponents increase from minT = 3 to maxT = 8, i.e. W_l, H_l ∈ {8, 16, 32, 64, 128, 256};
(3b) obtaining, from the positive sample set P of step (1) and the size set S initialized in step (3a), the effective size set AS = {(W_{As_1}×H_{As_1}), ..., (W_{As_i}×H_{As_i}), ..., (W_{As_ns}×H_{As_ns})}, where 1 ≤ As_i ≤ 36 indexes the As_i-th element of S, 1 ≤ i ≤ ns, i is an integer and ns is the number of effective sizes;
(4) extracting the regions of a test image J at the effective sizes:
(4a) down-sampling the test image J according to the sizes in the effective size set AS to obtain down-sampled images at the different sizes, and extracting the normed gradient feature map of each down-sampled image to form the normed gradient feature map set {F_1, ..., F_i, ..., F_ns}, where F_i is the normed gradient feature map at the i-th effective size, 1 ≤ i ≤ ns and i is an integer;
(4b) sliding the template W obtained in step (2) over each feature map in {F_1, ..., F_i, ..., F_ns} to obtain the score maps {s_1, ..., s_i, ..., s_ns}; for each score map, selecting K local maxima by non-maximum suppression (NMS) and extracting K regions on the test image J at the positions of the local maxima to form the region set R_i; performing the same operation for the ns sizes finally yields ns×K regions, which form the region set R;
(5) for the region set R obtained in step (4b), treating each region as a whole and representing it by the mean M of the strongest 80% of its pixel values; estimating the mean and standard deviation of the background clutter from the pixel values in a hollow frame 3 pixels wide outside the region and, according to the preset false-alarm probability Pr, obtaining the local detection threshold Th of each region; selecting the regions whose mean M is greater than the local detection threshold Th as candidate regions to obtain the candidate region set R';
(6) removing the heavily overlapping regions of the candidate region set R' obtained in step (5) by non-maximum suppression (NMS); the remaining regions form the region set R_d', which is the detection result.
Compared with the prior art, the invention has the following advantages:
1. Fast detection
When the existing two-parameter CFAR detection method performs detection, it must slide a window over every pixel of the image and, at every window position, estimate the model parameters of the background clutter in the reference window, i.e. the clutter mean and standard deviation; this parameter estimation consists mainly of additions, multiplications and divisions, so the complexity of two-parameter CFAR depends only on the size of the image. The invention first obtains a small number of regions and then applies two-parameter CFAR detection to the extracted regions. Unlike pixel-wise two-parameter CFAR, the region-level CFAR treats each extracted region as a whole instead of sliding a window over every pixel, so its complexity depends mainly on the number of extracted regions.
The region extraction of the invention is implemented in C++ and uses a binary approximation of the gradient feature together with the bit operations available inside the computer to accelerate the computation. This avoids the drawback of two-parameter CFAR of processing every pixel, so the detection task can be completed in a shorter time and the detection speed is increased.
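The binary approximation and bit operations mentioned above are not spelled out in this text; the toy sketch below only illustrates the general idea of replacing floating-point multiply-adds with a bitwise AND plus a popcount, assuming the 8×8 feature and template have already been thresholded to single-bit maps. It is not the patented C++ implementation.

```python
def pack_bits(binary_8x8):
    """Pack an 8x8 map of 0/1 values into a single 64-bit integer, row by row."""
    bits = 0
    for row in binary_8x8:
        for v in row:
            bits = (bits << 1) | (1 if v else 0)
    return bits

def binary_score(feature_bits, template_bits):
    """Approximate template response: popcount of the bitwise AND of the two bit maps."""
    return bin(feature_bits & template_bits).count("1")

# example: score one binarized 8x8 feature against a binarized template
feature = pack_bits([[1 if (i + j) % 2 else 0 for j in range(8)] for i in range(8)])
template = pack_bits([[1] * 8 for _ in range(8)])
print(binary_score(feature, template))   # 32 overlapping set bits
```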
2. Targets of different scales in the image can be handled, improving detection accuracy
When the existing two-parameter CFAR detection method is applied to an image, its parameters must be set from prior information about the targets in the image. If the target sizes in the image differ too much, unreasonable parameter settings will cause targets to be missed, and closely spaced targets may be detected as a single target appearing in one detection region. The invention instead performs detection on the extracted regions and treats each region as a whole, thereby avoiding the problems caused by unreasonable parameter settings in two-parameter CFAR.
The invention is described in further detail below with reference to the drawings and embodiments.
Brief description of the drawings
Fig. 1 is the implementation flowchart of the invention;
Fig. 2 shows the annotated training set used in the experiments of the invention;
Fig. 3 shows the test set used in the experiments of the invention, where Fig. 3(a) is a test image containing 4 targets, Fig. 3(b) is a test image containing 7 targets and Fig. 3(c) is a test image containing 3 targets;
Fig. 4 shows the detection results obtained with the invention on the test set of Fig. 3, where Fig. 4(a), 4(b) and 4(c) are the detection results for Fig. 3(a), 3(b) and 3(c) respectively;
Fig. 5 shows the detection results of conventional two-parameter CFAR with unified parameters on the test set, where Fig. 5(a), 5(b) and 5(c) are the detection results for Fig. 3(a), 3(b) and 3(c) respectively.
Detailed description of the embodiments
Referring to Fig. 1, the implementation steps of the invention are as follows.
Step 1: extract the positive and negative sample sets from the annotated training set.
(1.1) Input the training set:
Let the training set be Tr = {(I_1, B_1), ..., (I_j, B_j), ..., (I_m, B_m)},
where m is the number of training images and I_j is the j-th training image in the training set;
B_j = {b_1^j, ..., b_k^j, ..., b_tj^j} is the set of annotated target boxes of the j-th training image,
tj is the number of targets in the j-th image;
b_k^j = (l, t, r, b) is the target box of the k-th annotated target in the j-th training image, where l is the abscissa of the upper-left point of the target box, t is the ordinate of the upper-left point, r is the abscissa of the lower-right point and b is the ordinate of the lower-right point;
(1.2) Extract positive samples from training image I_j:
(1.2a) for the k-th target b_k^j = (l, t, r, b) of B_j, extract a group of regions Rb_k^j = {rb_1, ..., rb_n, ..., rb_nb}, where rb_n = (l', t', r', b') is the n-th region, l' is the abscissa of the upper-left point of the region, t' the ordinate of the upper-left point, r' the abscissa of the lower-right point and b' the ordinate of the lower-right point,
nb is the number of regions, nb = (w_max - w_min + 1) × (h_max - h_min + 1),
w_min = max(ceil(log2(r - l)) - 0.5, 3), w_max = min(ceil(log2(r - l)) + 1.5, 8),
h_min = max(ceil(log2(b - t)) - 0.5, 3), h_max = min(ceil(log2(b - t)) - 0.5, 8);
l' = l; t' = t; r' = l + 2^p; b' = t + 2^q,
where p = int(n / (h_max - h_min + 1)) + w_min and q = mod(n, h_max - h_min + 1) + h_min;
the function ceil(a) returns the smallest integer not less than a, max(a, d) and min(a, d) return the larger and the smaller of the two values, int(·) denotes the integer quotient and mod(·) the modulo operation;
(1.2b) extract from the region set Rb_k^j the positive sample regions Pb_k^j at target b_k^j = (l, t, r, b):
compute the coverage rate ovl between each region of Rb_k^j = {rb_1, ..., rb_n, ..., rb_nb} and the target b_k^j = (l, t, r, b), and take the regions whose coverage rate ovl is at least 0.5 as the positive sample regions Pb_k^j extracted at target b_k^j = (l, t, r, b), where the coverage rate between the n-th region rb_n = (l', t', r', b') and the target b_k^j = (l, t, r, b) is computed as
ovl = size(rb_n ∩ b_k^j) / size(rb_n ∪ b_k^j),
where size(·) is the number of pixels in a region;
(1.2c) extracting positive sample regions for all targets of the target box set B_j = {b_1^j, ..., b_k^j, ..., b_tj^j} of the j-th training image according to steps (1.2a) and (1.2b) gives the positive sample set X_j = Pb_1^j ∪ ... ∪ Pb_k^j ∪ ... ∪ Pb_tj^j extracted from training image I_j;
(1.3) Extract the negative sample set from training image I_j:
(1.3a) generate four random numbers x_1, y_1, x_2, y_2 with the random number generator rand() and form the region Nr_1 = (x_1', y_1', x_2', y_2'), where
x_1' = min(mod(x_1, W) + 1, mod(x_2, W) + 1) is the abscissa of the upper-left point of the region,
y_1' = min(mod(y_1, H) + 1, mod(y_2, H) + 1) is the ordinate of the upper-left point of the region,
x_2' = max(mod(x_1, W) + 1, mod(x_2, W) + 1) is the abscissa of the lower-right point of the region,
y_2' = max(mod(y_1, H) + 1, mod(y_2, H) + 1) is the ordinate of the lower-right point of the region,
and W and H are respectively the width and height of training image I_j;
(1.3b) compute the coverage rate ovl between the region Nr_1 = (x_1', y_1', x_2', y_2') and all target boxes of B_j = {b_1^j, ..., b_k^j, ..., b_tj^j} of the j-th training image; if the minimum coverage rate ovl is less than 0.5, the region Nr_1 = (x_1', y_1', x_2', y_2') is a negative sample;
(1.3c) after 50 iterations of (1.3a) and (1.3b), the obtained negative samples form the negative sample set Y_j of training image I_j;
(1.4) repeating steps (1.2) and (1.3) for the other training images of Tr = {(I_1, B_1), ..., (I_j, B_j), ..., (I_m, B_m)} gives the positive sample set P = X_1 ∪ ... ∪ X_j ∪ ... ∪ X_m and the negative sample set N = Y_1 ∪ ... ∪ Y_j ∪ ... ∪ Y_m (a sketch of this sampling follows).
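The sketch below illustrates steps (1.2)-(1.3) under two stated assumptions: the power-of-two exponents are rounded to integers (the ±0.5 terms in the formulas above leave the exact rounding ambiguous in this translation), and a random box is kept as a negative sample only if it overlaps every annotated target by less than 0.5. Boxes are (l, t, r, b) tuples; all function names are illustrative.

```python
import math
import random

def iou(a, b):
    """Coverage rate ovl: intersection over union of two boxes (l, t, r, b)."""
    il, it = max(a[0], b[0]), max(a[1], b[1])
    ir, ib = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ir - il) * max(0, ib - it)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def positive_regions(box, min_t=3, max_t=8):
    """Candidate power-of-two boxes sharing the target's upper-left corner (step 1.2a),
    kept as positive samples when their coverage with the target is at least 0.5 (step 1.2b)."""
    l, t, r, b = box
    pw = range(max(round(math.log2(r - l)) - 1, min_t),
               min(round(math.log2(r - l)) + 1, max_t) + 1)
    qh = range(max(round(math.log2(b - t)) - 1, min_t),
               min(round(math.log2(b - t)) + 1, max_t) + 1)
    cands = [(l, t, l + 2 ** p, t + 2 ** q) for p in pw for q in qh]
    return [c for c in cands if iou(c, box) >= 0.5]

def negative_regions(img_w, img_h, target_boxes, n_iter=50, seed=0):
    """Randomly generated boxes kept when they overlap no target by 0.5 or more (step 1.3)."""
    rng = random.Random(seed)
    negs = []
    for _ in range(n_iter):
        xs = sorted(rng.randrange(img_w) for _ in range(2))
        ys = sorted(rng.randrange(img_h) for _ in range(2))
        box = (xs[0], ys[0], xs[1] + 1, ys[1] + 1)
        if all(iou(box, tb) < 0.5 for tb in target_boxes):
            negs.append(box)
    return negs
```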
Step 2: train a linear SVM classifier on the normed gradient feature set of the positive and negative samples to obtain the 8×8 template W.
(2.1) Extract the normed gradient feature g' of a sample in the positive/negative sample set:
(2.1a) down-sample each sample of the positive and negative sample sets obtained in step 1 to the fixed size 8×8; for each down-sampled sample, use the one-dimensional horizontal gradient template A = [-1, 0, 1] and the vertical gradient template A^T to obtain the horizontal gradient map g_x = F*A and the vertical gradient map g_y = F*A^T of the sample, and form its gradient map g = |g_x| + |g_y|, where F is a sample of the down-sampled positive/negative sample set, T denotes transposition and * denotes convolution;
(2.1b) normalize the gradient map g to obtain the normed gradient map g_N = min(g, 255);
(2.1c) reshape the obtained normed gradient feature map g_N row by row into a column vector to obtain the normed gradient feature g';
(2.2) extract the normed gradient features of all down-sampled samples and form the normed gradient feature set G from them;
(2.3) train the linear SVM classifier with the normed gradient feature set G to obtain the 8×8 template W.
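A compact sketch of the feature extraction and template training of step 2. The scikit-learn LinearSVC, the nearest-neighbour resize and the function names are illustrative assumptions; the text above only specifies a linear SVM over 8×8 normed gradient features.

```python
import numpy as np
from scipy.ndimage import convolve1d
from sklearn.svm import LinearSVC

def resize_to_8x8(patch):
    """Nearest-neighbour stand-in for down-sampling a sample region to the fixed 8x8 size."""
    ys = np.linspace(0, patch.shape[0] - 1, 8).round().astype(int)
    xs = np.linspace(0, patch.shape[1] - 1, 8).round().astype(int)
    return patch[np.ix_(ys, xs)]

def normed_gradient(sample_8x8):
    """Normed gradient feature g' = flatten(min(|gx| + |gy|, 255)) with A = [-1, 0, 1]."""
    f = sample_8x8.astype(float)
    gx = convolve1d(f, [-1, 0, 1], axis=1, mode='nearest')
    gy = convolve1d(f, [-1, 0, 1], axis=0, mode='nearest')
    return np.minimum(np.abs(gx) + np.abs(gy), 255.0).ravel()

def train_template(pos_patches, neg_patches):
    """Train the linear SVM on the feature set G; its weight vector reshaped to 8x8 is W."""
    X = np.array([normed_gradient(resize_to_8x8(p)) for p in pos_patches + neg_patches])
    y = np.array([1] * len(pos_patches) + [0] * len(neg_patches))
    return LinearSVC(C=1.0).fit(X, y).coef_.reshape(8, 8)
```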
Step 3: select the effective size set according to the positive sample set.
(3.1) Initialize the image size set:
Initialize 36 different image sizes to form the set S = {(W_1×H_1), ..., (W_l×H_l), ..., (W_36×H_36)}, where W_l and H_l are respectively the width and height of the l-th image size, 1 ≤ l ≤ 36 and l is an integer; the sizes are powers of 2 whose exponents increase from minT = 3 to maxT = 8, i.e. W_l, H_l ∈ {8, 16, 32, 64, 128, 256};
(3.2) Compute the size label of each positive sample in the positive sample set P = X_1 ∪ ... ∪ X_j ∪ ... ∪ X_m:
let region r_n be the n-th region extracted at the k-th target b_k^j = (l, t, r, b) in the positive sample set X_j of the j-th training image I_j, where l is the abscissa of the upper-left point of the target box, t the ordinate of the upper-left point, r the abscissa of the lower-right point and b the ordinate of the lower-right point;
then the size label sl of this positive sample is computed as
sl = 6 × (q - minT) + (p - minT) + 1, with 1 ≤ sl ≤ 36 and sl an integer,
where p = int(n / (h_max - h_min + 1)) + w_min, q = mod(n, h_max - h_min + 1) + h_min,
w_min = max(ceil(log2(r - l)) - 0.5, minT), w_max = min(ceil(log2(r - l)) + 1.5, maxT),
h_min = max(ceil(log2(b - t)) - 0.5, minT), h_max = min(ceil(log2(b - t)) - 0.5, maxT);
(3.3) According to the size labels of the positive samples, count the number of positive samples under each size label; the size labels with at least 5 positive samples are taken as effective size labels and form the effective size label set {As_1, ..., As_i, ..., As_ns}, where As_i is the size label of the i-th effective size, 1 ≤ As_i ≤ 36 indexes the As_i-th element of S, 1 ≤ i ≤ ns, i is an integer and ns is the number of effective sizes;
(3.4) Selecting the As_i-th element of the initial image size set S = {(W_1×H_1), ..., (W_l×H_l), ..., (W_36×H_36)} gives the i-th effective size (W_{As_i}×H_{As_i}); the effective size label set thus yields the effective size set AS = {(W_{As_1}×H_{As_1}), ..., (W_{As_i}×H_{As_i}), ..., (W_{As_ns}×H_{As_ns})}.
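A sketch of the effective-size selection of step 3, assuming the positive-sample boxes were built with power-of-two widths 2^p and heights 2^q as in step 1, so p and q can be read back from the box itself; the ordering of the 36-element size set and the function names are assumptions.

```python
import math
from collections import Counter

def size_label(region, min_t=3):
    """Size label sl = 6*(q - minT) + (p - minT) + 1 of a power-of-two box (l, t, r, b)."""
    l, t, r, b = region
    p = int(math.log2(r - l))          # width exponent:  region width  = 2**p
    q = int(math.log2(b - t))          # height exponent: region height = 2**q
    return 6 * (q - min_t) + (p - min_t) + 1

def effective_sizes(pos_regions, min_count=5, min_t=3, max_t=8):
    """Keep the sizes of the 36-element set S into which at least min_count positives fall."""
    side = [2 ** e for e in range(min_t, max_t + 1)]          # {8, 16, 32, 64, 128, 256}
    S = [(w, h) for h in side for w in side]                  # label sl indexes S as sl - 1
    counts = Counter(size_label(r, min_t) for r in pos_regions)
    return [S[sl - 1] for sl, c in sorted(counts.items()) if c >= min_count]
```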
Step 4: extract the regions of the test image at the effective sizes.
(4.1) Down-sample the test image J according to the effective size set AS = {(W_{As_1}×H_{As_1}), ..., (W_{As_i}×H_{As_i}), ..., (W_{As_ns}×H_{As_ns})} to obtain a group of down-sampled images {Js_1, ..., Js_i, ..., Js_ns}, where Js_i is the image down-sampled to the i-th effective size and its size is W'_{As_i} × H'_{As_i},
with W'_{As_i} = ceil(8 × W' / W_{As_i}) the width of the i-th down-sampled image Js_i,
and H'_{As_i} = ceil(8 × H' / H_{As_i}) its height,
where W' and H' are respectively the width and height of the test image J, and ceil(a) is the smallest integer not less than a;
(4.2) Extract the normed gradient map of each down-sampled image Js_i: F_i = min(|Js_i*A| + |Js_i*A^T|, 255), where A = [-1, 0, 1], T denotes transposition and * denotes convolution;
(4.3) Slide the template W over each of the normed gradient maps {F_1, ..., F_i, ..., F_ns} to obtain the score maps {s_1, ..., s_i, ..., s_ns}, where s_i = W*F_i, * denotes convolution, F_i is the normed gradient map at the i-th effective size and s_i is the score map at the i-th effective size (a sketch of the scoring and local-maximum selection follows this step);
(4.4) Use the non-maximum suppression (NMS) algorithm to select K local maxima of the score map s_i of the i-th size and form the location set Ms_i from their coordinates:
(4.4a) sort the score map s_i by score value in descending order to obtain the sorted score map s_i', and mark all score positions of s_i' as true;
(4.4b) put the coordinate of the highest-scoring position of s_i' that is still marked true into the location set Ms_i, and mark this position and its four neighbours as false;
(4.4c) repeat step (4.4b) until the coordinates of K local maxima have been obtained, forming the location set Ms_i = {(u_1', v_1'), ..., (u_j', v_j'), ..., (u_K', v_K')}, where (u_j', v_j') is the position coordinate of the j-th local maximum in the score map, 1 ≤ j ≤ K and j is an integer;
(4.5) According to the location set Ms_i, extract K regions of the test image J at the i-th size, forming the region set R_i = {r_1^i, ..., r_j^i, ..., r_K^i}, where r_j^i = (V_1, V_2, V_3, V_4) is the j-th region at the i-th size,
V_1 is the abscissa of the upper-left point of the region,
V_2 is the ordinate of the upper-left point of the region,
V_3 is the abscissa of the lower-right point of the region,
V_4 is the ordinate of the lower-right point of the region,
and the function ceil(a) used in the coordinate computation is the smallest integer not less than a;
(4.6) For the score maps at the other effective sizes in {s_1, ..., s_i, ..., s_ns}, extract K regions according to steps (4.4) and (4.5);
(4.7) The region set obtained at the ns effective sizes of the test image is R = R_1 ∪ ... ∪ R_i ∪ ... ∪ R_ns, where R_i is the region set extracted at the i-th effective size.
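The sketch below covers the scoring and local-maximum selection of steps (4.2)-(4.4): the normed gradient map of a down-sampled image is correlated with the 8×8 template (correlation is used here in place of the convolution notation above, as is usual for template matching), and the K best score positions are taken greedily while invalidating each chosen position and its four neighbours. Function names are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve1d
from scipy.signal import correlate2d

def score_map(image_small, W):
    """Normed gradient map of a down-sampled image, scored by sliding the 8x8 template W."""
    f = image_small.astype(float)
    gx = convolve1d(f, [-1, 0, 1], axis=1, mode='nearest')
    gy = convolve1d(f, [-1, 0, 1], axis=0, mode='nearest')
    F = np.minimum(np.abs(gx) + np.abs(gy), 255.0)        # F_i
    return correlate2d(F, W, mode='valid')                 # s_i

def topk_local_maxima(s, K):
    """Greedy NMS on the score map (steps 4.4a-4.4c): take the best remaining score,
    then mark it and its four neighbours as used, K times."""
    s = s.astype(float).copy()
    h, w = s.shape
    peaks = []
    for _ in range(min(K, s.size)):
        y, x = np.unravel_index(int(np.argmax(s)), s.shape)
        if not np.isfinite(s[y, x]):
            break
        peaks.append((y, x))
        for dy, dx in ((0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                s[yy, xx] = -np.inf
    return peaks
```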
Step 5: obtain the candidate regions.
(5.1) Represent each region of the region set R by the mean M of the strongest 80% of its pixel values;
(5.2) Estimate the mean μ̂_c and standard deviation σ̂_c of the background clutter from the pixel values in a hollow frame 3 pixels wide outside each region, using
μ̂_c = (1/N_c) Σ_{(i,j)∈Ω_c} x(i,j),  σ̂_c = sqrt( (1/N_c) Σ_{(i,j)∈Ω_c} (x(i,j) - μ̂_c)^2 ),
where Ω_c is the clutter region and N_c is the number of clutter pixels;
(5.3) Compute the local detection threshold Th = μ̂_c + K_CFAR · σ̂_c of each region, where K_CFAR is computed from the preset false-alarm probability Pr as K_CFAR = Φ^{-1}(1 - Pr), Φ(·) being the standard normal cumulative distribution function and Φ^{-1}(·) its inverse;
(5.4) Compare the mean M of each region with the local detection threshold Th of this region: if M > Th, retain the region, otherwise delete it; the finally retained regions form the candidate region set R'.
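A sketch of the region-level test of step 5 on an intensity image, with the region given as a box (l, t, r, b), the clutter estimated from a 3-pixel hollow frame around it and the threshold Th = μ̂_c + K_CFAR·σ̂_c, K_CFAR = Φ^{-1}(1 - Pr); the function name and the clipping of the frame at the image border are assumptions.

```python
import numpy as np
from scipy.stats import norm

def region_cfar_keep(img, region, pfa=1e-2, frame=3):
    """Keep the region if the mean of its strongest 80% pixels exceeds the local threshold."""
    l, t, r, b = region
    pixels = np.sort(img[t:b, l:r].astype(float).ravel())
    M = pixels[int(0.2 * pixels.size):].mean()             # mean of the strongest 80%

    h, w = img.shape
    lo_t, lo_l = max(t - frame, 0), max(l - frame, 0)
    hi_b, hi_r = min(b + frame, h), min(r + frame, w)
    ring = img[lo_t:hi_b, lo_l:hi_r].astype(float)
    ring[t - lo_t:b - lo_t, l - lo_l:r - lo_l] = np.nan     # hollow out the region itself
    mu_c, sigma_c = np.nanmean(ring), np.nanstd(ring)

    k_cfar = norm.ppf(1.0 - pfa)                            # Phi^{-1}(1 - Pr)
    return M > mu_c + k_cfar * sigma_c
```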
Step 6: remove overlapping candidate regions to obtain the detection result.
(6.1) Sort all regions of the candidate region set R' by their mean M in descending order to obtain the sorted candidate region set R_1';
(6.2) Compute the coverage rate between the first region of the sorted candidate region set R_1' and every other region, remove from R_1' the regions whose coverage rate is 0.01 or more, and put this first region of R_1' into the region set R_d';
(6.3) Update the sorted candidate region set R_1' = R_1' - R_d' and return to step (6.2) until R_1' is empty; the finally obtained region set R_d' is the detection result.
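A sketch of the overlap removal of step 6: regions are ranked by their mean M and kept greedily, discarding any remaining region whose coverage with the kept one reaches 0.01; box_iou and the function names are illustrative.

```python
def box_iou(a, b):
    """Coverage rate of two boxes (l, t, r, b), as used for the candidate regions."""
    il, it = max(a[0], b[0]), max(a[1], b[1])
    ir, ib = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ir - il) * max(0, ib - it)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def nms_by_mean(regions, means, overlap_thresh=0.01):
    """Greedy NMS (steps 6.1-6.3): keep the highest-mean region, drop everything that
    overlaps it by at least overlap_thresh, and repeat until no regions remain."""
    order = sorted(range(len(regions)), key=lambda i: means[i], reverse=True)
    kept = []
    while order:
        best = order.pop(0)
        kept.append(regions[best])
        order = [i for i in order if box_iou(regions[i], regions[best]) < overlap_thresh]
    return kept
```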
The effect of the invention can be further illustrated by the following experiments.
1. Experimental conditions
Experimental platform: MATLAB R2012a, Visual Studio 2012, Intel(R) Core(TM) i5-4590 CPU @ 3.30 GHz, Windows 7 Ultimate.
The experimental data are fully polarimetric and single-polarization data from the RADARSAT-2 database; the training set shown in Fig. 2 and the test set shown in Fig. 3 are cropped from the RADARSAT-2 data. The rectangular boxes in the training set and test set mark the target boxes of the targets in the images. All targets in the data are ships of different sizes; measured in pixels, the largest target is 96 wide and 16 high and the smallest target is 12 wide and 8 high.
Parameter settings of the proposed detection method in the experiments: number of regions per size K = 150, false-alarm rate Pr = 10^-2, initial image size set S = {(W_1×H_1), ..., (W_l×H_l), ..., (W_36×H_36)}, where W_l and H_l are respectively the width and height of the l-th image size, 1 ≤ l ≤ 36 and l is an integer; the sizes are powers of 2 whose exponents increase from minT = 3 to maxT = 8, i.e. W_l, H_l ∈ {8, 16, 32, 64, 128, 256}.
Parameter settings of the two-parameter CFAR detection method in the experiments: false-alarm rate Pr = 10^-6, number of down-sampling operations r = 2, guard window half-length mG = 27, background window half-length mB = 30, minimum target region area S_min = 4, maximum target region area S_max = 300, maximum target length Len = 25, slice size Q = 100.
2. Experimental contents
Experiment 1: the proposed method is first trained on the training set shown in Fig. 2 and then used for target detection on the test set in Fig. 3; the results are shown in Fig. 4.
Experiment 2: target detection is performed on the test images of Fig. 3 with the conventional two-parameter CFAR detection method; the specific operations of two-parameter CFAR detection follow Chapter 2, "Research on SAR target detection methods", of the 2013 master's thesis "Research on SAR target detection and recognition algorithms and software design" by Li Li of Xidian University. The detection results are shown in Fig. 5, where Fig. 5(a) is the detection result for Fig. 3(a), Fig. 5(b) the detection result for Fig. 3(b) and Fig. 5(c) the detection result for Fig. 3(c).
The detection results of Experiment 1 and Experiment 2 are given in Table 1.
Table 1. Detection results of the proposed method and of two-parameter CFAR on the test images
3. Analysis of results
Table 1 shows that, for the SAR image data used in the experiments, the proposed region-extraction-based two-parameter CFAR detection method achieves good performance. When the test images are detected with conventional two-parameter CFAR, targets are missed because of unreasonable parameter settings; for the test images of Fig. 3(a) and Fig. 3(b), Table 1 shows that the number of targets is larger than the sum of the number of missed targets and the number of detected target regions.
Table 1 also shows that the detection time of the proposed method on the test images is about one third of that of conventional two-parameter CFAR; since the time the method spends obtaining the regions of the test image is very short (the template is obtained in the training stage), the invention improves the detection efficiency to a certain extent.
Comparing Fig. 3 with the detection results in Fig. 5 also shows that targets are missed in all three test images, and Fig. 5(a) and Fig. 5(b) show two targets appearing in one detection region, mainly because the targets are close together and the parameters are set unreasonably. The invention, compared with the conventional two-parameter CFAR detection method, not only maintains a higher detection rate but also avoids multiple targets appearing in a single detection region. Comparing Fig. 4 with Fig. 5 shows that the proposed SAR image target detection method has the advantage of a high detection probability, and its detection results are regions of different sizes, so it can handle targets whose sizes differ.
In summary, the proposed SAR image target detection method has the advantages of fast execution and high detection probability; it is a fast and effective detection method, adapts to targets of different sizes, and has good application prospects.

Claims (8)

1. A two-parameter CFAR detection method based on region extraction, comprising the steps of:
(1) extracting a positive sample set P and a negative sample set N from an annotated training set Tr;
(2) down-sampling each positive and negative sample of the obtained positive and negative sample sets to the fixed size 8×8, extracting the normed gradient feature g' of each down-sampled sample, forming the normed gradient feature set G from the normed gradient features of all down-sampled samples, and training a linear SVM classifier to obtain an 8×8 template W;
(3) building an effective size set AS:
(3a) initializing 36 different image sizes to form the set S = {(W_1×H_1), ..., (W_l×H_l), ..., (W_36×H_36)}, where W_l and H_l are respectively the width and height of the l-th image size, 1 ≤ l ≤ 36 and l is an integer; the sizes are powers of 2 whose exponents increase from minT = 3 to maxT = 8, i.e. W_l, H_l ∈ {8, 16, 32, 64, 128, 256};
(3b) obtaining, from the positive sample set P of step (1) and the size set S initialized in step (3a), the effective size set AS = {(W_{As_1}×H_{As_1}), ..., (W_{As_i}×H_{As_i}), ..., (W_{As_ns}×H_{As_ns})}, where 1 ≤ As_i ≤ 36 indexes the As_i-th element of S, 1 ≤ i ≤ ns, i is an integer and ns is the number of effective sizes;
(4) extracting the regions of a test image J at the effective sizes:
(4a) down-sampling the test image J according to the sizes in the effective size set AS to obtain down-sampled images at the different sizes, and extracting the normed gradient feature map of each down-sampled image to form the normed gradient feature map set {F_1, ..., F_i, ..., F_ns}, where F_i is the normed gradient feature map at the i-th effective size, 1 ≤ i ≤ ns and i is an integer;
(4b) sliding the template W obtained in step (2) over each feature map in {F_1, ..., F_i, ..., F_ns} to obtain the score maps {s_1, ..., s_i, ..., s_ns}; for each score map, selecting K local maxima by non-maximum suppression (NMS) and extracting K regions on the test image J at the positions of the local maxima to form the region set R_i; performing the same operation for the ns sizes finally yields ns×K regions, which form the region set R;
(5) for the region set R obtained in step (4b), treating each region as a whole and representing it by the mean M of the strongest 80% of its pixel values; estimating the mean and standard deviation of the background clutter from the pixel values in a hollow frame 3 pixels wide outside the region and, according to the preset false-alarm probability Pr, obtaining the local detection threshold Th of each region; selecting the regions whose mean M is greater than the local detection threshold Th as candidate regions to obtain the candidate region set R';
(6) removing the heavily overlapping regions of the candidate region set R' obtained in step (5) by non-maximum suppression (NMS); the remaining regions form the region set R_d', which is the detection result.
2. The method according to claim 1, wherein the extraction of the positive and negative sample sets from the training set in step (1) is carried out as follows:
(1a) defining the training set:
let the training set be Tr = {(I_1, B_1), ..., (I_j, B_j), ..., (I_m, B_m)}, where m is the number of training images, I_j is the j-th training image in the training set, B_j = {b_1^j, ..., b_k^j, ..., b_tj^j} is the set of annotated target boxes of the j-th image and tj is the number of targets in the j-th image; b_k^j = (l, t, r, b) is the target box of the k-th annotated target in the j-th training image, where l is the abscissa of the upper-left point of the target box, t the ordinate of the upper-left point, r the abscissa of the lower-right point and b the ordinate of the lower-right point;
(1b) extracting positive samples from training image I_j:
(1b.1) for the k-th target b_k^j = (l, t, r, b) of B_j, extracting a group of regions Rb_k^j = {rb_1, ..., rb_n, ..., rb_nb}, where nb is the number of regions and rb_n = (l', t', r', b') is the n-th region, l' being the abscissa of the upper-left point of the region, t' the ordinate of the upper-left point, r' the abscissa of the lower-right point and b' the ordinate of the lower-right point;
(1b.2) extracting from the region set Rb_k^j = {rb_1, ..., rb_n, ..., rb_nb} the positive sample regions Pb_k^j at target b_k^j = (l, t, r, b);
(1b.3) extracting positive sample regions for all targets of B_j = {b_1^j, ..., b_k^j, ..., b_tj^j} according to (1b.1) and (1b.2) to form the positive sample set X_j = Pb_1^j ∪ ... ∪ Pb_k^j ∪ ... ∪ Pb_tj^j extracted from training image I_j;
(1c) extracting the negative sample set from training image I_j:
(1c.1) randomly generating four numbers and forming the region Nr_1 = (x_1', y_1', x_2', y_2'), where x_1' is the abscissa of the upper-left point of the region, y_1' the ordinate of the upper-left point, x_2' the abscissa of the lower-right point and y_2' the ordinate of the lower-right point;
(1c.2) computing the coverage rate ovl between the region Nr_1 = (x_1', y_1', x_2', y_2') and all target boxes B_j = {b_1^j, ..., b_k^j, ..., b_tj^j} of training image I_j; if the minimum coverage rate ovl is less than 0.5, the region Nr_1 = (x_1', y_1', x_2', y_2') is a negative sample;
(1c.3) after 50 iterations of (1c.1) and (1c.2), forming the negative sample set Y_j of training image I_j from the obtained negative samples;
(1d) repeating steps (1b) and (1c) for the other training images of Tr = {(I_1, B_1), ..., (I_j, B_j), ..., (I_m, B_m)} to obtain the positive sample set P = X_1 ∪ ... ∪ X_j ∪ ... ∪ X_m and the negative sample set N = Y_1 ∪ ... ∪ Y_j ∪ ... ∪ Y_m.
3. The method according to claim 2, wherein the number of regions nb in step (1b.1) is computed as
nb = (w_max - w_min + 1) × (h_max - h_min + 1),
where w_min = max(ceil(log2(r - l)) - 0.5, 3), w_max = min(ceil(log2(r - l)) + 1.5, 8),
h_min = max(ceil(log2(b - t)) - 0.5, 3), h_max = min(ceil(log2(b - t)) - 0.5, 8);
the function ceil(a) returns the smallest integer not less than a, and max(a, d) and min(a, d) return the larger and the smaller of the two values respectively.
4. The method according to claim 2, wherein the extraction in step (1b.2) of the positive sample regions Pb_k^j at target b_k^j from the region set Rb_k^j is performed by computing the coverage rate ovl between each region of Rb_k^j = {rb_1, ..., rb_n, ..., rb_nb} and the target b_k^j = (l, t, r, b) and taking the regions whose coverage rate ovl is at least 0.5 as the positive sample regions Pb_k^j extracted at target b_k^j = (l, t, r, b),
where the coverage rate between the n-th region rb_n = (l', t', r', b') and the target b_k^j = (l, t, r, b) is computed as
ovl = size(rb_n ∩ b_k^j) / size(rb_n ∪ b_k^j), with size(·) the number of pixels in a region.
5. The method according to claim 1, wherein the extraction in step (2) of the normed gradient feature g' of each down-sampled sample is carried out as follows:
(2a) for each down-sampled sample, using the one-dimensional horizontal gradient template A = [-1, 0, 1] and the vertical gradient template A^T to obtain the horizontal gradient map g_x = F*A and the vertical gradient map g_y = F*A^T of the sample, and forming its gradient map g = |g_x| + |g_y|, where F is a sample of the down-sampled positive/negative sample set, T denotes transposition and * denotes convolution;
(2b) normalizing the gradient map g to obtain the normed gradient map g_N = min(g, 255);
(2c) reshaping the obtained normed gradient feature map g_N row by row into a column vector to obtain the normed gradient feature g'.
6. The method according to claim 1, wherein the extraction in step (4a) of the normed gradient feature map of each down-sampled image is carried out as follows:
(4a.1) down-sampling the test image J according to the effective size set AS = {(W_{As_1}×H_{As_1}), ..., (W_{As_i}×H_{As_i}), ..., (W_{As_ns}×H_{As_ns})} to obtain a group of down-sampled images {Js_1, ..., Js_i, ..., Js_ns}, where Js_i is the i-th down-sampled image and its size W'_{As_i} × H'_{As_i} is computed as
W'_{As_i} = ceil(8 × W' / W_{As_i}), H'_{As_i} = ceil(8 × H' / H_{As_i}),
where W' and H' are respectively the width and height of the test image J and ceil(a) is the smallest integer not less than a;
(4a.2) extracting the normed gradient map of the down-sampled image Js_i: F_i = min(|Js_i*A| + |Js_i*A^T|, 255), where A = [-1, 0, 1], T denotes transposition and * denotes convolution.
7. The method according to claim 1, wherein the extraction in step (4b) of the regions of the test image at one effective size is carried out as follows:
(4b.1) sliding the template W over each of the normed gradient maps {F_1, ..., F_i, ..., F_ns} to obtain the score maps {s_1, ..., s_i, ..., s_ns}, where s_i = W*F_i and * denotes convolution;
(4b.2) processing the score map s_i at the i-th size with non-maximum suppression (NMS), selecting K local maxima of this score map and forming the location set Ms_i = {(u_1', v_1'), ..., (u_j', v_j'), ..., (u_K', v_K')} from their position coordinates, where (u_j', v_j') is the position coordinate of the j-th local maximum in the score map, 1 ≤ j ≤ K and j is an integer;
(4b.3) extracting, according to the location set Ms_i, K regions of the test image J at the i-th size to form the region set R_i = {r_1^i, ..., r_j^i, ..., r_K^i}, where r_j^i = (V_1, V_2, V_3, V_4) is the j-th region at the i-th size,
V_1 is the abscissa of the upper-left point of the region,
V_2 is the ordinate of the upper-left point of the region,
V_3 is the abscissa of the lower-right point of the region,
V_4 is the ordinate of the lower-right point of the region,
and the function ceil(a) used in the coordinate computation is the smallest integer not less than a.
8. The method according to claim 1, wherein the removal in step (6) of the overlapping regions of the candidate region set R' by non-maximum suppression (NMS) is carried out as follows:
(6a) sorting all regions of the candidate region set R' by their mean M in descending order to obtain the sorted candidate region set R_1';
(6b) computing the coverage rate between the first region of the sorted candidate region set R_1' and every other region, removing from R_1' the regions whose coverage rate is 0.01 or more, and putting this first region of R_1' into the region set R_d';
(6c) updating the sorted candidate region set R_1' = R_1' - R_d' and returning to step (6b) until R_1' is empty; the finally obtained region set R_d' is the detection result.
CN201510641963.4A 2015-09-30 2015-09-30 Region extraction-based two-parameter constant false alarm detection method Active CN105354824B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510641963.4A CN105354824B (en) 2015-09-30 2015-09-30 Region extraction-based two-parameter constant false alarm detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510641963.4A CN105354824B (en) 2015-09-30 2015-09-30 Region extraction-based two-parameter constant false alarm detection method

Publications (2)

Publication Number Publication Date
CN105354824A (en) 2016-02-24
CN105354824B (en) 2018-03-06

Family

ID=55330791

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510641963.4A Active CN105354824B (en) 2015-09-30 2015-09-30 Region extraction-based two-parameter constant false alarm detection method

Country Status (1)

Country Link
CN (1) CN105354824B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106326938A (en) * 2016-09-12 2017-01-11 西安电子科技大学 SAR image target discrimination method based on weakly supervised learning
CN107064899A (en) * 2017-04-18 2017-08-18 西安电子工程研究所 A kind of Biparametric Clutter Map CFAR detection method of adaptive threshold
CN107153180A (en) * 2017-06-15 2017-09-12 中国科学院声学研究所 A kind of Target Signal Detection and system
CN107942329A (en) * 2017-11-17 2018-04-20 西安电子科技大学 Motor platform single-channel SAR is to surface vessel object detection method
CN109588182A (en) * 2018-11-23 2019-04-09 厦门大学 One kind with moulding Mangrove landscape calibration method on large area beach
CN109978017A (en) * 2019-03-06 2019-07-05 开易(北京)科技有限公司 Difficult specimen sample method and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1831558A (en) * 2006-04-21 2006-09-13 清华大学 Single-channel synthetic aperture radar moving-target detection method based on multi-apparent subimage paire
US9057783B2 (en) * 2011-01-18 2015-06-16 The United States Of America As Represented By The Secretary Of The Army Change detection method and system for use in detecting moving targets behind walls, barriers or otherwise visually obscured

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1831558A (en) * 2006-04-21 2006-09-13 清华大学 Single-channel synthetic aperture radar moving-target detection method based on multi-apparent subimage paire
US9057783B2 (en) * 2011-01-18 2015-06-16 The United States Of America As Represented By The Secretary Of The Army Change detection method and system for use in detecting moving targets behind walls, barriers or otherwise visually obscured

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Cai Chun et al.: "CFAR detection of extended targets in Weibull clutter background", Journal of Air Force Radar Academy *
Hao Chengpeng et al.: "A two-parameter CFAR detector in K-distributed clutter background", Journal of Electronics & Information Technology *
Chen Xin et al.: "A method for target detection using fusion of SAR and visible-light images", Journal of Signal Processing *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106326938A (en) * 2016-09-12 2017-01-11 西安电子科技大学 SAR image target discrimination method based on weakly supervised learning
CN106326938B (en) * 2016-09-12 2019-03-08 西安电子科技大学 SAR image target discrimination method based on Weakly supervised study
CN107064899A (en) * 2017-04-18 2017-08-18 西安电子工程研究所 A kind of Biparametric Clutter Map CFAR detection method of adaptive threshold
CN107153180A (en) * 2017-06-15 2017-09-12 中国科学院声学研究所 A kind of Target Signal Detection and system
CN107942329A (en) * 2017-11-17 2018-04-20 西安电子科技大学 Motor platform single-channel SAR is to surface vessel object detection method
CN107942329B (en) * 2017-11-17 2021-04-06 西安电子科技大学 Method for detecting sea surface ship target by maneuvering platform single-channel SAR
CN109588182A (en) * 2018-11-23 2019-04-09 厦门大学 One kind with moulding Mangrove landscape calibration method on large area beach
CN109588182B (en) * 2018-11-23 2021-03-26 厦门大学 Method for building mangrove landscape landmarks on large-area mudflat
CN109978017A (en) * 2019-03-06 2019-07-05 开易(北京)科技有限公司 Difficult specimen sample method and system

Also Published As

Publication number Publication date
CN105354824B (en) 2018-03-06

Similar Documents

Publication Publication Date Title
CN108510467B (en) SAR image target identification method based on depth deformable convolution neural network
CN105354824A (en) Region extraction-based two-parameter constant false alarm detection method
Malof et al. Automatic detection of solar photovoltaic arrays in high resolution aerial imagery
CN105427314B (en) SAR image object detection method based on Bayes's conspicuousness
CN104361340B (en) The SAR image target quick determination method for being detected and being clustered based on conspicuousness
CN107730515B (en) Increase the panoramic picture conspicuousness detection method with eye movement model based on region
CN102867196A (en) Method for detecting complex sea-surface remote sensing image ships based on Gist characteristic study
CN101975940A (en) Segmentation combination-based adaptive constant false alarm rate target detection method for SAR image
CN103198480B (en) Based on the method for detecting change of remote sensing image of region and Kmeans cluster
CN103810503A (en) Depth study based method for detecting salient regions in natural image
CN104408482A (en) Detecting method for high-resolution SAR (Synthetic Aperture Radar) image object
Liu et al. A hybrid method for segmenting individual trees from airborne lidar data
CN106557740B (en) The recognition methods of oil depot target in a kind of remote sensing images
CN102968799A (en) Integral image-based quick ACCA-CFAR SAR (Automatic Censored Cell Averaging-Constant False Alarm Rate Synthetic Aperture Radar) image target detection method
CN106778687A (en) Method for viewing points detecting based on local evaluation and global optimization
CN102842044B (en) Method for detecting variation of remote-sensing image of high-resolution visible light
CN102542293A (en) Class-I extraction and classification method aiming at high-resolution SAR (Synthetic Aperture Radar) image scene interpretation
CN107330390A (en) A kind of demographic method based on graphical analysis and deep learning
CN104361351A (en) Synthetic aperture radar (SAR) image classification method on basis of range statistics similarity
CN107203761B (en) Road width estimation method based on high-resolution satellite image
Shu et al. Center-point-guided proposal generation for detection of small and dense buildings in aerial imagery
CN103824302A (en) SAR (synthetic aperture radar) image change detecting method based on direction wave domain image fusion
CN105512622A (en) Visible remote-sensing image sea-land segmentation method based on image segmentation and supervised learning
CN110533025A (en) The millimeter wave human body image detection method of network is extracted based on candidate region
CN105303566B (en) A kind of SAR image azimuth of target method of estimation cut based on objective contour

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant