CN105354824B - DP-CFAR detection method based on extracted region - Google Patents


Info

Publication number
CN105354824B
CN105354824B (application CN201510641963.4A)
Authority
CN
China
Prior art keywords
region
target
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510641963.4A
Other languages
Chinese (zh)
Other versions
CN105354824A (en)
Inventor
杜兰
代慧
王兆成
肖金国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201510641963.4A
Publication of CN105354824A
Application granted
Publication of CN105354824B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/0008Industrial image inspection checking presence/absence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a dual-parameter CFAR (DP-CFAR) detection method based on region extraction, which mainly addresses the slow detection speed of existing SAR image target detection techniques and the missed detections caused by unreasonable parameter settings. The implementation steps are: extract positive and negative sample sets from training images with target labels; train a template W on normed gradient features with a linear SVM classifier, and select an effective size set from the initial image sizes using the extracted positive sample set; then, based on the template W and the effective size set, extract regions at the effective sizes from the test image; detect the extracted regions with DP-CFAR to obtain candidate regions, and apply non-maximum suppression (NMS) to the candidate regions to remove the many overlapping regions; the regions that remain are the final detection result. Compared with traditional two-parameter CFAR detection, the invention has the advantages of fast detection speed and high detection probability, and is suitable for fast target detection in SAR images.

Description

DP-CFAR detection method based on extracted region
Technical field
The invention belongs to the field of radar technology, and more particularly relates to a CFAR detection method that can be used to detect targets quickly and effectively in synthetic aperture radar (SAR) images.
Background technology
Radar imaging technology emerged in the 1950s and has developed by leaps and bounds in the decades since; at present it is widely used in many fields, including military affairs, agriculture, geology, oceanography, disaster monitoring, and mapping.
Synthetic aperture radar (SAR) is an active sensor that perceives with microwaves. Compared with infrared, optical, and other sensors, SAR imaging is not limited by illumination, weather, and similar conditions, and can observe targets of interest around the clock and in all weather; SAR automatic target recognition has therefore attracted more and more attention.
SAR automatic target recognition (ATR) methods usually adopt the three-stage processing flow proposed by the MIT Lincoln Laboratory. The flow uses a layered attention mechanism, and its implementation process is: first, a detection stage is run on the whole SAR image to remove regions that are clearly not targets, yielding potential target regions; then a discrimination stage processes the potential target regions to reject natural-clutter false alarms and regions that are obviously larger or smaller than a target; the detection and discrimination stages together produce the target regions of interest (ROIs); finally, the target ROIs are classified and recognized. Under this processing mechanism, the amount of data to be handled decreases gradually from stage to stage, which improves the efficiency of the target recognition system.
SAR image target detection is the first step in the ATR flow design, so its importance is self-evident. How to detect potential target regions quickly and effectively has also been a major research hotspot of SAR image interpretation in recent years.
Among existing SAR image target detection methods, the two-parameter CFAR detection algorithm is the most widely used. It is a traditional SAR image target detection method whose applicability rests on the premise that the targets in the SAR image have high contrast against the background clutter. The algorithm uses three windows: a target window, a guard window, and a background window. The target window is the window that may contain target pixels; the guard window is set to prevent target pixels from being mixed into the background clutter; and the background window is the window containing background clutter. Two-parameter CFAR assumes that the statistical distribution of the background clutter is Gaussian. A sliding window traverses every pixel in the SAR image; at each window position, the mean and variance of all pixels in the background window are computed to estimate the clutter parameters and thereby determine a threshold. If a pixel in the target window exceeds this threshold it is taken as a target pixel; otherwise it is regarded as a clutter pixel. Because the algorithm repeats the same processing at every pixel in the SAR image, its detection time is long. In addition, the target window, guard window, and background window must be set according to prior information about the SAR image targets; when target sizes differ too much, unreasonable parameter settings make the background clutter parameter estimates inaccurate and cause targets to be missed. Likewise, targets that are close together may, owing to an unreasonable clustering distance, be treated as one target and appear in the same detection region, which complicates the processing that follows detection.
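As a concrete illustration of the per-pixel procedure just described, the following is a minimal Python/NumPy sketch of two-parameter CFAR; the window half-widths, the border handling, and testing only the window centre are simplifying assumptions of this sketch, not the patent's configuration:

```python
import numpy as np
from statistics import NormalDist

def two_param_cfar(img, guard=5, background=9, pfa=1e-3):
    """Per-pixel two-parameter CFAR sketch: estimate the clutter mean and
    standard deviation from a background ring (the background window minus
    the guard window), then flag the centre pixel if it exceeds
    mu + K_CFAR * sigma with K_CFAR = Phi^{-1}(1 - Pfa), per the Gaussian
    clutter assumption. `guard` and `background` are half-widths chosen
    for illustration only."""
    k = NormalDist().inv_cdf(1.0 - pfa)
    H, W = img.shape
    det = np.zeros((H, W), dtype=bool)
    b, g = background, guard
    for y in range(b, H - b):
        for x in range(b, W - b):
            win = img[y - b:y + b + 1, x - b:x + b + 1].astype(float).copy()
            # mask the guard region so target energy does not bias the stats
            win[b - g:b + g + 1, b - g:b + g + 1] = np.nan
            mu, sigma = np.nanmean(win), np.nanstd(win)
            det[y, x] = img[y, x] > mu + k * sigma
    return det
```

The nested loops make the cost proportional to the number of image pixels times the window area, which is exactly the per-pixel burden the invention seeks to avoid by scoring a small set of extracted regions instead.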
Summary of the invention
The object of the invention is to address the above deficiencies of the prior art by proposing a two-parameter constant false-alarm detection method based on region extraction, so as to reduce the detection time and improve the detection accuracy of targets.
To achieve the above object, the technical scheme of the invention comprises the following steps:
(1) From a labeled training set Tr, extract a positive sample set P and a negative sample set N;
(2) Down-sample the positive and negative samples in the obtained sample sets to a fixed size of 8 × 8; extract the normed gradient feature g′ from each down-sampled sample; form the normed gradient feature set G from the normed gradient features of all down-sampled samples; train a linear SVM classifier to obtain an 8 × 8 template W;
(3) Build the effective size set AS:
(3a) Initialize 36 different image sizes to form the set S = {(W_1 × H_1), ..., (W_l × H_l), ..., (W_36 × H_36)}, where W_l and H_l are the width and height of the l-th image size, 1 ≤ l ≤ 36 and l is an integer; the sizes are powers of 2 whose exponents increase from minT = 3 to maxT = 8, i.e. W_l, H_l ∈ {8, 16, 32, 64, 128, 256};
(3b) From the positive sample set P obtained in step (1) and the size set S initialized in step (3a), obtain the effective size set AS = {(W_{As_1} × H_{As_1}), ..., (W_{As_i} × H_{As_i}), ..., (W_{As_ns} × H_{As_ns})}, where As_i (1 ≤ As_i ≤ 36) indexes the As_i-th element of the set S, 1 ≤ i ≤ ns and i is an integer, and ns denotes the number of effective sizes;
(4) Extract the regions of the test image J at the effective sizes:
(4a) For each size in the effective size set AS, down-sample the test image J to obtain down-sampled images at the different sizes, extract the normed gradient feature map of each down-sampled image, and form the normed gradient feature map set {F_1, ..., F_i, ..., F_ns}, where F_i is the normed gradient feature map at the i-th effective size, 1 ≤ i ≤ ns and i is an integer;
(4b) Slide the template W obtained in step (2) over each feature map in {F_1, ..., F_i, ..., F_ns} to obtain the score maps {s_1, ..., s_i, ..., s_ns}; for each score map, select K local maxima by non-maximum suppression (NMS) and extract K regions from the test image J according to the positions of the local maxima, forming the region set R_i; do the same for all ns sizes, finally obtaining ns × K regions that form the region set R;
(5) For the region set R obtained in step (4b), treat each region as a whole and represent it by the mean M of the strongest 80% of its pixel values; estimate the background clutter mean μ_c and standard deviation σ_c from the pixel values in a hollow frame 3 pixels wide outside the region; from the preset false-alarm probability Pr, obtain the local detection threshold Th of each region; the regions whose mean M exceeds the local detection threshold Th are selected as candidate regions, yielding the candidate region set R′;
(6) Apply non-maximum suppression (NMS) to the candidate region set R′ obtained in step (5) to remove the many overlapping regions; the remaining regions form the region set R_d′, which is the detection result.
Compared with the prior art, the present invention has the following advantages:
1. Fast detection speed
When the existing two-parameter CFAR detection method runs, the window must slide over every pixel in the image, and at each window position the model parameters of the background clutter in the reference window, i.e. the clutter mean and standard deviation, must be estimated; the parameter estimation consists mainly of additions, multiplications, and divisions, so the complexity of the two-parameter CFAR algorithm depends only on the size of the image. The present invention instead first obtains a small number of regions and then applies DP-CFAR detection to the extracted regions. Unlike two-parameter CFAR detection, DP-CFAR detection treats each obtained region as a whole rather than sliding a window over every pixel value, so its complexity depends mainly on the number of extracted regions.
The region extraction of the present invention is written in C++ and exploits a binary approximation of the gradient features, and in turn the bit operations inside the computer, to accelerate computation. It avoids the drawback of two-parameter CFAR, which processes every pixel, and can complete the detection task in a shorter time, speeding up detection.
2. Targets of different scales in the image can be handled, improving detection accuracy
When the existing two-parameter CFAR detection method detects an image, parameters must be set using prior information about the targets in the image. If the target sizes in the image differ too much, unreasonable parameter settings may cause targets to be missed, and may also cause closely spaced targets to be detected as a single target appearing in one detection region. The present invention instead detects on the basis of the extracted regions and treats each region as a whole, which avoids the problems caused by unreasonable parameter settings in two-parameter CFAR.
The present invention is described in further detail below in conjunction with the drawings and embodiments.
Brief description of the drawings
Fig. 1 is the implementation flow chart of the present invention;
Fig. 2 is the labeled training set used in the experiments of the present invention;
Fig. 3 is the test set used in the experiments of the present invention, where Fig. 3(a) is a test image containing 4 targets, Fig. 3(b) is a test image containing 7 targets, and Fig. 3(c) is a test image containing 3 targets;
Fig. 4 shows the detection results obtained with the present invention on the test set in Fig. 3, where Fig. 4(a) is the detection result for Fig. 3(a), Fig. 4(b) for Fig. 3(b), and Fig. 4(c) for Fig. 3(c);
Fig. 5 shows the detection results of traditional two-parameter CFAR on the test set under unified parameters, where Fig. 5(a) is the detection result for Fig. 3(a), Fig. 5(b) for Fig. 3(b), and Fig. 5(c) for Fig. 3(c).
Detailed description of the embodiments
With reference to Fig. 1, the present invention is realized in the following steps:
Step 1: extract the positive and negative sample sets from the labeled training set.
(1.1) Input the training set:
Let the training set be Tr = {(I_1, B_1), ..., (I_j, B_j), ..., (I_m, B_m)},
where m is the number of training images and I_j is the j-th training image in the training set;
B_j = {b_1^j, ..., b_k^j, ..., b_tj^j} is the set of labeled target boxes in the j-th training image,
tj is the number of targets in the j-th image;
b_k^j = (l, t, r, b) is the target box of the k-th labeled target in the j-th training image, where l is the abscissa of the top-left point of the target box, t the ordinate of the top-left point, r the abscissa of the bottom-right point, and b the ordinate of the bottom-right point;
(1.2) Extract positive samples from training image I_j:
(1.2a) For the k-th target b_k^j = (l, t, r, b) in B_j, extract a set of regions Rb_k^j = {rb_1, ..., rb_n, ..., rb_nb}, where rb_n = (l′, t′, r′, b′) denotes the n-th region: l′ is the abscissa of the top-left point of the region, t′ the ordinate of the top-left point, r′ the abscissa of the bottom-right point, and b′ the ordinate of the bottom-right point;
nb denotes the number of regions, nb = (wmax − wmin + 1) × (hmax − hmin + 1),
l′ = l; t′ = t; r′ = l + 2^p; b′ = t + 2^q,
where p ∈ [wmin, wmax] and q ∈ [hmin, hmax]; the function ceil(a) takes the smallest integer not less than a, max(a, d) takes the larger of its two arguments, min(a, d) takes the smaller, int(·) denotes integer division, and mod(·) denotes the modulo operation;
(1.2b) From the region set Rb_k^j, extract the positive sample regions Pb_k^j at target b_k^j = (l, t, r, b):
compute the coverage rate ovl between each region in Rb_k^j = {rb_1, ..., rb_n, ..., rb_nb} and the target b_k^j = (l, t, r, b); the regions whose coverage rate ovl is greater than or equal to 0.5 are the positive sample regions Pb_k^j extracted at target b_k^j, where the coverage rate ovl between the n-th region rb_n = (l′, t′, r′, b′) and the target b_k^j = (l, t, r, b) is computed as
ovl = size(rb_n ∩ b_k^j) / size(rb_n ∪ b_k^j),
where size(·) denotes the number of pixels in a region;
(1.2c) Following steps (1.2a) and (1.2b), extract positive sample regions for all targets in the target box set B_j = {b_1^j, ..., b_k^j, ..., b_tj^j} of the j-th training image, forming the positive sample set extracted from training image I_j: X_j = Pb_1^j ∪ ... ∪ Pb_k^j ∪ ... ∪ Pb_tj^j;
(1.3) Extract the negative sample set from training image I_j:
(1.3a) Generate four random numbers x_1, y_1, x_2, y_2 with the random number function rand(), forming the region Nr_1 = (x_1′, y_1′, x_2′, y_2′), where:
x_1′ is the abscissa of the top-left point of the region,
y_1′ the ordinate of the top-left point,
x_2′ the abscissa of the bottom-right point,
y_2′ the ordinate of the bottom-right point,
and W and H respectively denote the width and height of training image I_j;
(1.3b) Compute the coverage rate ovl between the region Nr_1 = (x_1′, y_1′, x_2′, y_2′) and all target boxes in the target box set B_j = {b_1^j, ..., b_k^j, ..., b_tj^j} of the j-th training image; if the maximum of these coverage rates is less than 0.5, the region Nr_1 = (x_1′, y_1′, x_2′, y_2′) is a negative sample;
(1.3c) Iterate steps (1.3a) and (1.3b) 50 times in total; the negative samples obtained form the negative sample set Y_j of training image I_j;
(1.4) Repeat steps (1.2) and (1.3) for the other training images in Tr = {(I_1, B_1), ..., (I_j, B_j), ..., (I_m, B_m)} to obtain the positive sample set P = X_1 ∪ ... ∪ X_j ∪ ... ∪ X_m and the negative sample set N = Y_1 ∪ ... ∪ Y_j ∪ ... ∪ Y_m.
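The coverage rate ovl used in steps (1.2b) and (1.3b) can be sketched as an intersection-over-union of boxes. The original formula survives only as an image, so the inclusive pixel-coordinate reading of size() below (each side spans r − l + 1 pixels) is an assumption of this sketch:

```python
def coverage(a, b):
    """Coverage rate ovl of two boxes given as (l, t, r, b), read as
    intersection-over-union with size() counting pixels. Inclusive
    coordinates are assumed: a box side spans r - l + 1 pixels."""
    size = lambda x: (x[2] - x[0] + 1) * (x[3] - x[1] + 1)
    il, it = max(a[0], b[0]), max(a[1], b[1])
    ir, ib = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ir - il + 1) * max(0, ib - it + 1)
    return inter / (size(a) + size(b) - inter)

def is_positive(region, target, thresh=0.5):
    # step (1.2b): a region is a positive sample when ovl >= 0.5
    return coverage(region, target) >= thresh
```

For example, a region covering exactly the left half of a target box has intersection equal to half the union and so sits right at the 0.5 boundary.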
Step 2: train the linear SVM classifier with the normed gradient feature set of the positive and negative sample sets to obtain an 8 × 8 template W.
(2.1) Extract the normed gradient feature g′ of one sample in the positive and negative sample sets:
(2.1a) Uniformly down-sample the positive and negative sample sets obtained in step 1 to a fixed size of 8 × 8; for each down-sampled sample, apply the one-dimensional horizontal gradient template A = [−1, 0, 1] and the vertical gradient template A^T to obtain the horizontal gradient map g_x = F * A and the vertical gradient map g_y = F * A^T, and form the gradient map of the sample g = |g_x| + |g_y|, where F denotes a down-sampled sample from the positive and negative sample sets, T denotes transposition, and * denotes convolution;
(2.1b) Normalize the gradient map g to obtain the normed gradient map g_N = min(g, 255);
(2.1c) Stack the columns of the obtained normed gradient map g_N into a column vector to obtain the normed gradient feature g′;
(2.2) Extract the normed gradient features of all down-sampled samples and form the normed gradient feature set G from them;
(2.3) Train the linear SVM classifier with the normed gradient feature set G to obtain the 8 × 8 template W.
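Steps (2.1a) through (2.1c) can be sketched as follows. The edge padding at the sample border is an assumption (the patent does not specify border handling), and the sign convention of the [−1, 0, 1] template is immaterial after the absolute value:

```python
import numpy as np

def normed_gradient(sample8x8):
    """Normed-gradient (NG) feature of one down-sampled 8x8 sample:
    horizontal/vertical gradients with template A = [-1, 0, 1],
    magnitude |gx| + |gy| clipped at 255 (g_N = min(g, 255)), then
    stacked column-wise into a 64-vector g'. Edge padding assumed."""
    F = np.pad(np.asarray(sample8x8, dtype=float), 1, mode="edge")
    gx = F[1:-1, 2:] - F[1:-1, :-2]   # correlate rows with [-1, 0, 1]
    gy = F[2:, 1:-1] - F[:-2, 1:-1]   # correlate columns with [-1, 0, 1]^T
    g = np.minimum(np.abs(gx) + np.abs(gy), 255)
    return g.flatten(order="F")       # column-wise stacking -> g'
```

A flat sample yields an all-zero feature, while a hard vertical edge saturates the clipped magnitude at 255, which is the behaviour the normalization step is meant to bound.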
Step 3: select the effective size set according to the positive sample set.
(3.1) Initialize the image size set:
Initialize 36 different image sizes to form the set S = {(W_1 × H_1), ..., (W_l × H_l), ..., (W_36 × H_36)}, where W_l and H_l are the width and height of the l-th image size, 1 ≤ l ≤ 36 and l is an integer; the sizes are powers of 2 whose exponents increase from minT = 3 to maxT = 8, i.e. W_l, H_l ∈ {8, 16, 32, 64, 128, 256};
(3.2) Compute the size label of each positive sample in the positive sample set P = X_1 ∪ ... ∪ X_j ∪ ... ∪ X_m:
Let region r_n be the n-th region extracted at the k-th target b_k^j = (l, t, r, b) in the positive sample set X_j of the j-th training image I_j, where l is the abscissa of the top-left point of the target box, t the ordinate of the top-left point, r the abscissa of the bottom-right point, and b the ordinate of the bottom-right point;
then the size label sl of the positive sample is computed as
sl = 6 × (q − minT) + (p − minT) + 1, with 1 ≤ sl ≤ 36 and sl an integer,
where 2^p and 2^q are the width and height of region r_n as constructed in step (1.2a);
(3.3) According to the size labels of the positive samples, count the number of positive samples under each size label; size labels whose positive sample count is greater than or equal to 5 are regarded as effective size labels, forming the effective size label set {As_1, ..., As_i, ..., As_ns}, where As_i is the size label of the i-th effective size, 1 ≤ As_i ≤ 36 indexes the As_i-th element of the set S, 1 ≤ i ≤ ns and i is an integer, and ns denotes the number of effective sizes;
(3.4) From the initial image size set S = {(W_1 × H_1), ..., (W_l × H_l), ..., (W_36 × H_36)}, select the As_i-th element (W_{As_i} × H_{As_i}) as the i-th effective size, obtaining from the effective size label set the effective size set AS = {(W_{As_1} × H_{As_1}), ..., (W_{As_i} × H_{As_i}), ..., (W_{As_ns} × H_{As_ns})}.
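The size-label bookkeeping of steps (3.2) and (3.3) can be sketched as follows, assuming (per step (1.2a)) that every positive sample has dyadic width 2^p and height 2^q; the threshold of 5 positives per label follows the text:

```python
import math
from collections import Counter

MIN_T, MAX_T = 3, 8   # exponents: sizes in {8, 16, 32, 64, 128, 256}

def size_label(w, h):
    """Size label of a dyadic w x h positive sample (step 3.2):
    sl = 6*(q - minT) + (p - minT) + 1 with w = 2^p and h = 2^q.
    Assumes w and h are exact powers of two, as built in step (1.2a)."""
    p, q = int(math.log2(w)), int(math.log2(h))
    return 6 * (q - MIN_T) + (p - MIN_T) + 1

def effective_labels(samples, min_count=5):
    """Step 3.3: keep the labels with at least `min_count` positives."""
    counts = Counter(size_label(w, h) for w, h in samples)
    return sorted(sl for sl, c in counts.items() if c >= min_count)
```

So an 8 × 8 sample maps to label 1 and a 256 × 256 sample to label 36, spanning the 6 × 6 grid of width/height exponents.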
Step 4: extract the regions of the test image at the effective sizes.
(4.1) According to the effective size set AS = {(W_{As_1} × H_{As_1}), ..., (W_{As_i} × H_{As_i}), ..., (W_{As_ns} × H_{As_ns})}, down-sample the test image J to obtain a group of down-sampled images {Js_1, ..., Js_i, ..., Js_ns}, where Js_i is the image obtained by down-sampling at the i-th effective size and its size is Ws_i′ × Hs_i′, with Ws_i′ the width of the i-th down-sampled image Js_i, Hs_i′ its height, W′ and H′ respectively the width and height of the test image J, and ceil(a) the function taking the smallest integer not less than a;
(4.2) Extract the normed gradient map of the down-sampled image Js_i: F_i = min(|Js_i * A| + |Js_i * A^T|, 255), where A = [−1, 0, 1], T denotes transposition, and * denotes convolution;
(4.3) Slide the template W over each feature map in the normed gradient maps {F_1, ..., F_i, ..., F_ns} to obtain the score maps {s_1, ..., s_i, ..., s_ns}, where s_i = W * F_i, * denotes convolution, F_i is the normed gradient map at the i-th effective size, and s_i is the score map at the i-th effective size;
(4.4) Using the non-maximum suppression (NMS) algorithm, select K local maxima from the score map s_i at the i-th size and form the location set Ms_i from the coordinate positions of the K local maxima of that score map:
(4.4a) Sort the entries of the score map s_i in descending order of score to obtain the sorted score map s_i′, and mark all score positions in s_i′ as true;
(4.4b) Put into the location set Ms_i the coordinate of the position in the sorted score map s_i′ whose mark is true and whose score is largest, and mark that position and its four-neighborhood as false;
(4.4c) Repeat step (4.4b) until the coordinates of K local maxima are obtained, forming the location set Ms_i = {(u_1′, v_1′), ..., (u_j′, v_j′), ..., (u_K′, v_K′)}, where (u_j′, v_j′) is the position coordinate of the j-th local maximum point in the score map, 1 ≤ j ≤ K and j is an integer;
(4.5) According to the location set Ms_i, extract the K regions of the test image J at the i-th size (W_{As_i} × H_{As_i}), forming the region set R_i = {r_1^i, ..., r_j^i, ..., r_K^i}, where r_j^i = (v_1, v_2, v_3, v_4) denotes the j-th region at the i-th size, v_1 is the abscissa of the top-left point of the region, v_2 the ordinate of the top-left point, v_3 the abscissa of the bottom-right point, v_4 the ordinate of the bottom-right point, and the function ceil(a) takes the smallest integer not less than a;
(4.6) Apply steps (4.4) and (4.5) to the score maps at the other sizes in {s_1, ..., s_i, ..., s_ns}, extracting K regions at each effective size;
(4.7) The region set obtained from the test image at the ns effective sizes is
R = R_1 ∪ ... ∪ R_i ∪ ... ∪ R_ns, where R_i is the region set extracted at the i-th effective size.
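Steps (4.3) and (4.4) can be sketched as follows: a naive valid-position sliding of the template over a feature map, followed by the greedy four-neighborhood NMS of steps (4.4a) through (4.4c). The direct double loop is for clarity only; a real implementation would use correlation routines:

```python
import numpy as np

def score_map(feature_map, template):
    """Step (4.3): slide the learned template W over a normed-gradient
    feature map (valid positions only), giving the score map s = W * F."""
    th, tw = template.shape
    H, W_ = feature_map.shape
    s = np.empty((H - th + 1, W_ - tw + 1))
    for y in range(s.shape[0]):
        for x in range(s.shape[1]):
            s[y, x] = np.sum(feature_map[y:y + th, x:x + tw] * template)
    return s

def top_k_local_maxima(s, k):
    """Steps (4.4a)-(4.4c): repeatedly take the highest still-valid score
    and mark it and its four-neighborhood invalid, k times."""
    valid = np.ones_like(s, dtype=bool)
    picks = []
    for _ in range(k):
        masked = np.where(valid, s, -np.inf)
        if not np.isfinite(masked).any():
            break                      # fewer than k valid positions left
        y, x = np.unravel_index(np.argmax(masked), s.shape)
        picks.append((int(y), int(x)))
        for dy, dx in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
            yy, xx = y + dy, x + dx
            if 0 <= yy < s.shape[0] and 0 <= xx < s.shape[1]:
                valid[yy, xx] = False
    return picks
```

Note how a score adjacent to an already-picked maximum is suppressed even if it is the second-highest value overall, which is exactly the four-neighborhood marking of step (4.4b).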
Step 5: obtain the candidate regions.
(5.1) Represent each region in the region set R by the mean M of the strongest 80% of the pixel values in that region;
(5.2) Compute the background clutter mean μ_c and standard deviation σ_c from the pixel values in a hollow frame 3 pixels wide outside each region:
μ_c = (1/N_c) Σ_{x ∈ Ω_c} x,
σ_c = sqrt( (1/N_c) Σ_{x ∈ Ω_c} (x − μ_c)² ),
where Ω_c denotes the clutter region and N_c the number of pixels in the clutter region;
(5.3) Compute the local detection threshold Th of each region, Th = μ_c + K_CFAR · σ_c, where K_CFAR is computed from the preset false-alarm probability Pr as K_CFAR = Φ^{-1}(1 − Pr), with Φ(·) the standard normal distribution function and Φ^{-1}(·) its inverse;
(5.4) Compare the mean M of each region with the local detection threshold Th of that region: if M > Th, retain the region; otherwise delete it; the regions finally retained form the candidate region set R′.
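Step 5's region-level decision can be sketched as follows. Clipping the clutter frame at the image border is an assumption of this sketch, as is the inclusive box-coordinate convention:

```python
import numpy as np
from statistics import NormalDist

def region_cfar(img, box, pfa=1e-2, frame=3):
    """Step 5 for one region: M is the mean of the strongest 80% of pixels
    inside box = (l, t, r, b); mu_c and sigma_c come from a hollow frame
    `frame` pixels wide around the box; the region is a candidate when
    M > mu_c + K_CFAR * sigma_c with K_CFAR = Phi^{-1}(1 - Pr)."""
    l, t, r, b = box
    inside = np.sort(img[t:b + 1, l:r + 1].ravel())[::-1]
    m = inside[:max(1, int(round(0.8 * inside.size)))].mean()
    H, W = img.shape
    mask = np.zeros((H, W), dtype=bool)
    mask[max(0, t - frame):min(H, b + frame + 1),
         max(0, l - frame):min(W, r + frame + 1)] = True
    mask[t:b + 1, l:r + 1] = False      # hollow out the region itself
    clutter = img[mask]
    mu_c, sigma_c = clutter.mean(), clutter.std()
    k = NormalDist().inv_cdf(1.0 - pfa)
    return m > mu_c + k * sigma_c
```

Because the whole region is tested once against a single local threshold, the cost is one statistic per region rather than one per pixel, which is the source of the speed advantage claimed above.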
Step 6: remove overlapping candidate regions to obtain the detection result.
(6.1) Sort all regions in the candidate region set R′ in descending order of their mean M to obtain the sorted candidate region set R_1′;
(6.2) Compute the coverage rate between the first region in the sorted candidate region set R_1′ and each of the other regions; remove from R_1′ the regions whose coverage rate is greater than or equal to 0.01, and put the first region of R_1′ into the region set R_d′;
(6.3) Update the sorted candidate region set R_1′ = R_1′ − R_d′ and return to step (6.2) until the sorted candidate region set R_1′ is empty; the region set R_d′ finally obtained is the detection result.
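Step 6's greedy overlap removal can be sketched as follows, reusing the same intersection-over-union coverage measure as step (1.2b) (my reading, since the patent's coverage formula survives only as an image) and the text's 0.01 overlap cutoff:

```python
def remove_overlaps(regions, max_ovl=0.01):
    """Step 6: `regions` is a list of (M, (l, t, r, b)) pairs. Sort by M
    descending, keep the best, drop every region whose coverage with it
    is >= max_ovl, and repeat until nothing remains."""
    def cov(a, b):
        # IoU with inclusive pixel coordinates (assumed convention)
        area = lambda x: (x[2] - x[0] + 1) * (x[3] - x[1] + 1)
        il, it = max(a[0], b[0]), max(a[1], b[1])
        ir, ib = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ir - il + 1) * max(0, ib - it + 1)
        return inter / (area(a) + area(b) - inter)

    remaining = sorted(regions, key=lambda x: x[0], reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)
        kept.append(best)
        remaining = [r for r in remaining if cov(best[1], r[1]) < max_ovl]
    return kept
```

With such a small cutoff, almost any overlap with a stronger region eliminates a weaker one, so each detected target tends to be reported exactly once.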
The effect of the present invention can be further illustrated by the following experiments.
1. Experimental conditions
Experimental platform: MATLAB R2012a, Visual Studio 2012, Intel(R) Core(TM) i5-4590 CPU @ 3.30 GHz, Windows 7 Ultimate.
The data used in the experiments are fully polarized and singly polarized data from the RADARSAT-2 database; the training set shown in Fig. 2 and the test set shown in Fig. 3 were cropped from the RADARSAT-2 data. The rectangular boxes in the training set and test set mark the target boxes of the targets in the images. The targets in the data set are all ships of differing sizes; measured in pixels, the largest target is 96 wide and 16 high, and the smallest target is 12 wide and 8 high.
In the experiments, the parameters of the proposed detection method are set as follows: the number of regions at each size K = 150; the false-alarm rate Pr = 10^{-2}; the initial image size set S = {(W_1 × H_1), ..., (W_l × H_l), ..., (W_36 × H_36)}, where W_l and H_l are the width and height of the l-th image size, 1 ≤ l ≤ 36 and l is an integer, the sizes being powers of 2 whose exponents increase from minT = 3 to maxT = 8, i.e. W_l, H_l ∈ {8, 16, 32, 64, 128, 256}.
In the experiments, the parameters of the two-parameter CFAR detection method are set as follows: false-alarm rate Pr = 10^{-6}; down-sampling factor r = 2; half-length of the guard window mG = 27; half-length of the background window mB = 30; minimum target region area Smin = 4; maximum target region area Smax = 300; maximum target length Len = 25; slice size Q = 100.
2. Experimental content
Experiment 1: first train with the method of the invention on the training set shown in Fig. 2, then perform target detection on the test set in Fig. 3; the results are shown in Fig. 4.
Experiment 2: perform target detection on the test images in Fig. 3 with the traditional two-parameter CFAR detection method; the concrete operations of two-parameter CFAR detection follow Chapter 2, "Research on SAR target detection methods", of the 2013 master's thesis of Li Li of Xidian University, "Research on SAR target detection and recognition algorithms and software design". The detection results are shown in Fig. 5, where Fig. 5(a) is the detection result for Fig. 3(a), Fig. 5(b) for Fig. 3(b), and Fig. 5(c) for Fig. 3(c).
The detection results of experiments 1 and 2 are given in Table 1.
Table 1. Comparison of the detection results of the method of the invention and two-parameter CFAR on the test images
3. Analysis of the experimental results
As can be seen from Table 1, for the SAR image data used in the tests, the present invention, which applies two-parameter CFAR detection on the basis of region extraction, performs target detection well, showing that the detection method of the invention has good performance. Table 1 also shows that when the test images are detected with traditional two-parameter CFAR, targets are missed because of unreasonable parameter settings; from the results for the test images of Fig. 3(a) and Fig. 3(b) in Table 1 it can be seen that the number of targets exceeds the sum of the number of missed targets and the number of detected target regions.
It can also be seen from Table 1 that the time taken by the method of the invention to detect a test image is about 1/3 of the detection time of traditional two-parameter CFAR; because the time taken by the method of the invention in the training stage and in obtaining the regions of the test image is very short, the invention improves detection efficiency to a certain extent.
Comparing Fig. 3 with the detection results in Fig. 5, it can also be seen that targets are missed in all three test images, and from Fig. 5(a) and Fig. 5(b) that two targets appear in one detection region, mainly because the targets are close together and the parameter settings are unreasonable. It is evident that, compared with the traditional two-parameter CFAR detection method, the present invention not only guarantees a higher detection rate but also avoids the situation of multiple targets appearing in one detection region. Comparing Fig. 4 and Fig. 5 shows that the SAR image target detection method of the invention has the advantage of high detection probability, and since the detection results are regions of varying size, the method can handle targets of differing sizes.
In summary, the SAR image target detection method of the invention has the advantages of fast algorithm execution and high detection probability; it is a fast and effective detection method, adaptable to targets of different sizes, and has good application prospects.

Claims (8)

1. A DP-CFAR detection method based on region extraction, comprising the following steps:
(1) extracting a positive sample set P and a negative sample set N from a labelled training set Tr;
(2) down-sampling each sample of the positive and negative sample sets to a fixed size of 8 × 8, extracting a normalized gradient feature g′ from each down-sampled sample, forming the normalized gradient feature set G of the sample images from the normalized gradient feature maps of all down-sampled samples, and training a linear SVM classifier to obtain an 8 × 8 template W;
(3) building the active size set AS:
(3a) initializing 36 different image sizes to form the set S = {(W1 × H1), ..., (Wl × Hl), ..., (W36 × H36)}, where Wl and Hl are the width and height of the l-th image size, 1 ≤ l ≤ 36 and l is an integer; the sizes are powers of 2 whose exponents increase from minT = 3 to maxT = 8, i.e. Wl, Hl ∈ {8, 16, 32, 64, 128, 256};
(3b) obtaining, from the positive sample set P of step (1) and the size set S initialized in step (3a), the active size set AS = {(W_As1 × H_As1), ..., (W_Asi × H_Asi), ..., (W_Asns × H_Asns)}, where As_i, 1 ≤ As_i ≤ 36, is the index of the corresponding element of the set S, 1 ≤ i ≤ ns, i is an integer, and ns is the number of active sizes;
(4) extracting the regions of the test image J under the active size set:
(4a) down-sampling the test image J according to each size of the active size set AS, obtaining a down-sampled image at each size, and extracting the normalized gradient feature map of each down-sampled image, forming the normalized gradient feature set {F1, ..., Fi, ..., Fns} of the test image, where Fi is the normalized gradient feature map at the i-th active size;
(4b) sliding the template W obtained in step (2) over each feature map of {F1, ..., Fi, ..., Fns} to obtain the score maps {s1, ..., si, ..., sns}; for each score map, selecting K local maxima with non-maximum suppression NMS and extracting K regions on the test image J according to the positions of the local maxima, forming a region set Ri; performing the same operation for all ns sizes, finally obtaining ns × K regions that form the region set R;
(5) for each region of the set R obtained in step (4b), treating the region as a whole and representing it by the mean M of its 80% strongest pixel values; estimating the background clutter mean and standard deviation from the pixel values outside the region within a hollow frame 3 pixels wide, and deriving the local detection threshold Th of the region from the preset false-alarm probability Pr; the regions whose mean M exceeds the local detection threshold Th are selected as candidates, yielding the candidate region set R′;
(6) removing the many overlapping regions of the candidate region set R′ obtained in step (5) with non-maximum suppression NMS; the remaining regions form the region set Rd′, which is the detection result.
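The core of step (5) is a two-parameter (mean/standard-deviation) CFAR test applied per extracted region. The following is an illustrative sketch, not part of the claims: the Gaussian mapping from the false-alarm probability Pr to a threshold factor and the exact handling of the hollow clutter frame are assumptions.

```python
import numpy as np
from statistics import NormalDist

def cfar_region_test(image, region, pfa=1e-3, strong_frac=0.8, ring=3):
    """Two-parameter CFAR test on one extracted region (step (5) sketch).

    `region` = (l, t, r, b) in pixel coordinates; the Gaussian model that
    maps the false-alarm probability to a threshold factor is an assumption.
    """
    l, t, r, b = region
    pix = np.sort(image[t:b, l:r].ravel())
    k = max(1, int(round(strong_frac * pix.size)))
    M = pix[-k:].mean()                      # mean of the 80% strongest pixels

    # clutter statistics from a hollow frame `ring` pixels wide around the region
    T, B = max(t - ring, 0), min(b + ring, image.shape[0])
    L, R = max(l - ring, 0), min(r + ring, image.shape[1])
    frame = image[T:B, L:R]
    mask = np.ones(frame.shape, dtype=bool)
    mask[t - T:t - T + (b - t), l - L:l - L + (r - l)] = False   # hollow out the region
    clutter = frame[mask]
    mu, sigma = clutter.mean(), clutter.std()

    # under a Gaussian clutter model, Pr{x > Th} = pfa  =>  Th = mu + z(1 - pfa) * sigma
    Th = mu + NormalDist().inv_cdf(1 - pfa) * sigma
    return M > Th, M, Th
```

A region whose strong-pixel mean M exceeds its locally estimated threshold Th is kept as a candidate; everything else is rejected as clutter.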
2. The method according to claim 1, wherein the positive and negative sample sets in step (1) are extracted from the training set as follows:
(1a) defining the training set:
let the training set Tr = {(I1, B1), ..., (Ij, Bj), ..., (Im, Bm)}, where m is the number of training images, Ij is the j-th training image, Bj = {b1^j, ..., bk^j, ..., btj^j} is the set of labelled target frames in the j-th training image, and tj is the number of targets in the j-th training image; bk^j = (l, t, r, b) is the target frame of the k-th labelled target in the j-th training image, where l is the abscissa of the top-left point of the target frame, t the ordinate of the top-left point, r the abscissa of the bottom-right point, and b the ordinate of the bottom-right point;
(1b) extracting positive samples from training image Ij:
(1b.1) for the k-th target bk^j = (l, t, r, b) of Bj, extracting a group of regions Rbk^j = {rb1, ..., rbn, ..., rbnb}, where nb is the number of regions and rbn = (l′, t′, r′, b′) is the n-th region, l′ being the abscissa of the top-left point of the region, t′ the ordinate of the top-left point, r′ the abscissa of the bottom-right point, and b′ the ordinate of the bottom-right point;
(1b.2) extracting from the region set Rbk^j = {rb1, ..., rbn, ..., rbnb} the positive sample region Pbk^j at target bk^j = (l, t, r, b);
(1b.3) applying (1b.1) and (1b.2) to all targets of Bj = {b1^j, ..., bk^j, ..., btj^j}, forming the positive sample set Xj = Pb1^j ∪ ... ∪ Pbk^j ∪ ... ∪ Pbtj^j extracted from training image Ij;
(1c) extracting the negative sample set from training image Ij:
(1c.1) generating four random numbers to form a region Nr1 = (x1′, y1′, x2′, y2′), where x1′ is the abscissa of the top-left point of the region, y1′ the ordinate of the top-left point, x2′ the abscissa of the bottom-right point, and y2′ the ordinate of the bottom-right point;
(1c.2) computing the coverage rate ovl between region Nr1 = (x1′, y1′, x2′, y2′) and all targets Bj = {b1^j, ..., bk^j, ..., btj^j} of training image Ij; if the coverage rate ovl with every target is below 0.5, region Nr1 is taken as a negative sample;
(1c.3) after 50 iterations of (1c.1) and (1c.2), the obtained negative samples form the negative sample set Yj of training image Ij;
(1d) repeating steps (1b) and (1c) for the other training images of Tr = {(I1, B1), ..., (Ij, Bj), ..., (Im, Bm)}, obtaining the positive sample set P = X1 ∪ ... ∪ Xj ∪ ... ∪ Xm and the negative sample set N = Y1 ∪ ... ∪ Yj ∪ ... ∪ Ym.
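The negative-sampling loop of step (1c) can be sketched as follows. This is illustrative only: the patent gives the coverage-rate formula as an image, so intersection-over-union is an assumed stand-in, and the random box generator is likewise an assumption.

```python
import random

def overlap_ratio(a, b):
    """Coverage rate of two boxes (l, t, r, b).  Intersection-over-union is an
    assumption here; the patent's exact formula appears only as an image."""
    il, it = max(a[0], b[0]), max(a[1], b[1])
    ir, ib = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ir - il) * max(0, ib - it)
    area = lambda x: (x[2] - x[0]) * (x[3] - x[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def sample_negatives(targets, img_w, img_h, n_iter=50, max_cov=0.5, seed=0):
    """Step (1c) sketch: draw n_iter random boxes and keep those whose
    coverage with every labelled target stays below max_cov."""
    rng = random.Random(seed)
    negatives = []
    for _ in range(n_iter):
        x1, x2 = sorted(rng.sample(range(img_w), 2))   # distinct abscissas
        y1, y2 = sorted(rng.sample(range(img_h), 2))   # distinct ordinates
        box = (x1, y1, x2, y2)
        if all(overlap_ratio(box, tgt) < max_cov for tgt in targets):
            negatives.append(box)
    return negatives
```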
3. The method according to claim 2, wherein the number of regions nb in step (1b.1) is computed as follows:
nb = (wmax − wmin + 1) × (hmax − hmin + 1)
where
hmin = max(ceil(log10(b − t) / log10(2)) − 0.5, 3), hmax = min(ceil(log10(b − t) / log10(2)) − 0.5, 8);
the function ceil(a) returns the smallest integer not less than a, max(a, d) takes the larger of the two values, and min(a, d) takes the smaller.
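A small worked reading of the counting formula above. The garbled typesetting of the claim leaves the placement of the −0.5 term ambiguous; this sketch reads it as inside the ceil (the power-of-2 exponent nearest to log2 of the box extent) and assumes the width bounds mirror the printed height formula using (r − l) — both are labelled assumptions.

```python
from math import ceil, log2

def exponent_bounds(extent, lo=3, hi=8):
    # assumption: -0.5 read as inside the ceil, i.e. the exponent
    # nearest to log2(extent), then clamped to [lo, hi]
    e = ceil(log2(extent) - 0.5)
    return max(e, lo), min(e, hi)

def region_count(box):
    """nb = (wmax - wmin + 1) x (hmax - hmin + 1) from claim 3; the width
    bounds are assumed to mirror the printed height formula using (r - l)."""
    l, t, r, b = box
    w_min, w_max = exponent_bounds(r - l)
    h_min, h_max = exponent_bounds(b - t)
    return (w_max - w_min + 1) * (h_max - h_min + 1)
```

For a 20 × 20 box the nearest exponent is 4 in both dimensions, so nb = 1 under this reading.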
According to the method for claim 2,4. wherein from regional ensemble Rb in step (1b.2)k jMiddle extraction target bk jPlace is just Sample areas Pbk j, it is zoning set Rbk j={ rb1,...rbn...,rbnbIn each region and target bk j=(l, t, R, b) coverage rate ovl, by coverage rate ovl be more than or equal to 0.5 region, as target bk jExtract just at=(l, t, r, b) places Sample areas Pbk j,
Wherein, n-th of region rbn=(l ', t ', r ', b ') and target bk j=(l, t, r, b) coverage rate ovl calculation formula For:
Size () represent region in number of pixels and.
5. The method according to claim 1, wherein the normalized gradient feature g′ is extracted from each down-sampled sample in step (2) as follows:
(2a) for each down-sampled sample, applying the one-dimensional horizontal gradient template A = [−1, 0, 1] and the vertical gradient template A^T to extract the horizontal gradient map gx = F*A and the vertical gradient map gy = F*A^T of the sample, forming the gradient map g = |gx| + |gy| of the sample, where F denotes the down-sampled sample from the positive and negative sample sets, T denotes transposition, and * denotes convolution;
(2b) normalizing the gradient map g, obtaining the normalized gradient feature map gN = min(g, 255);
(2c) stretching the obtained normalized gradient feature map gN row by row into a column vector, obtaining the normalized gradient feature g′.
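The normalized-gradient feature of claim 5 can be sketched in a few lines. This is an illustrative sketch, not the claimed implementation; note that np.convolve flips the template, but the absolute value makes the sign irrelevant here.

```python
import numpy as np

def normed_gradient(sample):
    """Claim 5 sketch: normalized-gradient feature of an 8x8 sample.
    gx, gy use the template A = [-1, 0, 1] and its transpose; the
    feature map is clipped at 255 and flattened row by row."""
    A = np.array([-1, 0, 1])
    gx = np.abs(np.apply_along_axis(
        lambda row: np.convolve(row, A, mode='same'), 1, sample))
    gy = np.abs(np.apply_along_axis(
        lambda col: np.convolve(col, A, mode='same'), 0, sample))
    g = np.minimum(gx + gy, 255)     # gN = min(g, 255)
    return g.ravel()                 # stretched row by row into a vector
```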
6. The method according to claim 1, wherein the normalized gradient feature maps of the down-sampled images in step (4a) are extracted as follows:
(4a.1) down-sampling the test image J according to the active size set AS = {(W_As1 × H_As1), ..., (W_Asi × H_Asi), ..., (W_Asns × H_Asns)}, obtaining the group of down-sampled images {Js1, ..., Jsi, ..., Jsns}, where Jsi is the image down-sampled at the i-th active size; its size W′_Asi × H′_Asi is computed as
W′_Asi = ceil(8 × W′ / W_Asi), H′_Asi = ceil(8 × H′ / H_Asi)
where W′ and H′ are the width and height of the test image J, and the function ceil(a) returns the smallest integer not less than a;
(4a.2) extracting the normalized gradient feature map of the down-sampled image Jsi as Fi = min(|Jsi*A| + |Jsi*A^T|, 255), where A = [−1, 0, 1], T denotes transposition, and * denotes convolution.
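The resizing rule of step (4a.1) is simple enough to state directly in code: the test image is resized so that an 8 × 8 window on the resized image corresponds to a W × H region on the original. Illustrative sketch only.

```python
from math import ceil

def downsample_sizes(img_w, img_h, active_sizes):
    """Claim 6 sketch: for each active size (W, H), resize the W' x H' test
    image to ceil(8*W'/W) x ceil(8*H'/H), so that sliding an 8x8 template on
    the resized image scans W x H regions of the original."""
    return [(ceil(8 * img_w / W), ceil(8 * img_h / H))
            for (W, H) in active_sizes]
```

For example, a 256 × 256 image tested at active size 64 × 64 is resized to 32 × 32.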
7. The method according to claim 1, wherein the regions of the test image at one active size in step (4b) are extracted as follows:
(4b.1) sliding the template W over each feature map of the normalized gradient feature set {F1, ..., Fi, ..., Fns}, obtaining the score maps {s1, ..., si, ..., sns}, where si = W*Fi and * denotes convolution;
(4b.2) processing the score map si at the i-th size with non-maximum suppression NMS, selecting K local maxima of the score map and forming the location set Msi = {(u1′, v1′), ..., (uj′, vj′), ..., (uK′, vK′)} from their position coordinates, where (uj′, vj′) is the position of the j-th local maximum point in the score map, 1 ≤ j ≤ K and j is an integer;
(4b.3) according to the location set Msi, extracting the K regions of the test image J at the i-th size W_Asi × H_Asi, forming the region set Ri, in which the j-th region at the i-th size is the rectangle (v1, v2, v3, v4), where
v1 is the abscissa of the top-left point of the region,
v2 is the ordinate of the top-left point of the region,
v3 is the abscissa of the bottom-right point of the region,
v4 is the ordinate of the bottom-right point of the region,
and the function ceil(a) returns the smallest integer not less than a.
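The local-maximum selection of step (4b.2) can be sketched as a greedy non-maximum suppression over the score map. Illustrative only: the suppression radius is an assumed parameter, not specified by the claim.

```python
import numpy as np

def top_local_maxima(score, k, radius=1):
    """Claim 7 sketch: pick up to k peaks from a score map by greedy
    non-maximum suppression; `radius` is an assumed suppression window."""
    s = score.astype(float).copy()
    peaks = []
    for _ in range(k):
        r, c = np.unravel_index(np.argmax(s), s.shape)
        if s[r, c] == -np.inf:       # everything already suppressed
            break
        peaks.append((int(r), int(c)))
        # suppress the neighbourhood so the next pick is a distinct peak
        s[max(r - radius, 0):r + radius + 1,
          max(c - radius, 0):c + radius + 1] = -np.inf
    return peaks
```

Each returned coordinate is then scaled back to the original image to delimit one W_Asi × H_Asi region.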
8. The method according to claim 1, wherein step (6) removes the overlapping regions of the candidate region set R′ by non-maximum suppression NMS as follows:
(6a) sorting all regions of the candidate region set R′ by their mean M in descending order, obtaining the sorted candidate region set R1′;
(6b) computing the coverage rate between the first region of the sorted candidate set R1′ and every other region, removing from R1′ the regions whose coverage rate is greater than or equal to 0.01, and putting the first region of R1′ into the region set Rd′;
(6c) updating the sorted candidate set R1′ = R1′ − Rd′ and returning to step (6b) until the sorted candidate set R1′ is empty; the final region set Rd′ is the detection result.
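Steps (6a)-(6c) amount to a standard greedy NMS keyed on the region means. A hedged sketch, assuming intersection-over-union as the coverage measure (the patent's exact formula is given only as an image):

```python
def box_overlap(a, b):
    """Coverage measure between two boxes (l, t, r, b); IoU is an assumption."""
    il, it = max(a[0], b[0]), max(a[1], b[1])
    ir, ib = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ir - il) * max(0, ib - it)
    area = lambda x: (x[2] - x[0]) * (x[3] - x[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms_regions(regions, means, max_overlap=0.01):
    """Claim 8 sketch: walk the regions in descending order of their mean M,
    keeping each region only if it overlaps every already-kept region by
    less than max_overlap."""
    order = sorted(range(len(regions)), key=lambda i: -means[i])
    kept = []
    for i in order:
        if all(box_overlap(regions[i], regions[j]) < max_overlap for j in kept):
            kept.append(i)
    return [regions[i] for i in kept]
```

The very small 0.01 threshold means essentially any overlap with a stronger region eliminates a candidate, so each detected target survives as a single region.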
CN201510641963.4A 2015-09-30 2015-09-30 DP-CFAR detection method based on extracted region Active CN105354824B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510641963.4A CN105354824B (en) 2015-09-30 2015-09-30 DP-CFAR detection method based on extracted region

Publications (2)

Publication Number Publication Date
CN105354824A CN105354824A (en) 2016-02-24
CN105354824B true CN105354824B (en) 2018-03-06

Family

ID=55330791

Country Status (1)

Country Link
CN (1) CN105354824B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106326938B (en) * 2016-09-12 2019-03-08 西安电子科技大学 SAR image target discrimination method based on Weakly supervised study
CN107064899A (en) * 2017-04-18 2017-08-18 西安电子工程研究所 A kind of Biparametric Clutter Map CFAR detection method of adaptive threshold
CN107153180B (en) * 2017-06-15 2020-02-07 中国科学院声学研究所 Target signal detection method and system
CN107942329B (en) * 2017-11-17 2021-04-06 西安电子科技大学 Method for detecting sea surface ship target by maneuvering platform single-channel SAR
CN109588182B (en) * 2018-11-23 2021-03-26 厦门大学 Method for building mangrove landscape landmarks on large-area mudflat
CN109978017B (en) * 2019-03-06 2021-06-01 开易(北京)科技有限公司 Hard sample sampling method and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1831558A * 2006-04-21 2006-09-13 Tsinghua University Single-channel synthetic aperture radar moving-target detection method based on multi-look sub-image pairs
US9057783B2 (en) * 2011-01-18 2015-06-16 The United States Of America As Represented By The Secretary Of The Army Change detection method and system for use in detecting moving targets behind walls, barriers or otherwise visually obscured

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CFAR detection of extended targets against a Weibull clutter background; Cai Chun et al.; Journal of Air Force Radar Academy; 2006-06-30; Vol. 20, No. 2; pp. 111-113 *
A two-parameter CFAR detector against a K-distribution clutter background; Hao Chengpeng et al.; Journal of Electronics & Information Technology; 2007-03-31; Vol. 29, No. 3; pp. 756-759 *
A target detection method fusing SAR and visible-light images; Chen Xin et al.; Signal Processing; 2010-09-30; Vol. 26, No. 9; pp. 1408-1413 *


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant