CN103971127A - Forward-looking radar imaging sea-surface target key point detection and recognition method - Google Patents

Publication number: CN103971127A (application CN201410211693.9A; granted as CN103971127B)
Authority: CN (China)
Prior art keywords: target, image, window, point, region
Legal status: Granted
Application number: CN201410211693.9A; other languages: Chinese (zh); other versions: CN103971127B
Inventors: 杨卫东, 张洁, 邹腊梅, 毕立人, 李静, 桑农, 严航宇, 桂文军
Current and original assignee: Huazhong University of Science and Technology
Application CN201410211693.9A filed by Huazhong University of Science and Technology; published as CN103971127A; granted and published as CN103971127B; legal status: Active

Landscapes

  • Image Analysis (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a forward-looking radar imaging sea-surface target key point detection and recognition method. The method comprises the following steps: the radar echo data are quantized into a gray-scale image; a region of interest is extracted from the gray-scale image and segmented to obtain a target-region segmentation image; the radar echo data and the region of interest are used to obtain a target-region peak-point information matrix; the segmentation image and the peak-point information matrix are fused, the number K of peak points lying in the target region of the fusion result is counted, the K strongest peak points are selected as the effective target peak points and binarized into an effective-peak-point image; the axial feature of the target is extracted to determine the target position; and the energy centre of gravity of the target is computed and taken as the target key point. In accordance with the characteristics of forward-looking radar targets, the method combines several pattern-recognition techniques, suppresses interference factors such as artifacts and side lobes while preserving the inherent features of the target, and improves the recognition accuracy and positioning precision of radar-imaged sea-surface target key points.

Description

A forward-looking radar imaging sea-surface target key point detection and recognition method
Technical field
The invention belongs to the field of target detection and pattern recognition, and specifically relates to a forward-looking radar imaging sea-surface target key point detection and recognition method. The method effectively suppresses interference factors such as artifacts and side lobes while preserving the inherent features of the target, and improves the recognition accuracy and positioning precision of radar-imaged sea-surface target key points.
Background technology
Ship target detection and recognition based on radar imaging is a key technology in many civilian and military fields, and is widely used in surveillance and military applications. In a forward-looking radar echo, a strong echo signal indicates that the detector has found a strong scattering point at that location. Strong scattering points are usually caused by dihedral and corner reflectors on the ship target. These reflecting parts are distributed over the whole ship and present different intensities depending on the target azimuth. However, radar normally transmits a linear frequency-modulated signal, and because the two-dimensional frequency support domain of the imaging system is limited, the impulse response function of synthetic aperture radar (Synthetic Aperture Radar, SAR) is a sinc function in both the range and azimuth directions, so the side-lobe level is very high. Because the processing window of the radar data has finite length and the echo data contain phase errors, side lobes arise, producing multiplicative noise and interfering with nearby scatterers, which strongly degrades image quality. The side lobes cause the target image obtained in forward-looking imaging to contain artifacts, weaker in intensity than the true target points; distinguishing them is the basis of target key point recognition and high-precision positioning. To separate false targets from true targets, one must also consider the forward-looking imaging of the target under different viewpoints, and whether the spacing between false and true targets in the image leads to overlapping interference. It is therefore necessary, before forward-looking radar target feature extraction, detection and recognition, to pre-process the original radar echo data, so as to reduce the influence of noise, improve the signal-to-noise ratio of the image and emphasize the target feature information; the target features are then used to recognize the target, improving the probability of correct key point recognition, and the energy centre of gravity of the target is used to determine the target key point.
In the existing published literature, most forward-looking radar imaging sea-surface target detection and recognition methods operate directly on the original radar echo signal. However, constrained by the SAR imaging mechanism, target slice images are affected by factors such as target attitude, background characteristics and sensor imaging attitude, and show high variability, which easily disturbs the recognition result and causes false and missed judgements. Zhang Hong et al., in "High Resolution SAR Image Target Recognition", proposed a recognition method based on target peak features, and gave an empirically determined range of 20 to 40 for the number of peak points to select. No effective criterion or method is provided for determining the number of peak points, so in engineering applications it can only be chosen from experience. The maximum-entropy segmentation method in the open literature has good stability, but it is easily disturbed by the background; the segmentation result contains false targets, and the obtained target information is inaccurate.
Summary of the invention
Aiming at the problem of target extraction in complex environments, the present invention proposes a forward-looking radar imaging sea-surface target key point detection and recognition method, which specifically comprises:
(1) quantizing the original two-dimensional radar echo data into two-dimensional gray-scale image data;
(2) extracting a region of interest from the gray-scale image obtained in step (1) by a method based on the target's physical size and a confidence measure, obtaining a target-region gray-scale image;
(3) segmenting the target-region gray-scale image by maximum-entropy thresholding, obtaining a target-region segmentation image;
(4) using the two-dimensional radar echo data and the target-region gray-scale image, extracting the peak-point information of the target region from the echo data, obtaining a target-region peak-point information matrix;
(5) fusing the target-region segmentation image with the target-region peak-point information matrix, and counting the number K of peak points in the target region of the fusion result, taken as the number of effective peaks;
(6) sorting the peak points of the peak-point information matrix by magnitude, selecting the first K peak points as the effective target peak points, and binarizing the matrix into a target effective-peak-point image;
(7) extracting the axial feature of the target from the effective-peak-point image, excluding false-alarm points, and determining the target position;
(8) determining the target key point from the target position and the target energy centre of gravity.
Further, step (1) specifically comprises:
(1.1) selecting the floating-point quantization thresholds: let the upper threshold be $L_{max}$ and the lower threshold $L_{min}$, where

$L_{min} = \dfrac{Totalpix \cdot minT}{Totalpix}$

$L_{max} = N \cdot (TLength \cdot Margin)^2 + L_{min}, \quad \text{if } L_{max} < Totalpix$

$L_{max} = Totalpix, \quad \text{if } L_{max} > Totalpix$

N is the maximum number of targets that the background may contain, Totalpix is the number of pixels of the original image, TLength is the length of the target in image pixels, Margin is a margin ensuring that the target can be displayed completely, and minT is the minimum possible floating-point value of the target's two-dimensional echo data;
(1.2) taking the floating-point values $L_{max}$ and $L_{min}$ as the thresholds Level255 and Level0 respectively; for each data point of the original two-dimensional radar echo data, a floating-point value greater than Level255 is assigned gray value 255, a value less than Level0 is assigned gray value 0, and values between $L_{min}$ and $L_{max}$ are linearly interpolated to determine the gray value, with the interpolation formula

$g(x,y) = \begin{cases} 0, & f(x,y) < Level0 \\ \dfrac{f(x,y) - Level0}{Level255 - Level0} \cdot 255, & Level0 \le f(x,y) \le Level255 \\ 255, & f(x,y) > Level255 \end{cases}$

where $f(x,y)$ is the floating-point value of the radar echo data at point $(x,y)$, and $g(x,y)$ is the corresponding gray value after linear interpolation.
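The clip-and-interpolate mapping of step (1.2) can be sketched in a few lines of NumPy; the function name and the example thresholds below are ours, not from the patent:

```python
import numpy as np

def quantize_echo(f, level0, level255):
    """Linearly map floating-point echo data into 8-bit gray levels:
    values below level0 -> 0, above level255 -> 255, and values in
    between linearly interpolated (a sketch of step (1.2))."""
    f = np.asarray(f, dtype=np.float64)
    g = (f - level0) / (level255 - level0) * 255.0
    return np.clip(g, 0, 255).astype(np.uint8)
```

With Level0 = 0 and Level255 = 10, an echo value of 5.0 maps to gray level 127 and out-of-range values saturate at 0 and 255.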
Further, step (2) specifically comprises:
(2.1) first, according to the target size and the imaging resolution, and taking the extent of the target in the normal-course state as a constraint, determining the length and width of the region-of-interest window; for the window centred at a point s, the statistics of the pixel values are the mean $\mu_s$ and the centre of gravity $G_s$:

$\mu_s = \dfrac{1}{n} \sum_{i=1}^{n} g(x_i, y_i)$

$\bar{x} = \sum_{(x,y)\in\Omega} x \cdot g(x,y) \Big/ \sum_{(x,y)\in\Omega} g(x,y)$

$\bar{y} = \sum_{(x,y)\in\Omega} y \cdot g(x,y) \Big/ \sum_{(x,y)\in\Omega} g(x,y)$

$G_s = (\bar{x}, \bar{y})$

where n is the number of pixels in the region-of-interest window, $g(x,y)$ is the gray value of the pixel at $(x,y)$ in the window, and $\Omega$ is the region covered by the window;
(2.2) computing the confidence $\rho_s$ of the window centred at s, and the distance $d(G_s, O_s)$ between the in-window centre of gravity $G_s$ and the window centre coordinate $O_s$, where

$\rho_s = \mu_s / [d(G_s, O_s) + 1]$

and $d(G_s, O_s)$ is the Euclidean distance between $G_s$ and $O_s$, x and y denoting the row and column of the image respectively;
(2.3) determining the region-of-interest window in the gray-scale image, as the target-region gray-scale image, according to the following principle:
if the window centred at s contains only background, the background pixel values differ little and are all low, so $d(G_s, O_s) \approx 0$ and the confidence $\rho_s \approx \mu_s$ is comparable to the mean background pixel value; the window region is then not the region of interest;
if the window centred at s contains partly target and partly background, $\mu_s$ increases, but at the same time $G_s$ shifts towards the region of high pixel values, so $d(G_s, O_s)$ also increases, until the window contains the whole target and is determined as the region of interest;
if the window centred at s contains the whole target, $\mu_s$ reaches its maximum, whose value depends on the target pixel values, while the distance between $G_s$ and $O_s$ decreases; when $d(G_s, O_s) \approx 0$, the confidence $\rho_s$ reaches its maximum and is a local maximum, and the window region is then the region of interest.
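The window confidence of step (2.2) can be sketched as follows; the function name is ours, and the handling of an all-zero window is our assumption (the patent does not specify it):

```python
import numpy as np

def roi_confidence(window):
    """Confidence rho_s = mu_s / (d(G_s, O_s) + 1) for one candidate
    window: mu_s is the mean gray value, G_s the intensity centroid,
    O_s the geometric window centre (sketch of step (2.2))."""
    w = np.asarray(window, dtype=np.float64)
    mu = w.mean()
    total = w.sum()
    ys, xs = np.mgrid[0:w.shape[0], 0:w.shape[1]]
    ox, oy = (w.shape[1] - 1) / 2.0, (w.shape[0] - 1) / 2.0
    if total == 0:
        gx, gy = ox, oy          # empty window: centroid taken at centre
    else:
        gx = (xs * w).sum() / total
        gy = (ys * w).sum() / total
    d = np.hypot(gx - ox, gy - oy)
    return mu / (d + 1.0)
```

A uniform window gives d = 0 and rho_s = mu_s, while a window whose mass sits in one corner is penalized, matching the three cases of step (2.3).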
Further, step (3) specifically comprises:
(3.1) determining the threshold Th for segmenting the target-region gray-scale image; Th maximizes the sum of the target entropy $Entropy_O$ and the background entropy $Entropy_B$, where

background entropy $Entropy_B = -\sum_{i=0}^{Th} \dfrac{p(i)}{P_t} \ln \dfrac{p(i)}{P_t}$

target entropy $Entropy_O = -\sum_{i=Th+1}^{m} \dfrac{p(i)}{H_t} \ln \dfrac{p(i)}{H_t}$

p(i) is the probability of gray level i, m is the maximum gray level of the image, and $P_t$ and $H_t$ are the sums of the gray-level probabilities of the background and of the target respectively;
(3.2) segmenting the target-region gray-scale image with the threshold Th to obtain the target-region segmentation image O(x, y), specifically:
if a gray level is greater than Th, it is regarded as target; if it is less than Th, it is regarded as background; the target-region segmentation image O(x, y) is thus obtained.
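A minimal sketch of the maximum-entropy (Kapur-style) threshold search of step (3.1); the function name is ours, and an exhaustive scan over all thresholds is assumed:

```python
import numpy as np

def max_entropy_threshold(image, levels=256):
    """Pick Th maximizing background entropy + target entropy of the
    gray histogram (sketch of step (3.1)); P_t/H_t name the background
    and target probability masses as in the patent."""
    img = np.asarray(image).ravel()
    hist = np.bincount(img, minlength=levels).astype(np.float64)
    p = hist / hist.sum()
    best_th, best_h = 0, -np.inf
    for th in range(levels - 1):
        pt = p[: th + 1].sum()      # background probability mass P_t
        ht = p[th + 1:].sum()       # target probability mass H_t
        if pt <= 0 or ht <= 0:
            continue
        pb = p[: th + 1][p[: th + 1] > 0] / pt
        po = p[th + 1:][p[th + 1:] > 0] / ht
        h = -(pb * np.log(pb)).sum() - (po * np.log(po)).sum()
        if h > best_h:
            best_h, best_th = h, th
    return best_th
```

On a bimodal image the chosen threshold falls between the two modes, so the brighter mode is labelled target in step (3.2).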
Further, step (4) is specifically:
according to the two-dimensional radar echo data and the target-region gray-scale image, extracting the peak-point information of the target region from the echo data by first-order differencing, obtaining the target-region peak-point information matrix G(x, y), where

$G(x,y) = \begin{cases} 255, & f(x,y) - f(x,y+1) > 0 \ \text{and} \ f(x,y) - f(x,y-1) > 0 \\ 0, & \text{otherwise} \end{cases}$

(x, y) denotes a point of the two-dimensional echo signal, x the range direction and y the azimuth direction, and f(x, y) denotes the floating-point value of the two-dimensional radar echo.
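The first-order-difference peak detector of step (4) can be vectorized over the azimuth direction as below; leaving the border columns at 0 is our assumption, since they lack one neighbour:

```python
import numpy as np

def azimuth_peaks(f):
    """Azimuth-direction local maxima by first-order differencing
    (sketch of step (4)): G(x,y) = 255 where f(x,y) exceeds both
    azimuth neighbours f(x,y-1) and f(x,y+1), else 0."""
    f = np.asarray(f, dtype=np.float64)
    G = np.zeros(f.shape, dtype=np.uint8)
    mid = (f[:, 1:-1] > f[:, :-2]) & (f[:, 1:-1] > f[:, 2:])
    G[:, 1:-1][mid] = 255
    return G
```

For the azimuth profile 1, 3, 2, 5, 4 the local maxima 3 and 5 are marked 255.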
Further, step (5) is specifically:
fusing the target-region segmentation image O(x, y) with the target-region peak-point information matrix G(x, y) to obtain the fused image R(x, y):

R(x,y) = G(x,y) · O(x,y) / 255

The fused image R(x, y) is a 0/255 binary image; the pixels with gray level 255 are the peak points lying inside the target region of the segmentation image, and their number K is counted:

$K = \mathrm{Count}\{(x_i, y_i) \mid (x_i, y_i) \in R(x,y),\ p(x_i, y_i) = 255\}$

where $p(x_i, y_i)$ is the gray value at $(x_i, y_i)$ in the fused image R(x, y).
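The fusion and count of step (5) reduce to an elementwise product; the function name is ours:

```python
import numpy as np

def fuse_and_count(G, O):
    """Fuse the 0/255 peak matrix G with the 0/255 segmentation image
    O as R = G*O/255 and count K, the number of 255 pixels of R
    (sketch of step (5))."""
    G = np.asarray(G, dtype=np.int64)
    O = np.asarray(O, dtype=np.int64)
    R = (G * O) // 255
    K = int((R == 255).sum())
    return R.astype(np.uint8), K
```

Only pixels that are 255 in both inputs survive in R, so K counts the peak points that fall inside the segmented target region.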
Further, step (6) specifically comprises:
(6.1) sorting the peak points of the target-region peak-point information matrix by magnitude, and selecting the K largest peak points as the effective peak-point feature information of the target;
(6.2) binarizing this feature information: the pixels at these K extreme points are assigned gray level 255, and all remaining pixels 0, yielding the target effective-peak-point image.
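The top-K selection and binarization of step (6) can be sketched as follows; the amplitude matrix stands in for the patent's peak-point information matrix, and the function name is ours:

```python
import numpy as np

def top_k_peaks(peak_values, k):
    """Keep the K strongest peaks (sketch of step (6)): the K largest
    amplitudes are set to gray level 255, all others to 0."""
    v = np.asarray(peak_values, dtype=np.float64)
    out = np.zeros(v.shape, dtype=np.uint8)
    if k <= 0:
        return out
    idx = np.argsort(v.ravel())[::-1][:k]   # flat indices of the K largest
    out.ravel()[idx] = 255
    return out
```

With K taken from step (5), this yields the binary effective-peak-point image used by the axial-feature step.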
Further, step (7) specifically comprises:
(7.1) fitting a straight line to the k candidate target points of the effective-peak-point image by least squares, i.e. finding the line that minimizes the residual sum of squares; this is the axial line of the target, and it coincides with the diagonal of the target;
(7.2) taking the target length, width and diagonal size as constraints, using a rectangular window of the target's length and width, sliding the window with its diagonal along the axial line, and finding the window position on the axial line that contains the maximum number of peak points; the corresponding window position is the target position;
(7.3) the peak points outside the window are marked as false-alarm points, and the peak-point information inside the window determines the position of the target.
Further, in step (7.1), the axial line of the target is calculated as follows:
for the set V of target pixel points, the parameters k and b of the line y = kx + b are estimated so that the residual sum of squares is minimized, where $(x_i, y_i) \in V$.
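The least-squares line fit of step (7.1) is a standard degree-1 polynomial fit; the function name is ours:

```python
import numpy as np

def fit_axis(points):
    """Least-squares fit of the target axial line y = k*x + b through
    the candidate peak points (sketch of step (7.1)); returns (k, b)."""
    pts = np.asarray(points, dtype=np.float64)
    k, b = np.polyfit(pts[:, 0], pts[:, 1], 1)   # minimizes residual sum of squares
    return float(k), float(b)
```

The fitted line then guides the sliding rectangular window of step (7.2); points far from the line are candidates for false-alarm rejection.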
Further, step (8) is specifically:
computing the gray-level centre of gravity of the target region and taking it as the energy centre of gravity of the target region, i.e. the key point of the target, where the centre of gravity is given by

$\bar{X} = \sum_{(x,y)\in T} x \cdot f(x,y) \Big/ \sum_{(x,y)\in T} f(x,y)$

$\bar{Y} = \sum_{(x,y)\in T} y \cdot f(x,y) \Big/ \sum_{(x,y)\in T} f(x,y)$

f(x, y) is the floating-point value of the two-dimensional radar echo data at (x, y), and T is the target region.
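The energy centre of gravity of step (8) can be sketched with a boolean mask for the target region T; the function and argument names are ours:

```python
import numpy as np

def energy_centroid(f, mask):
    """Energy centre of gravity of the target region T (sketch of
    step (8)): intensity-weighted mean coordinates over the masked
    echo data. Returns (X_bar, Y_bar) as (row, column)."""
    f = np.asarray(f, dtype=np.float64)
    mask = np.asarray(mask, dtype=bool)
    xs, ys = np.nonzero(mask)      # row (range) and column (azimuth) indices in T
    w = f[xs, ys]
    total = w.sum()
    return float((xs * w).sum() / total), float((ys * w).sum() / total)
```

The resulting coordinate pair is the target key point reported by the method.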
In general, compared with the prior art, the technical scheme conceived by the present invention has the following beneficial effects:
(1) while the peak-point information of the radar echo data is used to reject false-target information, OTSU image segmentation is used to constrain the number of effective peaks, realizing an adaptive choice of the peak-point number K in engineering applications, so that the result is standardized and does not vary between operators;
(2) in the image quantization stage, information such as the target and background sizes is used to establish a criterion for the quantization thresholds, so that the two-dimensional radar echo data can be displayed and processed in image form, providing the data support for image-based processing of the echo data;
(3) according to the imaging characteristics of the radar image in the range and azimuth directions, the azimuth maximum-point information is used to determine the target region, so that the extraction of target maxima is realized without affecting performance; the local-maximum extraction method relatively reduces the amount of computation;
(4) the axial feature of the target is used to locate the target and exclude false-alarm points, making the target position more accurate;
(5) a target region-of-interest extraction method based on the target's physical size and a confidence measure is proposed; the extracted region of interest is more accurate, and the computational load of subsequent processing is reduced;
(6) the proposed forward-looking radar imaging sea-surface target key point detection and recognition method provides a processing flow for the forward-looking radar echo signal, and experimental results verify the effectiveness of this flow.
In summary, according to the characteristics of forward-looking radar targets, the present invention combines several pattern-recognition methods, suppresses interference factors such as artifacts and side lobes while preserving the inherent features of the target, and improves the recognition accuracy and positioning precision of radar-imaged sea-surface target key points. Target detection and recognition by this method achieves a high recognition rate.
Brief description of the drawings
Fig. 1 is the overall flowchart of the forward-looking radar imaging sea-surface target key point detection and recognition method of the present invention;
Fig. 2 is the image after quantization of the original two-dimensional radar echo data in one embodiment of the invention;
Fig. 3 is the peak feature map of the original two-dimensional radar echo data;
Fig. 4 shows the segmentation result of the quantized image of the echo data of Fig. 2;
Fig. 5 is the target effective-peak-point information map of the quantized image of Fig. 2;
Fig. 6 shows the result of the target axial feature constraint on the quantized image of Fig. 2;
Fig. 7 shows the target key point positioning result on the quantized image of Fig. 2.
Embodiments
In order to make the objects, technical scheme and advantages of the present invention clearer, the invention is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein only explain the invention and are not intended to limit it. In addition, the technical features involved in the embodiments described below may be combined with each other as long as they do not conflict.
The object of the invention is to suppress interference factors such as artifacts and side lobes while preserving the inherent features of the target, and to improve the recognition accuracy and positioning precision of radar-imaged sea-surface target key points. In a forward-looking radar echo, a strong echo signal indicates that the detector has found a strong scattering point at that location. Strong scattering points are usually caused by dihedral and corner reflectors on the ship target. These reflecting parts are distributed over the whole ship and present different intensities depending on the target azimuth. However, radar normally transmits a linear frequency modulation (Linear Frequency Modulation, LFM) signal, and because the two-dimensional frequency support domain of the imaging system is limited, the impulse response function of SAR is a sinc function in both range and azimuth, so the side-lobe level is very high. The side lobes may produce multiplicative noise and interfere with nearby scatterers, strongly degrading image quality. The side lobes cause the target image obtained in forward-looking imaging to contain artifacts, weaker in intensity than the true target points; this is also the basis of target recognition and high-precision positioning. To distinguish false targets from true targets, the forward-looking imaging of the target under different viewpoints must also be considered, and whether the spacing between false and true targets leads to overlapping interference in the image. The present invention makes hypothesis estimates based on the target and sea-background characteristics in the radar image. First, the original radar echo data are fused with the processed, segmented image to obtain the peak feature information of the target region; then the axial feature knowledge of the target is used to exclude interference and recognize the target; finally, the target key point is located by the energy centre of gravity of the target.
The present invention first performs target-peak-based feature extraction to obtain the peak feature map of the target; the original echo data, quantized into an image and after resolution adjustment, are segmented by maximum entropy to obtain the segmentation image; the peak feature map is fused with the segmentation image to obtain the peak-point number threshold, yielding the effective target peak information; the axial feature and the energy centre of gravity of the target are then used to recognize the target and locate the key point. Experimental results show that the algorithm reduces various interference factors while preserving the inherent features of the target, and improves the recognition accuracy and positioning precision of the target key point.
The invention provides a forward-looking radar imaging sea-surface target key point detection and recognition method, whose overall flow is shown in Fig. 1. The detailed process is:
(1) Quantizing the original two-dimensional radar echo data into two-dimensional gray-scale image data
The original two-dimensional radar echo data are first quantized, i.e. transformed into two-dimensional gray-scale image data with gray values in the range 0 to 255, which can be processed by digital image processing. The method of mapping the original echo data to a 256-level gray image is: sort the floating-point values of the echo data in ascending order; select the floating-point quantization thresholds according to the following principle, with upper threshold $L_{max}$ and lower threshold $L_{min}$:

$L_{min} = \dfrac{Totalpix \cdot minT}{Totalpix}$

$L_{max} = N \cdot (TLength \cdot Margin)^2 + L_{min}, \quad \text{if } L_{max} < Totalpix$

$L_{max} = Totalpix, \quad \text{if } L_{max} > Totalpix$

where N is the maximum number of targets that the background may contain, Totalpix is the number of pixels of the original image, TLength is the length of the target in image pixels, Margin is a margin ensuring the complete display of the target energy, and minT is the minimum possible floating-point value of the target's two-dimensional echo data.
The floating-point values $L_{max}$ and $L_{min}$ are taken as the thresholds Level255 and Level0 respectively; for each pixel, a floating-point value greater than Level255 is assigned gray value 255, one less than Level0 is assigned gray value 0, and values between $L_{min}$ and $L_{max}$ are linearly interpolated to determine the gray value:

$g(x,y) = \begin{cases} 0, & f(x,y) < Level0 \\ \dfrac{f(x,y) - Level0}{Level255 - Level0} \cdot 255, & Level0 \le f(x,y) \le Level255 \\ 255, & f(x,y) > Level255 \end{cases}$

where f(x, y) is the floating-point value of the radar echo data at pixel (x, y), and g(x, y) is the corresponding gray value after linear interpolation; the quantized image is shown in Fig. 2.
(2) Extracting a region of interest from the gray-scale image of step (1), based on the target's physical size and a confidence measure, obtaining the target-region gray-scale image
First, according to the target size and the imaging resolution, and taking the extent of the target in the normal-course state as a constraint, a suitable length and width of the region-of-interest window are determined. For the window centred at a point s, the statistics of the pixel values are the mean $\mu_s$ and the centre of gravity $G_s$:

$\mu_s = \dfrac{1}{n} \sum_{i=1}^{n} g(x_i, y_i)$

$\bar{x} = \sum_{(x,y)\in\Omega} x \cdot g(x,y) \Big/ \sum_{(x,y)\in\Omega} g(x,y)$

$\bar{y} = \sum_{(x,y)\in\Omega} y \cdot g(x,y) \Big/ \sum_{(x,y)\in\Omega} g(x,y)$

$G_s = (\bar{x}, \bar{y})$

where n is the number of pixels in the region-of-interest window, g(x, y) is the gray value of the pixel at (x, y), and $\Omega$ is the region covered by the window.
$\mu_s$ serves as one criterion for judging whether a window is the region of interest: a higher $\mu_s$ means higher pixel values in the local window, which is more likely to be target. At the same time, since the target should be locked at the centre of the region of interest, the distance $d(G_s, O_s)$ between the in-window centre of gravity $G_s$ and the window centre coordinate $O_s$ serves as another criterion. The region-of-interest confidence centred at s is defined as:

$\rho_s = \mu_s / [d(G_s, O_s) + 1]$

The distance $d(G_s, O_s)$ is chosen as the Euclidean distance between $G_s$ and $O_s$, x and y denoting the row and column of the image respectively:

$d(G_s, O_s) = \sqrt{(x_G - x_O)^2 + (y_G - y_O)^2}$

Three kinds of regions in the two-dimensional gray-scale image are treated respectively:
if the window centred at s contains only background, the background pixel values differ little and are all low, so $d(G_s, O_s) \approx 0$ and the confidence $\rho_s \approx \mu_s$ is comparable to the mean background pixel value; the window region is then not the region of interest;
if the window centred at s contains partly target and partly background, $\mu_s$ increases, but at the same time $G_s$ shifts towards the region of high pixel values, so $d(G_s, O_s)$ also increases, until the window contains the whole target and is determined as the region of interest;
if the window centred at s contains the whole target, $\mu_s$ reaches its maximum, whose value depends on the target pixel values, while the distance between $G_s$ and $O_s$ decreases; when $d(G_s, O_s) \approx 0$, the confidence $\rho_s$ reaches its maximum and is a local maximum, and the window region is then the region of interest.
The region of interest thus obtained is called the target-region gray-scale image.
(3) Segmenting the target-region gray-scale image by maximum entropy, obtaining the target-region segmentation image
If the threshold Th divides the image into a target part and a background part, the entropies of the target and the background are defined as:

background entropy $Entropy_B = -\sum_{i=0}^{Th} \dfrac{p(i)}{P_t} \ln \dfrac{p(i)}{P_t}$

target entropy $Entropy_O = -\sum_{i=Th+1}^{m} \dfrac{p(i)}{H_t} \ln \dfrac{p(i)}{H_t}$

where p(i) is the probability of gray level i, m is the maximum gray level of the image, and $P_t$ and $H_t$ are the sums of the gray-level probabilities of the background and of the target respectively.
The threshold is the value maximizing the sum of the target entropy and the background entropy:

$Th = \arg\max_{Th} (Entropy_B + Entropy_O)$

If a gray level is greater than Th, it is regarded as target; if less than Th, it is regarded as background. The target-region segmentation image O(x, y) is thus obtained; the segmented image is shown in Fig. 4.
(4) utilize radar two dimension echo data and target area gray level image, the peak point information in the radar two dimension echo data of extraction target area, obtains target area peak point information matrix;
Utilize radar two dimension echo data and target area gray level image, the peak point information in the radar two dimension echo data of extraction target area, obtains target area peak point information matrix;
In radar two dimension echo data, scattering center can define 2 class extremal features points: two-dimentional extreme vertex and one dimension extreme point.Definition extreme point:
p i = 1 , if min ( a i - a j ) > 0 &ForAll; a j &Element; U ( a i ) 0 , else
Wherein U (a i) represent with a icentered by local neighborhood (do not comprise a ipoint).Due to extreme point, in orientation, upwards dynamic range is very large, and therefore the present invention considers orientation one dimension extreme point upwards.
Adopt first order difference method to extract to extreme point defined above.To the point (x, y) in two-dimentional echoed signal, x represent distance to, y represent orientation to, note radar two dimension echo floating point values is f (x, y), definition:
G ( x , y ) = 255 , if ( f ( x , y ) - f ( x , y + 1 ) > 0 , f ( x , y ) - f ( x , y - 1 ) > 0 ) 0 , else
The matrix G(x, y) computed by the above formula is the target region peak point information matrix, as shown in Figure 3.
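The azimuth first-difference test above can be vectorized as follows; a minimal sketch, assuming rows index range x and columns index azimuth y, with border columns left at 0 since they lack one neighbour.

```python
import numpy as np

def azimuth_peaks(f):
    """Peak-point matrix G(x, y) from the 2-D echo float matrix f.

    A point is a 1-D azimuth extreme point when it is strictly greater
    than both of its azimuth neighbours (first-order difference test).
    """
    G = np.zeros(f.shape, dtype=np.uint8)
    left = f[:, 1:-1] - f[:, :-2]    # f(x, y) - f(x, y-1)
    right = f[:, 1:-1] - f[:, 2:]    # f(x, y) - f(x, y+1)
    G[:, 1:-1] = np.where((left > 0) & (right > 0), 255, 0)
    return G
```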
(5) Fuse the target region segmentation image with the target region peak point information matrix, and count the number K of peak points inside the target region of the segmentation image; K serves as the effective peak count.
The target region segmentation image O(x, y) and the target region peak point information matrix G(x, y) are fused to obtain the fused image R(x, y):
R(x, y) = G(x, y) · O(x, y) / 255
The fused image R(x, y) is a 0/255 binary image; its pixels with gray level 255 are the peak points that lie inside the target region of the segmentation image. Their number K is counted as

K = Count{ (x_i, y_i) | (x_i, y_i) ∈ R(x, y), p(x_i, y_i) = 255 }

where p(x_i, y_i) is the gray value at (x_i, y_i) in the fused image R(x, y).
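The fusion and counting step can be sketched as below; an illustrative NumPy version under the assumption that both inputs are 0/255 uint8 images of equal shape.

```python
import numpy as np

def fuse_and_count(G, O):
    """Fuse peak matrix G(x, y) with segmentation O(x, y), both 0/255 images.

    R = G * O / 255 keeps only the peaks inside the segmented target region;
    K counts the surviving 255-valued peak pixels (step 5).
    """
    # widen to uint16 so 255 * 255 does not overflow before the division
    R = (G.astype(np.uint16) * O.astype(np.uint16) // 255).astype(np.uint8)
    K = int(np.count_nonzero(R == 255))
    return R, K
```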
(6) Sort the peak points in the target region peak point information matrix by magnitude, select the first K peak points as the target effective peak points, and binarize to obtain the target effective peak point image.
The peak points in the target region peak point information matrix are sorted by intensity, and the K largest peak points are selected as the target effective peak point feature information. This feature information is binarized: the pixels at these K extreme points are assigned gray level 255 and all remaining pixels 0, yielding the target effective peak point image shown in Figure 5.
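The top-K selection can be sketched as follows; a minimal illustration, assuming the peak amplitudes are read from the echo matrix f at the positions flagged in G.

```python
import numpy as np

def top_k_peaks(f, G, k):
    """Binary image keeping only the K strongest peaks (step 6).

    f: 2-D echo float matrix; G: 0/255 peak matrix. The amplitudes f(x, y)
    at G == 255 are sorted and only the k largest are kept at gray 255.
    """
    out = np.zeros(f.shape, dtype=np.uint8)
    ys, xs = np.nonzero(G == 255)
    if len(ys) == 0 or k <= 0:
        return out
    order = np.argsort(f[ys, xs])[::-1][:k]   # indices of the k strongest peaks
    out[ys[order], xs[order]] = 255
    return out
```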
(7) Extract the axial feature of the target from the target effective peak point image, exclude interference from false-alarm points, and determine the target position, as shown in Figure 6.
The first K candidate target points in the target effective peak point image may contain noise or false-alarm points, so the candidates are screened by an axial projection. The axial projection exploits the target's axial length and constrains the spatial relationship among the candidate target points.
This method uses least-squares data fitting, which finds the best-matching function by minimizing the sum of squared errors. For the target pixel set V, the parameters k and b of the line y = kx + b are estimated so that the residual sum of squares is minimized, where (x_i, y_i) ∈ V.
The axial feature constraint algorithm is as follows:
1) Fit a line to the K candidate target points in the target effective peak point image by least squares, finding the line that minimizes the residual sum of squares; this is the target's axial line, which coincides with the target's diagonal;
2) Using the target's length, width, and diagonal dimension as constraints, take a rectangle of the target's length and width as a window; slide the window with its diagonal along the target's axial line, and find the position on the axial line at which the window contains the largest number of peak points; the position of that window is the target position;
3) Peak points outside the window are marked as false-alarm points; the peak points inside the window determine the position of the target.
(8) Determine the target key point from the target position and the target energy centroid.
From the position and size of the target, the peak point information within the target position is obtained, and the energy centroid of these peak points is computed as the key point of the target, as shown in Figure 7.
According to the characteristics of radar imaging, the radar echo intensity reflects the strong scattering points; the present invention therefore defines the gray-level centroid of the target region as the target key point. The gray-centroid method treats the gray value of each pixel in the region as the "energy" of that point, and the centroid of the region is given by:
X̄ = Σ_{(x,y)∈T} x · f(x, y) / Σ_{(x,y)∈T} f(x, y)

Ȳ = Σ_{(x,y)∈T} y · f(x, y) / Σ_{(x,y)∈T} f(x, y)
where f(x, y) is the floating-point value at (x, y) in the 2-D radar echo data and T is the target region. The gray-level centroid of the target region is thus its energy centroid.
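The centroid formulas above translate directly to NumPy; a minimal sketch, assuming the target region T is given as a boolean mask over the echo matrix.

```python
import numpy as np

def energy_centroid(f, mask):
    """Gray/energy centroid of the target region T (step 8): the key point.

    f: 2-D echo float matrix; mask: boolean array marking region T.
    Returns (X, Y) with X along columns and Y along rows.
    """
    ys, xs = np.nonzero(mask)
    w = f[ys, xs]                      # per-pixel "energy" weights f(x, y)
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()
```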
Those skilled in the art will readily understand that the foregoing is only a preferred embodiment of the present invention and is not intended to limit it; any modifications, equivalent replacements, and improvements made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (10)

1. A forward-looking radar imaging sea-surface target key point detection and recognition method, characterized in that the method comprises:
(1) quantizing the original 2-D radar echo data into 2-D gray-scale image data;
(2) extracting a region of interest from the 2-D gray-scale image obtained in step (1) by a method based on the target's physical size and a confidence measure, obtaining the target region gray-scale image;
(3) segmenting the target region gray-scale image by maximum entropy to obtain the target region segmentation image;
(4) using the 2-D radar echo data and the target region gray-scale image, extracting the peak point information in the 2-D echo data of the target region to obtain the target region peak point information matrix;
(5) fusing the target region segmentation image with the target region peak point information matrix, and counting the number K of peak points inside the target region of the fusion result as the effective peak count;
(6) sorting the peak points in the target region peak point information matrix by magnitude, selecting the first K peak points as target effective peak points, and binarizing the target region peak point information matrix into the target effective peak point image;
(7) extracting the axial feature of the target from the target effective peak point image, excluding false-alarm interference, and determining the target position;
(8) determining the target key point from the target position and the target energy centroid.
2. the method for claim 1, is characterized in that, described step (1) specifically comprises:
(1.1) selecting the floating-point quantization thresholds: let the upper threshold be L_max and the lower threshold L_min, where:

L_min = (Totalpix · minT) / Totalpix

L_max = N · (TLength · Margin)² + L_min, if L_max < Totalpix

L_max = Totalpix, if L_max > Totalpix

N is the maximum number of targets the background may contain, Totalpix is the number of pixels of the original image, TLength is the target length in image pixels, Margin is a margin ensuring the target can be displayed completely, and minT is the minimum possible floating-point value of the target 2-D echo data;
(1.2) taking the floating-point values L_max and L_min as the thresholds Level255 and Level0, respectively; for each data point of the original 2-D radar echo data, a floating-point value greater than Level255 is assigned gray value 255 and one less than Level0 is assigned gray value 0; values between L_min and L_max are linearly interpolated to determine their gray value, with the interpolation formula:
g(x, y) = 0, if f(x, y) < Level0
g(x, y) = ((f(x, y) − Level0) / (Level255 − Level0)) · 255, if Level0 ≤ f(x, y) ≤ Level255
g(x, y) = 255, if f(x, y) > Level255
where f(x, y) is the floating-point value of the 2-D radar echo data at point (x, y), and g(x, y) is the gray value of that point after linear interpolation.
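The quantization rule of claim 2 can be sketched as follows; an illustrative NumPy version in which the rounding mode is our assumption, since the claim does not specify one.

```python
import numpy as np

def quantize(f, level0, level255):
    """Quantize the float echo matrix f to an 8-bit image (claim 2 formula).

    Values below Level0 map to 0, above Level255 to 255, and in between
    linearly via (f - Level0) / (Level255 - Level0) * 255.
    """
    g = (f - level0) / (level255 - level0) * 255.0
    # clip implements both saturation branches of the piecewise formula
    return np.clip(np.round(g), 0, 255).astype(np.uint8)
```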
3. The method of claim 1 or 2, characterized in that step (2) specifically comprises:
(2.1) first, according to the target size and the imaging resolution, with the target's extent under normal-course conditions as a constraint, determining the length and width of the region-of-interest window; for the window centered at point s, the statistics of the pixel values in the window are the mean μ_s and the centroid G_s:
μ_s = (1/n) · Σ_{i=1}^{n} g(x_i, y_i)

x̄ = Σ_{(x,y)∈Ω} x · g(x, y) / Σ_{(x,y)∈Ω} g(x, y)

ȳ = Σ_{(x,y)∈Ω} y · g(x, y) / Σ_{(x,y)∈Ω} g(x, y)

G_s = (x̄, ȳ)
where n is the number of pixels in the region-of-interest window, g(x, y) is the gray value of pixel (x, y) in the window, and Ω is the region covered by the window;
(2.2) computing the confidence ρ_s of the region-of-interest window centered at point s from μ_s and the distance d(G_s, O_s) between the window centroid G_s and the window center O_s, where:

ρ_s = μ_s / [d(G_s, O_s) + 1]

d(G_s, O_s) is the Euclidean distance between the window centroid G_s and the window center O_s, with x and y denoting the row and column of the image, respectively;
(2.3) determining the region-of-interest window within the 2-D gray-scale image as the target region gray-scale image, according to the following principles:
if the window centered at s contains only background, the background pixel values differ little and are uniformly low, so d(G_s, O_s) ≈ 0 and the confidence ρ_s ≈ μ_s is comparable to the mean background pixel value; such a window is not the region of interest;
if the window centered at s contains partly target and partly background, μ_s increases while the window centroid G_s shifts toward the high-pixel-value region, so d(G_s, O_s) also increases, until the window contains the whole target and is identified as the region of interest;
if the window centered at s contains the whole target, μ_s reaches its maximum, whose value depends on the target pixel values, while the distance between G_s and O_s decreases; when d(G_s, O_s) ≈ 0, ρ_s reaches a local maximum, and the window region is the region of interest.
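The confidence measure of claim 3 can be sketched for a single candidate window as below; an illustrative implementation whose function name and conventions (rows as y, columns as x) are ours.

```python
import numpy as np

def roi_confidence(win):
    """Confidence of one candidate ROI window (claim 3).

    rho_s = mu_s / (d(G_s, O_s) + 1), where mu_s is the window's mean gray
    level, G_s its gray-level centroid, and O_s its geometric centre.
    """
    h, w = win.shape
    mu = win.mean()
    total = float(win.sum())
    if total == 0:
        return mu                       # flat zero window: treat d as 0
    ys, xs = np.indices(win.shape)
    gx = (xs * win).sum() / total       # centroid column (x of G_s)
    gy = (ys * win).sum() / total       # centroid row (y of G_s)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0   # window centre O_s
    d = np.hypot(gx - cx, gy - cy)      # Euclidean distance d(G_s, O_s)
    return mu / (d + 1.0)
```

A window whose bright mass sits at its centre thus scores higher than one with the same mean but off-centre mass.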
4. The method of any one of claims 1 to 3, characterized in that step (3) is specifically:
(3.1) determining the segmentation threshold Th for segmenting the target region gray-scale image; Th maximizes the sum of the target entropy Entropy_O and the background entropy Entropy_B, where:
Background entropy: Entropy_B = −Σ_{i=0}^{Th} (p(i)/P_t) · ln(p(i)/P_t)

Target entropy: Entropy_O = −Σ_{i=Th+1}^{m} (p(i)/H_t) · ln(p(i)/H_t)
p(i) is the probability of gray level i, m is the maximum gray level of the image, and P_t and H_t are the cumulative gray-level probabilities of the background pixels and the target pixels, respectively;
(3.2) segmenting the target region gray-scale image with the threshold Th to obtain the target region segmentation image O(x, y), specifically: gray levels greater than Th are classified as target, and gray levels less than Th are classified as background, thereby yielding the target region segmentation image O(x, y).
5. The method of any one of claims 1 to 4, characterized in that step (4) is specifically:
from the 2-D radar echo data and the target region gray-scale image, extracting the peak point information in the 2-D echo data of the target region with a first-order difference method, obtaining the target region peak point information matrix G(x, y), where:
G(x, y) = 255, if f(x, y) − f(x, y+1) > 0 and f(x, y) − f(x, y−1) > 0; G(x, y) = 0, otherwise
(x, y) denotes a point of the 2-D radar echo data, x the range direction and y the azimuth direction, and f(x, y) is its 2-D radar echo floating-point value.
6. The method of any one of claims 1 to 5, characterized in that step (5) is specifically:
fusing the target region segmentation image O(x, y) with the target region peak point information matrix G(x, y) to obtain the fused image R(x, y);
R(x, y) = G(x, y) · O(x, y) / 255
the fused image R(x, y) is a 0/255 binary image; its pixels with gray level 255 are the peak points lying inside the target region of the segmentation image, and their number K is counted as

K = Count{ (x_i, y_i) | (x_i, y_i) ∈ R(x, y), p(x_i, y_i) = 255 }

where p(x_i, y_i) is the gray value at (x_i, y_i) in the fused image R(x, y).
7. The method of any one of claims 1 to 6, characterized in that step (6) is specifically:
(6.1) sorting the peak points in the target region peak point information matrix by magnitude, and selecting the K largest peak points as the target effective peak point feature information;
(6.2) binarizing the target effective peak point feature information: the pixels at these K extreme points are assigned gray level 255 and all remaining pixels 0, yielding the target effective peak point image.
8. The method of any one of claims 1 to 7, characterized in that step (7) specifically comprises:
(7.1) fitting a line to the K candidate target points in the target effective peak point image by least squares, finding the line that minimizes the residual sum of squares, i.e. the target's axial line, which coincides with the target's diagonal;
(7.2) using the target's length, width, and diagonal dimension as constraints, taking a rectangle of the target's length and width as a window, sliding the window with its diagonal along the target's axial line, and finding the position on the axial line at which the window contains the largest number of peak points; the position of that window is the target position;
(7.3) marking the peak points outside the window as false-alarm points; the peak points inside the window determine the position of the target.
9. The method of claim 8, characterized in that computing the axial line of the target in step (7.1) is specifically:
for the target pixel set V, estimating the parameters k and b of the line y = kx + b so that the residual sum of squares is minimized, where (x_i, y_i) ∈ V.
10. The method of any one of claims 1 to 9, characterized in that step (8) is specifically:
computing the gray-level centroid of the target region and taking it as the energy centroid of the target region, i.e. the target key point, where the gray-level centroid of the target region is given by:
X̄ = Σ_{(x,y)∈T} x · f(x, y) / Σ_{(x,y)∈T} f(x, y)

Ȳ = Σ_{(x,y)∈T} y · f(x, y) / Σ_{(x,y)∈T} f(x, y)
where f(x, y) is the floating-point value at (x, y) in the 2-D radar echo data, and T is the target region.
CN201410211693.9A 2014-05-16 2014-05-16 Forward-looking radar imaging sea-surface target key point detection and recognition method Active CN103971127B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410211693.9A CN103971127B (en) 2014-05-16 2014-05-16 Forward-looking radar imaging sea-surface target key point detection and recognition method


Publications (2)

Publication Number Publication Date
CN103971127A true CN103971127A (en) 2014-08-06
CN103971127B CN103971127B (en) 2017-04-26

Family

ID=51240598

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410211693.9A Active CN103971127B (en) 2014-05-16 2014-05-16 Forward-looking radar imaging sea-surface target key point detection and recognition method

Country Status (1)

Country Link
CN (1) CN103971127B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6259396B1 (en) * 1999-08-26 2001-07-10 Raytheon Company Target acquisition system and radon transform based method for target azimuth aspect estimation
CN103197302A (en) * 2013-04-02 2013-07-10 电子科技大学 Target location extraction method applicable to through-the-wall radar imaging


Non-Patent Citations (1)

Title
WANG Yimin: "Research on ground target detection methods based on airborne synthetic aperture radar images", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (25)

Publication number Priority date Publication date Assignee Title
CN105205450A (en) * 2015-08-24 2015-12-30 辽宁工程技术大学 SAR image target extraction method based on irregular marked point process
CN105205450B (en) * 2015-08-24 2018-08-07 辽宁工程技术大学 A kind of SAR image target extraction method based on irregular mark point process
CN106340046B (en) * 2016-08-19 2019-05-10 南京莱斯电子设备有限公司 A kind of radar target position analysis method based on image conversion geography information
CN106340046A (en) * 2016-08-19 2017-01-18 中国电子科技集团公司第二十八研究所 Radar target location analysis method based on graphic geographic information
CN107340503B (en) * 2017-07-02 2020-11-27 中国航空工业集团公司雷华电子技术研究所 Method for inhibiting false sea surface targets based on digital elevation map
CN107340503A (en) * 2017-07-02 2017-11-10 中国航空工业集团公司雷华电子技术研究所 A kind of method for suppressing false sea-surface target based on digital elevation map
CN107632305A (en) * 2017-09-11 2018-01-26 河海大学 A kind of seabed part landform sense of autonomy perception method and device that survey technology is swept based on section sonar
CN109086815A (en) * 2018-07-24 2018-12-25 中国人民解放军国防科技大学 Floating point number discretization method in decision tree model based on FPGA
CN109086815B (en) * 2018-07-24 2021-08-31 中国人民解放军国防科技大学 Floating point number discretization method in decision tree model based on FPGA
CN109087319A (en) * 2018-08-17 2018-12-25 北京华航无线电测量研究所 A kind of manufacture method of mask and system
CN109460764B (en) * 2018-11-08 2022-02-18 中南大学 Satellite video ship monitoring method combining brightness characteristics and improved interframe difference method
CN109460764A (en) * 2018-11-08 2019-03-12 中南大学 A kind of satellite video ship monitoring method of combination brightness and improvement frame differential method
CN109765554A (en) * 2018-11-14 2019-05-17 北京遥感设备研究所 A kind of radar foresight imaging system and method
CN110766005A (en) * 2019-10-23 2020-02-07 森思泰克河北科技有限公司 Target feature extraction method and device and terminal equipment
CN111414910A (en) * 2020-03-18 2020-07-14 上海嘉沃光电科技有限公司 Small target enhancement detection method and device based on double convolutional neural network
CN111414910B (en) * 2020-03-18 2023-05-02 上海嘉沃光电科技有限公司 Small target enhancement detection method and device based on double convolution neural network
CN111695529A (en) * 2020-06-15 2020-09-22 北京师范大学 X-ray source detection method based on human skeleton key point detection algorithm
CN111695529B (en) * 2020-06-15 2023-04-25 北京师范大学 X-ray source detection method based on human skeleton key point detection algorithm
CN112215137A (en) * 2020-10-10 2021-01-12 中国电子科技集团公司第十四研究所 Low false alarm target detection method based on region constraint
CN112215137B (en) * 2020-10-10 2024-04-26 中国电子科技集团公司第十四研究所 Low false alarm target detection method based on region constraint
CN113642650A (en) * 2021-08-16 2021-11-12 上海大学 Multi-scale template matching and self-adaptive color screening based multi-beam sonar sunken ship detection method
CN113642650B (en) * 2021-08-16 2024-02-20 上海大学 Multi-beam sonar sunken ship detection method based on multi-scale template matching and adaptive color screening
CN115410370A (en) * 2022-08-31 2022-11-29 南京慧尔视智能科技有限公司 Abnormal parking detection method and device, electronic equipment and storage medium
CN116400351A (en) * 2023-03-21 2023-07-07 大连理工大学 Radar echo image target object processing method based on self-adaptive region growing method
CN116400351B (en) * 2023-03-21 2024-05-17 大连理工大学 Radar echo image target object processing method based on self-adaptive region growing method

Also Published As

Publication number Publication date
CN103971127B (en) 2017-04-26

Similar Documents

Publication Publication Date Title
CN103971127A (en) Forward-looking radar imaging sea-surface target key point detection and recognition method
Tao et al. Robust CFAR detector based on truncated statistics in multiple-target situations
O'Sullivan et al. SAR ATR performance using a conditionally Gaussian model
US6337654B1 (en) A-scan ISAR classification system and method therefor
US10042048B1 (en) Superpixels for improved structure and terrain classification using multiple synthetic aperture radar image products
US7116265B2 (en) Recognition algorithm for the unknown target rejection based on shape statistics obtained from orthogonal distance function
US20170039727A1 (en) Methods and Systems for Detecting Moving Objects in a Sequence of Image Frames Produced by Sensors with Inconsistent Gain, Offset, and Dead Pixels
US10571560B2 (en) Detecting objects in images
JP2010197337A (en) Device, method and program for detecting artifact
Huang et al. A new SAR image segmentation algorithm for the detection of target and shadow regions
JP2011048485A (en) Device and method for detecting target
CN104156929A (en) Infrared weak and small target background inhibiting method and device on basis of global filtering
US8369572B2 (en) System and method for passive automatic target recognition (ATR)
CN104537675A (en) SAR image of bilateral CFAR ship target detection method
US20240004031A1 (en) Multi radar object detection
Zhang et al. Marine radar monitoring IoT system and case study of target detection based on PPI images
EP1515160B1 (en) A target shadow detector for synthetic aperture radar
Gao et al. Fast two‐dimensional subset censored CFAR method for multiple objects detection from acoustic image
US10467474B1 (en) Vehicle track detection in synthetic aperture radar imagery
JP5294923B2 (en) Artifact detection device, artifact detection method, and artifact detection program
JP6751063B2 (en) Radar signal processing device, radar signal processing method, and program
EP3647994B1 (en) Automated generation of training images
JP2019219339A (en) Radar signal processor
Gu et al. Fast iterative censoring CFAR algorithm for ship detection from SAR images
Ding et al. Correlation between SAR system resolution and target detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant