CN103971127B - Forward-looking radar imaging sea-surface target key point detection and recognition method - Google Patents


Info

Publication number: CN103971127B
Application number: CN201410211693.9A
Authority: CN (China)
Prior art keywords: target, image, window, point, target area
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN103971127A
Inventors: 杨卫东, 张洁, 邹腊梅, 毕立人, 李静, 桑农, 严航宇, 桂文军
Current Assignee: Huazhong University of Science and Technology
Original Assignee: Huazhong University of Science and Technology
Application filed by Huazhong University of Science and Technology
Priority to CN201410211693.9A; publication of CN103971127A; application granted and publication of CN103971127B

Landscapes

  • Radar Systems Or Details Thereof (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a forward-looking radar imaging sea-surface target key point detection and recognition method. The method comprises the steps of: quantizing radar echo data into a gray-scale image; extracting a region of interest from the gray-scale image and segmenting it to obtain a target-area segmentation image; obtaining a target-area peak-point information matrix from the radar echo data and the region of interest; fusing the target-area segmentation image with the peak-point information matrix, counting the number K of peak points inside the target area of the fusion result, selecting the K strongest peak points as the target's effective peak points and binarizing them into a target effective-peak-point image; extracting the target's axial feature and determining the target position; and computing the centre of gravity of the target energy as the target key point. In accordance with the characteristics of forward-looking radar targets, the method combines multiple pattern-recognition techniques, retains the inherent characteristics of the target while suppressing interference factors such as artifacts and side lobes, and improves the recognition accuracy and positioning precision of radar imaging sea-surface target key points.

Description

Forward-looking radar imaging sea-surface target key point detection and recognition method
Technical field
The invention belongs to the technical field of target detection and pattern recognition, and in particular relates to a forward-looking radar imaging sea-surface target key point detection and recognition method. The method retains the inherent characteristics of the target while effectively suppressing interference factors such as artifacts and side lobes, and improves the recognition accuracy and positioning precision of radar imaging sea-surface target key points.
Background technology
Ship target detection and recognition based on radar imaging is a key technology in many civilian and military fields and is widely used in surveillance and military applications. In a forward-looking radar echo, a strong echo signal means that the detector has found a strong scattering point at that location. Strong scattering points are typically caused by dihedral and corner reflectors on a ship target. These reflecting components are distributed over the whole ship target and present different intensities as the target azimuth varies. However, conventional radar transmits a linear frequency-modulated signal, and because the two-dimensional frequency support domain of the imaging system is limited, the impulse response function of synthetic aperture radar (SAR) is a sinc function in both range and azimuth, resulting in very high side-lobe levels. Because the processing window length of the radar data is limited and phase errors exist in the radar echo data, side lobes form, producing multiplicative noise and interfering with nearby scatterers, which greatly degrades image quality. The presence of side lobes causes artifacts in the target image obtained by forward-looking imaging; these artifacts are weaker in intensity than real target points, and distinguishing them is the basis of target key point recognition and high-precision positioning. To distinguish false targets from real targets, it is also necessary to consider the forward-looking image of the target under different viewpoints, the spacing between false and real targets in the image, and whether overlapping interference occurs. Therefore, before forward-looking radar target feature extraction, detection and recognition, the original radar echo signal data must be pre-processed to reduce the influence of noise, improve the signal-to-noise ratio of the image, and highlight target feature information, so that the target can be recognized by its features, the probability of target key point recognition is improved, and the key point of the target is determined by the target's energy centroid.
Most forward-looking radar imaging sea-surface target detection and recognition methods in the published literature are realized by processing the original radar echo signal. However, constrained by the SAR imaging mechanism, target slice images are affected by factors such as target attitude, background characteristic values and sensor imaging attitude, and therefore show high variability, which easily interferes with the recognition result and causes misjudgements. Zhang Hong et al., in 《High Resolution SAR Images target recognition》, proposed a recognition method based on target peak features, in which the number of peak points is chosen in the range of 20 to 40, a range obtained from statistics on experimental data. No effective criterion or method is given for determining the number of peak points, so in engineering applications it can only be chosen empirically. The maximum entropy segmentation method shows good stability in the published literature, but it is easily disturbed by the background: the segmentation result may contain false targets, and the resulting target information is inaccurate.
Summary of the invention
Aimed at the target extraction problem in complex environments, the present invention proposes a forward-looking radar imaging sea-surface target key point detection and recognition method, which specifically includes:
(1) quantizing the original radar two-dimensional echo data into two-dimensional gray-scale image data;
(2) extracting a region of interest from the two-dimensional gray-scale image obtained in step (1) using a method based on the target's physical dimensions and a confidence measure, obtaining a target-area gray-scale image;
(3) segmenting the target-area gray-scale image using maximum entropy, obtaining a target-area segmentation image;
(4) using the radar two-dimensional echo data and the target-area gray-scale image, extracting the peak-point information in the radar two-dimensional echo data of the target area, obtaining a target-area peak-point information matrix;
(5) fusing the target-area segmentation image with the target-area peak-point information matrix, and counting the number K of peak points inside the target area of the fusion result as the number of effective peaks;
(6) sorting the peak points in the target-area peak-point information matrix by magnitude, selecting the largest K peak points as the target's effective peak points, and binarizing the target-area peak-point information matrix into a target effective-peak-point image;
(7) extracting the target's axial feature from the target effective-peak-point image, excluding the interference of false-alarm points, and determining the target position;
(8) determining the target key point from the target position and the target energy centroid.
Further, step (1) specifically includes:
(1.1) selecting the floating-point thresholds for quantization; let the upper threshold rank be L_max and the lower threshold rank be L_min, where:
L_max = N*(TLength*Margin)² + L_min, if L_max < Totalpix
L_max = Totalpix, if L_max > Totalpix
N is the maximum number of targets that may be contained in the background, Totalpix is the number of pixels of the original image, TLength is the pixel length of the target in the image, Margin is a margin ensuring that the target can be displayed completely, and minT is the minimum possible floating-point value of the target's two-dimensional echo data;
(1.2) selecting the floating-point value of rank L_max as threshold Level255 and the floating-point value of rank L_min as threshold Level0; for each data point in the original radar two-dimensional echo data, if its floating-point value is greater than Level255 it is assigned gray value 255, if less than Level0 it is assigned gray value 0, and for values between Level0 and Level255 the gray value is determined by linear interpolation:
g(x, y) = 255 × [f(x, y) − Level0] / (Level255 − Level0)
where f(x, y) is the floating-point value of the radar two-dimensional echo data at point (x, y), and g(x, y) is the gray value of point (x, y) after linear interpolation.
Further, step (2) specifically includes:
(2.1) first, according to the target size and the imaging resolution, and using the extent of the target in a normal-course state as a constraint, determining the length and width of the region-of-interest window; for the region-of-interest window centred on point s, the statistics of the pixel values in the window are the mean μ_s and the centroid G_s:
μ_s = (1/n) Σ_{(x,y)∈Ω} g(x, y)
G_s = ( Σ_{(x,y)∈Ω} x·g(x, y) / Σ_{(x,y)∈Ω} g(x, y), Σ_{(x,y)∈Ω} y·g(x, y) / Σ_{(x,y)∈Ω} g(x, y) )
where n is the number of pixels in the region-of-interest window, g(x, y) is the gray value of the pixel at (x, y) in the window, and Ω is the region covered by the region-of-interest window;
(2.2) computing the confidence ρ_s of the region-of-interest window centred on point s, and the distance d(G_s, O_s) between the in-window centroid G_s and the window centre coordinate O_s, where:
ρ_s = μ_s / [d(G_s, O_s) + 1]
d(G_s, O_s) is the Euclidean distance between the in-window centroid G_s and the window centre coordinate O_s, d(G_s, O_s) = √[(x_G − x_O)² + (y_G − y_O)²], where x and y denote the row and column of the image respectively;
(2.3) determining the region-of-interest window in the two-dimensional gray-scale image, i.e. the target-area gray-scale image, according to the following principles:
if the region-of-interest window centred on point s contains only background, then because background pixel values differ little and are all relatively low, the distance d(G_s, O_s) ≈ 0 and the confidence ρ_s ≈ μ_s is comparable to the mean background pixel value; the window region is not a region of interest;
if the window centred on point s contains part target and part background, then μ_s increases, but at the same time the in-window centroid G_s shifts towards the high-pixel-value region, so d(G_s, O_s) also increases, until the window contains the whole target and is determined to be a region of interest;
if the window centred on point s contains the whole target, then μ_s reaches a maximum related to the target pixel values, while the distance between the in-window centroid G_s and the window centre O_s decreases; when d(G_s, O_s) ≈ 0, the confidence ρ_s reaches its maximum and is a local maximum, and the window region is a region of interest.
Further, step (3) specifically includes:
(3.1) determining the segmentation threshold Th for segmenting the target-area gray-scale image; Th is the gray level that maximizes the sum of the target entropy Entropy_O and the background entropy Entropy_B, where:
Background entropy: Entropy_B = ln P_t + H_t / P_t
Target entropy: Entropy_O = ln(1 − P_t) + (H_m − H_t) / (1 − P_t)
P(i) denotes the probability of gray level i, m denotes the maximum gray level of the image, and P_t = Σ_{i=0}^{Th} P(i) and H_t = −Σ_{i=0}^{Th} P(i) ln P(i) denote, respectively, the cumulative probability and cumulative entropy of the pixel gray-level distribution of the background, with H_m the entropy over all gray levels;
(3.2) segmenting the target-area gray-scale image according to the segmentation threshold Th to obtain the target-area segmentation image O(x, y), specifically:
if a gray level is greater than Th, it is regarded as target; if a gray level is less than Th, it is regarded as background; the target-area segmentation image O(x, y) is thereby obtained.
Further, step (4) is specifically:
according to the radar two-dimensional echo data and the target-area gray-scale image, extracting the peak-point information in the radar two-dimensional echo data of the target area by the first-order difference method, obtaining the target-area peak-point information matrix G(x, y), where:
G(x, y) = f(x, y), if f(x, y) > f(x, y−1) and f(x, y) > f(x, y+1); otherwise G(x, y) = 0
(x, y) denotes a point in the two-dimensional echo signal, x denotes the range direction, y denotes the azimuth direction, and f(x, y) denotes the floating-point value of the radar two-dimensional echo.
Further, step (5) is specifically:
fusing the target-area segmentation image O(x, y) with the target-area peak-point information matrix G(x, y) to obtain the fused image R(x, y):
R(x, y) = G(x, y)·O(x, y) / 255
the fused image R(x, y) is a 0/255 binary image in which the pixels of gray level 255 are the peak points inside the target area of the target-area segmentation image; the number K of these peak points is counted as
K = Σ_i p(x_i, y_i) / 255
where p(x_i, y_i) is the gray value of the fused image R(x, y) at (x_i, y_i).
Further, step (6) specifically includes:
(6.1) sorting the peak points in the target-area peak-point information matrix by magnitude and selecting the largest K peak points as the target's effective peak-point feature information;
(6.2) binarizing the target's effective peak-point feature information: the pixels at these K extreme points are assigned gray level 255 and the remaining pixels are assigned 0, obtaining the target effective-peak-point image.
Further, step (7) specifically includes:
(7.1) applying least-squares fitting to the K candidate target points represented in the target effective-peak-point image, finding the straight line that minimizes the residual sum of squares, i.e. the target's axial straight line, which coincides with the diagonal of the target;
(7.2) using the length, width and diagonal size of the target as constraints, and taking the rectangle constrained by the target length and width as the window, letting the diagonal of the window move linearly along the axial direction of the target so as to find the position on the axial straight line at which the window contains the largest number of peak points; the position corresponding to that window is the target position;
(7.3) marking the peak-point information outside the window as false-alarm points; the peak-point information contained in the window determines the position of the target.
Further, the axial straight line of the target in step (7.1) is calculated as follows:
for the target pixel point set V, estimate the parameters k and b of the straight line y = kx + b such that the residual sum of squares Σ_i (y_i − k·x_i − b)² is minimized, where (x_i, y_i) ∈ V.
Further, step (8) is specifically:
computing the gray-level centroid (x̄, ȳ) of the target area as the energy centroid of the target area, which is the key point of the target; the gray-level centroid of the target area is given by:
x̄ = Σ_{(x,y)∈Τ} x·f(x, y) / Σ_{(x,y)∈Τ} f(x, y), ȳ = Σ_{(x,y)∈Τ} y·f(x, y) / Σ_{(x,y)∈Τ} f(x, y)
where f(x, y) is the floating-point value at (x, y) in the radar two-dimensional echo data, and Τ is the target area.
In general, compared with the prior art, the above technical scheme of the present invention has the following beneficial effects:
(1) false-target information is rejected using the peak-point information of the radar echo data, and the image-segmentation result is used to constrain the number of effective peaks, so that in engineering applications the peak number K is chosen adaptively; the result is therefore more standardized and does not vary between operators;
(2) in the image quantization stage, relevant information such as the target and background sizes is used to establish a criterion for computing the quantization threshold, so that the radar two-dimensional echo data can be displayed and processed in image form, providing data support for the imaging of radar two-dimensional echo data;
(3) according to the range and azimuth imaging characteristics of the radar image, the target area is determined using the azimuth maximum-point information of the radar, so that target maximum-information extraction is realized without affecting performance, reducing the amount of computation relative to local-maximum extraction methods;
(4) the axial feature of the target is used to locate the target and exclude false-alarm points, making the target position more accurate;
(5) a target region-of-interest extraction method using the target's physical dimensions and a confidence measure is proposed; the extracted region of interest is more accurate, reducing the computational load of subsequent processing;
(6) the forward-looking radar imaging sea-surface target key point detection and recognition method proposed by the present invention is a processing flow for the forward-looking radar echo signal, and experimental results confirm the effectiveness of the flow.
In summary, according to the characteristics of forward-looking radar targets, the present invention comprehensively uses multiple pattern-recognition methods, and can suppress interference factors such as artifacts and side lobes while retaining the inherent characteristics of the target, improving the recognition accuracy and positioning precision of radar imaging sea-surface target key points. Target detection and recognition by this method achieves a high target recognition rate.
Description of the drawings
Fig. 1 is the overall flowchart of the forward-looking radar imaging sea-surface target key point detection and recognition method of the present invention;
Fig. 2 is the image obtained after quantizing the original radar two-dimensional echo data in one embodiment of the present invention;
Fig. 3 is the peak-feature map of the original radar two-dimensional echo data;
Fig. 4 shows the segmentation result of the quantized image of the original radar two-dimensional echo data in Fig. 2;
Fig. 5 is the target effective-peak information map of the quantized image of the original radar two-dimensional echo data in Fig. 2;
Fig. 6 is the target axial-feature constraint result for the quantized image of the original radar two-dimensional echo data in Fig. 2;
Fig. 7 is the target key point positioning result for the quantized image of the original radar two-dimensional echo data in Fig. 2.
Specific embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and are not intended to limit it. In addition, the technical features involved in the embodiments of the invention described below can be combined with one another as long as they do not conflict.
The purpose of the present invention is to suppress interference factors such as artifacts and side lobes while retaining the inherent characteristics of the target, and to improve the recognition accuracy and positioning precision of radar imaging sea-surface target key points. In a forward-looking radar echo, a strong echo signal means that the detector has found a strong scattering point at that location. Strong scattering points are typically caused by dihedral and corner reflectors on a ship target. These reflecting components are distributed over the whole ship target and present different intensities as the target azimuth varies. However, conventional radar transmits a linear frequency modulation (LFM) signal, and because the two-dimensional frequency support domain of the imaging system is limited, the impulse response function of SAR is a sinc function in both range and azimuth, so the side-lobe level is very high. Because side lobes may form multiplicative noise and interfere with nearby scatterers, image quality is greatly affected. The presence of side lobes causes artifacts in the target image obtained by forward-looking imaging; these are weaker in intensity than real target points, and distinguishing them is the basis of target recognition and high-precision positioning. To distinguish false targets from real targets, it is also necessary to consider the forward-looking image of the target under different viewpoints, the spacing between false and real targets in the image, and whether overlapping interference occurs. The present invention makes estimates based on the target and sea-background characteristics in the radar image. First, the original radar echo data and the processed, segmented image are fused to obtain the peak-feature information of the target area; then, the axial-feature knowledge of the target is used to exclude interference and recognize the target; finally, the target key point is positioned according to the energy centroid of the target.
The present invention first obtains the peak-feature map of the target through feature extraction based on target peaks; the original echo data is quantized into an image and, after resolution adjustment, segmented by maximum entropy to obtain the segmentation image; the resulting peak-feature map is fused with the segmentation image to obtain the peak-point number threshold, yielding the effective target peak information; and the target's axial feature and energy centroid are used to recognize the target and position the key point. Experimental results show that the algorithm can reduce various interference factors while retaining the inherent characteristics of the target, improving the recognition accuracy and positioning precision of the target key point.
The invention provides a forward-looking radar imaging sea-surface target key point detection and recognition method, whose overall flow is shown in Fig. 1. The detailed process of the method is as follows:
(1) Quantizing the original radar two-dimensional echo data into two-dimensional gray-scale image data
The original radar two-dimensional echo data is first quantized, being converted into two-dimensional gray-scale image data with gray values in the range 0–255, on which digital image processing can be carried out. The method of mapping the original radar two-dimensional echo data to a 256-level gray-scale image is: sort the floating-point values of the original radar two-dimensional echo data from small to large, and select the floating-point thresholds for quantization according to the following principle; suppose the upper threshold rank is L_max and the lower threshold rank is L_min, then
L_max = N*(TLength*Margin)² + L_min, if L_max < Totalpix
L_max = Totalpix, if L_max > Totalpix
where N is the maximum number of targets that may be contained in the background, Totalpix is the number of pixels of the original image, TLength is the pixel length of the target in the image, Margin is a margin ensuring that the target can be displayed completely, and minT is the minimum possible floating-point value of the target's two-dimensional echo data.
Select the floating-point value of rank L_max as threshold Level255 and the floating-point value of rank L_min as threshold Level0. For each pixel, if the floating-point value is greater than Level255, gray value 255 is assigned; if less than Level0, gray value 0 is assigned; for values between Level0 and Level255, the gray value is determined by linear interpolation:
g(x, y) = 255 × [f(x, y) − Level0] / (Level255 − Level0)
where f(x, y) is the floating-point value of the radar two-dimensional echo data at pixel (x, y), and g(x, y) is the gray value of the point after linear interpolation. The quantized image is shown in Fig. 2.
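The quantization step above can be sketched in a few lines of NumPy. This is a hedged illustration, not the patent's reference implementation: the reading that Level255 sits at rank L_max counted from the top of the sorted values, and the default values of n_targets, t_length and margin, are assumptions made for demonstration.

```python
import numpy as np

def quantize_echo(echo, n_targets=1, t_length=8, margin=1.5):
    """Map raw floating-point echo data to an 8-bit gray image (step (1) sketch)."""
    flat = np.sort(echo.ravel())                   # floats sorted small to large
    total_pix = flat.size
    # L_max = N*(TLength*Margin)^2 + L_min, clamped to the number of pixels
    l_max = min(int(n_targets * (t_length * margin) ** 2), total_pix - 1)
    level0 = flat[0]                               # lower threshold Level0
    level255 = flat[total_pix - 1 - l_max]         # upper threshold Level255 (rank from top: assumption)
    span = max(level255 - level0, 1e-12)           # guard against a zero span
    g = np.clip((echo - level0) / span, 0.0, 1.0)  # linear interpolation between thresholds
    return (g * 255).astype(np.uint8)
```

Values above Level255 saturate at 255 and values below Level0 at 0, matching the hard assignment described in the text.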
(2) Extracting a region of interest from the two-dimensional gray-scale image obtained in step (1) using a method based on the target's physical dimensions and a confidence measure, obtaining the target-area gray-scale image
First, according to the target size and the imaging resolution, and using the extent of the target in a normal-course state as a constraint, a suitable length and width of the region-of-interest window are determined. For the region-of-interest window centred on point s, the statistics of the pixel values in the window are the mean μ_s and the centroid G_s:
μ_s = (1/n) Σ_{(x,y)∈Ω} g(x, y)
G_s = ( Σ_{(x,y)∈Ω} x·g(x, y) / Σ_{(x,y)∈Ω} g(x, y), Σ_{(x,y)∈Ω} y·g(x, y) / Σ_{(x,y)∈Ω} g(x, y) )
where n is the number of pixels in the region-of-interest window, g(x, y) is the gray value of the pixel at (x, y), and Ω is the region covered by the region-of-interest window.
μ_s serves as one criterion for judging whether the window is a region of interest: the higher μ_s, the larger the pixel values in the local window and the more likely it contains a target. At the same time, since the target should be locked at the centre of the region of interest, the distance d(G_s, O_s) between the in-window centroid G_s and the window centre coordinate O_s serves as another criterion. The confidence of the region of interest centred on point s is defined as:
ρ_s = μ_s / [d(G_s, O_s) + 1]
where the distance d(G_s, O_s) is chosen as the Euclidean distance between the in-window centroid G_s and the window centre coordinate O_s, d(G_s, O_s) = √[(x_G − x_O)² + (y_G − y_O)²], with x and y denoting the row and column of the image respectively.
Three kinds of regions in the two-dimensional gray-scale image are treated respectively:
if the region-of-interest window centred on point s contains only background, then because background pixel values differ little and are all relatively low, the distance d(G_s, O_s) ≈ 0 and the confidence ρ_s ≈ μ_s is comparable to the mean background pixel value; the window region is not a region of interest;
if the window centred on point s contains part target and part background, then μ_s increases, but at the same time the in-window centroid G_s shifts towards the high-pixel-value region, so d(G_s, O_s) also increases, until the window contains the whole target and is determined to be a region of interest;
if the window centred on point s contains the whole target, then μ_s reaches a maximum related to the target pixel values, while the distance between the in-window centroid G_s and the window centre O_s decreases; when d(G_s, O_s) ≈ 0, the confidence ρ_s reaches its maximum and is a local maximum, and the window region is a region of interest.
The region of interest obtained here is referred to as the target-area gray-scale image.
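As a minimal sketch of the confidence computation in step (2), the following function scores one candidate window; the row/column indexing convention and the handling of an all-zero window are assumptions made for the illustration.

```python
import numpy as np

def window_confidence(img, top, left, h, w):
    """Confidence rho_s = mu_s / (d(G_s, O_s) + 1) of one ROI window (step (2) sketch)."""
    win = img[top:top + h, left:left + w].astype(float)
    mu = win.mean()                                  # in-window mean mu_s
    total = win.sum()
    if total == 0:
        return mu                                    # empty window: treat d as 0
    ys, xs = np.mgrid[0:h, 0:w]
    gy = (ys * win).sum() / total                    # centroid G_s, row coordinate
    gx = (xs * win).sum() / total                    # centroid G_s, column coordinate
    oy, ox = (h - 1) / 2.0, (w - 1) / 2.0            # window centre O_s
    d = np.hypot(gy - oy, gx - ox)                   # Euclidean distance d(G_s, O_s)
    return mu / (d + 1.0)
```

A window with the target centred in it yields d ≈ 0 and a locally maximal confidence, as the three cases above describe.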
(3) Segmenting the target-area gray-scale image using maximum entropy to obtain the target-area segmentation image
Suppose the threshold Th divides the image into two parts, target and background; the entropies of the background and the target are defined respectively as:
Background entropy: Entropy_B = ln P_t + H_t / P_t
Target entropy: Entropy_O = ln(1 − P_t) + (H_m − H_t) / (1 − P_t)
where P(i) denotes the probability of gray level i, m denotes the maximum gray level of the image, and P_t = Σ_{i=0}^{Th} P(i) and H_t = −Σ_{i=0}^{Th} P(i) ln P(i) denote, respectively, the cumulative probability and cumulative entropy of the pixel gray-level distribution of the background, with H_m the entropy over all gray levels.
The threshold is taken as the gray level that maximizes the sum of the target entropy and the background entropy:
Th = argmax (Entropy_O + Entropy_B)
If a gray level is greater than Th, it is regarded as target; if less than Th, as background. The target-area segmentation image O(x, y) is thereby obtained; the segmented image is shown in Fig. 4.
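The maximum-entropy threshold search can be sketched with a standard Kapur-style formulation. The entropy expressions below are a conventional reconstruction consistent with the definitions above, not a verbatim copy of the patent's (unavailable) formula images; per-class entropies −Σ q ln q are algebraically equal to ln P_t + H_t/P_t and its target counterpart.

```python
import numpy as np

def max_entropy_threshold(img):
    """Return the gray level Th maximising background + target entropy (step (3) sketch)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                              # P(i): gray-level probabilities
    best_th, best_e = 0, -np.inf
    for th in range(255):
        pt = p[:th + 1].sum()                          # cumulative background probability P_t
        if pt <= 0.0 or pt >= 1.0:
            continue
        pb = p[:th + 1] / pt                           # background class distribution
        po = p[th + 1:] / (1.0 - pt)                   # target class distribution
        eb = -np.sum(pb[pb > 0] * np.log(pb[pb > 0]))  # background entropy
        eo = -np.sum(po[po > 0] * np.log(po[po > 0]))  # target entropy
        if eb + eo > best_e:
            best_e, best_th = eb + eo, th
    return best_th
```

Pixels brighter than the returned Th would be labelled target, the rest background.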
(4) Using the radar two-dimensional echo data and the target-area gray-scale image, extracting the peak-point information in the radar two-dimensional echo data of the target area to obtain the target-area peak-point information matrix
Scattering centres define two classes of extreme feature points in the radar two-dimensional echo data: two-dimensional extreme points and one-dimensional extreme points. An extreme point is defined as a point a_i satisfying a_i > a_j for every a_j in U(a_i), where U(a_i) denotes the local neighbourhood centred on a_i (not including the point a_i itself). Because the dynamic range of extreme points in the azimuth direction is very large, the present invention considers the one-dimensional extreme points in the azimuth direction.
The extreme points defined above are extracted by the first-order difference method. For a point (x, y) in the two-dimensional echo signal, let x denote the range direction and y the azimuth direction, and let f(x, y) denote the floating-point value of the radar two-dimensional echo; define:
G(x, y) = f(x, y), if f(x, y) − f(x, y−1) > 0 and f(x, y+1) − f(x, y) < 0; otherwise G(x, y) = 0
The matrix G(x, y) calculated by the above formula is the target-area peak-point information matrix, as shown in Fig. 3.
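The first-order-difference extraction of one-dimensional azimuth extreme points can be sketched as follows; treating array columns as the azimuth index y is an assumption matching the notation f(x, y).

```python
import numpy as np

def azimuth_peaks(f):
    """Keep strict local maxima along each row's azimuth direction (step (4) sketch)."""
    f = np.asarray(f, dtype=float)
    g = np.zeros_like(f)
    d = np.diff(f, axis=1)             # first-order difference along azimuth
    rising = d[:, :-1] > 0             # f(x, y) - f(x, y-1) > 0
    falling = d[:, 1:] < 0             # f(x, y+1) - f(x, y) < 0
    peaks = rising & falling           # + to - sign change marks a peak at y
    g[:, 1:-1][peaks] = f[:, 1:-1][peaks]
    return g
```

Border samples (first and last azimuth bins) have no two-sided difference and are left at zero.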
(5) Fusing the target-area segmentation image with the target-area peak-point information matrix, and counting the number K of peak points inside the target area of the target-area segmentation image as the number of effective peaks
The target-area segmentation image O(x, y) and the target-area peak-point information matrix G(x, y) are fused to obtain the fused image R(x, y):
R(x, y) = G(x, y)·O(x, y) / 255
The fused image R(x, y) is a 0/255 binary image in which the pixels of gray level 255 are the peak points inside the target area of the target-area segmentation image; the number K of these peak points is counted as
K = Σ_i p(x_i, y_i) / 255
where p(x_i, y_i) is the gray value of the fused image R(x, y) at (x_i, y_i).
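The fusion and peak count follow directly from the two formulas above. In this sketch both inputs are assumed to be 0/255 arrays (the peak matrix already binarized at its peak positions), which is what makes R a 0/255 binary image as stated.

```python
import numpy as np

def fuse_and_count(g_peaks, o_seg):
    """R = G*O/255 and K = number of in-target peaks (step (5) sketch)."""
    r = (g_peaks.astype(np.int64) * o_seg.astype(np.int64)) // 255
    k = int((r == 255).sum())          # K: peaks surviving inside the segmented target
    return r.astype(np.uint8), k
```

Peaks outside the segmented target region are zeroed by the mask and do not contribute to K.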
(6) Sorting the peak points in the target-area peak-point information matrix by magnitude, selecting the largest K peak points as the target's effective peak points, and binarizing to obtain the target effective-peak-point image
The peak points in the target-area peak-point information matrix are sorted by intensity, and the largest K peak points are chosen as the target's effective peak-point feature information. The effective peak-point feature information is binarized: the pixels at these K extreme points are assigned gray level 255 and the remaining pixels are assigned 0, obtaining the target effective-peak-point image, as shown in Fig. 5.
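Selecting the K strongest peaks and binarizing them can be sketched as below; here the peak matrix is assumed to hold the peak amplitudes (zero elsewhere), as produced by the first-order-difference step.

```python
import numpy as np

def top_k_peak_image(g_peaks, k):
    """Binary image with 255 at the K strongest peaks of G, 0 elsewhere (step (6) sketch)."""
    out = np.zeros(np.asarray(g_peaks).shape, dtype=np.uint8)
    flat = np.asarray(g_peaks, dtype=float).ravel()
    nonzero = np.flatnonzero(flat)                     # candidate peak positions
    if k <= 0 or nonzero.size == 0:
        return out
    k = min(k, nonzero.size)
    order = nonzero[np.argsort(flat[nonzero])[::-1]]   # strongest peak first
    out.ravel()[order[:k]] = 255
    return out
```

With K taken from the fusion step, this yields the binary effective-peak-point image fed to the axial-feature constraint.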
(7) The target's axial feature is extracted from the target effective peak point image, and the interference of some false-alarm points is excluded to determine the target position, as shown in Figure 6.
The K candidate target points in the target effective peak point image may contain noise points or false-alarm points, so an axial projection is needed to screen the candidate target points. The axial projection uses the axial length information of the target to constrain the spatial relationship among the candidate target points.
Least-squares data fitting is used here: the best-fitting function is found by minimizing the sum of squared errors. For the target pixel point set V, the parameters k and b of the line y = kx + b are estimated so that the residual sum of squares Σ (yi − k·xi − b)² is minimized, where (xi, yi) ∈ V.
The axial feature constraint algorithm is as follows:
1) Least-squares fitting is applied to the K candidate target points in the target effective peak point image to find the line minimizing the residual sum of squares, i.e. the axial line of the target, which coincides with the diagonal of the target;
2) With the target's length, width, and diagonal dimension as constraints, a rectangular window with the target's length and width is used; the window's diagonal is slid along the target's axial line to find the position on the axial line where the window contains the greatest number of peak points, and the position of that window is the target position;
3) Peak points outside the window are marked as false-alarm points, and the peak points contained in the window determine the target position.
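The axial constraint can be sketched as a least-squares line fit followed by a distance-to-line screen. Note this simplifies the patent's diagonal sliding-window search to a perpendicular-distance test, so it is illustrative only; the function names are ours:

```python
import numpy as np

def axial_fit(points):
    """Least-squares line y = k*x + b through the candidate peak points."""
    k, b = np.polyfit(points[:, 0].astype(float), points[:, 1].astype(float), 1)
    return k, b

def split_by_window(points, k, b, half_width):
    """Peaks within half_width of the axial line are kept as target peaks;
    the rest are flagged as false alarms (distance-to-line simplification
    of the patent's diagonal-window test)."""
    d = np.abs(k * points[:, 0] - points[:, 1] + b) / np.hypot(k, 1.0)
    return points[d <= half_width], points[d > half_width]

pts = np.array([[0, 0], [1, 1], [2, 2], [3, 3], [4, 4]])  # collinear peaks
k, b = axial_fit(pts)
noisy = np.vstack([pts, [[2, 9]]])           # one off-axis false alarm
kept, false_alarms = split_by_window(noisy, k, b, half_width=1.0)
```

The fit recovers the axial line y = x, the five on-axis peaks pass the screen, and the off-axis point at (2, 9) is flagged as a false alarm.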
(8) The target key point is determined using the target position and the target energy centroid.
From the position and size information of the target, the peak point information within the target position is obtained, and the energy centroid of these peak points is computed as the target key point, as shown in Figure 7.
According to the characteristics of radar imaging, the radar echo intensity reveals the strong scattering points, and the present invention defines the gray-level centroid within the target region as the location of the target's key point. The gray-level centroid method treats the gray value of each pixel in the region as the "energy" of that point; the centroid formula for the region is as follows:
X̄ = Σ_{(x,y)∈T} x·f(x, y) / Σ_{(x,y)∈T} f(x, y)
Ȳ = Σ_{(x,y)∈T} y·f(x, y) / Σ_{(x,y)∈T} f(x, y)
where f(x, y) is the floating-point value at (x, y) in the radar 2-D echo data, T is the target region, and (X̄, Ȳ) is the gray-level centroid, i.e. the energy centroid, of the target region.
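The energy-centroid computation can be sketched as follows, with a boolean mask standing in for the target region T (the helper name is ours):

```python
import numpy as np

def energy_centroid(f, mask):
    """Gray-level (energy) centroid of the target region.

    f    : 2-D echo data (floating-point amplitudes)
    mask : boolean array, True inside the target region T
    """
    ys, xs = np.nonzero(mask)
    w = f[ys, xs]                       # per-pixel "energy" weights
    X = float((xs * w).sum() / w.sum())
    Y = float((ys * w).sum() / w.sum())
    return X, Y

f = np.zeros((5, 5))
f[1, 1] = 1.0
f[1, 3] = 3.0                           # stronger scatterer pulls the centroid
mask = f > 0
X, Y = energy_centroid(f, mask)
```

With weights 1.0 at column 1 and 3.0 at column 3, the centroid lands at X = (1·1 + 3·3)/4 = 2.5, Y = 1.0, closer to the stronger scatterer.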
As will be readily appreciated by those skilled in the art, the foregoing is only a preferred embodiment of the present invention and is not intended to limit the present invention; any modifications, equivalent substitutions, and improvements made within the spirit and principles of the present invention shall all fall within the scope of protection of the present invention.

Claims (15)

1. A forward-looking radar imaging sea-surface target key point detection and recognition method, characterized in that the method comprises:
(1) quantizing the original radar 2-D echo data into 2-D gray-scale image data;
(2) extracting a region of interest from the 2-D gray-scale image obtained in step (1) using a method based on target physical dimensions and confidence, obtaining a target-area gray-scale image;
(3) segmenting the target-area gray-scale image using maximum entropy to obtain a target-area segmentation image;
(4) extracting the peak point information in the radar 2-D echo data of the target area using the radar 2-D echo data and the target-area gray-scale image, obtaining a target-area peak point information matrix;
(5) performing information fusion on the target-area segmentation image and the target-area peak point information matrix, and counting the number K of peak points in the target region of the fusion result as the number of effective peaks;
(6) sorting the peak points in the target-area peak point information matrix by magnitude, selecting the K largest peak points as target effective peak points, and binarizing the target-area peak point information matrix into a target effective peak point image;
(7) extracting the target's axial feature from the target effective peak point image, excluding the interference of false-alarm points, and determining the target position;
(8) determining the target key point using the target position and the target energy centroid.
2. The method of claim 1, characterized in that step (1) specifically comprises:
(1.1) selecting the floating-point value thresholds for quantization: let the upper threshold be Lmax and the lower threshold be Lmin, where:
Lmin = (Totalpix · minT) / Totalpix
Lmax = N · (TLength · Margin)² + Lmin, if Lmax < Totalpix
Lmax = Totalpix, if Lmax ≥ Totalpix
where N is the maximum number of targets the background may contain, Totalpix is the number of pixels of the original image, TLength is the target length in pixels in the image, Margin is a margin ensuring the target can be displayed completely, and minT is the minimum possible floating-point value of the target 2-D echo data;
(1.2) taking the floating-point value Lmax as threshold Level255 and Lmin as threshold Level0, respectively; for each data point of the original radar 2-D echo data, a floating-point value greater than Level255 is assigned gray value 255, a value less than Level0 is assigned gray value 0, and values between Lmin and Lmax are linearly interpolated to determine the gray value; the linear interpolation formula is as follows:
g(x, y) = 0, if f(x, y) < Level0
g(x, y) = (f(x, y) − Level0) / (Level255 − Level0) × 255, if Level0 ≤ f(x, y) < Level255
g(x, y) = 255, if f(x, y) ≥ Level255
where f(x, y) is the floating-point value of the radar 2-D echo data at data point (x, y), and g(x, y) is the gray value of point (x, y) after linear interpolation.
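The quantization of step (1.2) can be sketched as a clipped linear mapping (the helper name `quantize` is ours, not from the patent):

```python
import numpy as np

def quantize(f, level0, level255):
    """Linearly map echo floating-point values to 0-255 gray levels.

    Values below level0 clip to 0, values above level255 clip to 255, and
    values in between are linearly interpolated.
    """
    g = (f - level0) / (level255 - level0) * 255.0
    return np.clip(g, 0, 255).astype(np.uint8)

f = np.array([-5.0, 0.0, 50.0, 100.0, 200.0])
g = quantize(f, level0=0.0, level255=100.0)
```

Here −5.0 clips to 0, 50.0 maps to the middle of the gray range, and 200.0 clips to 255.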
3. The method of claim 1, characterized in that step (2) specifically comprises:
(2.1) first, according to the target size and imaging resolution, with the interval under the target's normal-heading state as a constraint, determining the length and width of the region-of-interest window; for the region-of-interest window centered at point s, the statistics of the pixel values in the window are the mean μs and the centroid Gs:
μs = (1/n) Σ_{i=1..n} g(xi, yi)
x̄ = Σ_{(x,y)∈Ω} x·g(x, y) / Σ_{(x,y)∈Ω} g(x, y)
ȳ = Σ_{(x,y)∈Ω} y·g(x, y) / Σ_{(x,y)∈Ω} g(x, y)
Gs = (x̄, ȳ)
where n is the number of pixels in the region-of-interest window, g(x, y) is the gray value of the pixel at (x, y) in the window, and Ω is the region covered by the window;
(2.2) computing the confidence ρs of the region-of-interest window centered at point s, and the distance d(Gs, Os) between the in-window centroid Gs and the window center coordinate Os, where:
ρs = μs / [d(Gs, Os) + 1]
and d(Gs, Os) is the Euclidean distance between the in-window centroid Gs and the window center coordinate Os, where x and y denote the row and column of the image, respectively;
(2.3) determining the region-of-interest window in the 2-D gray-scale image as the target-area gray-scale image according to the following principles:
if the region-of-interest window centered at point s contains only background, then since the background pixel values differ little and are all relatively low, the distance d(Gs, Os) ≈ 0 and the confidence ρs ≈ μs, comparable to the average background pixel value; in this case the window region is not a region of interest;
if the window centered at point s contains partly target and partly background, μs in the window increases while the in-window centroid Gs shifts toward the region of high pixel values, so d(Gs, Os) also increases, until the window contains the whole target, at which point it is defined as a region of interest;
if the window centered at point s contains the whole target, μs reaches its maximum, which is related to the target's pixel values, while the distance between the in-window centroid Gs and the window center Os decreases; when d(Gs, Os) ≈ 0, the confidence ρs reaches its maximum and is a local maximum, and the window region is a region of interest.
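The confidence measure ρs = μs / [d(Gs, Os) + 1] can be sketched for a single candidate window as follows (illustrative only: window scanning and the size constraints of step (2.1) are omitted, and the function name is ours):

```python
import numpy as np

def roi_confidence(window):
    """Confidence rho = mu / (d(G, O) + 1) for one candidate ROI window:
    mean gray value divided by (distance of the gray centroid from the
    window center + 1).  A high mean with a centered centroid -> high rho."""
    ys, xs = np.indices(window.shape)
    total = window.sum()
    mu = window.mean()
    gx = (xs * window).sum() / total         # gray centroid (column)
    gy = (ys * window).sum() / total         # gray centroid (row)
    cy, cx = (window.shape[0] - 1) / 2, (window.shape[1] - 1) / 2
    d = np.hypot(gx - cx, gy - cy)           # Euclidean distance to center
    return mu / (d + 1.0)

centered = np.zeros((5, 5)); centered[2, 2] = 100.0  # target fills the center
offset = np.zeros((5, 5));   offset[0, 0] = 100.0    # target near a corner
```

Both toy windows have the same mean, but the window whose energy is centered scores higher confidence, matching principle (2.3).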
4. The method of claim 1, characterized in that step (3) is specifically:
(3.1) determining the segmentation threshold Th for segmenting the target-area gray-scale image; Th is the gray level maximizing the sum of the target entropy EntropyO and the background entropy EntropyB, where:
Background entropy: EntropyB = −Σ_{i=0..Th} [P(i)/Pt] · ln[P(i)/Pt]
Target entropy: EntropyO = −Σ_{i=Th+1..m} [P(i)/Ht] · ln[P(i)/Ht]
where P(i) is the probability of gray level i, m is the maximum gray level of the image, and Pt and Ht are the sums of the pixel gray-level distribution probabilities of the background and the target in the image, respectively;
(3.2) segmenting the target-area gray-scale image according to the segmentation threshold Th to obtain the target-area segmentation image O(x, y), specifically:
gray levels greater than Th are considered target and gray levels less than Th are considered background, thereby obtaining the target-area segmentation image O(x, y).
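The threshold selection can be sketched with a standard Kapur-style maximum-entropy criterion. The patent's exact entropy expressions are only partially legible in this text, so this is an illustrative stand-in under that assumption (names are ours):

```python
import numpy as np

def max_entropy_threshold(gray):
    """Pick the gray level maximizing background entropy + target entropy
    (a standard Kapur-style criterion used here as an illustrative stand-in
    for the patent's EntropyB + EntropyO maximization)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        pb, po = p[:t].sum(), p[t:].sum()    # background / target probability
        if pb <= 0 or po <= 0:
            continue
        b = p[:t][p[:t] > 0] / pb            # normalized class distributions
        o = p[t:][p[t:] > 0] / po
        h = -(b * np.log(b)).sum() - (o * np.log(o)).sum()
        if h > best_h:
            best_h, best_t = h, t
    return best_t

# Bimodal toy image: dark background around 20-40, bright target around 210-230.
rng = np.random.default_rng(0)
img = np.concatenate([rng.integers(20, 40, 500), rng.integers(210, 230, 100)])
th = max_entropy_threshold(img.astype(np.uint8))
```

On well-separated bimodal data the entropy sum peaks for thresholds between the two modes, so the returned threshold cleanly separates background from target.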
5. The method of any one of claims 1 to 4, characterized in that step (4) is specifically:
extracting, from the radar 2-D echo data and the target-area gray-scale image, the peak point information in the radar 2-D echo data of the target area using a first-order difference method, obtaining the target-area peak point information matrix G(x, y), where:
G(x, y) = 255, if f(x, y) − f(x, y+1) > 0 and f(x, y) − f(x, y−1) > 0
G(x, y) = 0, otherwise
and (x, y) denotes a point in the radar 2-D echo data, x denotes range, y denotes azimuth, and the radar 2-D echo floating-point value at that point is denoted f(x, y).
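The first-order-difference peak extraction of G(x, y) can be sketched as follows, comparing each sample with its two azimuth neighbors (the helper name is ours):

```python
import numpy as np

def peak_matrix(f):
    """First-order-difference peak detection along the azimuth axis (y):
    G(x, y) = 255 where f(x, y) exceeds both azimuth neighbors, else 0."""
    G = np.zeros(f.shape, dtype=np.uint8)
    left = f[:, 1:-1] - f[:, :-2] > 0        # f(x, y) - f(x, y-1) > 0
    right = f[:, 1:-1] - f[:, 2:] > 0        # f(x, y) - f(x, y+1) > 0
    G[:, 1:-1][left & right] = 255
    return G

f = np.array([[0.0, 1.0, 0.5, 2.0, 0.1],
              [0.2, 0.2, 0.2, 0.2, 0.2]])   # second row is flat: no peaks
G = peak_matrix(f)
```

The first row has local maxima at y = 1 and y = 3; the flat second row yields none, so exactly two pixels are set to 255.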
6. The method of any one of claims 1 to 4, characterized in that step (5) is specifically:
performing information fusion on the target-area segmentation image O(x, y) and the target-area peak point information matrix G(x, y) to obtain the fused image R(x, y):
R(x, y) = G(x, y) · O(x, y) / 255
the fused image R(x, y) is a 0/255 binary image, in which pixels with gray level 255 are the peak points inside the target region of the segmentation image; counting these peak points gives K, i.e.
K = Count{(xi, yi) | (xi, yi) ∈ R(x, y), p(xi, yi) = 255}
where p(xi, yi) is the gray value at (xi, yi) in the fused image R(x, y).
7. The method of claim 5, characterized in that step (5) is specifically:
performing information fusion on the target-area segmentation image O(x, y) and the target-area peak point information matrix G(x, y) to obtain the fused image R(x, y):
R(x, y) = G(x, y) · O(x, y) / 255
the fused image R(x, y) is a 0/255 binary image, in which pixels with gray level 255 are the peak points inside the target region of the segmentation image; counting these peak points gives K, i.e.
K = Count{(xi, yi) | (xi, yi) ∈ R(x, y), p(xi, yi) = 255}
where p(xi, yi) is the gray value at (xi, yi) in the fused image R(x, y).
8. The method of any one of claims 1 to 4, characterized in that step (6) is specifically:
(6.1) sorting the peak points in the target-area peak point information matrix by magnitude and selecting the K largest peak points as the target's effective peak point feature information;
(6.2) binarizing the target's effective peak point feature information: the pixels at these K extreme points are assigned gray level 255 and the remaining pixels are assigned 0, obtaining the target effective peak point image.
9. The method of claim 6, characterized in that step (6) is specifically:
(6.1) sorting the peak points in the target-area peak point information matrix by magnitude and selecting the K largest peak points as the target's effective peak point feature information;
(6.2) binarizing the target's effective peak point feature information: the pixels at these K extreme points are assigned gray level 255 and the remaining pixels are assigned 0, obtaining the target effective peak point image.
10. The method of any one of claims 1 to 4, characterized in that step (7) specifically comprises:
(7.1) applying least-squares fitting to the K candidate target points in the target effective peak point image to find the line minimizing the residual sum of squares, i.e. the axial line of the target, which coincides with the diagonal of the target;
(7.2) with the target's length, width, and diagonal dimension as constraints, using a rectangular window with the target's length and width, and sliding the window's diagonal along the target's axial line to find the position on the axial line where the window contains the greatest number of peak points; the position of that window is the target position;
(7.3) marking the peak points outside the window as false-alarm points, with the peak points contained in the window determining the target position.
11. The method of claim 9, characterized in that step (7) specifically comprises:
(7.1) applying least-squares fitting to the K candidate target points in the target effective peak point image to find the line minimizing the residual sum of squares, i.e. the axial line of the target, which coincides with the diagonal of the target;
(7.2) with the target's length, width, and diagonal dimension as constraints, using a rectangular window with the target's length and width, and sliding the window's diagonal along the target's axial line to find the position on the axial line where the window contains the greatest number of peak points; the position of that window is the target position;
(7.3) marking the peak points outside the window as false-alarm points, with the peak points contained in the window determining the target position.
12. The method of claim 10, characterized in that in step (7.1) the axial line of the target is specifically calculated as follows:
for the target pixel point set V, the parameters k and b of the line y = kx + b are estimated so that the residual sum of squares Σ (yi − k·xi − b)² is minimized, where (xi, yi) ∈ V.
13. The method of claim 11, characterized in that in step (7.1) the axial line of the target is specifically calculated as follows:
for the target pixel point set V, the parameters k and b of the line y = kx + b are estimated so that the residual sum of squares Σ (yi − k·xi − b)² is minimized, where (xi, yi) ∈ V.
14. The method of any one of claims 1 to 4, characterized in that step (8) is specifically:
computing the gray-level centroid (X̄, Ȳ) of the target region as the energy centroid of the target region, i.e. the key point of the target, where the gray-level centroid of the target region is computed as:
X̄ = Σ_{(x,y)∈T} x·f(x, y) / Σ_{(x,y)∈T} f(x, y)
Ȳ = Σ_{(x,y)∈T} y·f(x, y) / Σ_{(x,y)∈T} f(x, y)
where f(x, y) is the floating-point value at (x, y) in the radar 2-D echo data and T is the target region.
15. The method of claim 10, characterized in that step (8) is specifically:
computing the gray-level centroid (X̄, Ȳ) of the target region as the energy centroid of the target region, i.e. the key point of the target, where the gray-level centroid of the target region is computed as:
X̄ = Σ_{(x,y)∈T} x·f(x, y) / Σ_{(x,y)∈T} f(x, y)
Ȳ = Σ_{(x,y)∈T} y·f(x, y) / Σ_{(x,y)∈T} f(x, y)
where f(x, y) is the floating-point value at (x, y) in the radar 2-D echo data and T is the target region.
CN201410211693.9A 2014-05-16 2014-05-16 Forward-looking radar imaging sea-surface target key point detection and recognition method Active CN103971127B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410211693.9A CN103971127B (en) 2014-05-16 2014-05-16 Forward-looking radar imaging sea-surface target key point detection and recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410211693.9A CN103971127B (en) 2014-05-16 2014-05-16 Forward-looking radar imaging sea-surface target key point detection and recognition method

Publications (2)

Publication Number Publication Date
CN103971127A CN103971127A (en) 2014-08-06
CN103971127B true CN103971127B (en) 2017-04-26

Family

ID=51240598

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410211693.9A Active CN103971127B (en) 2014-05-16 2014-05-16 Forward-looking radar imaging sea-surface target key point detection and recognition method

Country Status (1)

Country Link
CN (1) CN103971127B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105205450B (en) * 2015-08-24 2018-08-07 辽宁工程技术大学 A kind of SAR image target extraction method based on irregular mark point process
CN106340046B (en) * 2016-08-19 2019-05-10 南京莱斯电子设备有限公司 A kind of radar target position analysis method based on image conversion geography information
CN107340503B (en) * 2017-07-02 2020-11-27 中国航空工业集团公司雷华电子技术研究所 Method for inhibiting false sea surface targets based on digital elevation map
CN107632305B (en) * 2017-09-11 2021-04-09 河海大学 Autonomous sensing method and device for local submarine topography based on profile sonar scanning technology
CN109086815B (en) * 2018-07-24 2021-08-31 中国人民解放军国防科技大学 Floating point number discretization method in decision tree model based on FPGA
CN109087319B (en) * 2018-08-17 2021-07-02 北京华航无线电测量研究所 Mask manufacturing method and system
CN109460764B (en) * 2018-11-08 2022-02-18 中南大学 Satellite video ship monitoring method combining brightness characteristics and improved interframe difference method
CN109765554A (en) * 2018-11-14 2019-05-17 北京遥感设备研究所 A kind of radar foresight imaging system and method
CN110766005B (en) * 2019-10-23 2022-08-26 森思泰克河北科技有限公司 Target feature extraction method and device and terminal equipment
CN111414910B (en) * 2020-03-18 2023-05-02 上海嘉沃光电科技有限公司 Small target enhancement detection method and device based on double convolution neural network
CN111695529B (en) * 2020-06-15 2023-04-25 北京师范大学 X-ray source detection method based on human skeleton key point detection algorithm
CN112215137B (en) * 2020-10-10 2024-04-26 中国电子科技集团公司第十四研究所 Low false alarm target detection method based on region constraint
CN113642650B (en) * 2021-08-16 2024-02-20 上海大学 Multi-beam sonar sunken ship detection method based on multi-scale template matching and adaptive color screening
CN115410370A (en) * 2022-08-31 2022-11-29 南京慧尔视智能科技有限公司 Abnormal parking detection method and device, electronic equipment and storage medium
CN116400351B (en) * 2023-03-21 2024-05-17 大连理工大学 Radar echo image target object processing method based on self-adaptive region growing method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6259396B1 (en) * 1999-08-26 2001-07-10 Raytheon Company Target acquisition system and radon transform based method for target azimuth aspect estimation
CN103197302A (en) * 2013-04-02 2013-07-10 电子科技大学 Target location extraction method applicable to through-the-wall radar imaging

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6259396B1 (en) * 1999-08-26 2001-07-10 Raytheon Company Target acquisition system and radon transform based method for target azimuth aspect estimation
CN103197302A (en) * 2013-04-02 2013-07-10 电子科技大学 Target location extraction method applicable to through-the-wall radar imaging

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
基于机载合成孔径雷达图像的对地目标检测方法研究;王义敏;《中国博士学位论文全文数据库信息科技辑》;20070429;1-105页 *

Also Published As

Publication number Publication date
CN103971127A (en) 2014-08-06

Similar Documents

Publication Publication Date Title
CN103971127B (en) Forward-looking radar imaging sea-surface target key point detection and recognition method
CN107145874B (en) Ship target detection and identification method in complex background SAR image
US6337654B1 (en) A-scan ISAR classification system and method therefor
Yang et al. Ship detection from optical satellite images based on sea surface analysis
Liao et al. Using SAR images to detect ships from sea clutter
US8422738B1 (en) Adaptive automated synthetic aperture radar vessel detection method with false alarm mitigation
CN107808383B (en) Rapid detection method for SAR image target under strong sea clutter
CN107025654B (en) SAR image self-adaptive ship detection method based on global iterative inspection
JP5305985B2 (en) Artifact detection device, artifact detection method, and artifact detection program
CN108171193B (en) Polarized SAR (synthetic aperture radar) ship target detection method based on super-pixel local information measurement
CN101727662A (en) SAR image nonlocal mean value speckle filtering method
CN111476159A (en) Method and device for training and detecting detection model based on double-angle regression
JP2008292449A (en) Automatic target identifying system for detecting and classifying object in water
US10497128B2 (en) Method and system for sea background modeling and suppression on high-resolution remote sensing sea images
CN106646469B (en) SAR ship detection optimization method based on VC Method
CN110765912B (en) SAR image ship target detection method based on statistical constraint and Mask R-CNN
CN106156758B (en) A kind of tidal saltmarsh method in SAR seashore image
CN108765403A (en) A kind of SAR image two-parameter CFAR detection methods under target-rich environment
CN105184804A (en) Sea surface small target detection method based on airborne infrared camera aerially-photographed image
Yang et al. Evaluation and mitigation of rain effect on wave direction and period estimation from X-band marine radar images
EP1515160B1 (en) A target shadow detector for synthetic aperture radar
KR101770742B1 (en) Apparatus and method for detecting target with suppressing clutter false target
JP5294923B2 (en) Artifact detection device, artifact detection method, and artifact detection program
Li et al. An improved CFAR scheme for man-made target detection in high resolution SAR images
CN113011376B (en) Marine ship remote sensing classification method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant