CN112907567A - SAR image ordered artificial structure extraction method based on spatial reasoning method - Google Patents

SAR image ordered artificial structure extraction method based on spatial reasoning method

Info

Publication number
CN112907567A
Authority
CN
China
Prior art keywords
points
inference
distance
sar image
spatial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110296688.2A
Other languages
Chinese (zh)
Other versions
CN112907567B (en)
Inventor
翟玮
王菁晗
肖修来
邓津
武震
张璇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Earthquake Administration Of Gansu Province
Original Assignee
Earthquake Administration Of Gansu Province
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Earthquake Administration Of Gansu Province filed Critical Earthquake Administration Of Gansu Province
Priority to CN202110296688.2A priority Critical patent/CN112907567B/en
Publication of CN112907567A publication Critical patent/CN112907567A/en
Application granted granted Critical
Publication of CN112907567B publication Critical patent/CN112907567B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06T 17/10 — Three-dimensional [3D] modelling; constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G06T 5/70
    • G06T 7/136 — Segmentation; edge detection involving thresholding
    • G06T 7/155 — Segmentation; edge detection involving morphological operators
    • G06T 2207/10004 — Indexing scheme for image analysis or image enhancement; image acquisition modality; still image; photographic image

Abstract

The invention discloses a method for extracting ordered artificial structures from SAR images based on spatial reasoning, which comprises the following steps: first, the SAR image is preprocessed with a fast two-parameter constant false alarm rate (CFAR) algorithm and the FAST corner detection algorithm to obtain feature points of the structures; next, spatial relationships are established between the feature points, the spatial-relationship features of the conventional point-like structures and of the specially shaped planar structures are acquired and modeled, yielding three spatial-relationship parameters and two morphological-feature parameters; finally, the points left unidentified after SAR image preprocessing are judged by spatial reasoning using these parameters to obtain the structure recognition result. The method not only recognizes and reasons about surface structures with distinctive morphological features at high accuracy, but also achieves good recognition and reasoning accuracy for point targets without distinctive morphological features, providing an effective solution for extracting orderly distributed isolated structures from SAR images.

Description

SAR image ordered artificial structure extraction method based on spatial reasoning method
Technical Field
The invention belongs to the technical field of image processing and information extraction, and particularly relates to an SAR image ordered artificial structure extraction method based on a spatial reasoning method.
Background
After a natural disaster (such as an earthquake, landslide or debris flow) occurs, rapidly acquiring effective information about the disaster area is essential for launching emergency rescue operations. Remote sensing satellites can image the ground rapidly, accurately and at large scale and provide important information about the disaster area, but optical remote sensing images are strongly affected by post-disaster weather such as overcast and rainy conditions, which limits their value for disaster-relief decisions. Synthetic aperture radar (SAR) images are comparatively insensitive to atmospheric conditions, have strong penetrating power and do not depend on sunlight, so radar remote sensing has been widely and deeply applied to natural disaster research.
In studies on single-structure recognition from SAR images, some researchers build a semantic model of the individual structure from its imaging and morphological characteristics and then recognize it from the structural elements of that model, so that large individual structures can be extracted accurately and rapidly from high-resolution SAR images. Others extract isolated structures from a pair of high-resolution optical and SAR images by reconstructing the ground object using the complementarity of optical and synthetic aperture radar data. Because the scattering of artificial targets is complex and is affected by sensor parameters such as wavelength, azimuth, spatial resolution and incidence angle, it is difficult to find a specific distribution type that describes them in the SAR image; their distribution characteristics do not match most of the statistical models in current use, and isolated structures are therefore hard to identify and extract with simple methods. In addition, high-resolution SAR data are scarce and cannot meet the rapid-response requirement of disaster emergencies, so using SAR data to rapidly extract post-disaster information is of great value for disaster-reduction research.
The characteristics of artificial structures in SAR images are very similar to those of terrain highlight points. If only the fast two-parameter CFAR algorithm and the FAST corner detection algorithm are applied, a large number of terrain highlight points will inevitably be mixed into the recognition result, lowering the recognition accuracy of the target objects and hindering timely post-disaster information feedback. Accurately distinguishing artificial structures from terrain highlight points in a complex environment is therefore also of great importance for assessing the earthquake situation.
The spatial relationship is one of the important theoretical problems in geographic information systems (GIS) and plays an important role in GIS data modeling, spatial query, spatial analysis, spatial reasoning and comprehensive mapping. To recognize isolated structures quickly and efficiently and to accurately distinguish target ground objects from terrain highlight points, the invention introduces the relevant theories and techniques of spatial relationships and provides a method for rapidly identifying artificial structures in SAR images by means of spatial reasoning.
Disclosure of Invention
In view of the deficiencies pointed out in the background, the invention provides an SAR image ordered artificial structure extraction method based on a spatial reasoning method, aiming to solve the problems of interference from highlighted mountains in the SAR image and of single structures being missed because of SAR imaging characteristics.
To achieve this purpose, the invention adopts the following technical scheme:
a SAR image ordered artificial structure extraction method based on a spatial reasoning method comprises the following steps:
(1) preprocessing the SAR image with a fast two-parameter constant false alarm rate (CFAR) algorithm and with the FAST corner detection algorithm, respectively, to acquire the feature points of conventional point-like structures and the feature points of specially shaped planar structures in the SAR image;
(2) establishing spatial relationships among the feature points of each class to obtain the spatial-relationship features of the two kinds of isolated structures, namely the conventional point-like structures and the specially shaped planar structures;
(3) modeling the spatial-relationship features obtained for the conventional point-like structures to obtain their three spatial-relationship parameters; modeling the spatial-relationship features and the morphological features obtained for the specially shaped planar structures to obtain their three spatial-relationship parameters and two morphological-feature parameters; the three spatial-relationship parameters are the angle T_angle, the offset T_offset and the distance T_distance of the feature points, and the two morphological-feature parameters are the radius C_r and the included angle C_angle of the feature points;
(4) performing spatial-reasoning judgment, under the constraints of gap inference between two points, left-right constraint inference, starting-point inference and end-point inference, on the conventional point-like structures not identified after SAR image preprocessing, using their three spatial-relationship parameters, to obtain the recognition result of the conventional point-like structures; and performing the same spatial-reasoning judgment on the specially shaped planar structures not identified after SAR image preprocessing, using their three spatial-relationship parameters and two morphological-feature parameters, to obtain the recognition result of the specially shaped planar structures.
Preferably, the fast two-parameter Constant False Alarm Rate (CFAR) algorithm preprocesses the SAR image as follows:
firstly, performing first-stage global filtering on the SAR image to filter out most of the clutter;
then, performing second-stage global filtering on the result of the first-stage global filtering: sorting the pixel gray values of the first-stage result, selecting the first k pixel gray values to estimate a potential-target threshold, and generating a binary image containing the regions to be detected;
next, assigning the gray values of the original image to the regions to be detected in the binary image;
finally, applying the local two-parameter CFAR algorithm at each monitoring point of the regions to be detected to achieve rapid detection.
Preferably, the value of k is 65% of the gray value of the image pixel after the first-level global filtering.
Preferably, the first-stage global filtering determines potential targets in the SAR image by the following formulas:
p_fa = ∫_{T_g}^{+∞} f_b(x) dx
I_g1(x, y) = I(x, y) if I(x, y) ≥ T_g, and I_g1(x, y) = 0 otherwise
where p_fa is the constant false alarm rate, T_g is the first-stage global filtering threshold, f_b(x) is the clutter probability distribution function of the ground, I(x, y) is the pixel value of the original image, and I_g1(x, y) is the pixel value after first-stage global filtering.
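The patent does not fix a particular clutter model f_b(x), so the short Python sketch below (not part of the original disclosure) simply estimates the first-stage global threshold T_g as the (1 − p_fa) quantile of the image intensities, which satisfies the integral condition above when f_b is taken as the empirical distribution; the function name and the default p_fa are illustrative assumptions.

```python
# Minimal sketch of the first-stage global filtering, assuming f_b(x) is
# estimated empirically from the image itself (the patent leaves the clutter
# model unspecified).
import numpy as np

def first_stage_global_filter(image: np.ndarray, p_fa: float = 0.05) -> np.ndarray:
    """Zero out pixels below the global CFAR threshold T_g."""
    t_g = np.quantile(image, 1.0 - p_fa)        # p_fa = integral of f_b(x) from T_g to infinity
    return np.where(image >= t_g, image, 0.0)   # I_g1(x, y)
```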
Preferably, the second-stage global filtering judges the potential targets in the first-stage global filtering result by the following criterion:
(equation image in the original: the second-stage global threshold criterion expressed in terms of μ, σ and α)
where μ is the gray mean of the first k pixel gray values, σ is the variance of the first k pixel gray values, and α is the second-stage global filtering coefficient, with α = 0.95.
Preferably, the criterion of the local two-parameter CFAR algorithm for target detection is:
(I(x, y) − μ_b) / σ_b ≥ t ⇒ target; otherwise ⇒ clutter
where μ_b is the mean of the pixel gray values in the background window, σ_b is the variance of the pixel gray values in the background window, and t is the detection parameter controlling the constant false alarm rate, with t = 2.5.
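A hedged sketch of the local two-parameter CFAR test in Python is given below, applied only at the candidate pixels kept by the two global stages; the patent specifies the decision rule and t = 2.5, while the window half-sizes, the square ring-shaped background stencil and the function name are illustrative assumptions.

```python
# Sketch: local two-parameter CFAR over the candidate ("to-be-detected") pixels.
import numpy as np

def local_two_parameter_cfar(image, candidates, t=2.5, guard=4, background=8):
    """Return a boolean map of detected targets.

    image      -- 2-D array of original gray values
    candidates -- boolean map of pixels to test
    guard      -- half-size of the guard window (assumed)
    background -- half-size of the background window (assumed)
    """
    detected = np.zeros_like(candidates, dtype=bool)
    rows, cols = image.shape
    for r, c in zip(*np.nonzero(candidates)):
        r0, r1 = max(r - background, 0), min(r + background + 1, rows)
        c0, c1 = max(c - background, 0), min(c + background + 1, cols)
        patch = image[r0:r1, c0:c1].astype(float)
        # mask out the guard window so that only background pixels contribute
        mask = np.ones_like(patch, dtype=bool)
        gr0, gc0 = max(r - guard, r0) - r0, max(c - guard, c0) - c0
        gr1, gc1 = min(r + guard + 1, r1) - r0, min(c + guard + 1, c1) - c0
        mask[gr0:gr1, gc0:gc1] = False
        mu_b = patch[mask].mean()
        sigma_b = patch[mask].std() + 1e-9
        detected[r, c] = (image[r, c] - mu_b) / sigma_b >= t   # two-parameter CFAR rule
    return detected
```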
Preferably, the method of gap inference between two points is:
firstly, acquiring the anomalous proximity distance E_distance and the line on which the anomaly and the two points before and after it lie, obtaining the three parameters of that line, namely the angle T_angle, the offset T_offset and the distance T_distance, and solving the linear equation in two-dimensional space; then comparing the distance among the three parameters with the anomalous distance value to obtain, under the following condition, the number of missed points within the anomalous distance;
(equation image in the original: the condition relating E_distance and T_distance that yields the number of missed points)
then, taking the offset T_offset and the distance T_distance as the constraint range, extending along the inference direction according to the angle T_angle to obtain the inferred coordinates of the missed points, and arranging the missed points into the line according to the magnitudes of the horizontal and vertical coordinates of the inferred coordinates.
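Because the exact counting condition is given only as an image in the original, the Python sketch below illustrates one plausible reading: the number of missed points is taken as round(E_distance / T_distance) − 1 and their coordinates are interpolated along the line direction; the rounding rule and function name are assumptions.

```python
# Sketch of the gap inference between two points under the stated assumptions.
import numpy as np

def infer_gap_points(p_before, p_after, t_distance):
    """Return inferred (x, y) coordinates of points missed between two detections."""
    p_before = np.asarray(p_before, dtype=float)
    p_after = np.asarray(p_after, dtype=float)
    e_distance = np.linalg.norm(p_after - p_before)       # anomalous spacing E_distance
    n_missing = int(round(e_distance / t_distance)) - 1   # assumed counting rule
    if n_missing <= 0:
        return []
    direction = (p_after - p_before) / e_distance          # unit vector along T_angle
    return [tuple(p_before + direction * t_distance * (i + 1)) for i in range(n_missing)]
```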
Preferably, the left-right constraint inference method is:
first, acquiring the number of points P_num on a certain line, then acquiring the numbers of points P_lnum and P_rnum of the two lines adjacent to it, and performing constraint inference with the following judgment conditions:
if |P_lnum − P_rnum| ≤ 2, then P_num = [0.5·(P_lnum + P_rnum)]
if |P_lnum − P_rnum| > 2, then P_num = P_num
where [0.5·(P_lnum + P_rnum)] denotes the largest integer not exceeding 0.5·(P_lnum + P_rnum).
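The rule above translates almost directly into code; the short Python sketch below is only an illustration of the stated judgment conditions.

```python
# Sketch of the left-right constraint rule.
import math

def left_right_constraint(p_num: int, p_lnum: int, p_rnum: int) -> int:
    """p_num: points on the current line; p_lnum / p_rnum: points on the two adjacent lines."""
    if abs(p_lnum - p_rnum) <= 2:
        # largest integer not exceeding 0.5 * (P_lnum + P_rnum)
        return math.floor(0.5 * (p_lnum + p_rnum))
    return p_num
```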
Preferably, the starting-point inference method is:
with the help of the left-right constraint inference conditions, first acquiring the sequence with the most points in each direction and, taking the starting point of that sequence as the reference point, solving the three parameters of the line connecting the starting points, namely the angle F_angle, the offset F_offset and the distance F_distance, together with its linear equation; then taking the two directions F_angle and T_angle as the inference directions and F_offset, F_distance and T_offset, T_distance as the constraints, solving the missing starting points and updating them into the linear sequences.
Preferably, the end-point inference method is:
determining the number of missing points in a sequence by means of the left-right constraint inference conditions, and then inferring the remaining end points from the linear characteristics of the sequence and its conventional three parameters, namely the angle T_angle, the offset T_offset and the distance T_distance.
Compared with the prior art, the invention has the following beneficial effects:
(1) The method first performs preliminary target point or region identification on the SAR image with the fast two-parameter constant false alarm algorithm and the FAST corner detection algorithm, and then extracts and infers the orderly distributed isolated structures by means of spatial relationships, morphological features and spatial reasoning. It not only recognizes and reasons about surface structures with distinctive morphological features at high accuracy, but also achieves good accuracy for point targets without unique morphological features, providing an effective solution for recognizing orderly distributed isolated structures in SAR images.
(2) After two-stage global filtering of the image with the fast two-parameter constant false alarm algorithm, the method assigns the gray values of the original image to the detection regions in the binary image so as to recover the real condition of the ground objects, and then applies the local two-parameter CFAR algorithm at each monitoring point of the detection regions, thereby achieving rapid detection.
Drawings
Fig. 1 is a flowchart of a method for extracting feature points of an ordered artificial structure of an SAR image based on a spatial inference method according to an embodiment of the present invention.
Fig. 2 is a flowchart of a fast two-parameter CFAR algorithm according to an embodiment of the present invention.
FIG. 3 is a SAR image of a region of interest provided by an embodiment of the present invention.
Fig. 4 is a schematic diagram of a two-parameter CFAR detection window provided in the embodiment of the present invention.
Fig. 5 is a processing result of the fast two-parameter CFAR algorithm according to the embodiment of the present invention.
Fig. 6 is a schematic diagram of a FAST corner detection algorithm according to an embodiment of the present invention.
Fig. 7 shows the result of FAST corner detection algorithm provided in the embodiment of the present invention.
Fig. 8 is the electric tower and wind power generator result data after SAR image preprocessing provided by the embodiment of the invention.
Fig. 9 shows the traversal direction and the traversal initial point detection window when the spatial relationship is established for the feature points according to the embodiment of the present invention.
Fig. 10 shows the result of the starting point detection provided by the embodiment of the present invention.
Fig. 11 is an initial linear detection result of the electric tower provided by the embodiment of the invention.
FIG. 12 is a cross shape feature parameter diagram provided by an embodiment of the present invention.
Fig. 13 is an initial linear detection result of the wind turbine provided by the embodiment of the invention.
Fig. 14 is a schematic diagram of a detection result and an abnormality of the electric tower under the three-parameter constraint provided by the embodiment of the invention.
Fig. 15 is a subsequent detection result of the electric tower provided by the embodiment of the present invention.
Fig. 16 is a schematic diagram of a detection result and an abnormality of the wind turbine under the constraint of three parameters according to the embodiment of the present invention.
FIG. 17 is a subsequent testing result of the wind turbine provided by the embodiment of the present invention.
Fig. 18 is the result of the gap inference between two points provided by the embodiment of the present invention.
Fig. 19 is a starting point update reasoning result provided by the embodiment of the present invention.
Fig. 20 is an end point inference result provided by an embodiment of the invention.
FIG. 21 is a spatial reasoning final result provided by embodiments of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides an SAR image ordered artificial structure extraction method based on a spatial reasoning method, which realizes the recognition of two kinds of isolated structures in SAR images, namely conventional point-like structures and specially shaped planar structures. In general, buildings on the ground are clustered in densely populated places, whereas isolated buildings are mostly artificial structures without a residential function, such as power towers and wind turbines; they are mostly distributed in sparse areas and are surrounded essentially by background ground objects such as grass and bare soil, so they stand out prominently in the SAR image. However, in an SAR image, pixels with the same gray value do not necessarily represent the same ground object. Furthermore, when terrain highlight elements such as dry riverbeds and bare rocks are distributed around the structures in the study area, large areas of randomly distributed highlighted white noise appear in the SAR image, and the isolated structures mixed into the image are therefore difficult to extract. For this reason the invention introduces a spatial reasoning method and provides an SAR image ordered artificial structure feature point extraction method based on it; the flow chart is shown in Fig. 1.
Firstly, the SAR image is preprocessed with the fast two-parameter constant false alarm rate (CFAR) algorithm and with the FAST corner detection algorithm, respectively, and the feature points of the conventional point-like structures and of the specially shaped planar structures in the SAR image, i.e. the corresponding feature data, are acquired;
then, the spatial relationships of the feature points are established to obtain the spatial-relationship features corresponding to the feature points of the conventional point-like structures and of the specially shaped planar structures;
secondly, modeling the obtained spatial relationship characteristics of the conventional point-like structure to obtain a spatial relationship three-parameter angle T of the conventional point-like structureangleTo offset from each otherDisplacement ToffsetAnd a distance Tdistance(ii) a Respectively modeling the obtained spatial relation characteristics and morphological characteristics of the special-shaped planar structure to obtain a spatial relation three-parameter angle T of the special-shaped planar structureangleOffset ToffsetAnd a distance TdistanceAnd morphological characteristics of the two-parameter radius CrAnd an included angle Cangle
finally, spatial-reasoning judgment is performed, under the constraints of gap inference between two points, left-right constraint inference, starting-point inference and end-point inference, on the conventional point-like structures not identified after SAR image preprocessing, using their three spatial-relationship parameters, to obtain the recognition result of the conventional point-like structures; the same spatial-reasoning judgment is performed on the specially shaped planar structures not identified after SAR image preprocessing, using their three spatial-relationship parameters and two morphological-feature parameters, to obtain the recognition result of the specially shaped planar structures. In the SAR image recognition process, the target ground object is also geographically localized; it has attributes such as position, shape and angle and can likewise be described by a discrete target set or a field. The ground objects recognized in the SAR image are therefore considered to possess spatial properties.
The traditional two-parameter CFAR detection algorithm can handle locally complex conditions well by setting a target window, a guard window and a background window, which matches the distribution of isolated structures in the study area of the invention. However, it needs to traverse the whole image; for a large SAR image this consumes a large amount of computing resources and time, making real-time detection difficult and hindering the rapid acquisition of disaster information after a natural disaster. The invention therefore proposes an improved fast two-parameter CFAR algorithm, whose flow chart is shown in Fig. 2. Firstly, first-stage global filtering is performed on the SAR image to filter out most of the clutter; then second-stage global filtering is applied to the result of the first stage: the pixel gray values of the first-stage result are sorted, the first k pixel gray values are selected to estimate a potential-target threshold, and a binary image containing the regions to be detected is generated, k preferably being 65% of the image pixel gray values after the first-stage global filtering; the gray values of the original image are then assigned to the regions to be detected in the binary image; finally, the local two-parameter CFAR algorithm is applied at each monitoring point of the regions to be detected to achieve rapid detection.
The fast two-parameter CFAR algorithm of the invention detects point-like ground objects but is not suitable for ground objects with a distinctive morphological structure; the FAST corner detection algorithm is therefore introduced to further process the image, which preserves the important features of the image while effectively reducing the data volume, so that the image retains a higher information content.
Example 1
1. Experimental data and study area
The method takes the southwest suburb of Yumen city, Gansu province as a research area, uses ALOS-2 satellite data, and adopts a single polarization mode in the horizontal direction. The specific parameters of the satellite are shown in table 1:
(Table 1: ALOS-2 satellite parameters — image in the original)
Hundreds of wind turbines, thousands of supporting facilities, dry riverbeds, exposed rock masses and other ground objects are distributed in the study area. The wind turbines and their supporting electric towers are distributed in a regular linear pattern, with terrain highlight elements such as dry riverbeds and shelters mixed among them. An area of about 25 square kilometres was selected for the study; the SAR image of the study area is shown in Fig. 3.
2. Method
Structures such as wind turbines, electric towers and a power plant are interspersed in the study area with exposed rock masses, dry riverbeds and other terrain highlight ground objects, the terrain highlight points being interleaved with the wind turbines and electric towers. If only the fast two-parameter CFAR algorithm and the FAST corner detection algorithm were used, a large number of terrain highlight points would be mixed into the recognition result, the recognition accuracy of the target objects would be low, and accurate post-disaster information feedback would be hindered. The artificial structures in the area comprise conventional point-like structures and specially shaped planar structures: the electric towers are recognized as conventional point-like structures and the wind turbines as specially shaped planar structures.
2.1 fast two-parameter CFAR Algorithm
The structures in the study area have little influence on the ground clutter, so background dominance is assumed in the SAR image and a global CFAR detection algorithm is adopted; the formulas for judging potential targets are:
p_fa = ∫_{T_g}^{+∞} f_b(x) dx
I_g1(x, y) = I(x, y) if I(x, y) ≥ T_g, and I_g1(x, y) = 0 otherwise
where p_fa is the constant false alarm rate, T_g is the first-stage global filtering threshold, f_b(x) is the clutter probability distribution function of the ground, I(x, y) is the pixel value of the original image, and I_g1(x, y) is the pixel value after first-stage global filtering. To detect as many potential targets as possible, a relatively high constant false alarm rate is set. The image after first-stage global filtering is assumed to be target-dominant; its pixels are then sorted by gray value from high to low and the gray values of the first k pixels are selected to estimate the target threshold, k here being 65% of the pixel values of the image after first-stage filtering. The judgment rule of the second-stage filtering for potential targets is:
(equation image in the original: the second-stage global threshold criterion expressed in terms of μ, σ and α)
where μ is the gray mean of the first k pixel values, σ is the variance of the first k pixel values, and α is the second-stage global filtering coefficient; its optimal value, obtained through repeated experiments, is α = 0.95, with which more targets can be detected.
After the two-stage global filtering, the number of clutter pixels in the whole image is greatly reduced, but for the pixels still to be detected the total amount of clutter remains considerable, so to guarantee the accuracy of target detection the local two-parameter CFAR algorithm is further used to re-identify the potential targets. During detection it was found that, as the detection window of the two-parameter CFAR algorithm moves, the background window contains data already examined in the previous step, the amount of repeated data being W_b(W_b − 1) − W_p(W_b + 1); this can therefore be reused in the algorithm, simplifying the traversal and reducing the computation time. The window layout is shown in Fig. 4.
The judgment criterion of the local two-parameter CFAR for a target is:
(I(x, y) − μ_b) / σ_b ≥ t ⇒ target; otherwise ⇒ clutter
where μ_b is the mean of the pixel gray values in the background window, σ_b is the variance of the pixel gray values in the background window, and t is the detection parameter controlling the constant false alarm rate; its value is determined by the best result of repeated experiments and is preferably 2.5.
The processing result of the fast two-parameter CFAR algorithm is shown in Fig. 5. The change from the original data to the first-stage global filtering result is not very obvious, because the pixel gray values in the original image are generally high and the terrain highlights strongly affect the structures. Among the structures, the electric towers distributed around the edges are almost indistinguishable from the terrain highlights in the upper part of the image, and the wind turbines in the middle of the image are mixed with their own reflections and refractions, so their exact positions are hard to see. After the second-stage global filtering the terrain highlights are essentially removed, leaving only parts whose brightness is close to that of the structures; the power plant structure at the bottom of the image is retained for the same reason. After the subsequent processing by the local two-parameter CFAR algorithm, the highlighted regions of the structures break up into scattered point distributions and the morphological features of the wind turbines are also dismantled. In addition, a small number of terrain highlight points are interleaved with the electric towers, which interferes with the target recognition accuracy. Nevertheless, the electric tower structures distributed as points in the image are completely preserved, which meets expectations.
2.2 FAST corner detection Algorithm
Firstly, a pixel P is selected in the image and its pixel value I_P is obtained; then a threshold t is defined, and a discretized Bresenham circle of radius r = 5 pixels is drawn with P as the centre; its boundary provides 20 points for comparison, and t = 0.8 × I_P. The detection scheme is shown in Fig. 6.
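The Python sketch below illustrates a FAST-style test of this kind; the 20 circle offsets approximating a radius-5 Bresenham circle, the contiguity requirement and the border guard are illustrative assumptions, and an off-the-shelf FAST detector could equally be used instead.

```python
# Sketch of a FAST-style corner test with t = 0.8 * I_P on a radius-5 circle.
import numpy as np

# 20 offsets roughly sampling a radius-5 Bresenham circle, ordered around the
# circle (illustrative, not the exact offset set used in the patent)
CIRCLE = [(0, 5), (2, 5), (3, 4), (4, 3), (5, 2), (5, 0), (5, -2), (4, -3), (3, -4),
          (2, -5), (0, -5), (-2, -5), (-3, -4), (-4, -3), (-5, -2), (-5, 0),
          (-5, 2), (-4, 3), (-3, 4), (-2, 5)]

def is_fast_corner(image, r, c, ratio=0.8, min_contiguous=15):
    """Test pixel (r, c); min_contiguous = 15 of 20 is an assumed contiguity requirement."""
    rows, cols = image.shape
    if r < 5 or c < 5 or r >= rows - 5 or c >= cols - 5:   # stay inside the border
        return False
    i_p = float(image[r, c])
    t = ratio * i_p                                         # t = 0.8 * I_P
    ring = np.array([float(image[r + dr, c + dc]) for dr, dc in CIRCLE])
    for flags in (ring > i_p + t, ring < i_p - t):          # much brighter / much darker
        run, best = 0, 0
        for f in np.concatenate([flags, flags]):            # wrap around the circle
            run = run + 1 if f else 0
            best = max(best, run)
        if best >= min_contiguous:
            return True
    return False
```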
As shown in Fig. 7, the cross-shaped feature of the wind turbine is displayed completely; however, as Fig. 7(b) shows, although the target objects are more prominent, there are many clutter points around them, so gray-value screening and connected-region screening are added in the subsequent processing.
2.2.1 Gray-value screening in FAST corner detection
Because the gray values of the terrain highlight points differ from those of the target structures, a threshold can be selected to distinguish them: above the threshold lie the target objects, below it the terrain highlight points. All gray values in the FAST corner detection result image are sorted, the pixel values of the top 50%, 55%, 60%, 65% and 70% are taken in turn to calculate the mean and variance, and the value with the smallest variance and little difference from the mean is selected as the threshold. As shown in Fig. 7(c), the screening result indicates that the threshold is representative and the screening of the image is accurate, but some non-target terrain highlights and power plant buildings at the bottom of the image are also retained and require further processing.
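One way to read this selection rule is sketched below in Python: each candidate top fraction is scored by its variance plus the gap between its minimum and its mean, and the cutoff value of the best-scoring fraction becomes the threshold; this scoring is an assumed interpretation of the rule, not a formula given in the patent.

```python
# Sketch: choose a gray-value threshold from the top 50%-70% of sorted values.
import numpy as np

def select_gray_threshold(values, fractions=(0.50, 0.55, 0.60, 0.65, 0.70)):
    values = np.sort(np.asarray(values, dtype=float))[::-1]   # descending gray values
    best_score, best_threshold = None, None
    for frac in fractions:
        top = values[: max(int(len(values) * frac), 1)]
        score = top.var() + abs(top.min() - top.mean())       # small variance, close to the mean
        if best_score is None or score < best_score:
            best_score, best_threshold = score, top.min()
    return best_threshold                                      # values above it are kept as targets
```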
2.2.2 Connected-region screening in FAST corner detection
A connected component generally refers to an image region composed of adjacent foreground pixels having the same pixel value. The invention uses seed filling to obtain the connected regions of the image. In this study, requiring identical pixel values around the seed would make the screening result fragmented and the shape features indistinct, so a range threshold F_t is set; the threshold is 40% to 100% of the gray value of the starting pixel. To make the wind-turbine features fuller, eight-connectivity (adjacency in eight directions) is adopted. The regions are then screened by the number of pixels in each connected region; as shown in Fig. 7(d), the connected-region screening result is relatively complete, but because of the limitations of the gray-value screening in the previous step, structures of the plant area at the bottom of the image still remain, and terrain highlight points with gray values similar to those of the wind turbines are mixed around them.
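A hedged Python sketch of this screening step is given below: a region is grown from a seed by breadth-first seed filling with eight-connectivity, a pixel joins the region when its gray value lies between 40% and 100% of the seed value (the range threshold F_t), and small regions are then discarded; the minimum region size is an assumption.

```python
# Sketch: seed filling with an intensity range threshold, then size screening.
from collections import deque
import numpy as np

def grow_region(image, seed, low_ratio=0.4, high_ratio=1.0):
    """Grow an 8-connected region from `seed` = (row, col)."""
    sr, sc = seed
    seed_val = float(image[sr, sc])
    lo, hi = low_ratio * seed_val, high_ratio * seed_val
    visited = np.zeros(image.shape, dtype=bool)
    visited[sr, sc] = True
    region, queue = [], deque([seed])
    while queue:
        r, c = queue.popleft()
        region.append((r, c))
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):                      # eight-connected neighbourhood
                rr, cc = r + dr, c + dc
                if (0 <= rr < image.shape[0] and 0 <= cc < image.shape[1]
                        and not visited[rr, cc] and lo <= image[rr, cc] <= hi):
                    visited[rr, cc] = True
                    queue.append((rr, cc))
    return region

def keep_large_regions(regions, min_pixels=5):         # min_pixels is an assumed size bound
    return [reg for reg in regions if len(reg) >= min_pixels]
```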
2.3 spatial reasoning method
2.3.1 establishment of the starting Point
The result of the fast two-parameter CFAR algorithm and the result of the FAST corner detection are combined through a series of operations such as intersection, negation and masking to obtain two data sets: the electric tower data surrounding the image (Fig. 8(a)) and the wind turbine data with distinctive morphological features distributed in the middle of the image (Fig. 8(b)). To establish the spatial relationships of the two data sets in Fig. 8, the traversal direction is determined first: because the electric towers surround the image, their data are traversed from the centre outwards, while the wind turbines are distributed in series, so a top-down traversal is chosen for them. The traversal starting point is then determined. The radius of the starting-point detection window is fixed at 1 pixel unit and an extension traversal is carried out in 16 directions around a target unit P, the target unit being the point of maximum gray value of a connected region; the number of extensions is obtained by rounding according to the distance P_d between connected regions. The extension traverses to the boundary; if a non-zero gray value is found in some direction, the connected region in which it lies is determined, and the distance P_d2 between the two connected regions and the angle P_angle of the line connecting them are calculated. Taking the opposite direction P_(180°+angle) of the angle P_angle and the distance P_d2 as constraints, the extension traversal is repeated; if for some point the extension traversal in the P_angle or P_(180°+angle) direction yields only zeros, that point can be judged to be a traversal starting point. The extension-traversal window is shown in Fig. 9.
The detection window of Fig. 9 makes it possible to detect the traversal starting points more quickly. The detection result for the data of Fig. 8 is shown in Fig. 10. Figs. 10(a) and (b) show that the detection result is relatively complete and the small number of wind turbines are all detected. In the electric tower result, because of the outputs of the fast two-parameter CFAR algorithm and the FAST corner detection algorithm, some towers are confused with terrain highlights and two or more extremely close points appear in some sequences, as in Figs. 10(d) and (e); this would seriously interfere with the target recognition accuracy and the subsequent spatial reasoning. In addition, some towers and wind turbines are not detected because of the image gray values or the limitations of the connected regions, as in Fig. 10(c), where the linear point sequence lacks a part in the middle; this requires further processing.
Through the establishment of the starting points, the basic spatial characteristics of each linearly distributed group of structures are determined: from these positions, spatial information such as the distribution angle and spacing of the linear distribution can be obtained, and this information can then be used to infer unknown structures that are not shown.
2.3.2 establishment of spatial relationships
After the starting points are determined, the direction of the linear distribution of the structures must be searched for; if the search were performed according to distance alone, the detection result would be a mixture of targets and terrain highlight points and the recognition error would be very large. Because the features of the electric towers and of the wind turbines differ and require different feature parameters, the invention divides the image into two parts and establishes a spatial relationship model for each.
2.3.2.1 electric tower space relation model establishment under three-parameter constraint
The most basic parameters of a line in two-dimensional space are its slope and offset, so the search for the electric towers uses three spatial parameters: the angle T_angle, the offset T_offset and the distance T_distance. Because the electric towers are distributed around the edges of the image, the image is divided into upper, lower and right parts, which are traversed and searched separately. The starting-point detection window established in the previous step is used to perform an extension traversal from each starting point. If a non-zero value is met and the number of extension steps is larger than the diameter of the starting point's connected region, the connected region containing that non-zero pixel is the next target point; the slope k and offset of the line connecting the two points are then determined from their coordinates in the image. The mode of the obtained slopes is taken, the corresponding angle is calculated, and this angle ±5° is used as the reference value for the subsequent search direction. The initial electric tower detection result is shown in Fig. 11(a); to make the result easier to read, the starting points are drawn as 'o' and the second retrieved point as 'x'. Some deviations occur in the initial detection: for example, in the box in Fig. 11(b), the correct next point in the sequence is farther away than the last point of the adjacent line, so its identification fails. This is why the mode of the angle is determined beforehand as the subsequent direction, so that the error in Fig. 11(b) can be avoided and the accuracy of subsequent target identification improved. After calculation, T_angle of the electric towers above the image ranges from 45° to 50°; T_angle of the electric towers on the right of the image ranges from −68° to −70° and from −75° to −80°; T_angle of the electric towers below the image ranges from 0° to 3°; and the distance T_distance between electric towers is between 18 and 20 pixels.
2.3.2.2 wind driven generator space relation model establishment under three-parameter and two-parameter constraint
The wind turbines are few in number and singly distributed, so linear distribution feature modeling can be carried out directly, with a top-down search direction. However, there are many highlight points around the wind turbines, which affect the recognition result and lead to too many or wrong target points; therefore the distinctive cross-shaped morphological feature of the wind turbine is added to the three search parameters used for the electric towers as an additional search restriction. Two morphological parameters are added for the cross feature: the radius C_r and the included angle C_angle. The radius C_r is the minimum distance between the point of maximum central gray value of the morphological feature and the outermost ends of the transverse and longitudinal edges, and the included angle C_angle is the angle between adjacent edges of the morphological feature; for the wind turbine, C_angle is the angle between the transverse and longitudinal edges of the cross. The cross-shaped connected region of the starting point is scaled and its morphological skeleton extracted, and the minimum radius and the cross angle are then obtained from the skeleton of the connected region. The calculated C_angle of the cross feature lies between 78° and 81°, C_r between 7 and 9 pixels, and the distance T_distance between adjacent cross features between 200 and 220 pixels. The two-parameter diagram of the cross feature is shown in Fig. 12.
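A possible measurement of the two cross parameters is sketched below in Python using scikit-image's skeletonize; treating the brightest pixel as the cross centre, taking C_r as the minimum centre-to-endpoint distance and C_angle as the smallest angle between adjacent skeleton arms are interpretations of the description above rather than the patent's exact procedure.

```python
# Sketch: estimate C_r and C_angle of a cross-shaped connected region.
import numpy as np
from skimage.morphology import skeletonize

def cross_parameters(mask, gray):
    """mask: boolean cross-shaped region; gray: original gray values (same shape)."""
    skel = skeletonize(mask)
    ys, xs = np.nonzero(skel)
    # end points = skeleton pixels with exactly one skeleton neighbour
    ends = [(y, x) for y, x in zip(ys, xs)
            if skel[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2].sum() - 1 == 1]
    if len(ends) < 2:
        return None, None
    cy, cx = np.unravel_index(np.argmax(np.where(mask, gray, -np.inf)), gray.shape)
    centre = np.array([cy, cx], dtype=float)
    arms = [np.asarray(e, dtype=float) - centre for e in ends]
    c_r = min(np.linalg.norm(a) for a in arms)                     # radius C_r
    angles = sorted(np.degrees(np.arctan2(a[0], a[1])) % 360 for a in arms)
    gaps = [angles[(i + 1) % len(angles)] - angles[i] for i in range(len(angles))]
    gaps[-1] += 360                                                # wrap around
    c_angle = min(gaps)                                            # angle between adjacent arms
    return c_r, c_angle
```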
With the two morphological-feature parameters and the three conventional parameters as detection limits, the second point of each wind turbine can be detected more accurately. For comparison of results, the three conventional parameters are treated as optional, only two of them needing to be satisfied in the detection, whereas both parameters of the cross feature must be satisfied. The constraints on the two cross-feature parameters are:
(equation image in the original: the admissible ranges of C_r and C_angle for the cross feature)
The result is shown in Fig. 13(a); two situations appear in the detection result. The first is the normal case shown in Fig. 13(b), which satisfies the three conventional parameters as well as the two cross-feature parameters. The second is the abnormal case shown in Fig. 13(c), which satisfies the two cross-feature parameters and two of the three conventional parameters; when this abnormal result is enlarged, although the distance parameter among the three conventional parameters is not satisfied, the detection result is the one most consistent with and closest to the cross feature, whereas the point closest in distance barely reflects the cross feature at all and forms only a transverse line. In other words, under the constraint of the conventional three parameters and the two cross-feature parameters, the cross-feature constraint gives the better detection effect.
2.3.3 detecting remaining points from spatial relationships
Based on the detection result obtained in the previous step, a subsequent search is carried out for the remaining linearly distributed points; the electric towers are searched with the three conventional parameters as constraints, and the final electric tower detection result is shown in Fig. 14. After enlargement it can be seen that the search basically stops when the distance parameter T_distance or the angle parameter T_angle no longer satisfies the search condition, as in Figs. 14(b) and (c): in Fig. 14(b) the distance between the previous and the next point is too long and does not conform to the set distance, so the search stops; in Fig. 14(c) the angle between the previous and the next point deviates too much from the constrained angle, so the search stops. With this detection result, an equation of the linear distribution of each electric tower line can be established in two-dimensional space; the remaining points are then substituted into each equation and a threshold is set to limit the error, so that a point can be assigned to its distribution line and the detection of the whole line completed. In this experiment the error was limited to 5 pixels; the result is shown in Fig. 15(a), and the search result is almost entirely accurate. Moreover, because of the error limit, the terrain highlight similar to the target structure in Fig. 15(b) is eliminated and does not appear in the detection result.
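The assignment of the remaining points can be sketched as follows in Python: each fitted distribution line is stored as y = k·x + b in image coordinates and a point is attached to the closest line when its perpendicular residual is within the error threshold (5 pixels in this experiment); the use of the perpendicular rather than the vertical residual, and the data structures, are assumptions.

```python
# Sketch: assign remaining points to fitted distribution lines with an error limit.
import numpy as np

def assign_points_to_lines(points, lines, max_error=5.0):
    """points: iterable of (x, y); lines: list of (k, b) for y = k*x + b."""
    assignment = {i: [] for i in range(len(lines))}
    for x, y in points:
        errors = [abs(y - (k * x + b)) / np.sqrt(1.0 + k * k) for k, b in lines]
        best = int(np.argmin(errors))
        if errors[best] <= max_error:      # terrain highlights exceed the limit and are dropped
            assignment[best].append((x, y))
    return assignment
```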
The subsequent detection of the wind turbines has more constraint parameters, only two of the three conventional parameters being required, so the detection result is very accurate and even wind turbines whose linear distribution deviates because of the terrain can be identified. The recognition result is shown in Fig. 16(a). Fig. 16(b) shows the strong constraining effect of the cross feature: although the third point is far from the distribution line, it is still accurately identified because of its own cross feature; in Fig. 16(c), by contrast, fewer points are identified because the cross-feature constraint is not satisfied. In the subsequent detection only the error value is limited to a certain range; if the two cross-feature parameters were used as the limit, detections would be missed and the recognition accuracy affected. The overall result is shown in Fig. 17(a). In addition, it was found in the results that this type of cross feature in Figs. 17(b) and (c) involves two ground objects, at the wrong position (left side) of the cross feature, and it is the error constraint that allows the correct one to be expressed. If the position were judged only from the cross feature, the error of the detection result would be too large.
2.3.4 implementation of spatial inference method
(1) Inference of a gap between two points
In the image after the previous detection step it is found that in some lines the spacing between certain two adjacent points is too long, basically longer than the spacing between other adjacent points; this is because the gray value of the point in question is lowered by the influence of surrounding ground objects, so it is missed by both the fast two-parameter CFAR algorithm and the FAST corner detection algorithm. Therefore these anomalous proximity distances E_distance, the two points before and after each anomaly, and the line on which they lie are acquired first, and the conventional three parameters of the line, the angle T_angle, the offset T_offset and the distance T_distance, are obtained together with the linear equation in two-dimensional space. The distance among the three parameters is compared with the anomalous distance value to obtain the number of points that may exist within the anomalous distance, under the following condition:
(equation image in the original: the condition relating E_distance and T_distance that yields the number of missed points)
Once the number of missed points is determined, they are inferred under the triple limitation of the angle T_angle, the offset T_offset and the distance T_distance: the distance and offset are used as the constraint range and the angle extension as the inference direction, the inferred coordinates P_x, P_y of the missed points are obtained, and the points are arranged into the distribution line according to the magnitudes of their horizontal and vertical coordinates. To improve accuracy, the inferred coordinates are substituted into the linear equation of the distribution line for verification; if the error is larger than 2 grid cells, the average of the inferred coordinates and the verified coordinates is taken as the new inferred coordinates so as to improve the accuracy of the gap inference between the two points. The result is shown in Fig. 18.
(2) Left-right constraint reasoning
Among the electric towers distributed around the corners of the image there may be three similar lines at the angle transition whose target points are few and unlike those of the surrounding distribution lines, and these lines may also contain missed points; in addition, because of the terrain, the number of electric towers in the distribution lines at the upper left and at the bottom of the image decreases from right to left. If the inference stopped at the previous step, many points affected by the terrain could not be inferred, which would greatly reduce the target extraction accuracy. For this purpose a left-right constraint condition is proposed: first the number of points P_num of a certain line is acquired, then the numbers of points P_lnum and P_rnum of the two adjacent lines; if P_lnum and P_rnum differ little or are equal, P_num is assumed to be their average. The specific judgment conditions are:
if |P_lnum − P_rnum| ≤ 2, then P_num = [0.5·(P_lnum + P_rnum)]; if |P_lnum − P_rnum| > 2, then P_num remains unchanged, where [·] denotes the largest integer not exceeding the enclosed value.
the judgment condition also provides a basis for subsequent inference of the starting point and the end point.
(3) Starting point reasoning
Because of the terrain of the study area, the number of electric towers distributed in certain regions decreases, and inferring an uncertain number of points would introduce large errors; a method of starting-point inference using the left-right constraint inference condition is therefore proposed. Like other conventional points, a starting point may go undetected because of terrain highlights, the terrain itself, or a high gray value at the starting point. With the left-right constraint inference proposed in the previous step, all distribution lines are traversed and it is judged whether a line is one with a decreasing number of points or a corner line, and the points that should exist are judged through the constraint of the inference condition. Because the electric towers are regularly distributed, the starting points of the distribution lines themselves form a new line, and the hidden starting points can be obtained from this line in combination with the distribution line of each sequence. Accordingly, the sequence with the most points in each direction is acquired and its starting point is taken as the reference point, and the three parameters and the linear equation of the line connecting the starting points are solved, giving the conventional three parameters of the starting points: the angle F_angle, the offset F_offset and the distance F_distance. Then, with the two directions F_angle and T_angle as the inference directions and F_offset, F_distance and T_offset, T_distance as the constraint conditions, the missing starting points are solved and updated into the linearly distributed point sequences, completing the fuzzy inference of the starting points. The inference result is shown in Fig. 19.
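As a rough illustration of this starting-point inference, the Python sketch below fits a reference line through the already-detected starting points and interpolates candidates into gaps along it, keeping only candidates that stay close to the fitted line; the use of np.polyfit (which assumes a non-vertical reference line), the rounding rule and the tolerance are all assumptions, not the patent's prescribed procedure.

```python
# Sketch: infer missing starting points from the line formed by detected starting points.
import numpy as np

def infer_missing_start_points(start_points, f_distance, offset_tol=3.0):
    """start_points: list of (x, y) detected line starts, ordered along the reference
    direction (at least two); f_distance: expected spacing between starting points."""
    pts = np.asarray(start_points, dtype=float)
    slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], 1)   # reference line (F_angle, F_offset)
    inferred = []
    for a, b in zip(pts[:-1], pts[1:]):
        gap = np.linalg.norm(b - a)
        n_missing = int(round(gap / f_distance)) - 1          # assumed counting rule
        if n_missing <= 0:
            continue
        step = (b - a) / (n_missing + 1)
        for i in range(n_missing):
            cand = a + step * (i + 1)
            # keep the candidate only if it stays close to the fitted reference line
            if abs(cand[1] - (slope * cand[0] + intercept)) <= offset_tol:
                inferred.append(tuple(cand))
    return inferred
```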
(4) End point reasoning
Under the influence of the left-right constraint inference condition, a situation similar to that of the starting-point inference can exist at the end points of each sequence: a small number of end points are displayed darkly because of terrain highlight points, the terrain, or a low gray value at the end point, and are therefore not identified. After the gap inference between two points and the updating inference of the starting points in the previous two steps, the remaining missing points are basically located at the tails of the sequences. With the left-right constraint inference condition, the number of missing points in a sequence can be determined, and the remaining end points are then inferred from the line on which the sequence lies and the conventional three parameters of the sequence, the angle T_angle, the offset T_offset and the distance T_distance. The inference result is shown in Fig. 20.
Through the spatial-reasoning judgment under the constraints of gap inference between two points, left-right constraint inference, starting-point inference and end-point inference, the target objects in the image are essentially completely identified; the final recognition result is shown in Fig. 21.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A SAR image ordered artificial structure extraction method based on a spatial reasoning method is characterized by comprising the following steps:
(1) preprocessing the SAR image with a fast two-parameter constant false alarm algorithm and with the FAST corner detection algorithm, respectively, to acquire the feature points of conventional point-like structures and the feature points of specially shaped planar structures in the single-polarization SAR image;
(2) establishing spatial relationships among the feature points to obtain the spatial-relationship features corresponding to the two kinds of isolated structures, namely the conventional point-like structures and the specially shaped planar structures;
(3) modeling the spatial-relationship features obtained for the conventional point-like structures to obtain their three spatial-relationship parameters; modeling the spatial-relationship features and morphological features obtained for the specially shaped planar structures to obtain their three spatial-relationship parameters and two morphological-feature parameters; the three spatial-relationship parameters are the angle T_angle, the offset T_offset and the distance T_distance of the feature points, and the two morphological-feature parameters are the radius C_r and the included angle C_angle of the feature points;
(4) performing spatial-reasoning judgment, under the constraints of gap inference between two points, left-right constraint inference, starting-point inference and end-point inference, on the points of the conventional point-like structures not identified after SAR image preprocessing, using their three spatial-relationship parameters, to obtain the recognition result of the conventional point-like structures; and performing the same spatial-reasoning judgment on the points of the specially shaped planar structures not identified after SAR image preprocessing, using their three spatial-relationship parameters and two morphological-feature parameters, to obtain the recognition result of the specially shaped planar structures.
2. The SAR image ordered artificial structure extraction method based on the spatial inference method as claimed in claim 1, characterized in that the SAR image is preprocessed by the fast two-parameter constant false alarm algorithm as follows:
firstly, carrying out first-stage global filtering on the SAR image to filter out most of the clutter;
then, carrying out second-stage global filtering on the result of the first-stage global filtering: sorting the pixel gray values of the first-stage filtering result, selecting the first k pixel gray values to estimate a potential target threshold, and generating a binary image containing the to-be-detected region;
next, assigning the gray values of the original image to the to-be-detected region in the binary image;
and finally, performing the local two-parameter CFAR algorithm at each point of the to-be-detected region on the image to realize rapid detection.
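A minimal Python sketch of the two global filtering stages is given below. It assumes the first-stage threshold t_g has already been derived from the false alarm rate (see claim 4) and, because the exact second-stage criterion is only shown as an image in claim 5, it uses mu - alpha * sigma of the top-k gray values as an assumed form of that test; all names are ours.

import numpy as np

def global_prescreen(img, t_g, k_ratio=0.65, alpha=0.95):
    """Two-stage global pre-screening of the fast two-parameter CFAR (illustrative).

    img     : SAR intensity image as a float array (assumed non-degenerate)
    t_g     : first-stage global threshold
    k_ratio : fraction of the brightest filtered gray values used by stage 2
    alpha   : second-stage coefficient
    Returns the candidate mask and the masked image carrying original gray values.
    """
    # stage 1: global filtering removes the bulk of the clutter
    g1 = np.where(img >= t_g, img, 0.0)

    # stage 2: estimate a potential-target threshold from the top-k gray values
    vals = np.sort(g1[g1 > 0])[::-1]
    k = max(1, int(k_ratio * vals.size))
    mu, sigma = vals[:k].mean(), vals[:k].std()
    candidate = g1 >= (mu - alpha * sigma)      # assumed form of the stage-2 test

    # stage 3: the to-be-detected region keeps its original gray values
    return candidate, np.where(candidate, img, 0.0)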
3. The SAR image ordered artificial structure extraction method based on the spatial inference method as claimed in claim 2, characterized in that the value of k is taken as 65% of the number of pixel gray values of the image after the first-stage global filtering.
4. The SAR image ordered artificial structure extraction method based on the spatial inference method as claimed in claim 2, characterized in that the first-stage global filtering determines the potential targets in the SAR image according to the following formulas:
p_fa = ∫_{T_g}^{+∞} f_b(x) dx
I_g1(x, y) = I(x, y), if I(x, y) ≥ T_g; I_g1(x, y) = 0, otherwise
wherein p_fa is the constant false alarm rate, T_g is the first-stage global filtering threshold, f_b(x) is the probability distribution function of the ground clutter, I(x, y) is the pixel value of the original image, and I_g1(x, y) is the pixel value after the first-stage global filtering.
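In practice the relation above is solved for T_g. The fragment below does this empirically by taking the (1 - p_fa) quantile of clutter samples, which stands in for the unspecified clutter distribution f_b; the function name and the empirical-quantile shortcut are our own assumptions, not the patent's procedure.

import numpy as np

def first_stage_threshold(clutter_samples, p_fa=1e-3):
    """Solve p_fa = integral from T_g to +inf of f_b(x) dx for T_g, empirically.

    clutter_samples : 1-D array of gray values drawn from clutter-only regions
    p_fa            : desired constant false alarm rate
    Returns the threshold T_g exceeded by a fraction p_fa of the clutter.
    """
    samples = np.asarray(clutter_samples, dtype=float)
    return float(np.quantile(samples, 1.0 - p_fa))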
5. The SAR image ordered artificial structure extraction method based on the spatial inference method as claimed in claim 2, characterized in that the judgment criteria of the second-stage global filtering on the potential target detection in the first-stage global filtering result are as follows:
Figure FDA0002984610510000023
wherein μ is the gray mean of the first k pixel gray values, σ is the variance of the first k pixel gray values, α is the second-stage global filtering coefficient, and α = 0.95.
6. The SAR image ordered artificial structure extraction method based on the spatial inference method as claimed in claim 2, characterized in that the judgment criterion of the local two-parameter CFAR algorithm for target detection is as follows:
(I(x, y) - μ_b) / σ_b ≥ t
wherein μ_b is the mean gray value of the pixels in the background window, σ_b is the variance of the gray values of the pixels in the background window, t is the detection parameter controlling the constant false alarm rate, and t = 2.5.
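The decision rule of claim 6 can be written down directly; the snippet below applies it to a single candidate pixel, with a background window from which the guard area has already been removed (the small epsilon guarding against a zero denominator is our own addition).

import numpy as np

def local_cfar_test(pixel, background, t=2.5):
    """Local two-parameter CFAR decision for one candidate pixel.

    pixel      : gray value of the pixel under test
    background : gray values of the surrounding background window (guard area excluded)
    t          : detection parameter controlling the constant false alarm rate
    """
    mu_b = float(np.mean(background))         # background mean
    sigma_b = float(np.std(background))       # background spread
    return (pixel - mu_b) / (sigma_b + 1e-12) >= t   # True -> declared a target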
7. The SAR image ordered artificial structure extraction method based on the spatial inference method as claimed in claim 1, characterized in that the method of the vacancy inference between two points is:
firstly, an abnormal value E_distance of the adjacent-point distance is obtained, together with the line on which the abnormal value and the two points before and after it are located; the three parameters of this line, namely the angle T_angle, the offset T_offset and the distance T_distance, are calculated and the corresponding linear equation in two-dimensional space is obtained; then the distance among the three parameters is compared with the distance abnormal value, and the number of missed points within the distance abnormal value is obtained under the following condition;
Figure FDA0002984610510000032
then, taking the offset T_offset and the distance T_distance as the constraint range, the distance is extended in the inference direction according to the angle T_angle to obtain the inferred coordinates of the missed points, and the missed points are arranged into the line according to the magnitude of the horizontal and vertical coordinates of the inferred coordinates.
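The following sketch condenses the vacancy inference of claim 7 for one abnormal gap: given the points on either side of the gap and the regular spacing T_distance, it estimates how many points were missed and places them along the connecting line. The rounding rule used for the missed-point count is an assumption on our part, since the exact condition appears only as an image in the claim, and all names are ours.

import numpy as np

def fill_gap(p_prev, p_next, t_distance):
    """Vacancy inference between two points (illustrative).

    p_prev, p_next : (x, y) points on either side of an abnormally large spacing
    t_distance     : regular spacing of the sequence (T_distance)
    Returns the inferred coordinates of the missed points inside the gap.
    """
    p_prev = np.asarray(p_prev, dtype=float)
    p_next = np.asarray(p_next, dtype=float)
    gap = float(np.linalg.norm(p_next - p_prev))   # the abnormal distance E_distance
    n_missed = int(round(gap / t_distance)) - 1    # assumed rounding rule
    if n_missed <= 0:
        return np.empty((0, 2))
    direction = (p_next - p_prev) / gap            # unit vector along T_angle
    steps = np.arange(1, n_missed + 1)[:, None] * t_distance
    return p_prev + steps * direction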
8. The SAR image ordered artificial structure extraction method based on the spatial inference method as claimed in claim 1, characterized in that the left and right constraint inference method is:
first, the number of points P_num on a certain line is obtained, and then the numbers of points P_lnum and P_rnum on the two lines adjacent to that line are obtained for constraint inference; the judgment conditions are as follows:
if |P_lnum - P_rnum| ≤ 2, then P_num = [0.5*(P_lnum + P_rnum)]
if |P_lnum - P_rnum| > 2, then P_num = P_num
wherein [0.5*(P_lnum + P_rnum)] represents the largest integer not exceeding 0.5*(P_lnum + P_rnum).
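This rule translates directly into code; the function below (name and signature are ours) returns the expected point count of a line from its own detection and its two neighbours.

def constrain_point_count(p_num, p_lnum, p_rnum):
    """Left-right constraint inference on the expected number of points in a line.

    p_num          : points currently detected on the line under test
    p_lnum, p_rnum : point counts of the two neighbouring lines
    """
    if abs(p_lnum - p_rnum) <= 2:
        return int(0.5 * (p_lnum + p_rnum))   # floor of the neighbours' average
    return p_num                              # neighbours disagree: keep the detected count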
9. The SAR image ordered artificial structure extraction method based on the spatial inference method as claimed in claim 8, characterized in that the starting point inference method is:
with the help of the left-right constraint inference condition, the sequence with the most points in each direction is first obtained; taking the starting point of that sequence as the reference point, the three parameters of the line connecting the starting points, namely the angle F_angle, the offset F_offset and the distance F_distance, together with its linear equation, are solved; then, with the two directions F_angle and T_angle as the inference directions and with F_offset, F_distance and T_offset, T_distance as the constraints, the missing starting points are solved and updated into the line sequences.
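Both claim 7 and claim 9 rely on solving the three parameters (angle, offset, distance) and the linear equation of a point line. A minimal sketch of one way to estimate these from detected points is given below; the SVD-based line fit and the function name are our own choices, not taken from the patent.

import numpy as np

def line_three_parameters(points):
    """Estimate the three spatial-relation parameters of a point line (illustrative).

    points : (n, 2) array of (x, y) coordinates lying roughly on one line, n >= 2
    Returns (angle, offset, distance), playing the roles of
    F_angle / F_offset / F_distance (or T_angle / T_offset / T_distance).
    """
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    # principal direction of the point cloud via SVD
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    direction = vt[0]
    angle = float(np.arctan2(direction[1], direction[0]))
    along = centred @ direction                                   # positions along the line
    perp = centred @ np.array([-direction[1], direction[0]])      # deviations from the line
    offset = float(np.abs(perp).mean())
    distance = float(np.diff(np.sort(along)).mean())
    return angle, offset, distance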
10. The SAR image ordered artificial structure extraction method based on the spatial inference method as claimed in claim 8, characterized in that the end point inference method is:
the number of missing points in a sequence is determined by means of the left-right constraint inference condition, and the remaining end points are then inferred from the line on which the sequence lies and the three regular parameters of the sequence, namely the angle T_angle, the offset T_offset and the distance T_distance.
CN202110296688.2A 2021-03-19 2021-03-19 SAR image ordered artificial structure extraction method based on spatial reasoning method Active CN112907567B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110296688.2A CN112907567B (en) 2021-03-19 2021-03-19 SAR image ordered artificial structure extraction method based on spatial reasoning method

Publications (2)

Publication Number Publication Date
CN112907567A true CN112907567A (en) 2021-06-04
CN112907567B CN112907567B (en) 2022-05-27

Family

ID=76105638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110296688.2A Active CN112907567B (en) 2021-03-19 2021-03-19 SAR image ordered artificial structure extraction method based on spatial reasoning method

Country Status (1)

Country Link
CN (1) CN112907567B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016101279A1 * 2014-12-26 2016-06-30 Ocean University of China Quick detecting method for synthetic aperture radar image of ship target
CN104899562A * 2015-05-29 2015-09-09 Henan Polytechnic University Texture segmentation and fusion based radar remote-sensing image artificial building recognition algorithm
US10032077B1 * 2015-10-29 2018-07-24 National Technology & Engineering Solutions Of Sandia, Llc Vehicle track identification in synthetic aperture radar images
CN109145872A * 2018-09-20 2019-01-04 Beijing Institute of Remote Sensing Equipment SAR image ship target detection method based on fusion of CFAR and Fast-RCNN

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A. Toktas et al.: "CFAR based morphological filter design to remove clutter from GB-SAR images: An application to real data", Microwave & Optical Technology Letters *
Xiao Xiulai et al.: "Building seismic damage information extraction from polarimetric SAR combined with variogram texture features", China Earthquake Engineering Journal *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114973013A * 2022-05-25 2022-08-30 Earthquake Administration of Gansu Province (Lanzhou Institute of Seismology, China Earthquake Administration) Inference method for SAR image electric tower recognition based on spatial features

Also Published As

Publication number Publication date
CN112907567B (en) 2022-05-27

Similar Documents

Publication Publication Date Title
CN111028255A (en) Farmland area pre-screening method and device based on prior information and deep learning
Hu et al. Integrating CART algorithm and multi-source remote sensing data to estimate sub-pixel impervious surface coverage: a case study from Beijing Municipality, China
Zhou et al. Comparison of UAV-based LiDAR and digital aerial photogrammetry for measuring crown-level canopy height in the urban environment
Liu et al. A geomorphological model for landslide detection using airborne LIDAR data
CN112907567B (en) SAR image ordered artificial structure extraction method based on spatial reasoning method
CN116994156B (en) Landslide hidden danger comprehensive remote sensing identification method, system, equipment and medium
Li et al. The land-sea interface mapping: China’s coastal land covers at 10 m for 2020
An et al. Object-oriented urban dynamic monitoring—A case study of Haidian district of Beijing
Zahs et al. Classification of structural building damage grades from multi-temporal photogrammetric point clouds using a machine learning model trained on virtual laser scanning data
Ren et al. Mapping High-Resolution Global Impervious Surface Area: Status and Trends
CN112166688B (en) Method for monitoring desert and desertification land based on minisatellite
Liu et al. Architecture planning and geo-disasters assessment mapping of landslide by using airborne LiDAR data and UAV images
CN114596489A (en) High-precision multisource remote sensing city built-up area extraction method for human habitation index
Chen et al. A novel water change tracking algorithm for dynamic mapping of inland water using time-series remote sensing imagery
Gu et al. Ground point extraction using self-adaptive-grid and point to surface comparison
Kunyuan et al. Automated object extraction from MLS data: A survey
CN117475314B (en) Geological disaster hidden danger three-dimensional identification method, system and medium
Wu et al. Post-flood disaster damaged houses classification based on dual-view image fusion and Concentration-Based Attention Module
Su et al. The estimation of tree height based on LiDAR data and QuickBird imagery
Wu Object-oriented representation and analysis of coastal changes for hurricane-induced damage assessment
CN117171533B (en) Real-time acquisition and processing method and system for geographical mapping operation data
Zhang et al. Object-based 3D building change detection using point-level change indicators
Yang et al. A framework of linear sensor networks with unmanned aerial vehicle for rainfall-induced landslides detection
Yusri et al. Satellite-based landslide distribution mapping with the adoption of deep learning approach in the Kuantan River Basin, Pahang
Gao Algorithms and software tools for extracting coastal morphological information from airborne LiDAR data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant