CN113191374A - PolSAR image ridge line extraction method based on pyramid attention network - Google Patents
- Publication number
- CN113191374A CN113191374A CN202110547251.1A CN202110547251A CN113191374A CN 113191374 A CN113191374 A CN 113191374A CN 202110547251 A CN202110547251 A CN 202110547251A CN 113191374 A CN113191374 A CN 113191374A
- Authority
- CN
- China
- Prior art keywords
- image
- ridge line
- characteristic
- polsar
- pyramid attention
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/52—Scale-space analysis, e.g. wavelet analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/60—Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a PolSAR image ridge line extraction method based on a pyramid attention network, comprising the following steps: S101, preprocessing and polarization decomposition are performed on the PolSAR image to obtain the volume scattering component feature of the Yamaguchi decomposition and the Lambda feature, and the two features are combined into a polarization combination feature; S102, buildings and ridges are labeled on the polarization combination feature to obtain an annotated image, which is segmented to obtain a training set; S103, the training set is input into a PAN model, which first applies a 7 × 7 convolution and then extracts input-image features with an encoder; a feature pyramid attention module and a global attention upsampling module then encode and decode the image layer by layer, extracting multi-scale pixel-level information and organically combining low-level and high-level features to complete the extraction of the ridge line. The method markedly improves ridge line extraction accuracy and improves the automation, robustness and generality of the model.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method for extracting a PolSAR image ridge line based on a pyramid attention network.
Background
Secondary geological disasters triggered by earthquakes, such as landslides, collapses and debris flows, pose a great threat to human life. Earthquake-triggered mountain disasters usually start near a ridge line, so accurately acquiring ridge information helps in understanding the disaster situation, in reasonably judging the residual danger after an earthquake, and thus in supporting rescue workers and material transportation. In complex scenes where a Polarimetric Synthetic Aperture Radar (PolSAR) image contains buildings, ridge identification often produces wrong judgments because of those buildings. At present there is little research on accurate and effective ridge identification in PolSAR images. Remote sensing satellites can map a region of interest quickly and accurately and can provide important information about disaster areas. However, after an earthquake, overcast and rainy weather greatly limits the role of optical remote sensing images in disaster-relief decisions. Radar remote sensing is independent of sunlight, has strong penetrating power and works in all weather; these advantages have made it a widely and deeply applied technology in earthquake research and an indispensable means of disaster prevention and mitigation in China at the national and regional levels.
With the development of science and technology, machine learning and deep learning have gradually matured and now provide effective solutions to many problems in image recognition, speech recognition, natural language processing and other fields. In remote sensing image interpretation, which has progressed from manual visual interpretation to automatic computer interpretation, applications of machine learning and deep learning are common; for example, by exploiting the properties of polarimetric SAR data, several deep learning network models specially designed for such data have been built to perform classification and change-detection tasks on PolSAR images. Yet although machine learning and deep learning are widely applied in image segmentation, research on ridge line extraction remains scarce. Therefore, starting from a fully convolutional neural network and combining the result of polarization decomposition, the invention uses a PAN network to extract the ridge line, truly achieving fully automatic extraction and markedly improving ridge line extraction accuracy.
Disclosure of Invention
To address the deficiencies described in the background above, the invention provides a PolSAR image ridge line extraction method based on a pyramid attention network, aiming to solve the above problems in the prior art.
To achieve this purpose, the invention adopts the following technical scheme:
The PolSAR image ridge line extraction method based on the pyramid attention network comprises the following steps:
S101, preprocessing and polarization-decomposing the PolSAR image to obtain the volume scattering component feature of the Yamaguchi decomposition and the Lambda feature, and combining the two into a polarization combination feature so that buildings and ridges can subsequently be labeled to obtain training and test samples;
S102, labeling buildings and ridges on the polarization combination feature map to obtain an annotated image, and segmenting the annotated image to obtain a training set and a test set;
S103, inputting the training set into a Pyramid Attention Network (PAN) model, which first applies a 7 × 7 convolution and then extracts input-image features with an encoder; next, inputting the input-image features into a feature pyramid attention (FPA) module to extract pixel-level feature information at different scales; and finally, decoding the image layer by layer with a Global Attention Upsampling (GAU) module via global pooling operations, accurately combining low-level and high-level feature information to complete the extraction of the ridge line.
Preferably, the annotation tool used to label the buildings and ridges is LabelMe.
Preferably, the annotated image is segmented as follows: with the step size set to 1, the annotated image is divided into 1406 samples of size 256 × 256, indexed sequentially from 0 to 1405; two sets are then output: the training set, consisting of 983 samples randomly drawn from the total, and the test set, formed automatically by the remaining 422 samples.
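The tiling and splitting described above can be sketched as follows. Only the patch size, the sample count and the split size come from the text; the image dimensions, the sliding-window step used in the sketch, and the random seed are assumptions:

```python
import random

def tile_indices(height, width, patch=256, step=256):
    """Enumerate top-left corners of patch x patch tiles.
    The step value here is an assumption for illustration."""
    return [(r, c)
            for r in range(0, height - patch + 1, step)
            for c in range(0, width - patch + 1, step)]

def split_samples(n_samples, n_train, seed=0):
    """Randomly draw n_train sample indices for training; the rest form the test set."""
    idx = list(range(n_samples))          # indices 0 .. n_samples-1, as in the text
    random.Random(seed).shuffle(idx)
    return sorted(idx[:n_train]), sorted(idx[n_train:])

# the patent's counts: 1406 samples in total, 983 drawn for training
train_idx, test_idx = split_samples(1406, 983)
```

The draw is disjoint by construction, so no sample appears in both sets.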
Preferably, the encoder employs a ResNet-101 network model.
Preferably, the FPA module adopts a U-shaped network structure.
Preferably, pixel-level feature information at different scales is extracted as follows: the mapping features output by the feature pyramid attention module FPA are multiplied by the feature information obtained after a 1 × 1 convolution, and the result is combined with the input features of the GAU module.
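A minimal numerical sketch of this attention step, with the convolutions reduced to identity operations so that only the pyramid data flow remains; the feature shape and the two-level pyramid depth are assumptions:

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling (H and W assumed even)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbour 2x upsampling."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def fpa_sketch(feat):
    """Feature pyramid attention reduced to its data flow: build a pyramid,
    merge it bottom-up, and multiply the merged map with the input features
    (the 'multiply' step described above; learned convolutions are omitted)."""
    p1 = avg_pool2(feat)             # 1/2 scale
    p2 = avg_pool2(p1)               # 1/4 scale
    merged = p1 + upsample2(p2)      # bottom-up fusion of pyramid levels
    attention = upsample2(merged)    # back to input resolution
    return feat * attention          # pixel-wise re-weighting
```

On a constant input of ones the pyramid levels are also ones, so the attention map is everywhere 2 and the output is everywhere 2, which makes the data flow easy to trace.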
Preferably, the global pooling procedure of the GAU module is as follows: a 3 × 3 convolution is first applied to the low-level features; global pooling and a 1 × 1 convolution are applied to the high-level features; the features obtained from the two convolutions are multiplied; and the result is then connected with the high-level features to guide the classification of the low-level features.
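The GAU data flow can be sketched with per-channel weights standing in for the two learned convolutions; all shapes and the stand-in weights are assumptions:

```python
import numpy as np

def gau_sketch(low, high, w_low, w_high):
    """Global attention upsampling reduced to its data flow.
    low:  (C, H, W) low-level feature map; w_low stands in for the 3x3 conv.
    high: (C, h, w) high-level feature map; w_high stands in for the 1x1 conv."""
    low = low * w_low[:, None, None]       # '3x3 convolution' on low-level features
    gap = high.mean(axis=(1, 2))           # global average pooling of high-level features
    attn = gap * w_high                    # '1x1 convolution' on the pooled vector
    gated = low * attn[:, None, None]      # multiply: channel-wise gating of low-level features
    # connect with the high-level features: upsample and add
    sh, sw = low.shape[1] // high.shape[1], low.shape[2] // high.shape[2]
    up = high.repeat(sh, axis=1).repeat(sw, axis=2)
    return gated + up
```

The global pooling collapses the high-level map to one value per channel, so the high-level features act as a channel-wise attention signal over the low-level ones before the additive skip connection.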
Compared with the defects and shortcomings of the prior art, the invention has the following beneficial effects:
the invention embeds context information of different scales on the basis of the full convolution neural network, improves the receptive field and can efficiently recover the image pixel positioning details. From the perspective of a full convolution neural network, combining the result of polarization decomposition, and extracting the ridge line by using the PAN network. The method removes operations which may cause errors such as manual sampling, calculation and the like, achieves the purpose of real full-automatic extraction, achieves the final extraction precision of more than 98%, achieves the remarkable improvement of the ridge line extraction precision, and meanwhile improves the automation, robustness and universality of the model.
Drawings
Fig. 1 is a characteristic diagram of different polarizations obtained by polarimetric decomposition of PolSAR according to an embodiment of the present invention.
FIG. 2 is an annotated image provided by an embodiment of the invention.
Fig. 3 is a partial training set provided by an embodiment of the present invention.
FIG. 4 is a partial test set provided by an embodiment of the present invention.
Fig. 5 is a diagram showing the result of classifying the ridge and the building according to the present invention.
FIG. 6 is a diagram of different types of evaluation functions provided by embodiments of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
1. Data-source processing. The PolSAR data of the experimental region are first corrected, noise-filtered, registered and clipped; the noise filtering uses a refined Lee filter with a window size of 11 × 11. Polarization decomposition and texture decomposition are then applied to the filtered data to obtain the 9 polarization features shown in fig. 1: fig. 1(a) and (c) show Yamaguchi four-component decomposition features, fig. 1(b), (f) and (i) show H-A combination features, fig. 1(d) shows the Anisotropy feature, fig. 1(e) the Lambda feature, fig. 1(g) the Alpha feature, and fig. 1(h) the Entropy feature. Features meeting the experimental requirements are selected from these; the invention selects the Lambda feature and the volume scattering component feature of the Yamaguchi decomposition (Yamaguchi_vol).
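The refined Lee filter named above is fairly involved; the following is a simplified (non-refined, boxcar-window) Lee speckle filter sketch with the 11 × 11 window from the text. The additive-noise variance parameter is an assumption:

```python
import numpy as np

def lee_filter(img, win=11, noise_var=0.25):
    """Simplified Lee speckle filter: a stand-in for the refined Lee filter
    named in the text. noise_var is an assumed noise-variance parameter."""
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    # local mean and variance over every win x win neighbourhood
    windows = np.lib.stride_tricks.sliding_window_view(padded, (win, win))
    mean = windows.mean(axis=(2, 3))
    var = windows.var(axis=(2, 3))
    # Lee gain k: 0 on homogeneous areas (smooth), near 1 on edges (preserve)
    k = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mean + k * (img - mean)
```

On homogeneous regions the local variance vanishes, the gain is zero, and the filter returns the local mean, which is exactly the speckle-suppressing behaviour wanted before decomposition.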
As can be seen from fig. 1, the building region has lower gray values in the Yamaguchi volume scattering component feature map of fig. 1(a), while the ridge in the Lambda feature map of fig. 1(e) is brighter, with higher gray values; the remaining feature images suffer from an unclear distribution of the pixels to be extracted, serious speckle, and similar problems. The invention therefore finally selects the volume scattering component feature of the Yamaguchi decomposition and the Lambda feature as the research objects. The two features are combined so that buildings and ridges can subsequently be labeled to obtain training and test samples.
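The text does not specify how the two selected features are combined; a plausible sketch is to normalise each feature image and stack them as channels (the min-max normalisation is an assumption):

```python
import numpy as np

def combine_features(yamaguchi_vol, lamb):
    """Stack the Yamaguchi volume scattering component feature and the Lambda
    feature into a two-channel 'polarization combination feature'."""
    def minmax(x):
        lo, hi = x.min(), x.max()
        return (x - lo) / (hi - lo) if hi > lo else np.zeros_like(x)
    return np.stack([minmax(yamaguchi_vol), minmax(lamb)], axis=-1)  # (H, W, 2)
```

Per-channel normalisation keeps the darker building responses of the volume-scattering channel and the brighter ridge responses of the Lambda channel on a comparable scale before labeling.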
2. Labeling the polarization combination feature map. Buildings and ridges are labeled on the map to obtain an annotated image. The annotation tool used in the method is LabelMe, an open-source deep learning annotation tool that can be used directly once loaded into a Python development environment. Annotation yields the result shown in fig. 2, in which the dark gray parts are buildings, the light gray parts are ridges, and the black parts are background.
The annotated image is then segmented to obtain a training set and a test set. To obtain as many samples as possible, the step size is set to 1 and the annotated image is divided into 1406 samples of size 256 × 256, indexed sequentially from 0 to 1405. Two sets are then output: the training set, consisting of 983 samples randomly drawn from the total, and the test set, formed automatically by the remaining 422 samples. Figs. 3 and 4 show part of the training set and of the test set, respectively.
3. After the training set is obtained, the ridges and buildings are extracted. The training set is input into the Pyramid Attention Network (PAN) model, whose input and output sizes stay consistent, so a result map is obtained directly. Following the model's pyramid structure, a 7 × 7 convolution is applied first and an encoder then extracts the input-image features; the encoder is a ResNet-101 network, chosen because it limits model complexity while preserving segmentation accuracy. The input-image features are next fed into the feature pyramid attention (FPA) module to extract feature information at different scales and enlarge the pixel-level receptive field; the FPA module has the U-shaped structure of a feature pyramid network, its output mapping features are multiplied by the feature information obtained after a 1 × 1 convolution, and the result is combined with the input features of the Global Attention Upsampling (GAU) module. The GAU module performs the global pooling operation: a 3 × 3 convolution is applied to the low-level features; global pooling and a 1 × 1 convolution are applied to the high-level features; the two results are multiplied and then connected with the high-level features to guide the classification of the low-level features, thereby localizing pixels of different categories and improving the model's accuracy on detail segmentation.
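The ordering of these stages can be traced with every learned layer stubbed out; the strides, the number of decoding levels (collapsed to a single GAU step here), and all stub behaviours are assumptions, so this shows shapes and sequencing only:

```python
import numpy as np

def conv7x7_stride2(x):
    return x[::2, ::2]                     # stands in for the initial 7x7 conv, stride 2

def resnet101_encoder(x):
    return x[::4, ::4], x[::2, ::2]        # stand-ins for high- and mid-level feature maps

def fpa(x):
    return x                               # feature pyramid attention (identity stub)

def gau(low, high):
    """One GAU decoding step: upsample the high-level map and fuse with the low-level one."""
    sh, sw = low.shape[0] // high.shape[0], low.shape[1] // high.shape[1]
    return low + high.repeat(sh, axis=0).repeat(sw, axis=1)

def pan_forward(img):
    """Ordering of the stages described above, with layer-by-layer decoding
    collapsed to a single GAU step."""
    x = conv7x7_stride2(img)
    high, mid = resnet101_encoder(x)
    high = fpa(high)
    return gau(mid, high)
```

Running a 256 × 256 dummy image through the stubs shows the resolution shrinking through the encoder and partially recovering through the single decoding step.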
The PAN network is trained for 50 epochs to obtain the recognition result shown in fig. 5.
4. The invention verifies the extraction accuracy on the test set from four aspects: the loss function (loss), pixel accuracy (PA), mean pixel accuracy (MPA) and mean Intersection over Union (MIoU). The loss function tracks the change in loss value and accuracy at each iteration of the model. Pixel accuracy is the simplest indicator: the ratio of correctly classified pixels to total pixels. Mean pixel accuracy, as the name implies, extends pixel accuracy: it is the average over classes of the probability of correctly classifying the pixels within each class. The mean intersection-over-union is the ratio of the intersection to the union of two sets, here the ground truth and the prediction in image segmentation. The results are shown in fig. 6, where the abscissa is the epoch, i.e. the training iteration (50 in total), and the ordinate is the value of each index; the curve with smaller fluctuations represents the training set and the curve with larger fluctuations the test set. First, the loss plot in fig. 6 shows that the loss values of the 6th and 17th epochs deviate markedly from the others, at 3.6189 and 2.7775 respectively, while the 49th epoch deviates little, with a loss of 1.3281. Second, the PA plot in fig. 6 shows that overall pixel accuracy stays above 98%; the 17th epoch yields 96.63%, below the overall level but almost negligible. The MPA plot in fig. 6 shows overall accuracy above 75%; only the 17th epoch, at 57.11%, is unsatisfactory. Mean pixel accuracy is lower than pixel accuracy overall because operations such as image translation, rotation and restoration during training affect it; for this study, pixel accuracy is more convincing and better represents the result. Finally, the MIoU plot in fig. 6 shows that, compared with the previous analyses, the results of the epochs are uniform; apart from the larger deviation at the 17th epoch, the remaining epochs give good results.
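All three accuracy indicators above can be computed from a single confusion matrix; a sketch, assuming integer class labels (0 = background, 1 = ridge, 2 = building, following fig. 5):

```python
import numpy as np

def confusion(y_true, y_pred, n_cls):
    """n_cls x n_cls confusion matrix: rows are true classes, columns predictions."""
    m = np.zeros((n_cls, n_cls), dtype=np.int64)
    for t, p in zip(y_true.ravel(), y_pred.ravel()):
        m[t, p] += 1
    return m

def pixel_accuracy(m):
    """PA: correctly classified pixels over all pixels."""
    return np.diag(m).sum() / m.sum()

def mean_pixel_accuracy(m):
    """MPA: per-class accuracy, averaged over classes present in the ground truth."""
    with np.errstate(invalid='ignore'):
        per_cls = np.diag(m) / m.sum(axis=1)
    return np.nanmean(per_cls)

def mean_iou(m):
    """MIoU: intersection over union of truth and prediction, averaged over classes."""
    inter = np.diag(m)
    union = m.sum(axis=0) + m.sum(axis=1) - inter
    with np.errstate(invalid='ignore'):
        return np.nanmean(inter / union)
```

A perfect prediction gives 1.0 on all three indicators; MPA and MIoU penalise mistakes on small classes more heavily than PA, matching the gap between the PA and MPA curves discussed above.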
The extraction accuracy of the PolSAR image ridge line reaches more than 98%, a marked improvement over existing ridge line extraction methods. The invention also improves the automation, robustness and generality of the model.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.
Claims (7)
1. A PolSAR image ridge line extraction method based on a pyramid attention network, characterized by comprising the following steps:
S101, preprocessing and polarization-decomposing the PolSAR image to obtain the volume scattering component feature of the Yamaguchi decomposition and the Lambda feature, and combining the two into a polarization combination feature so that buildings and ridges can subsequently be labeled to obtain training and test samples;
S102, labeling buildings and ridges on the polarization combination feature map to obtain an annotated image, and segmenting the annotated image to obtain a training set and a test set;
S103, inputting the training set into a pyramid attention network PAN model, which first applies a 7 × 7 convolution and then extracts input-image features with an encoder; then inputting the input-image features into a feature pyramid attention FPA module to extract pixel-level feature information at different scales; and finally decoding the image layer by layer with a global attention upsampling GAU module via global pooling operations, accurately combining low-level and high-level feature information to complete the extraction of the ridge line.
2. The pyramid attention network-based PolSAR image ridge line extraction method of claim 1, characterized in that the annotation tool used to label the buildings and ridges is LabelMe.
3. The pyramid attention network-based PolSAR image ridge line extraction method of claim 1, characterized in that the annotated image is segmented as follows: with the step size set to 1, the annotated image is divided into 1406 samples of size 256 × 256, indexed sequentially from 0 to 1405; two sets are then output: the training set, consisting of 983 samples randomly drawn from the total, and the test set, formed automatically by the remaining 422 samples.
4. The pyramid attention network-based PolSAR image ridge line extraction method of claim 1, characterized in that the encoder adopts a ResNet-101 network model.
5. The pyramid attention network-based PolSAR image ridge line extraction method of claim 1, characterized in that the FPA module adopts a U-shaped network structure.
6. The pyramid attention network-based PolSAR image ridge line extraction method of claim 5, characterized in that pixel-level feature information at different scales is extracted as follows: the mapping features output by the feature pyramid attention module FPA are multiplied by the feature information obtained after a 1 × 1 convolution, and the result is combined with the input features of the GAU module.
7. The pyramid attention network-based PolSAR image ridge line extraction method of claim 6, characterized in that the global pooling procedure of the GAU module is as follows: a 3 × 3 convolution is first applied to the low-level features; global pooling and a 1 × 1 convolution are applied to the high-level features; the features obtained from the two convolutions are multiplied and then connected with the high-level features to guide the classification of the low-level features.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110547251.1A CN113191374B (en) | 2021-05-19 | 2021-05-19 | PolSAR image ridge line extraction method based on pyramid attention network |
DE102022112299.9A DE102022112299A1 (en) | 2021-05-19 | 2022-05-17 | METHOD FOR EXTRACTING CREST LINES FROM POLSAR IMAGES BASED ON A PYRAMID ATTENTION NETWORK |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110547251.1A CN113191374B (en) | 2021-05-19 | 2021-05-19 | PolSAR image ridge line extraction method based on pyramid attention network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113191374A true CN113191374A (en) | 2021-07-30 |
CN113191374B CN113191374B (en) | 2023-04-18 |
Family
ID=76982497
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110547251.1A Active CN113191374B (en) | 2021-05-19 | 2021-05-19 | PolSAR image ridge line extraction method based on pyramid attention network |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113191374B (en) |
DE (1) | DE102022112299A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115236655A (en) * | 2022-09-01 | 2022-10-25 | 成都理工大学 | Landslide identification method, system, equipment and medium based on fully-polarized SAR |
CN115311533A (en) * | 2022-08-12 | 2022-11-08 | 哈尔滨市科佳通用机电股份有限公司 | Vehicle door sliding track breaking fault detection method |
CN115797768A (en) * | 2022-11-30 | 2023-03-14 | 甘肃省地震局(中国地震局兰州地震研究所) | SAR building earthquake damage estimation method based on novel textural features |
CN115797768B (en) * | 2022-11-30 | 2024-05-17 | 甘肃省地震局(中国地震局兰州地震研究所) | SAR building earthquake damage estimation method based on texture features |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116842445A (en) * | 2023-07-03 | 2023-10-03 | 山东科技大学 | Method and system for automatically recognizing awakening based on multi-mode space-time spectrum fusion |
CN117830313A (en) * | 2024-03-05 | 2024-04-05 | 四川泓宝润业工程技术有限公司 | Method, device and system for detecting lower part of oilfield wellhead based on deep learning |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110310473A1 (en) * | 2010-06-16 | 2011-12-22 | Sony Corporation | Optical body, wall member, fitting, solar shading device, and building |
US8155391B1 (en) * | 2006-05-02 | 2012-04-10 | Geoeye Solutions, Inc. | Semi-automatic extraction of linear features from image data |
US20150125034A1 (en) * | 2013-11-05 | 2015-05-07 | Canon Kabushiki Kaisha | Information processing apparatus, information processing method, and storage medium |
WO2018026094A1 (en) * | 2016-08-05 | 2018-02-08 | 인하대학교 산학협력단 | Method and system for automatically generating ortho-photo texture by using dem data |
CN109031301A (en) * | 2018-09-26 | 2018-12-18 | 云南电网有限责任公司电力科学研究院 | Alpine terrain deformation extracting method based on PSInSAR technology |
CN110084210A (en) * | 2019-04-30 | 2019-08-02 | 电子科技大学 | The multiple dimensioned Ship Detection of SAR image based on attention pyramid network |
CN111602147A (en) * | 2017-11-17 | 2020-08-28 | 脸谱公司 | Machine learning model based on non-local neural network |
CN111860233A (en) * | 2020-07-06 | 2020-10-30 | 中国科学院空天信息创新研究院 | SAR image complex building extraction method and system based on attention network selection |
CN112083422A (en) * | 2020-08-26 | 2020-12-15 | 长沙理工大学 | Single-voyage InSAR system end-to-end classification method based on multistage deep learning network |
CN112101084A (en) * | 2020-07-22 | 2020-12-18 | 甘肃省地震局(中国地震局兰州地震研究所) | Method for automatically extracting earthquake damage information of polarized SAR building based on convolutional neural network |
CN112241956A (en) * | 2020-11-03 | 2021-01-19 | 甘肃省地震局(中国地震局兰州地震研究所) | PolSAR image ridge line extraction method based on region growing method and variation function |
CN112561876A (en) * | 2020-12-14 | 2021-03-26 | 中南大学 | Image-based pond and reservoir water quality detection method and system |
- 2021-05-19: CN application CN202110547251.1A, granted as CN113191374B (status: Active)
- 2022-05-17: DE application DE102022112299.9A, published as DE102022112299A1 (status: Pending)
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8155391B1 (en) * | 2006-05-02 | 2012-04-10 | Geoeye Solutions, Inc. | Semi-automatic extraction of linear features from image data |
US20110310473A1 (en) * | 2010-06-16 | 2011-12-22 | Sony Corporation | Optical body, wall member, fitting, solar shading device, and building |
US20150125034A1 (en) * | 2013-11-05 | 2015-05-07 | Canon Kabushiki Kaisha | Information processing apparatus, information processing method, and storage medium |
WO2018026094A1 (en) * | 2016-08-05 | 2018-02-08 | 인하대학교 산학협력단 | Method and system for automatically generating ortho-photo texture by using dem data |
CN111602147A (en) * | 2017-11-17 | 2020-08-28 | 脸谱公司 | Machine learning model based on non-local neural network |
CN109031301A (en) * | 2018-09-26 | 2018-12-18 | 云南电网有限责任公司电力科学研究院 | Alpine terrain deformation extracting method based on PSInSAR technology |
CN110084210A (en) * | 2019-04-30 | 2019-08-02 | 电子科技大学 | The multiple dimensioned Ship Detection of SAR image based on attention pyramid network |
CN111860233A (en) * | 2020-07-06 | 2020-10-30 | 中国科学院空天信息创新研究院 | SAR image complex building extraction method and system based on attention network selection |
CN112101084A (en) * | 2020-07-22 | 2020-12-18 | Gansu Earthquake Agency (Lanzhou Institute of Seismology, China Earthquake Administration) | Method for automatically extracting earthquake damage information of polarized SAR building based on convolutional neural network |
CN112083422A (en) * | 2020-08-26 | 2020-12-15 | Changsha University of Science and Technology | Single-voyage InSAR system end-to-end classification method based on multistage deep learning network |
CN112241956A (en) * | 2020-11-03 | 2021-01-19 | Gansu Earthquake Agency (Lanzhou Institute of Seismology, China Earthquake Administration) | PolSAR image ridge line extraction method based on region growing method and variation function |
CN112561876A (en) * | 2020-12-14 | 2021-03-26 | Central South University | Image-based pond and reservoir water quality detection method and system |
Non-Patent Citations (4)
Title |
---|
HANCHAO LI et al.: "Pyramid Attention Network for Semantic Segmentation", arXiv * |
ZONGYONG CUI et al.: "Dense Attention Pyramid Networks for Multi-Scale Ship Detection in SAR Images", IEEE * |
ZHANG HAORAN: "Research on Ridge Extraction Methods Based on Polarimetric SAR Images", China Master's Theses Full-text Database, Basic Sciences * |
ZHAO FEI: "Semantic Segmentation of Remote Sensing Images Based on Pyramid Attention Mechanism", Foreign Electronic Measurement Technology * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115311533A (en) * | 2022-08-12 | 2022-11-08 | 哈尔滨市科佳通用机电股份有限公司 | Vehicle door sliding track breaking fault detection method |
CN115236655A (en) * | 2022-09-01 | 2022-10-25 | 成都理工大学 | Landslide identification method, system, equipment and medium based on fully-polarized SAR |
CN115236655B (en) * | 2022-09-01 | 2022-12-20 | 成都理工大学 | Landslide identification method, system, equipment and medium based on fully-polarized SAR |
US11747498B1 (en) | 2022-09-01 | 2023-09-05 | Chengdu University Of Technology | Method, system, device and medium for landslide identification based on full polarimetric SAR |
CN115797768A (en) * | 2022-11-30 | 2023-03-14 | Gansu Earthquake Agency (Lanzhou Institute of Seismology, China Earthquake Administration) | SAR building earthquake damage estimation method based on novel texture features |
CN115797768B (en) * | 2022-11-30 | 2024-05-17 | Gansu Earthquake Agency (Lanzhou Institute of Seismology, China Earthquake Administration) | SAR building earthquake damage estimation method based on texture features |
Also Published As
Publication number | Publication date |
---|---|
CN113191374B (en) | 2023-04-18 |
DE102022112299A1 (en) | 2022-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113191374B (en) | PolSAR image ridge line extraction method based on pyramid attention network | |
CN109800736B (en) | Road extraction method based on remote sensing image and deep learning | |
CN107527352B (en) | Remote sensing ship target contour segmentation and detection method based on deep learning FCN network | |
CN110598784B (en) | Machine learning-based construction waste classification method and device | |
CN113312993B (en) | Remote sensing data land cover classification method based on PSPNet | |
CN111598098A (en) | Water gauge water line detection and effectiveness identification method based on full convolution neural network | |
CN111914767B (en) | Scattered sewage enterprise detection method and system based on multi-source remote sensing data | |
CN111178438A (en) | ResNet 101-based weather type identification method | |
CN115546656A (en) | Remote sensing image breeding area extraction method based on deep learning | |
CN116309155A (en) | Image restoration method, model and device based on convolution and converter hybrid network | |
CN113469097B (en) | Multi-camera real-time detection method for water surface floaters based on SSD network | |
CN114266932A (en) | Self-learning-based semi-supervised labeling method for remote sensing multispectral data | |
CN112800851B (en) | Water body contour automatic extraction method and system based on full convolution neuron network | |
CN112033937B (en) | Evaluation method for water extraction precision | |
CN106897683B (en) | Ground object detection method and system of remote sensing image | |
CN113628180A (en) | Semantic segmentation network-based remote sensing building detection method and system | |
CN112966698A (en) | Freshwater fish image real-time identification method based on lightweight convolutional network | |
CN110458120B (en) | Method and system for identifying different vehicle types in complex environment | |
CN112836614A (en) | High-resolution remote sensing image classification method based on residual error network and transfer learning | |
CN116385716A (en) | Three-dimensional map ground object data automatic production method based on remote sensing map | |
CN115984603A (en) | Fine classification method and system for urban green land based on GF-2 and open map data | |
CN115410083A (en) | Small sample SAR target classification method and device based on antithetical domain adaptation | |
CN115205704A (en) | High-resolution remote sensing image small sample high-precision building segmentation and extraction method and device | |
CN110084125B (en) | Agricultural insurance investigation technical method based on deep learning | |
CN114757941A (en) | Transformer substation equipment defect identification method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||