CN113191374B - PolSAR image ridge line extraction method based on pyramid attention network
- Publication number
- CN113191374B (application number CN202110547251.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- ridge line
- characteristic
- polsar
- pyramid attention
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/52—Scale-space analysis, e.g. wavelet analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/60—Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Biophysics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Molecular Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computational Linguistics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a PolSAR image ridge line extraction method based on a pyramid attention network, which comprises the following steps: S101, preprocessing the PolSAR image and performing polarization decomposition to obtain the volume scattering component feature of the Yamaguchi decomposition and the Lambda feature, and combining the two features into a polarization combination feature; S102, labeling buildings and ridges on the polarization combination feature to obtain a labeled image, and segmenting the labeled image to obtain a training set; S103, inputting the training set into a PAN model, which first performs a 7 × 7 convolution and then trains an encoder to extract features from the input image; the image is then encoded and decoded layer by layer with a feature pyramid attention module and a global attention upsampling module, realizing the extraction of multi-scale pixel-level information and the organic combination of low-level and high-level features, thereby completing the extraction of the ridge line. The method significantly improves the extraction accuracy of the ridge line while improving the automation, robustness and universality of the model.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a PolSAR image ridge line extraction method based on a pyramid attention network.
Background
Secondary geological disasters triggered by earthquakes, such as landslides, collapses and debris flows, pose a great threat to human life. Earthquake-triggered mountain disasters usually start near a ridge line, so accurately acquiring ridge information helps in understanding the mountain disaster situation triggered by the earthquake and in reasonably judging the potential danger after the earthquake occurs, thereby assisting rescue workers and material transportation. In complex scenes, a polarimetric Synthetic Aperture Radar (PolSAR) image containing buildings often causes wrong judgments in the ridge identification process due to the presence of the buildings. At present, there is little research on the accurate and effective identification of ridges in PolSAR images. Remote sensing satellites can map a region of interest quickly and accurately and provide important information about disaster areas. However, after an earthquake, overcast and rainy weather greatly limits the role of optical remote sensing images in disaster relief decisions. Radar remote sensing does not depend on sunlight, has strong penetrating power and works in all weather; these advantages have made radar remote sensing widely and deeply applied in earthquake research, and it is an indispensable means for disaster prevention and mitigation at the national and regional levels.
With the development of science and technology, machine learning and deep learning have gradually matured and provide solutions for many problems in fields such as image recognition, speech recognition and natural language processing. In remote sensing image interpretation, from early manual visual interpretation to current automatic computer interpretation, machine learning and deep learning are commonly applied; for example, by exploiting the properties of polarimetric SAR data, several deep learning network models dedicated to polarimetric SAR data have been designed to perform classification and change detection on PolSAR images. Although machine learning and deep learning are widely applied in image segmentation, research on ridge line extraction remains scarce. Therefore, the present method builds on machine learning and deep learning and, from the perspective of a fully convolutional neural network, uses the PAN network combined with the result of polarization decomposition to extract the ridge line, achieving truly fully automatic extraction and significantly improving the extraction accuracy of the ridge line.
Disclosure of Invention
In view of the deficiencies pointed out in the background art, the invention provides a PolSAR image ridge line extraction method based on a pyramid attention network, aiming to solve the above problems in the prior art.
To achieve this purpose, the invention adopts the following technical scheme:
The PolSAR image ridge line extraction method based on the pyramid attention network comprises the following steps:
S101, preprocessing the PolSAR image and performing polarization decomposition to obtain the volume scattering component feature of the Yamaguchi decomposition and the Lambda feature, and combining the volume scattering component feature and the Lambda feature into a polarization combination feature, so that buildings and ridges can subsequently be labeled to obtain training samples and test samples;
S102, labeling buildings and ridges on the polarization combination feature map to obtain a labeled image, and segmenting the labeled image to obtain a training set and a test set;
S103, inputting the training set into a Pyramid Attention Network (PAN) model, which first performs a 7 × 7 convolution and then trains an encoder to extract features from the input image; the input image features are then fed into a Feature Pyramid Attention (FPA) module to extract pixel-level feature information at different scales; finally, a Global Attention Upsampling (GAU) module decodes the image layer by layer through a global pooling operation, accurately combining low-level and high-level feature information to complete the extraction of the ridge line.
Preferably, when the buildings and ridges are labeled, the labeling tool used is LabelMe.
Preferably, the labeled image segmentation method comprises: setting the step size to 1, dividing the labeled image into 1406 samples of size 256 × 256 with indices increasing sequentially from 0 to 1405, and then outputting two sets: a training set composed of 983 samples randomly selected from the total samples, with the remaining 423 samples automatically forming the test set.
Preferably, the encoder employs a ResNet-101 network model.
Preferably, the FPA module adopts a U-shaped network structure.
Preferably, the method for extracting pixel-level feature information at different scales comprises: multiplying the mapping features output by the feature pyramid attention module FPA by the feature information obtained after a 1 × 1 convolution, and combining the result with the input features of the GAU module.
Preferably, the global pooling operation of the GAU module is as follows: a 3 × 3 convolution is first applied to the low-level features, global pooling and a 1 × 1 convolution are applied to the high-level features, the features obtained from the two convolutions are multiplied, and the result is then connected with the high-level features to guide the classification of the low-level features.
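For illustration, the following is a minimal PyTorch sketch of a GAU block following the recipe above (3 × 3 convolution on the low-level features, global pooling plus 1 × 1 convolution on the high-level features, multiplication of the two, then connection with the high-level features). The patent gives no code; the class and parameter names, the sigmoid gate and the bilinear upsampling are assumptions of this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalAttentionUpsample(nn.Module):
    """Sketch of a GAU block: global context from the high-level features
    guides (gates) the low-level features, as described in the patent text."""
    def __init__(self, low_ch: int, high_ch: int, out_ch: int):
        super().__init__()
        # 3 x 3 convolution applied to the low-level features
        self.low_conv = nn.Sequential(
            nn.Conv2d(low_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True))
        # global pooling followed by a 1 x 1 convolution on the high-level features
        self.high_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(high_ch, out_ch, kernel_size=1, bias=False),
            nn.Sigmoid())  # gate nonlinearity is an assumption, not stated in the patent
        self.high_proj = nn.Conv2d(high_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        low = self.low_conv(low)                  # 3 x 3 conv on the low-level path
        gate = self.high_gate(high)               # global context vector (N, C, 1, 1)
        attended = low * gate                     # multiply the two convolution outputs
        high_up = F.interpolate(self.high_proj(high), size=low.shape[2:],
                                mode="bilinear", align_corners=False)
        return attended + high_up                 # connect with the high-level features
```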
Compared with the prior art, the invention has the following beneficial effects:
The invention embeds context information at different scales on the basis of a fully convolutional neural network, enlarges the receptive field and efficiently recovers pixel-level localization details. From the perspective of a fully convolutional neural network, the ridge line is extracted with the PAN network combined with the result of polarization decomposition. The method removes error-prone operations such as manual sampling and calculation, achieves truly fully automatic extraction, and reaches a final extraction accuracy above 98%, significantly improving ridge line extraction accuracy while improving the automation, robustness and universality of the model.
Drawings
Fig. 1 shows the different polarization feature maps obtained by polarimetric decomposition of the PolSAR data according to an embodiment of the present invention.
FIG. 2 is an annotated image provided by an embodiment of the invention.
Fig. 3 is a partial training set provided by an embodiment of the present invention.
FIG. 4 is a partial test set provided by an embodiment of the present invention.
Fig. 5 is a diagram showing the result of classifying the ridge and the building according to the present invention.
FIG. 6 is a diagram of different types of evaluation functions provided by embodiments of the present invention.
Detailed description of the preferred embodiments
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
1. For data source processing, the polarimetric synthetic aperture radar (PolSAR) data of the experimental region are first corrected, noise-filtered, registered and clipped, where the noise filtering uses refined Lee filtering with a window size of 11 × 11. Polarization decomposition and texture decomposition are then applied to the PolSAR data to obtain the 9 polarization features shown in Fig. 1: Fig. 1 (a) and (c) show Yamaguchi four-component decomposition features, Fig. 1 (b), (f) and (i) show H/A combination features, Fig. 1 (d) shows the Anisotropy feature, Fig. 1 (e) shows the Lambda feature, Fig. 1 (g) shows the Alpha feature, and Fig. 1 (h) shows the Entropy feature. From these features, those meeting the experimental requirements are selected; the present invention selects the Lambda feature and the volume scattering component feature of the Yamaguchi decomposition (Yamaguchi_vol).
As can be seen from Fig. 1, the gray values of the building regions are low in the Yamaguchi volume scattering component feature map of Fig. 1 (a), while the ridges in the Lambda feature map of Fig. 1 (e) are brighter, with higher gray values. The feature images other than these two suffer from an unclear pixel distribution in the regions to be extracted, serious speckle scattering and similar problems. Therefore, the invention finally selects the volume scattering component feature of the Yamaguchi decomposition and the Lambda feature as the research objects. The volume scattering component feature is combined with the Lambda feature to facilitate the subsequent labeling of buildings and ridges for obtaining training and test samples.
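As an illustration of this step, a minimal NumPy sketch of stacking the two selected feature maps into a combined image; the percentile normalization and the array names are assumptions of this sketch, not specified in the patent:

```python
import numpy as np

def normalize(band: np.ndarray) -> np.ndarray:
    """Scale a feature band to [0, 1]; the percentile clipping (assumed here)
    suppresses speckle outliers before stacking."""
    lo, hi = np.percentile(band, (1, 99))
    return np.clip((band - lo) / (hi - lo + 1e-12), 0.0, 1.0)

def combine_features(yamaguchi_vol: np.ndarray, lambda_feat: np.ndarray) -> np.ndarray:
    """Stack the Yamaguchi volume scattering feature and the Lambda feature
    channel-wise to form the polarization combination feature image."""
    return np.stack([normalize(yamaguchi_vol), normalize(lambda_feat)], axis=-1)
```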
2. The polarization combination feature map is labeled: buildings and ridges are labeled in the map to obtain a labeled image. The labeling tool used in the method is LabelMe, an open-source deep learning labeling tool that can be used directly after loading into a Python development environment. Labeling yields the result shown in Fig. 2, where the dark gray part is buildings, the light gray part is ridges, and the black part is background.
The labeled image is then segmented to obtain a training set and a test set. To obtain as many training and test samples as possible, the step size is set to 1 and the labeled image is divided into 1406 samples of size 256 × 256, with indices increasing sequentially from 0 to 1405. Two sets are then output: a training set composed of 983 samples randomly selected from the total samples, and a test set formed by the remaining 423 samples. Figs. 3 and 4 show part of the training set and part of the test set, respectively.
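A sliding-window sketch of this segmentation step in NumPy; the exact stride that yields 1406 patches depends on the scene dimensions, which the patent does not state, so the stride and the function names here are assumptions:

```python
import numpy as np

def extract_patches(image: np.ndarray, label: np.ndarray,
                    size: int = 256, stride: int = 256):
    """Slide a window over the labeled scene and collect (patch, mask) pairs."""
    patches = []
    h, w = label.shape[:2]
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append((image[y:y + size, x:x + size],
                            label[y:y + size, x:x + size]))
    return patches

def train_test_split(patches, n_train: int = 983, seed: int = 0):
    """Randomly select n_train samples for training; the rest form the test set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(patches))
    train = [patches[i] for i in idx[:n_train]]
    test = [patches[i] for i in idx[n_train:]]
    return train, test
```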
3. After the training set is obtained, ridges and buildings are extracted. The training set is input into the Pyramid Attention Network (PAN) model; the input and output of the model keep the same size, so a result map is obtained directly. Following the pyramid structure of the model, a 7 × 7 convolution is performed first, then the encoder processes the result to extract the features of the input image. A ResNet-101 network is used as the encoder because it keeps the model complexity moderate while guaranteeing segmentation accuracy. Next, the input image features are fed into the Feature Pyramid Attention (FPA) module, which has the U-shaped structure of a feature pyramid network; the mapping features output by the FPA module are multiplied by the feature information obtained after a 1 × 1 convolution, and the result is combined with the input features of the Global Attention Upsampling (GAU) module in order to extract feature information at different scales and enlarge the pixel-level receptive field. The GAU module performs a global pooling operation: a 3 × 3 convolution is first applied to the low-level features, global pooling and a 1 × 1 convolution are applied to the high-level features, the features from the two convolutions are multiplied, and the result is connected with the high-level features to guide the classification of the low-level features, completing the localization of pixels of different categories and improving the accuracy of the model on detail segmentation. The PAN network is trained for 50 epochs to obtain the recognition result shown in Fig. 5.
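A condensed PyTorch sketch of the forward pass described above: a ResNet-101 encoder (whose stem already begins with a 7 × 7 convolution), a simplified FPA module reduced to its multiplicative-attention core, and the GlobalAttentionUpsample class from the earlier sketch. The channel widths, class count and module names are assumptions, and the real FPA/GAU stages are richer than shown here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet101
# GlobalAttentionUpsample is the class defined in the earlier GAU sketch.

class FeaturePyramidAttention(nn.Module):
    """Simplified FPA: a downsampling pyramid produces coarse context that
    reweights the features from a 1 x 1 convolution branch."""
    def __init__(self, ch: int):
        super().__init__()
        self.mid = nn.Conv2d(ch, ch, kernel_size=1, bias=False)   # 1 x 1 branch
        self.pyramid = nn.Sequential(                             # coarse context branch
            nn.Conv2d(ch, ch, kernel_size=7, stride=2, padding=3, bias=False),
            nn.Conv2d(ch, ch, kernel_size=5, stride=2, padding=2, bias=False),
            nn.Conv2d(ch, ch, kernel_size=3, stride=2, padding=1, bias=False))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        context = F.interpolate(self.pyramid(x), size=x.shape[2:],
                                mode="bilinear", align_corners=False)
        return self.mid(x) * context   # multiply pyramid output with the 1 x 1 features

class PANSketch(nn.Module):
    def __init__(self, n_classes: int = 3):   # background / building / ridge (assumed)
        super().__init__()
        backbone = resnet101(weights=None)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,   # 7 x 7 convolution stem
                                  backbone.relu, backbone.maxpool)
        self.layer1, self.layer2 = backbone.layer1, backbone.layer2
        self.layer3, self.layer4 = backbone.layer3, backbone.layer4
        self.fpa = FeaturePyramidAttention(2048)
        self.gau = GlobalAttentionUpsample(low_ch=1024, high_ch=2048, out_ch=256)
        self.head = nn.Conv2d(256, n_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        c3 = self.layer3(self.layer2(self.layer1(self.stem(x))))  # lower-level features
        c4 = self.layer4(c3)                                      # high-level features
        out = self.gau(c3, self.fpa(c4))                          # decode layer by layer
        return F.interpolate(self.head(out), size=x.shape[2:],
                             mode="bilinear", align_corners=False)
```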
4. The test set is used to verify the recognition accuracy of the result. The invention verifies the extraction accuracy from four aspects: the loss function (loss), pixel accuracy (PA), mean pixel accuracy (MPA) and mean Intersection over Union (MIoU). The loss function tracks the change of the loss value at each iteration of the model. Pixel accuracy is the simplest indicator: the ratio between the number of correctly classified pixels and the total number of pixels. Mean pixel accuracy extends pixel accuracy by averaging, per class, the probability of correctly classifying the pixels of that class. The mean intersection over union is the ratio of the intersection to the union of two sets, namely the ground truth and the prediction in image segmentation. The specific results are shown in Fig. 6, where the abscissa is the epoch, i.e., the number of training rounds, 50 in total; the ordinate is the value of each index; the curve with smaller fluctuation represents the training set and the curve with larger fluctuation represents the test set. First, the loss plot in Fig. 6 shows that the loss values of the 6th and 17th epochs deviate strongly from the others, at 3.6189 and 2.7775 respectively, while the 49th epoch deviates only slightly, with a loss value of 1.3281. Second, the PA plot in Fig. 6 shows that the overall pixel accuracy exceeds 98%; the pixel accuracy of the 17th epoch, 96.63%, is below the overall level but almost negligible. Third, the MPA plot in Fig. 6 shows an overall accuracy above 75%; only the 17th epoch, at 57.11%, is unsatisfactory. The mean pixel accuracy is lower than the pixel accuracy overall because operations such as image translation, rotation and restoration during training affect it; for this study the pixel accuracy is the more convincing indicator of result quality. Finally, the MIoU plot in Fig. 6 shows that the training results are uniform compared with the previous analyses; apart from the larger deviation at the 17th epoch, the remaining epochs give good results.
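These three accuracy indices can be computed from a confusion matrix. The patent defines them only verbally, so the NumPy sketch below uses the standard formulas:

```python
import numpy as np

def confusion_matrix(pred: np.ndarray, truth: np.ndarray, n_classes: int) -> np.ndarray:
    """Count pixels per (ground truth, prediction) pair; inputs are
    integer class-index arrays of the same shape."""
    idx = truth.reshape(-1) * n_classes + pred.reshape(-1)
    return np.bincount(idx, minlength=n_classes ** 2).reshape(n_classes, n_classes)

def metrics(cm: np.ndarray):
    eps = 1e-12
    pa = np.trace(cm) / (cm.sum() + eps)               # pixel accuracy (PA)
    per_class = np.diag(cm) / (cm.sum(axis=1) + eps)   # per-class correct rate
    mpa = per_class.mean()                             # mean pixel accuracy (MPA)
    union = cm.sum(axis=0) + cm.sum(axis=1) - np.diag(cm)
    miou = (np.diag(cm) / (union + eps)).mean()        # mean intersection over union
    return pa, mpa, miou
```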
The extraction accuracy of the PolSAR image ridge line reaches above 98%, which is a significant improvement over existing ridge line extraction methods, and the invention improves the automation, robustness and universality of the model.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.
Claims (7)
1. A PolSAR image ridge line extraction method based on a pyramid attention network, characterized by comprising the following steps:
S101, preprocessing the PolSAR image and performing polarization decomposition to obtain the volume scattering component feature of the Yamaguchi decomposition and the Lambda feature, and combining the volume scattering component feature and the Lambda feature into a polarization combination feature, so that buildings and ridges can subsequently be labeled to obtain training samples and test samples;
S102, labeling buildings and ridges on the polarization combination feature map to obtain a labeled image, and segmenting the labeled image to obtain a training set and a test set;
S103, inputting the training set into a pyramid attention network PAN model, which first performs a 7 × 7 convolution and then trains an encoder to extract input image features; then inputting the input image features into a feature pyramid attention FPA module and extracting pixel-level feature information at different scales; and finally performing layer-by-layer image decoding with a global attention upsampling GAU module through a global pooling operation, accurately combining low-level and high-level feature information to complete the extraction of the ridge line.
2. The pyramid attention network-based PolSAR image ridge line extraction method of claim 1, characterized in that when the buildings and ridges are labeled, the labeling tool used is LabelMe.
3. The pyramid attention network-based PolSAR image ridge line extraction method of claim 1, wherein the labeled image segmentation method comprises: setting the step size to 1, dividing the labeled image into 1406 samples of size 256 × 256 with indices increasing sequentially from 0 to 1405, and then outputting two sets: a training set composed of 983 samples randomly selected from the total samples, with the remaining 423 samples automatically forming the test set.
4. The pyramid attention network-based PolSAR image ridge line extraction method of claim 1, wherein the encoder adopts a ResNet-101 network model.
5. The pyramid attention network-based PolSAR image ridge line extraction method of claim 1, wherein the FPA module adopts a U-shaped network structure.
6. The pyramid attention network-based PolSAR image ridge line extraction method of claim 5, wherein the method for extracting pixel-level feature information at different scales is as follows: multiplying the mapping features output by the feature pyramid attention module FPA by the feature information obtained after a 1 × 1 convolution, and combining the result with the input features of the GAU module.
7. The pyramid attention network-based PolSAR image ridge line extraction method of claim 6, wherein the global pooling operation of the GAU module is as follows: a 3 × 3 convolution is first applied to the low-level features, global pooling and a 1 × 1 convolution are applied to the high-level features, the features obtained from the two convolutions are multiplied, and the result is then connected with the high-level features to guide the classification of the low-level features.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110547251.1A CN113191374B (en) | 2021-05-19 | 2021-05-19 | PolSAR image ridge line extraction method based on pyramid attention network |
DE102022112299.9A DE102022112299A1 (en) | 2021-05-19 | 2022-05-17 | METHOD FOR EXTRACTING CREST LINES FROM POLSAR IMAGES BASED ON A PYRAMID ATTENTION NETWORK |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110547251.1A CN113191374B (en) | 2021-05-19 | 2021-05-19 | PolSAR image ridge line extraction method based on pyramid attention network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113191374A CN113191374A (en) | 2021-07-30 |
CN113191374B (en) | 2023-04-18
Family
ID=76982497
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110547251.1A Active CN113191374B (en) | 2021-05-19 | 2021-05-19 | PolSAR image ridge line extraction method based on pyramid attention network |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113191374B (en) |
DE (1) | DE102022112299A1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115311533B (en) * | 2022-08-12 | 2023-04-18 | 哈尔滨市科佳通用机电股份有限公司 | Vehicle door sliding track breaking fault detection method |
CN115236655B (en) | 2022-09-01 | 2022-12-20 | 成都理工大学 | Landslide identification method, system, equipment and medium based on fully-polarized SAR |
CN115797768B (en) * | 2022-11-30 | 2024-05-17 | 甘肃省地震局(中国地震局兰州地震研究所) | SAR building earthquake damage estimation method based on texture features |
CN116842445B (en) * | 2023-07-03 | 2024-06-11 | 山东科技大学 | Method and system for automatically recognizing awakening based on multi-mode space-time spectrum fusion |
CN117830313B (en) * | 2024-03-05 | 2024-05-28 | 四川泓宝润业工程技术有限公司 | Method, device and system for detecting lower part of oilfield wellhead based on deep learning |
CN118551341A (en) * | 2024-07-30 | 2024-08-27 | 北京市农林科学院智能装备技术研究中心 | Corn crop coefficient measuring and calculating method, device, equipment, medium and program product |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018026094A1 (en) * | 2016-08-05 | 2018-02-08 | 인하대학교 산학협력단 | Method and system for automatically generating ortho-photo texture by using dem data |
CN111602147A (en) * | 2017-11-17 | 2020-08-28 | 脸谱公司 | Machine learning model based on non-local neural network |
CN112241956A (en) * | 2020-11-03 | 2021-01-19 | 甘肃省地震局(中国地震局兰州地震研究所) | PolSAR image ridge line extraction method based on region growing method and variation function |
CN112561876A (en) * | 2020-12-14 | 2021-03-26 | 中南大学 | Image-based pond and reservoir water quality detection method and system |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8155391B1 (en) * | 2006-05-02 | 2012-04-10 | Geoeye Solutions, Inc. | Semi-automatic extraction of linear features from image data |
JP5894358B2 (en) * | 2010-06-16 | 2016-03-30 | デクセリアルズ株式会社 | Optical body, wall material, joinery, solar shading device, and building |
JP6324025B2 (en) * | 2013-11-05 | 2018-05-16 | キヤノン株式会社 | Information processing apparatus and information processing method |
CN109031301A (en) * | 2018-09-26 | 2018-12-18 | 云南电网有限责任公司电力科学研究院 | Alpine terrain deformation extracting method based on PSInSAR technology |
CN110084210B (en) * | 2019-04-30 | 2022-03-29 | 电子科技大学 | SAR image multi-scale ship detection method based on attention pyramid network |
CN111860233B (en) * | 2020-07-06 | 2021-05-18 | 中国科学院空天信息创新研究院 | SAR image complex building extraction method and system based on attention network selection |
CN112101084B (en) * | 2020-07-22 | 2023-07-11 | 甘肃省地震局(中国地震局兰州地震研究所) | Automatic polarized SAR building earthquake hazard information extraction method based on convolutional neural network |
CN112083422B (en) * | 2020-08-26 | 2023-08-22 | 长沙理工大学 | Single-navigation InSAR system end-to-end classification method based on multistage deep learning network |
2021
- 2021-05-19: CN application CN202110547251.1A; patent CN113191374B (active)
2022
- 2022-05-17: DE application DE102022112299.9A; DE102022112299A1 (pending)
Also Published As
Publication number | Publication date |
---|---|
CN113191374A (en) | 2021-07-30 |
DE102022112299A1 (en) | 2022-11-24 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||