CN107239751B - High-resolution SAR image classification method based on non-subsampled contourlet full convolution network - Google Patents
- Publication number: CN107239751B
- Application number: CN201710364900.8A
- Authority
- CN
- China
- Prior art keywords
- data set
- layer
- image
- test data
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
Abstract
A high-resolution SAR image classification method based on a non-subsampled contourlet fully convolutional network. A high-resolution SAR image to be classified is input, and a multi-level non-subsampled contourlet transform is applied to each pixel to obtain its low-frequency and high-frequency coefficients. Selected low- and high-frequency coefficients are fused into a pixel-based feature matrix F; the element values of F are normalized to obtain the normalized feature matrix F1; F1 is cut into blocks to obtain the feature-block matrix F2, which serves as sample data. A training feature matrix W1 and a test feature matrix W2 are constructed, a classification model based on a fully convolutional neural network is built and trained, and the trained model classifies the test data set T to obtain the class of every pixel in T. Comparing the predicted class of each pixel with the label map yields the classification accuracy; both classification accuracy and classification speed are improved.
Description
Technical Field
The invention belongs to the field of image processing, and in particular relates to a high-resolution SAR image classification method based on a non-subsampled contourlet fully convolutional network. It can be applied to high-resolution SAR images and effectively improves target recognition accuracy.
Background
Synthetic Aperture Radar (SAR) is a remote sensing sensor that has been widely studied and applied in recent years. Compared with optical, infrared, and other sensors, SAR imaging is not limited by weather or illumination and can carry out all-weather, around-the-clock reconnaissance of targets of interest. SAR also has a certain penetration capability, so targets can be detected under unfavorable conditions such as cloud interference, tree occlusion, or shallow burial below the surface. In addition, because of its special imaging mechanism, a high-resolution SAR image contains content different from that of other sensors, providing richer and more comprehensive information for target detection. These advantages give SAR great application potential. In recent years SAR research has attracted wide attention, and many results have been successfully applied to environmental monitoring, terrain measurement, target detection, and other tasks.
The key to high-resolution SAR image classification is extracting target features from the image. Existing SAR image classification techniques include statistics-based methods, texture-based methods, and deep-learning-based methods.
Statistics-based methods classify according to differences in the statistical characteristics of image regions of different kinds, but they ignore the spatial distribution of the image, so the classification results are often unsatisfactory. Texture-based classification methods have also appeared in recent years, such as those based on the gray-level co-occurrence matrix (GLCM), Markov random fields (MRF), and Gabor wavelets. However, because SAR imaging is coherent, the texture in SAR images is weak and not robust; moreover, computing texture features requires scanning the image point by point, which is computationally expensive and cannot meet real-time requirements.
The traditional SAR classification methods above rely on manually extracted shallow features that represent target characteristics. Such features, obtained merely by mapping the raw input signal into a problem-specific space, cannot fully characterize the neighborhood correlation between target pixels. In 2006, Hinton et al. proposed an unsupervised layer-by-layer greedy training method that addressed the vanishing-gradient problem caused by increasing depth. Subsequently, many deep learning models were proposed for different applications, such as the Deep Belief Network (DBN), Stacked Denoising Autoencoders (SDA), and the Convolutional Neural Network (CNN). None of these feature extraction methods, however, takes into account the multi-scale, multi-directional, multi-resolution characteristics of high-resolution SAR images, so it is difficult to achieve high classification accuracy on high-resolution SAR images with complex backgrounds.
Summary of the Invention
In view of the above problems in the prior art, the purpose of the present invention is to provide a high-resolution SAR image classification method based on a non-subsampled contourlet fully convolutional network. By exploiting the multi-scale, multi-directional, multi-resolution characteristics of high-resolution SAR images, it improves both classification accuracy and classification speed, and thereby effectively improves target recognition accuracy.
To achieve the above object, the technical solution adopted by the present invention comprises the following steps:
1) Input the high-resolution SAR image to be classified, perform a multi-level non-subsampled contourlet transform on each pixel, and obtain the low-frequency and high-frequency coefficients of each pixel;
2) Select and fuse the low-frequency and high-frequency coefficients to form a pixel-based feature matrix F;
3) Normalize the element values of F to the interval [0, 1] to obtain the normalized feature matrix F1;
4) Cut F1 into blocks to obtain the feature-block matrix F2, which serves as sample data;
5) Construct the training feature matrix W1 from the training data set D and the test feature matrix W2 from the test data set T;
6) Construct a classification model based on a fully convolutional neural network;
7) Train the classification model on the training data set D to obtain a trained model;
8) Use the trained model to classify the test data set T, obtain the class of each pixel in T, compare the predicted class of each pixel with the label map, and compute the classification accuracy.
In step 1), a three-level non-subsampled contourlet transform is applied to each pixel of the image. The transform consists of a non-subsampled pyramid (NSP) decomposition and a non-subsampled directional filter bank (NSDFB) decomposition: the NSP decomposes the time-frequency plane into one low-frequency subband and several annular high-frequency subbands through a non-subsampled filter bank, and the bandpass images produced by the NSP are further decomposed by the NSDFB to obtain the coefficients of the bandpass sub-images.
In step 2), the high-frequency coefficients are sorted in descending order, and the top 50% are selected and fused with the low-frequency coefficients of the third decomposition level. A pixel-based feature matrix F of size M1×M2×1 is defined, where M1 is the height of the SAR image to be classified and M2 its width, and the fusion result is assigned to F.
The normalization in step 3) is realized by linear feature scaling, feature standardization, or feature whitening. Linear feature scaling first finds the maximum value max(F) of the pixel-based feature matrix F, then divides every element of F by max(F) to obtain the normalized feature matrix F1.
In step 4), the normalized feature matrix F1 is cut into blocks of size 128×128 with a stride of 50.
The specific operation of step 5) is as follows:
5a) Divide the ground objects in the SAR image into 3 classes, record the position in the image of the pixels belonging to each class, and generate three position sets A1, A2, A3, which respectively give the positions in the image to be classified of the pixels of the three classes;
5b) Randomly select 5% of the elements of A1, A2, A3 to generate three position sets B1, B2, B3 of the pixels selected for the training data set, where B1, B2, and B3 hold the positions in the image to be classified of the training pixels of class 1, class 2, and class 3, respectively; merge the elements of B1, B2, B3 into L1, the positions of all training pixels in the image to be classified;
5c) Use the remaining 95% of the elements of A1, A2, A3 to generate three position sets C1, C2, C3 of the pixels selected for the test data set, where C1, C2, and C3 hold the positions in the image to be classified of the test pixels of class 1, class 2, and class 3, respectively; merge the elements of C1, C2, C3 into L2, the positions of all test pixels in the image to be classified;
5d) Define the training feature matrix W1 of the training data set D, take the values at the positions given by L1 from the feature-block matrix F2, and assign them to W1;
5e) Define the test feature matrix W2 of the test data set T, take the values at the positions given by L2 from the feature-block matrix F2, and assign them to W2.
Constructing the classification model based on a fully convolutional neural network in step 6) comprises the following steps:
6a) Select a 17-layer deep neural network composed, in order, of an input layer, convolutional layer, pooling layer, convolutional layer, pooling layer, convolutional layer, pooling layer, convolutional layer, pooling layer, convolutional layer, Dropout layer, convolutional layer, Dropout layer, convolutional layer, deconvolution (upsampling) layer, Crop layer, and softmax classifier, with the following parameters:
For layer 1, the input layer, set the number of feature maps to 3;
For layer 2, a convolutional layer, set the number of feature maps to 32 and the convolution kernel size to 5×5;
For layer 3, a pooling layer, set the downsampling size to 2;
For layer 4, a convolutional layer, set the number of feature maps to 64 and the convolution kernel size to 5×5;
For layer 5, a pooling layer, set the downsampling size to 2;
For layer 6, a convolutional layer, set the number of feature maps to 96 and the convolution kernel size to 3×3;
For layer 7, a pooling layer, set the downsampling size to 2;
For layer 8, a convolutional layer, set the number of feature maps to 128 and the convolution kernel size to 3×3;
For layer 9, a pooling layer, set the downsampling size to 2;
For layer 10, a convolutional layer, set the number of feature maps to 128 and the convolution kernel size to 3×3;
For layer 11, a Dropout layer, set the sparsity (dropout) coefficient to 0.5;
For layer 12, a convolutional layer, set the number of feature maps to 128 and the convolution kernel size to 1×1;
For layer 13, a Dropout layer, set the sparsity (dropout) coefficient to 0.5;
For layer 14, a convolutional layer, set the number of feature maps to 2 and the convolution kernel size to 1×1;
For layer 15, the deconvolution upsampling layer, set the number of feature maps to 2 and the kernel size to 32×32;
For layer 16, the Crop layer, set the final cropped size to 128×128;
For layer 17, the Softmax classifier, set the number of feature maps to 2;
6b) Set the convolution kernel size of the second convolutional layer to 5×5 to reduce the receptive field.
In step 7), the training feature matrix W1 is taken as the input of the classification model and the class of each pixel in the training data set D as its output. The error between the predicted classes and the manually labeled correct classes is computed and back-propagated to optimize the network parameters, yielding the trained classification model.
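Step 7) only states that the error between predicted and manually labeled classes is back-propagated; the concrete loss function is not given. Below is a minimal sketch of the usual choice for fully convolutional networks, a per-pixel softmax cross-entropy and its gradient with respect to the logits (an assumption, not taken from the patent; the function and argument names are illustrative):

```python
import numpy as np

def pixel_softmax_xent(logits, labels):
    """Per-pixel softmax cross-entropy loss and its gradient w.r.t. the
    logits -- the quantity that would be back-propagated through the
    fully convolutional network. logits: (H, W, C); labels: (H, W) ints."""
    shifted = logits - logits.max(axis=-1, keepdims=True)   # numerical stability
    exp = np.exp(shifted)
    probs = exp / exp.sum(axis=-1, keepdims=True)           # per-pixel softmax
    h, w, c = logits.shape
    onehot = np.eye(c)[labels]                              # (H, W, C) targets
    loss = -np.mean(np.sum(onehot * np.log(probs + 1e-12), axis=-1))
    grad = (probs - onehot) / (h * w)                       # d(loss)/d(logits)
    return loss, grad
```

With uniform logits over two classes the loss is log 2, and the gradient over the class axis sums to zero at every pixel, as expected for a softmax-based loss.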
In step 8), the test feature matrix W2 is fed to the trained classification model; the model output is the predicted class of each pixel in the test data set T.
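The accuracy computation of step 8) can be sketched as follows; `pred_map`, `label_map`, and `test_positions` are illustrative names for the model's per-pixel output, the manual label map, and the test position list L2:

```python
import numpy as np

def classification_accuracy(pred_map, label_map, test_positions):
    """Compare the predicted class of every test pixel with the manually
    labeled class map and return the fraction classified correctly."""
    rows, cols = test_positions[:, 0], test_positions[:, 1]
    correct = pred_map[rows, cols] == label_map[rows, cols]
    return correct.mean()
```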
Compared with the prior art, the present invention has the following beneficial effects. By extending image-block features into pixel-level features, the repeated storage and convolution computation caused by using pixel blocks are avoided, which improves classification speed and efficiency. Because a multi-level non-subsampled contourlet transform is applied before the fully convolutional network, both low-frequency and high-frequency coefficients are obtained: the low-frequency coefficients give a coarse approximation of the target, i.e., basic information such as the region where the target lies, while the high-frequency coefficients capture its details more precisely, so the low-frequency coefficients have stronger discriminative power than the high-frequency ones. Selecting and fusing the low- and high-frequency coefficients therefore improves classification accuracy. Since the fully connected layers of a convolutional neural network are replaced with deconvolutional layers, the model accepts input images of arbitrary size and does not require all training and test images to have the same dimensions. In summary, the high-resolution SAR image classification method of the present invention improves not only classification accuracy but also classification speed.
Brief Description of the Drawings
Fig. 1 is a flowchart of the classification method of the present invention;
Fig. 2 is the manual label map of the image to be classified;
Fig. 3 is the classification result for the image to be classified.
Detailed Description of the Embodiments
The present invention is described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, the image classification method of the present invention is implemented as follows:
Step 1. Input the high-resolution SAR image to be classified and apply a three-level non-subsampled contourlet transform to each pixel to obtain its high- and low-frequency coefficients. The image to be classified is an X-band, horizontally polarized image acquired by the German Aerospace Center (DLR) F-SAR airborne system in 2007, with a resolution of 1 m and an image size of 6187×4278.
1a) Transform the classification features of each pixel to obtain transform coefficients; candidate transforms include the wavelet transform, the non-subsampled stationary wavelet transform, the curvelet transform, and the non-subsampled contourlet transform;
1b) This example applies a three-level non-subsampled contourlet transform to each pixel; the transform comprises a non-subsampled pyramid (NSP) decomposition and a non-subsampled directional filter bank (NSDFB);
1c) The NSP transform uses non-subsampled filter banks (NSFBs) to decompose the time-frequency plane into one low-frequency subband and several annular high-frequency subbands;
1d) The NSDFB is a two-channel non-subsampled filter bank;
In this example the image passes through three levels of NSP filtering, yielding the coefficients of one lowpass image and three bandpass images;
After the multi-scale NSP decomposition, the bandpass images are further decomposed by the NSDFB into 0, 1, and 3 levels of directional subbands, yielding 1, 2, and 8 bandpass sub-image coefficient sets, respectively.
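The patent does not specify the pyramid filters. As a rough illustration of what a non-subsampled pyramid does, here is an à-trous-style sketch with a B3-spline lowpass kernel (an assumed stand-in for the patent's filter bank): each level filters with a dilated kernel and takes the residual, so every subband keeps the full image size and the decomposition is perfectly invertible by summation.

```python
import numpy as np

def _sep_filter(img, k):
    """Separable 'same' convolution: filter rows, then columns."""
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)

def nsp_decompose(img, levels=3):
    """A-trous sketch of a non-subsampled pyramid: no downsampling,
    one lowpass subband plus `levels` annular high-frequency subbands."""
    base = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # B3-spline lowpass
    current = np.asarray(img, dtype=float)
    highs = []
    for level in range(levels):
        k = np.zeros(4 * 2**level + 1)   # dilate the kernel by inserting zeros
        k[:: 2**level] = base
        low = _sep_filter(current, k)
        highs.append(current - low)      # high-frequency residual subband
        current = low
    return current, highs
```

Because each high subband is the residual of its level, the original image is exactly the final lowpass plus the sum of all high subbands, mirroring the redundancy of the non-subsampled transform.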
Step 2. Select and fuse the high- and low-frequency coefficients to form the pixel-based feature matrix F. In this example the high-frequency coefficients obtained by the decomposition are sorted in descending order, the top 50% are selected, and they are fused with the low-frequency coefficients of the third level as the transform-domain classification feature. A matrix of size M1×M2×1 is defined and the fusion result is assigned to it, giving the pixel-based feature matrix F, where M1 is the height of the SAR image to be classified and M2 its width.
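The exact fusion operator is not spelled out in the patent. Below is a sketch under the assumption that "fusion" means zeroing all but the largest 50% of high-frequency coefficients (by magnitude) and summing the survivors onto the third-level lowpass plane; the function name and the summation rule are illustrative:

```python
import numpy as np

def fuse_coefficients(low, highs, keep=0.5):
    """Stack all high-frequency subbands, keep only the largest `keep`
    fraction of coefficients by magnitude, and fuse them with the
    third-level lowpass plane into one M1 x M2 x 1 feature matrix."""
    stacked = np.stack(highs)                        # (n_subbands, M1, M2)
    thresh = np.quantile(np.abs(stacked), 1.0 - keep)
    selected = np.where(np.abs(stacked) >= thresh, stacked, 0.0)
    fused = low + selected.sum(axis=0)               # assumed fusion: summation
    return fused[..., np.newaxis]                    # M1 x M2 x 1
```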
Step 3. Normalize the pixel-based feature matrix F.
Commonly used normalization methods include linear feature scaling, feature standardization, and feature whitening.
This example uses linear feature scaling: first find the maximum value max(F) of the pixel-based feature matrix F, then divide every element of F by max(F) to obtain the normalized feature matrix F1.
Step 4. Cut the normalized feature matrix F1 into blocks of size 128×128 with a stride of 50, forming the smaller feature-block matrix F2, which serves as sample data.
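The dicing of step 4 is a plain sliding window; a minimal sketch (names are illustrative):

```python
import numpy as np

def dice_into_blocks(f1, block=128, stride=50):
    """Cut the normalized feature matrix F1 into overlapping
    block x block patches with the given stride (sliding window)."""
    rows, cols = f1.shape[:2]
    patches = []
    for r in range(0, rows - block + 1, stride):
        for c in range(0, cols - block + 1, stride):
            patches.append(f1[r:r + block, c:c + block])
    return np.stack(patches)   # (n_patches, block, block, ...)
```

With a stride of 50 and a block size of 128 the patches overlap, so every pixel of F1 (away from the border) appears in several training samples.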
Step 5. Construct the training feature matrix W1 from the training data set D and the test feature matrix W2 from the test data set T, as follows:
5a) Divide the ground objects in the SAR image into 3 classes, record the position in the image of the pixels of each class, and generate three position sets A1, A2, A3 corresponding to the pixels of the different classes;
Here A1, A2, and A3 hold the positions in the image to be classified of the pixels of class 1, class 2, and class 3, respectively;
5b) Randomly select 5% of the elements of the class position sets A1, A2, A3 to generate three position sets B1, B2, B3 of the pixels of the different classes that are selected for the training data set;
Here B1, B2, and B3 hold the positions in the image to be classified of the training pixels of class 1, class 2, and class 3, respectively; merge the elements of B1, B2, B3 into L1, the positions of all training pixels in the image to be classified;
5c) Use the remaining 95% of the elements of A1, A2, A3 to generate three position sets C1, C2, C3 of the pixels of the different classes that are selected for the test data set, where C1, C2, and C3 hold the positions in the image to be classified of the test pixels of class 1, class 2, and class 3, respectively; merge the elements of C1, C2, C3 into L2, the positions of all test pixels in the image to be classified;
5d) Define the training feature matrix W1 of the training data set D, take the values at the positions given by L1 from the block-based feature matrix F2, and assign them to W1;
5e) Define the test feature matrix W2 of the test data set T, take the values at the positions given by L2 from the feature-block matrix F2, and assign them to W2.
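Steps 5a)-5c) can be sketched as follows, assuming the label map encodes the three ground-object classes as the integers 1-3 (an illustrative convention; the patent does not fix an encoding):

```python
import numpy as np

def split_positions(label_map, classes=(1, 2, 3), train_frac=0.05, seed=0):
    """For each ground-object class, collect the pixel positions (A_k),
    randomly pick `train_frac` of them for training (B_k) and keep the
    remaining 95% for testing (C_k), then merge into the overall
    position lists L1 (training) and L2 (testing)."""
    rng = np.random.default_rng(seed)
    l1, l2 = [], []
    for k in classes:
        pos = np.argwhere(label_map == k)          # A_k: (row, col) pairs
        pick = rng.permutation(len(pos))
        n_train = int(len(pos) * train_frac)
        l1.append(pos[pick[:n_train]])             # B_k: 5% for training
        l2.append(pos[pick[n_train:]])             # C_k: 95% for testing
    return np.concatenate(l1), np.concatenate(l2)  # L1, L2
```

Sampling per class rather than over the whole image keeps the 5%/95% split balanced across the three ground-object classes.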
Step 6. Construct the classification model based on a fully convolutional neural network.
6a) Select a 17-layer deep neural network composed of input layer → convolutional layer → pooling layer → convolutional layer → pooling layer → convolutional layer → pooling layer → convolutional layer → pooling layer → convolutional layer → Dropout layer → convolutional layer → Dropout layer → convolutional layer → upsampling (deconvolution) layer → Crop layer → softmax classifier, with the following parameters:
对于第1层输入层,设置特征映射图数目为3;For the first input layer, set the number of feature maps to 3;
对于第2层卷积层,设置特征映射图数目为32,卷积核大小5×5;For the second convolutional layer, set the number of feature maps to 32 and the convolution kernel size to 5×5;
对于第3层池化层,设置下采样尺寸为2;For the third layer pooling layer, set the downsampling size to 2;
对于第4层卷积层,设置特征映射图数目为64,卷积核大小5×5;For the fourth convolutional layer, set the number of feature maps to 64 and the convolution kernel size to 5×5;
对于第5层池化层,设置下采样尺寸为2;For the 5th layer pooling layer, set the downsampling size to 2;
对于第6层卷积层,设置特征映射图数目为96,卷积核大小3×3;For the sixth convolutional layer, set the number of feature maps to 96 and the convolution kernel size to 3×3;
对于第7层池化层,设置下采样尺寸为2;For the 7th layer pooling layer, set the downsampling size to 2;
对于第8层卷积层,设置特征映射图数目为128,卷积核大小3×3;For the 8th convolutional layer, set the number of feature maps to 128 and the convolution kernel size to 3×3;
对于第9层池化层,设置下采样尺寸为2;For the 9th pooling layer, set the downsampling size to 2;
对于第10层卷积层,设置特征映射图数目为128,卷积核大小3×3;For the 10th convolutional layer, set the number of feature maps to 128 and the convolution kernel size to 3×3;
对于第11层Dropout层,设置稀疏系数为0.5;For the 11th Dropout layer, set the sparsity coefficient to 0.5;
对于第12层卷积层,设置特征映射图数目为128,卷积核大小1×1;For the 12th convolutional layer, set the number of feature maps to 128 and the convolution kernel size to 1×1;
对于第13层Dropout层,设置稀疏系数为0.5;For the 13th Dropout layer, set the sparsity coefficient to 0.5;
对于第14层卷积层,设置特征映射图数目为2,卷积核大小1×1;For the 14th convolutional layer, set the number of feature maps to 2 and the convolution kernel size to 1×1;
对于第15层上采样层,设置特征映射图数目为2,卷积核大小32×32;For the 15th layer upsampling layer, set the number of feature maps to 2 and the convolution kernel size to 32×32;
对于第16层Crop层,设置最终裁剪规格为128×128;For the 16th Crop layer, set the final crop size to 128×128;
对于第17层Softmax分类器,设置特征映射图数目为2。For the 17th layer Softmax classifier, set the number of feature maps to 2.
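The patent does not state strides or padding, but under common FCN-style assumptions ('same'-padded convolutions, stride-2 pooling, and a 32×32 deconvolution with stride 16, all assumptions of this sketch rather than claims of the patent) the spatial sizes through the 17 layers can be traced as follows; this also shows why a final Crop layer is needed:

```python
def trace_spatial_size(size=128):
    """Trace the feature-map side length through the network, assuming
    'same' convolutions (size unchanged), stride-2 pooling (size halved),
    and a 32x32 deconvolution with stride 16 and no padding, for which
    output = (input - 1) * 16 + 32. These strides and paddings are
    assumptions, not stated in the patent text."""
    trace = [("input", size)]
    for _ in range(4):                  # layers 2-9: four conv+pool pairs
        trace.append(("conv", size))    # 'same' convolution keeps the size
        size //= 2                      # stride-2 pooling halves it
        trace.append(("pool", size))
    trace.append(("conv1x1", size))     # layers 10-14 keep the size
    size = (size - 1) * 16 + 32         # layer 15: deconvolution upsampling
    trace.append(("deconv", size))
    size = 128                          # layer 16: crop back to 128x128
    trace.append(("crop", size))
    return trace

trace = trace_spatial_size(128)
```

Under these assumptions a 128×128 input shrinks to 8×8 after the fourth pooling layer, the deconvolution overshoots to 144×144, and the Crop layer trims the result back to the 128×128 output that the softmax classifier labels pixel by pixel.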
6b) Set the convolution kernel size of the second-layer convolutional layer to 5×5, reducing the receptive field.
Step 7: Train the classification model with the training data set to obtain a trained classification model.
Take the training data set feature matrix W1 as the input of the classification model and the class of each pixel in the training data set D as its output; compute the error between the predicted classes and the correct manually labelled classes, back-propagate this error, and optimize the network parameters of the classification model to obtain the trained model. The correct manually labelled classes are shown in Figure 2.
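The error-and-backpropagation step above is, for a softmax classifier, softmax cross-entropy training; a minimal pure-Python sketch for a single pixel (function names are illustrative, not from the patent) shows the quantity that backpropagation starts from, the gradient p − y at the logits:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy_grad(logits, true_class):
    """Cross-entropy loss and its gradient w.r.t. the logits for one pixel.

    For softmax + cross-entropy, dL/dz_k = p_k - y_k; this gradient is
    what is propagated backwards to update the network parameters."""
    p = softmax(logits)
    loss = -math.log(p[true_class])
    grad = [pk - (1.0 if k == true_class else 0.0) for k, pk in enumerate(p)]
    return loss, grad

# Two-class logits, matching the 2 feature maps of layer 14 above
loss, grad = cross_entropy_grad([2.0, 0.5], true_class=0)
```

The gradient components always sum to zero, and the component for the true class is negative, pushing its logit up.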
Step 8: Use the trained classification model to classify the test data set.
Take the test data set feature matrix W2 of the test data set T as the input of the trained classification model; its output is the classification category assigned to each pixel of the test data set.
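Step 8 amounts to a per-pixel argmax over the model's class scores; an illustrative sketch (names hypothetical), with test accuracy computed against held-out labels as in the experiments reported below:

```python
def classify_pixels(score_map):
    """Assign each pixel the class with the highest score.

    score_map: list of per-pixel score lists, one score per class."""
    return [max(range(len(scores)), key=lambda k: scores[k])
            for scores in score_map]

def accuracy(predicted, labels):
    """Fraction of test pixels whose predicted class matches the label."""
    correct = sum(p == y for p, y in zip(predicted, labels))
    return correct / len(labels)

# Toy scores for four test pixels, two classes
scores = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]]
pred = classify_pixels(scores)
acc = accuracy(pred, [0, 1, 1, 1])
```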
The effect of the present invention is further illustrated by the following simulation experiments.
1. Simulation conditions:
Hardware platform: HP Z840.
Software platform: Caffe.
2. Simulation content and results:
Experiments were carried out with the method of the present invention under the above conditions: 5% of the labelled pixels were randomly selected from the SAR data as training samples, and the remaining labelled pixels were used as test samples, yielding the classification result shown in Figure 3.
Figure 3 shows that the classification result has good regional consistency; the edges of the three classes (farmland, forest, and town) are clear, and detail information is preserved.
The training samples were then reduced in turn to 4%, 3%, and 2% of the total number of samples, and the test data set classification accuracy of the present invention was compared with that of the fully convolutional network. The results are shown in Table 1:
Table 1
As Table 1 shows, when the training samples account for 5%, 4%, 3%, and 2% of the total number of samples, the test data set classification accuracy of the present invention is higher than that of the plain fully convolutional network in every case. In summary, by introducing the non-subsampled contourlet transform into the fully convolutional network, the present invention takes the directional and spatial information of the high-resolution SAR image into account, effectively improves the expressiveness of the image features, and enhances the generalization ability of the model, so that high classification accuracy is achieved even with few training samples.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710364900.8A CN107239751B (en) | 2017-05-22 | 2017-05-22 | High-resolution SAR image classification method based on non-subsampled contourlet full convolution network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107239751A CN107239751A (en) | 2017-10-10 |
CN107239751B true CN107239751B (en) | 2020-11-03 |
Family
ID=59984361
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710364900.8A Active CN107239751B (en) | 2017-05-22 | 2017-05-22 | High-resolution SAR image classification method based on non-subsampled contourlet full convolution network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107239751B (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107944470A (en) * | 2017-11-03 | 2018-04-20 | 西安电子科技大学 | SAR image sorting technique based on profile ripple FCN CRF |
CN107944353B (en) * | 2017-11-10 | 2019-12-24 | 西安电子科技大学 | SAR Image Change Detection Method Based on Contour Wave BSPP Network |
CN107832798B (en) * | 2017-11-20 | 2020-04-14 | 西安电子科技大学 | Target detection method in polarimetric SAR images based on NSCT ladder network model |
CN109886992A (en) * | 2017-12-06 | 2019-06-14 | 深圳博脑医疗科技有限公司 | For dividing the full convolutional network model training method in abnormal signal area in MRI image |
CN108062575A (en) * | 2018-01-03 | 2018-05-22 | 广东电子工业研究院有限公司 | High-similarity image identification and classification method |
CN108492319B (en) * | 2018-03-09 | 2021-09-03 | 西安电子科技大学 | Moving target detection method based on deep full convolution neural network |
CN109447124B (en) * | 2018-09-28 | 2019-11-19 | 北京达佳互联信息技术有限公司 | Image classification method, device, electronic equipment and storage medium |
CN109344898A (en) * | 2018-09-30 | 2019-02-15 | 北京工业大学 | A Convolutional Neural Network Image Classification Method Based on Sparse Coding Pre-training |
CN109444667B (en) * | 2018-12-17 | 2021-02-19 | 国网山东省电力公司电力科学研究院 | Power distribution network early fault classification method and device based on convolutional neural network |
CN109903301B (en) * | 2019-01-28 | 2021-04-13 | 杭州电子科技大学 | An Image Contour Detection Method Based on Multi-level Feature Channel Optimal Coding |
CN110097129B (en) * | 2019-05-05 | 2023-04-28 | 西安电子科技大学 | Remote Sensing Target Detection Method Based on Contourlet Grouping Feature Pyramid Convolution |
CN110188774B (en) * | 2019-05-27 | 2022-12-02 | 昆明理工大学 | A Classification and Recognition Method of Eddy Current Scanning Image Based on Deep Learning |
CN110702648B (en) * | 2019-09-09 | 2020-11-13 | 浙江大学 | Fluorescent spectrum pollutant classification method based on non-subsampled contourlet transformation |
CN111899232B (en) * | 2020-07-20 | 2023-07-04 | 广西大学 | Method for nondestructive detection of bamboo-wood composite container bottom plate by image processing |
CN113139579B (en) * | 2021-03-23 | 2024-02-02 | 广东省科学院智能制造研究所 | Image classification method and system based on image feature self-adaptive convolution network |
CN114037747B (en) * | 2021-11-25 | 2024-06-21 | 佛山技研智联科技有限公司 | Image feature extraction method, device, computer equipment and storage medium |
CN115310482B (en) * | 2022-07-31 | 2025-03-25 | 西南交通大学 | A radar intelligent identification method for bridge steel bars |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104915676A (en) * | 2015-05-19 | 2015-09-16 | 西安电子科技大学 | Deep-level feature learning and watershed-based synthetic aperture radar (SAR) image classification method |
CN105512680A (en) * | 2015-12-02 | 2016-04-20 | 北京航空航天大学 | Multi-view SAR image target recognition method based on depth neural network |
CN105718957A (en) * | 2016-01-26 | 2016-06-29 | 西安电子科技大学 | Polarized SAR image classification method based on nonsubsampled contourlet convolutional neural network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7817082B2 (en) * | 2007-03-11 | 2010-10-19 | Vawd Applied Science And Technology Corporation | Multi frequency spectral imaging radar system and method of target classification |
2017-05-22: CN application CN201710364900.8A (patent CN107239751B/en) — active
Non-Patent Citations (3)
Title |
---|
SAR and Infrared Image Fusion Using Nonsubsampled Contourlet Transform; Ying Zhang et al.; IEEE; 2009-07-07; pp. 398-401 *
Polarimetric SAR image terrain classification based on deep ridgelet neural networks; Zhang Yanan; China Master's Theses Full-text Database, Information Science and Technology; 2017-03-15; vol. 2017, no. 3; pp. 7-62 *
Image retrieval based on non-subsampled contourlets; Chen Liyan et al.; Journal of Fuzhou University (Natural Science Edition); 2012-04; vol. 40, no. 2; pp. 172-216 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||