CN115810149A - High-resolution remote sensing image building extraction method based on superpixels and graph convolution - Google Patents

High-resolution remote sensing image building extraction method based on superpixels and graph convolution

Info

Publication number
CN115810149A
Authority
CN
China
Prior art keywords
pixel
graph
super
remote sensing
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211473667.4A
Other languages
Chinese (zh)
Inventor
郑康
方芳
徐瑞
郝清仪
李圣文
万波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Geosciences
Original Assignee
China University of Geosciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Geosciences filed Critical China University of Geosciences
Priority to CN202211473667.4A priority Critical patent/CN115810149A/en
Publication of CN115810149A publication Critical patent/CN115810149A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a high-resolution remote sensing image building extraction method based on superpixel and graph convolution, which comprises the following steps: acquiring a high-resolution remote sensing image and a corresponding label image, and preprocessing to obtain a preprocessing result; inputting the preprocessing result into a trained superpixel segmentation network to finally obtain superpixel characteristics; constructing node characteristics of the graph through the super-pixel characteristics, and constructing the graph according to the node characteristics of the graph and edges of the nodes; sending the constructed graph into a topological graph convolution neural network to be trained for training to obtain a trained topological graph convolution neural network; and inputting a graph structure to be processed into the trained graph convolution neural network to obtain a feature vector of a node, and mapping the node feature back to a pixel feature space to obtain a pixel classification result of the high-resolution remote sensing image. The invention has the beneficial effects that: the building extraction precision is improved.

Description

High-resolution remote sensing image building extraction method based on superpixels and graph convolution
Technical Field
The invention relates to the field of image target extraction, in particular to a high-resolution remote sensing image building extraction method based on superpixel and graph convolution.
Background
Extracting buildings from high-resolution remote sensing images refers to the automatic identification of building and non-building pixels in such images. Building extraction plays an important role in many applications, such as city planning, population estimation, mapping of economic activity, disaster reporting and detection of illegal construction. Existing building extraction methods for high-resolution remote sensing images mainly comprise traditional methods and deep learning methods. Most traditional methods extract buildings from optical remote sensing images by considering low-level semantic features such as color, texture and shape; such methods include edge detection, region segmentation, threshold segmentation, clustering and the like. However, these methods are affected by lighting conditions, sensor type and building structure. Even though high-resolution remote sensing images are rich in detail, the problems of complex land-cover types, mixed pixels and shadows are serious for buildings, so the phenomena of "same object, different spectra" and "different objects, same spectrum" are common. In recent years, data-driven deep learning methods have shown remarkable advantages for building extraction from high-resolution remote sensing images. In the deep learning setting, the building extraction task of a high-resolution remote sensing image is treated as an image semantic segmentation task, and automatic building extraction is achieved by assigning a category label to each pixel in the image.
Existing image semantic segmentation methods mainly derive from deep convolutional networks; because features are learned from data, such networks avoid the subjectivity of manual feature selection and provide better performance. However, existing convolutional neural network models produce "salt-and-pepper" noise when processing remote sensing images. Compared with traditional classification methods, deep learning models do consider the spatial relationship between adjacent pixels and therefore perform well on complex geographic object samples, but because convolution is a local operation and the receptive field of a convolution unit is small, convolutional neural networks usually focus on local image features such as object color and texture and cannot model and exploit the correlations in long-range context information well; and when a convolution kernel with a large receptive field is used to obtain context information, the segmentation boundaries often become blurred.
Image superpixel segmentation, used as a preprocessing step, merges pixels that are spatially adjacent and similar in features such as color, brightness and texture into small regions; these regions retain the information useful for further image segmentation and generally do not damage the boundaries of objects in the image. However, traditional superpixel algorithms derive from nearest-neighbor clustering, depend only on low-level cues such as inter-pixel distance and color, and cannot make good use of the macroscopic features of the data. In addition, compared with traditional convolutional neural networks, graph convolutional neural networks can realize spatial context information interaction at different scales and learn the topological structure of the nodes in a graph, which effectively addresses the inability of traditional convolutional networks to model the relationships between objects in a scene.
Disclosure of Invention
The invention provides a high-resolution remote sensing image building extraction method based on pre-trained superpixels and graph convolution, aiming to solve the problems that traditional convolutional neural network extraction methods produce noisy building masks and have difficulty effectively modeling the scene context of high-resolution remote sensing images while preserving building boundary precision.
The invention provides a high-resolution remote sensing image building extraction method based on superpixel and graph convolution, which comprises the following steps:
s1: acquiring a high-resolution remote sensing image and a corresponding label image, and utilizing image preprocessing to obtain a processed high-resolution remote sensing image and a corresponding label image; the high-resolution remote sensing image refers to an image with resolution exceeding a certain preset value;
s2: inputting the processed high-resolution remote sensing image into a trained super-pixel segmentation network to obtain a pixel-super-pixel mapping matrix, and mapping the original pixel characteristics into a super-pixel characteristic space through the mapping matrix to obtain super-pixel characteristics;
s3: constructing node characteristics of the graph through the obtained super-pixel characteristics, obtaining edges between the nodes according to the adjacency relation, and constructing the graph according to the node characteristics of the graph and the edges of the nodes:
S4, sending the constructed graph into a topological graph convolutional neural network to be trained for training to obtain a trained topological graph convolutional neural network;
and S5, inputting the graph structure to be processed into the trained graph convolutional neural network to obtain the characteristic vector of the node, multiplying the super pixel block classification result obtained after the node characteristic vector is processed by the pixel-super pixel mapping matrix to map the node characteristic back to the pixel characteristic space, and thus obtaining the pixel classification result of the high-resolution remote sensing image.
The beneficial effects provided by the invention are as follows: by pre-training a learnable superpixel segmentation method on the training data, the superpixel blocks generated by the model conform more closely to the boundaries of actual objects, and the mixing of different classes within a superpixel block is reduced.
In addition, the topological graph convolutional neural network performs spatial context information interaction at different scales of the high-resolution remote sensing image, addressing the inefficient use of context information in the prior art; the full use of spatial context information effectively improves the pixel classification precision of the high-resolution remote sensing image, overcoming the technical problems of prior methods that rely on untrained superpixel preprocessing and cannot fully exploit the spatial context.
Meanwhile, the method can accurately identify buildings in high-resolution remote sensing images and support reasonable planning for the scientific development of cities.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a detailed flow chart of the method operation of the present invention;
FIG. 3 is a schematic diagram of data set test results.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be further described with reference to the accompanying drawings.
Referring to fig. 1-2, fig. 1 is a simplified flow diagram of the method of the present invention; FIG. 2 is a detailed flow chart of the method operation of the present invention;
a high-resolution remote sensing image building extraction method based on superpixel and graph convolution comprises the following steps:
s1: acquiring a high-resolution remote sensing image and a corresponding label image, and utilizing image preprocessing to obtain a processed high-resolution remote sensing image and a corresponding label image; the high-resolution remote sensing image refers to an image with resolution exceeding a certain preset value;
it should be noted that the image preprocessing in step S1 specifically refers to: applying random cropping, scaling or mirror flipping, or a random combination of the three, so that the preprocessed image matches the actually required size;
The purpose of image preprocessing is to obtain an appropriate image size, which facilitates the subsequent network training;
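As a minimal sketch of this preprocessing step (random cropping and mirror flipping applied jointly to the image and its label map, as the text describes; the function name, the `out_size` default and the numpy implementation are assumptions, not from the patent):

```python
import numpy as np

def preprocess(image, label, out_size=256, rng=None):
    """Randomly crop, then (with probability 0.5) mirror-flip an image and
    its label map together, so that pixel/label alignment is preserved.
    out_size is an assumed target size; the patent only requires the result
    to match 'the actually required size'."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    top = int(rng.integers(0, h - out_size + 1))   # random crop origin
    left = int(rng.integers(0, w - out_size + 1))
    image = image[top:top + out_size, left:left + out_size]
    label = label[top:top + out_size, left:left + out_size]
    if rng.random() < 0.5:                          # random horizontal mirror
        image = image[:, ::-1]
        label = label[:, ::-1]
    return image, label
```

Scaling could be added in the same style; the key point is that every geometric transform must be applied identically to image and label.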
s2: inputting the processed high-resolution remote sensing image into a trained super-pixel segmentation network to obtain a pixel-super-pixel mapping matrix, and mapping the original pixel characteristics into a super-pixel characteristic space through the mapping matrix to obtain super-pixel characteristics;
it should be noted that step S2 specifically includes:
s21: inputting the processed high-resolution remote sensing image and the corresponding label image into a superpixel segmentation network to be trained for training to obtain a trained superpixel segmentation network;
in the present application, the structure of the superpixel segmentation network comprises five down-sampling layers, four up-sampling layers and a SoftMax output layer connected in sequence, and a LeakyReLU activation layer is added after each sampling layer except the final SoftMax layer.
The loss function of the superpixel segmentation network is:

L = Σ_{x,y} E(Î_xy, I_xy)  (1)

where Î_xy represents the reconstructed pixel features at position (x, y), I_xy represents the original pixel features at that position, and E(·,·) represents the cross entropy between them.
S22, inputting the processed high-resolution remote sensing images into a trained superpixel segmentation network, wherein the trained superpixel segmentation network outputs a pixel-superpixel matrix Q for each processed high-resolution remote sensing image;
And S23, multiplying the pixel-super pixel matrix Q by the original pixel characteristic matrix to obtain super pixel characteristics.
It should be noted that the superpixel features are obtained by multiplying the pixel-superpixel mapping matrix by the original pixel features, where each original pixel feature p_ij includes the RGB color values of the pixel and its normalized global coordinates. The calculation formula of the superpixel features is as follows:

S = Q^T P  (2)

where Q represents the pixel-superpixel mapping matrix, P represents the pixel feature matrix with position information, and S represents the calculated superpixel feature matrix;
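The superpixel feature computation S = Q^T P can be sketched in numpy as follows; the `normalize` option, which turns the weighted sum into a per-superpixel average, is an implementation assumption rather than part of the patent's formula:

```python
import numpy as np

def superpixel_features(Q, P, normalize=True):
    """Compute superpixel features S = Q^T P.
    Q: (num_pixels, num_superpixels) pixel-superpixel mapping matrix.
    P: (num_pixels, d) pixel features, e.g. RGB + normalized coordinates.
    With normalize=True each row of S becomes the (weighted) average of
    the pixel features assigned to that superpixel."""
    S = Q.T @ P                                   # (num_superpixels, d)
    if normalize:
        counts = Q.sum(axis=0)[:, None]           # total weight per superpixel
        S = S / np.maximum(counts, 1e-12)         # avoid division by zero
    return S
```

For a hard (one-hot) assignment Q, the normalized variant is simply the mean feature of each superpixel block.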
s3: constructing node characteristics of the graph through the obtained super-pixel characteristics, obtaining edges between the nodes according to the adjacency relation, and constructing the graph according to the node characteristics of the graph and the edges of the nodes:
step S3 is specifically as follows:
s31: obtaining the node features V_f of the graph from the obtained superpixel features; the calculation formula is as follows:

H = [v_f1, v_f2, ..., v_fn]^T = S = Q^T P  (3)

where H represents the node feature matrix and v_fn represents the initial feature vector of the n-th node; each initial node feature vector v_fi corresponds to the features of one superpixel block S_i;
s32: obtaining a binary adjacency matrix A from the superpixel segmentation result, where A_ij = 1 indicates that the i-th node and the j-th node are adjacent, and A_ij = 0 indicates that they are not; the adjacency matrix represents the original graph structure, and the calculation formula is as follows:

A_ij = 1 if superpixels S_i and S_j are adjacent, otherwise A_ij = 0  (4)
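One plausible way to build this binary adjacency matrix is to check which superpixel labels touch across 4-connected pixel boundaries in the segmentation map; a numpy sketch (function name and the 4-neighborhood choice are assumptions):

```python
import numpy as np

def superpixel_adjacency(seg):
    """Binary adjacency matrix A from a superpixel label map.
    seg: (H, W) integer array, seg[y, x] = superpixel index in [0, n).
    A[i, j] = 1 iff superpixels i and j share a 4-connected boundary."""
    n = int(seg.max()) + 1
    A = np.zeros((n, n), dtype=np.uint8)
    # horizontally adjacent pixel pairs
    a, b = seg[:, :-1].ravel(), seg[:, 1:].ravel()
    A[a, b] = A[b, a] = 1
    # vertically adjacent pixel pairs
    a, b = seg[:-1, :].ravel(), seg[1:, :].ravel()
    A[a, b] = A[b, a] = 1
    np.fill_diagonal(A, 0)  # a superpixel is not its own neighbour
    return A
```

The fancy-indexed assignments mark every label pair that occurs across a pixel edge; clearing the diagonal removes the same-label pairs.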
S4, sending the constructed graph into a topological graph convolutional neural network to be trained for training to obtain a trained topological graph convolutional neural network;
step S4 is specifically as follows:
s41: constructing the high-resolution remote sensing image into a graph G = (V, E), where V and E respectively represent the vertex set and the edge set of the graph; at the same time, the corresponding label image is constructed into a graph G_y by the same segmentation; both G and G_y are unweighted undirected graphs, i.e., the edges in the graphs have neither direction nor weight attributes;
s42: training the topological graph convolutional neural network to be trained on the graph G, using the graph G_y as label data for graph-node classification to perform loss calculation, back propagation and gradient descent, so as to obtain the trained topological graph convolutional neural network.
Wherein step S42 specifically comprises: the topological graph convolutional neural network to be trained performs l layers of convolution operations on the nodes of the graph G, and the trained graph convolutional neural network is obtained after the iterations are completed;
the convolution iteration process specifically comprises the following steps:
Ã = D^{-1/2} (A + I) D^{-1/2}  (5)

G_f^(l) = Σ_{k=0}^{K} g_{f,k}^(l) Ã^k  (6)

y_f^(l) = Σ_c G_{c,f}^(l) x_c^(l) + b_f · 1_N  (7)

x^(l+1) = σ(y^(l))  (8)

where A represents the adjacency matrix of the graph, I represents the identity matrix and D represents the diagonal degree matrix; equation (5) normalizes the adjacency matrix of the graph to obtain the normalized adjacency matrix Ã; G_f^(l) represents a polynomial convolution kernel and g_{f,k}^(l) its polynomial coefficients, and equation (6) indicates that K different convolution kernels (powers of Ã) are combined in one convolution operation; y_f^(l) represents the output of the l-th layer, x_c^(l) represents the c-th input feature channel of the l-th layer, b_f represents a learnable bias and 1_N represents an N-dimensional vector of ones; equation (7) indicates that features are extracted from the input graph-structured data with the K convolution kernels and linearly combined; x^(l+1), the activated output of the l-th layer, is also the input of the (l+1)-th layer, σ(·) represents a nonlinear activation function such as LeakyReLU, and equation (8) applies the activation to the convolution output to obtain the input of the next layer, following the structure of a convolutional neural network.
The loss calculation process is specifically as follows:

Loss = BCE(Ŷ, y) = -(1/n) Σ_{i=1}^{n} [ y_i log ŷ_i + (1 - y_i) log(1 - ŷ_i) ]  (9)

where BCE(·,·) represents the binary cross-entropy loss, Ŷ represents the convolution output of the l-th layer, i.e. the predicted values for the input samples, and y represents the labels of the input samples, which form a graph label consisting of n nodes.
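The binary cross-entropy over n graph nodes can be sketched directly in numpy; the clipping constant is an implementation assumption for numerical safety:

```python
import numpy as np

def bce_loss(y_hat, y, eps=1e-12):
    """Binary cross-entropy averaged over n nodes:
    -(1/n) * sum_i [ y_i*log(y_hat_i) + (1-y_i)*log(1-y_hat_i) ]."""
    y_hat = np.clip(y_hat, eps, 1.0 - eps)  # keep log() finite
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
```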
And S5, inputting the graph structure to be processed into the trained graph convolutional neural network to obtain a characteristic vector of the node, and multiplying a super pixel block classification result obtained after the characteristic vector of the node is processed by a pixel-super pixel mapping matrix to map the characteristic of the node back to a pixel characteristic space, so that a pixel classification result of the high-resolution remote sensing image is obtained.
Step S5 is specifically as follows:
s51: inputting the picture to be predicted into the trained superpixel segmentation network to obtain a pixel-superpixel mapping matrix Q̂, and obtaining the graph structure Ĝ according to step S3; the graph structure Ĝ is input into the trained graph convolutional neural network to obtain the predicted node feature vectors;
s52: multiplying the predicted node feature vectors by the pixel-superpixel mapping matrix Q̂ to obtain the predicted pixel features; the calculation formulas are as follows:

Ŝ = argmax(Ĥ)  (10)

P̂ = Q̂ Ŝ  (11)

where Ŝ represents the superpixel prediction obtained from the graph convolutional neural network, which is equal to the node features Ĥ obtained after l layers of graph convolution followed by the argmax operation, and P̂ represents the pixel classification result of the predicted picture. The pixel classification result of the picture comprises two categories: building and non-building.
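Mapping node predictions back to pixel space can be sketched as: take the argmax over node outputs, then distribute the superpixel classes to pixels through the pixel-superpixel mapping matrix (function names are illustrative):

```python
import numpy as np

def pixels_from_superpixels(Q, H_hat):
    """Map node (superpixel) predictions back to pixel space.
    Q: (num_pixels, num_superpixels) pixel-superpixel mapping matrix.
    H_hat: (num_superpixels, num_classes) node outputs from the GCN.
    Returns a per-pixel class index (here 0/1, e.g. non-building/building)."""
    s_hat = H_hat.argmax(axis=1)                       # superpixel class
    one_hot = np.eye(H_hat.shape[1])[s_hat]            # (n_sp, n_classes)
    class_scores = Q @ one_hot                         # distribute to pixels
    return class_scores.argmax(axis=1)
```

For a hard assignment Q this reduces to a lookup of each pixel's superpixel class; for a soft Q it votes by assignment weight.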
As an example, the model was evaluated on the WHU dataset; sample images are shown in fig. 3. The pictures in the dataset are 512 x 512 px with a spatial resolution of 0.3 m/pixel, with 4736 pictures in the training set, 1036 in the validation set and 2517 in the test set. In the experiments, the model parameters were optimized with the Adam optimizer, starting from an initial learning rate of 0.001; after every 10 epochs the learning rate was reduced to 60% of its current value. Training ran for 50 epochs with the batch size set to 32. The final test results of the model are shown in table 1:
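The stated schedule (multiply the learning rate by 0.6 every 10 epochs, starting from 0.001) can be written as a closed-form step decay; in PyTorch the equivalent would be `torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.6)`:

```python
def learning_rate(epoch, base_lr=1e-3, drop=0.6, step=10):
    """Step decay used in the experiment: start at base_lr and multiply
    the current learning rate by `drop` after every `step` epochs."""
    return base_lr * drop ** (epoch // step)
```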
Table 1: final results of model on WHU test set
[Table 1 is reproduced as an image in the original publication.]
The beneficial effects of the invention are: by pre-training a learnable superpixel segmentation method on the training data, the superpixel blocks generated by the model conform more closely to the boundaries of actual objects, and the mixing of different classes within a superpixel block is reduced.
In addition, the topological graph convolutional neural network performs spatial context information interaction at different scales of the high-resolution remote sensing image, addressing the inefficient use of context information in the prior art; the full use of spatial context information effectively improves the pixel classification precision of the high-resolution remote sensing image, overcoming the technical problems of prior methods that rely on untrained superpixel preprocessing and cannot fully exploit the spatial context.
Meanwhile, the method can accurately identify buildings in high-resolution remote sensing images and support reasonable planning for the scientific development of cities.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, which is intended to cover any modifications, equivalents, improvements, etc. within the spirit and scope of the present invention.

Claims (10)

1. A high-resolution remote sensing image building extraction method based on superpixels and graph convolution, characterized by comprising the following steps:
s1: acquiring a high-resolution remote sensing image and a corresponding label image, and utilizing image preprocessing to obtain a processed high-resolution remote sensing image and a corresponding label image; the high-resolution remote sensing image refers to an image with resolution exceeding a certain preset value;
s2: inputting the processed high-resolution remote sensing image into a trained super-pixel segmentation network to obtain a pixel-super-pixel mapping matrix, and mapping the original pixel characteristics into a super-pixel characteristic space through the mapping matrix to obtain super-pixel characteristics;
s3: constructing node characteristics of the graph according to the obtained super-pixel characteristics, obtaining edges between the nodes according to the adjacency relation, and constructing the graph according to the node characteristics of the graph and the edges of the nodes:
s4, sending the constructed graph into a topological graph convolution neural network to be trained for training to obtain a trained topological graph convolution neural network;
and S5, inputting the graph structure to be processed into the trained graph convolutional neural network to obtain the characteristic vector of the node, multiplying the super pixel block classification result obtained after the node characteristic vector is processed by the pixel-super pixel mapping matrix to map the node characteristic back to the pixel characteristic space, and thus obtaining the pixel classification result of the high-resolution remote sensing image.
2. The method for extracting a high-resolution remote sensing image building based on superpixels and graph convolution as claimed in claim 1, characterized in that: the image preprocessing in step S1 specifically refers to: randomly cropping, scaling or mirror-flipping the image, or randomly combining the three modes, so that the preprocessed image meets the actually required size.
3. The method for extracting a high-resolution remote sensing image building based on superpixels and graph convolution as claimed in claim 1, characterized in that: the step S2 specifically comprises the following steps:
s21: inputting the processed high-resolution remote sensing image and the corresponding label image into a superpixel segmentation network to be trained for training to obtain a trained superpixel segmentation network;
s22, inputting the processed high-resolution remote sensing images into a trained superpixel segmentation network, wherein the trained superpixel segmentation network outputs a pixel-superpixel matrix Q for each processed high-resolution remote sensing image;
and S23, multiplying the pixel-super pixel matrix Q with the original pixel characteristic matrix to obtain super pixel characteristics.
4. The method for extracting a high-resolution remote sensing image building based on superpixels and graph convolution as claimed in claim 3, characterized in that: the structure of the superpixel segmentation network comprises five down-sampling layers, four up-sampling layers and a SoftMax output layer connected in sequence, and a LeakyReLU activation layer is added after each sampling layer except the final SoftMax layer.
5. The method for extracting a high-resolution remote sensing image building based on superpixels and graph convolution as claimed in claim 3, characterized in that: the loss function of the superpixel segmentation network is:

L = Σ_{x,y} E(Î_xy, I_xy)  (1)

where Î_xy represents the reconstructed pixel features at position (x, y), I_xy represents the original pixel features at that position, and E(·,·) represents the cross entropy.
6. The method for extracting a high-resolution remote sensing image building based on superpixels and graph convolution as claimed in claim 3, characterized in that: the calculation formula of the superpixel features in step S23 is as follows:

S = Q^T P  (2)

where Q represents the pixel-superpixel mapping matrix, P represents the original pixel feature matrix with position information, and S represents the calculated superpixel feature matrix.
7. The method for extracting a high-resolution remote sensing image building based on superpixels and graph convolution as claimed in claim 6, characterized in that step S3 is specifically as follows:
s31: obtaining the node features V_f of the graph from the obtained superpixel features; the calculation formula is as follows:

H = [v_f1, v_f2, ..., v_fn]^T = S = Q^T P  (3)

where H represents the node feature matrix and v_fn represents the initial feature vector of the n-th node; each initial node feature vector v_fi corresponds to the features of one superpixel block S_i;
s32: obtaining a binary adjacency matrix A from the superpixel segmentation result, where A_ij = 1 indicates that the i-th node and the j-th node are adjacent, and A_ij = 0 indicates that they are not; the adjacency matrix represents the original graph structure, and the calculation formula is as follows:

A_ij = 1 if superpixels S_i and S_j are adjacent, otherwise A_ij = 0  (4)
8. The method for extracting a high-resolution remote sensing image building based on superpixels and graph convolution as claimed in claim 1, characterized in that step S4 is specifically as follows:
s41: constructing the high-resolution remote sensing image into a graph G = (V, E), where V and E respectively represent the vertex set and the edge set of the graph; at the same time, the corresponding label image is constructed into a graph G_y by the same segmentation; both G and G_y are unweighted undirected graphs, i.e., the edges in the graphs have neither direction nor weight attributes;
s42: training the topological graph convolutional neural network to be trained on the graph G, using the graph G_y as label data for graph-node classification to perform loss calculation, back propagation and gradient descent, so as to obtain the trained topological graph convolutional neural network.
9. The method for extracting a high-resolution remote sensing image building based on superpixels and graph convolution as claimed in claim 8, characterized in that step S42 specifically comprises: the topological graph convolutional neural network to be trained performs l layers of convolution operations on the nodes of the graph G, and the trained graph convolutional neural network is obtained after the iterations are completed.
10. The method for extracting a high-resolution remote sensing image building based on superpixels and graph convolution as claimed in claim 1, characterized in that step S5 is specifically as follows:
s51: inputting the picture to be predicted into the trained superpixel segmentation network to obtain a pixel-superpixel mapping matrix Q̂, and obtaining the graph structure Ĝ according to step S3; the graph structure Ĝ is input into the trained graph convolutional neural network to obtain the predicted node feature vectors;
s52: multiplying the predicted node feature vectors by the pixel-superpixel mapping matrix Q̂ to obtain the predicted pixel features; the calculation formulas are as follows:

Ŝ = argmax(Ĥ)

P̂ = Q̂ Ŝ

where Ŝ represents the superpixel prediction obtained from the graph convolutional neural network, which is equal to the node features Ĥ obtained after l layers of graph convolution followed by the argmax operation, and P̂ represents the pixel classification result of the predicted picture.
CN202211473667.4A 2022-11-22 2022-11-22 High-resolution remote sensing image building extraction method based on superpixels and graph convolution Pending CN115810149A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211473667.4A CN115810149A (en) 2022-11-22 2022-11-22 High-resolution remote sensing image building extraction method based on superpixels and graph convolution


Publications (1)

Publication Number Publication Date
CN115810149A true CN115810149A (en) 2023-03-17

Family

ID=85483900


Country Status (1)

Country Link
CN (1) CN115810149A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116524369A (en) * 2023-04-18 2023-08-01 中国地质大学(武汉) Remote sensing image segmentation model construction method and device and remote sensing image interpretation method
CN116524369B (en) * 2023-04-18 2023-11-17 中国地质大学(武汉) Remote sensing image segmentation model construction method and device and remote sensing image interpretation method
CN116205928A (en) * 2023-05-06 2023-06-02 南方医科大学珠江医院 Image segmentation processing method, device and equipment for laparoscopic surgery video and medium
CN118072138A (en) * 2024-04-24 2024-05-24 中国地质大学(武汉) Land cover characteristic extraction method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111986099B (en) Tillage monitoring method and system based on convolutional neural network with residual error correction fused
CN107092870B (en) A kind of high resolution image Semantic features extraction method
CN115810149A (en) High-resolution remote sensing image building extraction method based on superpixel and graph convolution
CN110516539A (en) Remote sensing image building extracting method, system, storage medium and equipment based on confrontation network
CN104462494B (en) A kind of remote sensing image retrieval method and system based on unsupervised feature learning
CN110322445B (en) Semantic segmentation method based on maximum prediction and inter-label correlation loss function
CN107066916B (en) Scene semantic segmentation method based on deconvolution neural network
CN112347970B (en) Remote sensing image ground object identification method based on graph convolution neural network
CN107506792B (en) Semi-supervised salient object detection method
CN112489164B (en) Image coloring method based on improved depth separable convolutional neural network
CN113627472A (en) Intelligent garden defoliating pest identification method based on layered deep learning model
CN111274964B (en) Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle
CN114387270B (en) Image processing method, image processing device, computer equipment and storage medium
CN112906813A (en) Flotation condition identification method based on density clustering and capsule neural network
CN103049340A (en) Image super-resolution reconstruction method of visual vocabularies and based on texture context constraint
CN106874862A (en) People counting method based on submodule technology and semi-supervised learning
CN106157330A (en) A kind of visual tracking method based on target associating display model
CN114863266A (en) Land use classification method based on deep space-time mode interactive network
CN110135435B (en) Saliency detection method and device based on breadth learning system
CN114998373A (en) Improved U-Net cloud picture segmentation method based on multi-scale loss function
CN115953330B (en) Texture optimization method, device, equipment and storage medium for virtual scene image
CN115082778B (en) Multi-branch learning-based homestead identification method and system
CN116310832A (en) Remote sensing image processing method, device, equipment, medium and product
CN113486967B (en) SAR image classification algorithm combining graph convolution network and Markov random field
CN112528803B (en) Road feature extraction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination