CN115810149A - High-resolution remote sensing image building extraction method based on superpixel and image convolution - Google Patents
- Publication number: CN115810149A
- Application number: CN202211473667.4A
- Authority: CN (China)
- Prior art keywords: pixel, graph, super, remote sensing, image
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a high-resolution remote sensing image building extraction method based on superpixel and graph convolution, which comprises the following steps: acquiring a high-resolution remote sensing image and a corresponding label image, and preprocessing them to obtain a preprocessing result; inputting the preprocessing result into a trained superpixel segmentation network to obtain superpixel features; constructing the node features of the graph from the superpixel features, and constructing the graph from the node features and the edges between nodes; feeding the constructed graph into a topological graph convolutional neural network for training to obtain a trained topological graph convolutional neural network; and inputting the graph structure to be processed into the trained graph convolutional neural network to obtain the feature vectors of the nodes, and mapping the node features back to the pixel feature space to obtain the pixel classification result of the high-resolution remote sensing image. The invention has the beneficial effect of improving the building extraction precision.
Description
Technical Field
The invention relates to the field of image target extraction, in particular to a high-resolution remote sensing image building extraction method based on superpixel and graph convolution.
Background
Building extraction from high-resolution remote sensing images refers to the automatic identification of building and non-building pixels in such images. Building extraction plays an important role in many applications, such as city planning, population estimation, mapping the distribution of economic activity, disaster reporting, and detecting illegal construction. Existing building extraction methods for high-resolution remote sensing images fall mainly into traditional methods and deep learning methods. Most traditional methods extract buildings from optical remote sensing images by considering low-level semantic features such as color, texture, and shape; they include edge detection, region segmentation, threshold segmentation, clustering, and the like. However, these methods are affected by lighting conditions, sensor type, and building structure. Even though high-resolution remote sensing images are rich in detail, buildings suffer from complex feature types, mixed pixels, and shadows, so the phenomena of "same object, different spectra" and "different objects, same spectrum" are common. In recent years, data-driven deep learning methods have shown remarkable advantages for building extraction from high-resolution remote sensing images. In the field of deep learning, the building extraction task for high-resolution remote sensing images is treated as an image semantic segmentation task: automatic building extraction is achieved by assigning a category label to each pixel in the image.
Existing image semantic segmentation methods mainly derive from deep convolutional networks; because features are learned from data, deep convolutional networks avoid the subjectivity of manual feature selection and can provide better performance. However, existing convolutional neural network models tend to produce "salt-and-pepper" noise when processing remote sensing images. Compared with traditional classification methods, deep learning models do consider the spatial relationship between adjacent pixels and therefore perform well on complex geographic object samples; but because convolution is a local operation and the receptive field of a convolution unit is small, convolutional neural networks usually focus on local image features such as object color and texture and cannot adequately model and exploit correlations in long-range context information. Conversely, using convolution kernels with large receptive fields to capture context information often blurs the segmentation boundaries.
Image superpixel segmentation, used as a preprocessing step, merges pixels that are adjacent in position and similar in characteristics such as color, brightness, and texture into small regions; these regions retain the information useful for further image segmentation and generally do not destroy the boundary information of objects in the image. However, traditional superpixel algorithms derive from nearest-neighbor clustering; they rely only on low-level information such as inter-pixel distance and color and cannot exploit the macroscopic features of the data. In addition, compared with traditional convolutional neural networks, graph convolutional neural networks can realize spatial context information interaction at different scales and learn the topological structure of the nodes in a graph, effectively addressing the inability of traditional convolutional neural networks to model relationships between objects in a scene.
Disclosure of Invention
The invention provides a building extraction method for high-resolution remote sensing images based on pre-trained superpixels and graph convolution, aiming to solve the problems that traditional convolutional-neural-network extraction methods produce noisy building maps and cannot effectively model the scene context of high-resolution remote sensing images while preserving building boundary precision.
The invention provides a high-resolution remote sensing image building extraction method based on superpixel and graph convolution, which comprises the following steps:
s1: acquiring a high-resolution remote sensing image and a corresponding label image, and utilizing image preprocessing to obtain a processed high-resolution remote sensing image and a corresponding label image; the high-resolution remote sensing image refers to an image with resolution exceeding a certain preset value;
s2: inputting the processed high-resolution remote sensing image into a trained super-pixel segmentation network to obtain a pixel-super-pixel mapping matrix, and mapping the original pixel characteristics into a super-pixel characteristic space through the mapping matrix to obtain super-pixel characteristics;
s3: constructing node characteristics of the graph through the obtained super-pixel characteristics, obtaining edges between the nodes according to the adjacency relation, and constructing the graph according to the node characteristics of the graph and the edges of the nodes:
S4, sending the constructed graph into a topological graph convolutional neural network to be trained for training to obtain a trained topological graph convolutional neural network;
and S5, inputting the graph structure to be processed into the trained graph convolutional neural network to obtain the characteristic vector of the node, multiplying the super pixel block classification result obtained after the node characteristic vector is processed by the pixel-super pixel mapping matrix to map the node characteristic back to the pixel characteristic space, and thus obtaining the pixel classification result of the high-resolution remote sensing image.
The beneficial effects provided by the invention are as follows: by pre-training a learnable superpixel segmentation method on the training data, the superpixel blocks generated by the model conform better to the boundaries of actual objects, and the mixing of different classes within a superpixel block is reduced.
In addition, by performing spatial context information interaction at different scales on the high-resolution remote sensing image through the topological graph convolutional neural network, the method overcomes the inefficient use of context information in the prior art; the full use of spatial context information effectively improves the pixel classification precision of the high-resolution remote sensing image, and solves the technical problems of approaches that rely on untrained superpixel preprocessing and cannot fully exploit the spatial context;
Meanwhile, the method can accurately identify the buildings in high-resolution remote sensing images and support reasonable planning for the scientific development of cities.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a detailed flow chart of the method operation of the present invention;
FIG. 3 is a schematic diagram of data set test results.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be further described with reference to the accompanying drawings.
Referring to fig. 1-2, fig. 1 is a simplified flow diagram of the method of the present invention; FIG. 2 is a detailed flow chart of the method operation of the present invention;
a high-resolution remote sensing image building extraction method based on superpixel and graph convolution comprises the following steps:
s1: acquiring a high-resolution remote sensing image and a corresponding label image, and utilizing image preprocessing to obtain a processed high-resolution remote sensing image and a corresponding label image; the high-resolution remote sensing image refers to an image with resolution exceeding a certain preset value;
it should be noted that the image preprocessing in step S1 specifically refers to: random cropping, scaling, or mirror flipping, or a random combination of the three, so that the preprocessed image matches the required input size;
The purpose of image preprocessing is to obtain a suitable image size, which facilitates the subsequent network training;
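As an illustrative sketch of this preprocessing step, the following Python function randomly crops and mirror-flips an image/label pair; the 256-pixel output size, the NumPy-only implementation, and the seeded random generator are assumptions made for the example, not details fixed by the method.

```python
import numpy as np

def preprocess(image, label, out_size=256, rng=None):
    """Randomly crop and mirror-flip an image/label pair to a fixed size.

    A minimal sketch of the S1 preprocessing; the output size is an
    illustrative assumption.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = image.shape[:2]
    # Random crop to out_size x out_size (assumes the input is large enough).
    top = rng.integers(0, h - out_size + 1)
    left = rng.integers(0, w - out_size + 1)
    image = image[top:top + out_size, left:left + out_size]
    label = label[top:top + out_size, left:left + out_size]
    # Random horizontal mirror flip, applied to image and label together
    # so the label stays aligned with the image.
    if rng.random() < 0.5:
        image = image[:, ::-1]
        label = label[:, ::-1]
    return image, label
```

A scaling step could be chained in the same way; cropping and flipping alone already keep the image and label spatially consistent.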
s2: inputting the processed high-resolution remote sensing image into a trained super-pixel segmentation network to obtain a pixel-super-pixel mapping matrix, and mapping the original pixel characteristics into a super-pixel characteristic space through the mapping matrix to obtain super-pixel characteristics;
it should be noted that step S2 specifically includes:
s21: inputting the processed high-resolution remote sensing image and the corresponding label image into a superpixel segmentation network to be trained for training to obtain a trained superpixel segmentation network;
in the present application, the structure of the superpixel segmentation network comprises five down-sampling layers, four up-sampling layers, and a SoftMax output layer connected in sequence, with a LeakyReLU activation layer added after each sampling layer except the final SoftMax layer.
The loss function of the superpixel segmentation network is:
L = Σ_{x,y} ||Î_xy − I_xy||_2 + CE(Ŷ, Y)
where Î_xy represents the reconstructed pixel feature, I_xy represents the pixel feature at the original position, and CE(·,·) represents the cross entropy between the predicted label map Ŷ and the ground-truth label map Y.
S22, inputting the processed high-resolution remote sensing images into a trained superpixel segmentation network, wherein the trained superpixel segmentation network outputs a pixel-superpixel matrix Q for each processed high-resolution remote sensing image;
And S23, multiplying the pixel-super pixel matrix Q by the original pixel characteristic matrix to obtain super pixel characteristics.
It should be noted that the superpixel feature is obtained by multiplying the pixel-superpixel mapping matrix by the original pixel features, where the original pixel feature p_ij includes the RGB color values and the normalized global coordinates of the pixel. The calculation formula of the superpixel features is as follows:
S = Q^T P
wherein Q represents the pixel-superpixel mapping matrix, P represents the pixel feature matrix with position information, and S represents the computed superpixel feature matrix;
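The mapping S = Q^T P can be sketched in a few lines of NumPy; the normalization by per-superpixel pixel counts (turning the sum into a mean) is an added assumption of the example, not stated in the formula above.

```python
import numpy as np

def superpixel_features(Q, P):
    """Map pixel features into superpixel space: S = Q^T P.

    Q : (n_pixels, n_superpixels) pixel-to-superpixel assignment matrix
        (hard one-hot rows or soft probabilities).
    P : (n_pixels, c) pixel features, e.g. RGB values plus normalized
        x/y coordinates as the description suggests.
    Returns S : (n_superpixels, c). Dividing by the column sums of Q
    (an assumption) turns the sum into a per-superpixel mean.
    """
    S = Q.T @ P
    counts = Q.sum(axis=0, keepdims=True).T  # (n_superpixels, 1)
    return S / np.maximum(counts, 1e-8)
```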
s3: constructing node characteristics of the graph through the obtained super-pixel characteristics, obtaining edges between the nodes according to the adjacency relation, and constructing the graph according to the node characteristics of the graph and the edges of the nodes:
step S3 is specifically as follows:
s31: obtaining the node features V_f of the graph from the obtained superpixel features; the calculation formula is as follows:
H = [v_f1, v_f2, ..., v_fn]^T = S = Q^T P   (9)
where H represents the node feature matrix and v_fn represents the initial feature vector of the n-th node; each node's initial feature vector corresponds to the features of a superpixel block S_i;
s32: obtaining a binary adjacency matrix A from the superpixel segmentation result, where A_ij = 1 indicates that the i-th and j-th nodes are adjacent and A_ij = 0 indicates that they are not; the adjacency matrix represents the original graph structure, and the calculation formula is as follows:
A_ij = 1 if superpixel blocks S_i and S_j are adjacent, and A_ij = 0 otherwise;
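A minimal sketch of building the binary adjacency matrix A from a superpixel label map might look as follows; the 4-connectivity rule for deciding adjacency is an assumption of the example.

```python
import numpy as np

def build_adjacency(seg):
    """Binary adjacency matrix A from a superpixel label map (step S32).

    seg : (h, w) integer array, seg[y, x] = superpixel index of that pixel.
    A[i, j] = 1 if superpixels i and j touch (4-connectivity, an assumption),
    0 otherwise; the diagonal is left at 0.
    """
    n = int(seg.max()) + 1
    A = np.zeros((n, n), dtype=np.int64)
    # Compare each pixel with its right and bottom neighbour; differing
    # labels across the boundary mean the two superpixels are adjacent.
    for a, b in [(seg[:, :-1], seg[:, 1:]), (seg[:-1, :], seg[1:, :])]:
        mask = a != b
        A[a[mask], b[mask]] = 1
        A[b[mask], a[mask]] = 1
    return A
```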
S4, sending the constructed graph into a topological graph convolutional neural network to be trained for training to obtain a trained topological graph convolutional neural network;
step S4 is specifically as follows:
s41: constructing the high-resolution remote sensing image into a graph G = (V, E), where V and E respectively represent the vertex set and the edge set of the graph; at the same time, the corresponding label image is constructed into a graph G_Y by the same segmentation; both G and G_Y are unweighted undirected graphs, i.e., the connecting edges in the graph have neither direction nor weight attributes;
s42: training the topological graph convolutional neural network on the graph G, using the label data of the graph G_Y for graph node classification to perform loss calculation, back propagation, and gradient descent, obtaining the trained topological graph convolutional neural network.
Wherein step S42 specifically comprises:
the topological graph convolutional neural network to be trained performs L layers of convolution operations on the nodes of the graph, and the trained graph convolutional neural network is obtained after the iterations are completed;
the convolution iteration process is specifically as follows:
Â = D̃^(−1/2) (A + I) D̃^(−1/2)   (5)
G_k = θ_k Â^k   (6)
Z^(l) = Σ_{k=1}^{K} G_k H^(l) + b_f 1_n   (7)
H^(l+1) = σ(Z^(l))   (8)
where A represents the adjacency matrix of the graph, I represents the identity matrix, and D̃ represents the diagonal degree matrix of A + I; equation (5) normalizes the adjacency matrix of the graph. G_k represents a polynomial convolution kernel and θ_k its polynomial coefficient; equation (6) indicates that K different convolution kernels are used in the convolution operation. H^(l) represents the input of the l-th layer, b_f represents a learnable bias, and 1_n represents an n-dimensional vector of ones; equation (7) extracts features from the input graph-structured data with the K convolution kernels and combines them linearly. The convolution output of the l-th layer is also the input of the (l+1)-th layer; σ(·) denotes a nonlinear operation such as the LeakyReLU activation function, and equation (8) applies the activation to the convolution output to obtain, according to the structure of the convolutional neural network, the input of the next layer.
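Under this reading of the iteration, one layer of the polynomial graph convolution can be sketched in NumPy as follows; the exact parameterization (per-power scalar coefficients `thetas` plus a shared projection matrix `W`) is an illustrative assumption rather than the patent's definitive formulation.

```python
import numpy as np

def normalized_adjacency(A):
    """Self-loop-augmented normalization: Â = D̃^(-1/2) (A + I) D̃^(-1/2)."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    # Elementwise broadcast is equivalent to diag(d) @ A_tilde @ diag(d).
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def graph_conv_layer(A_hat, H, thetas, W, b, alpha=0.01):
    """One polynomial graph-convolution layer in the spirit of the
    iteration above: powers of Â weighted by coefficients theta_k are
    applied to the node features, linearly combined, projected by W,
    shifted by b, and passed through a LeakyReLU activation.
    """
    Z = np.zeros_like(H)
    A_pow = np.eye(A_hat.shape[0])   # Â^0
    for theta_k in thetas:           # k = 0 .. K
        Z = Z + theta_k * (A_pow @ H)
        A_pow = A_pow @ A_hat
    Z = Z @ W + b
    return np.where(Z > 0, Z, alpha * Z)  # LeakyReLU
```

Stacking several such layers, with the output of one layer fed as the input of the next, mirrors the L-layer iteration described above.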
The loss calculation process is specifically as follows:
Loss = BCE(H^(L), y)
where BCE(·,·) represents the cross-entropy loss, H^(L) represents the convolution output of the final (L-th) layer, i.e., the predicted values of the input samples, and y represents the labels of the input samples, a graph label consisting of n nodes.
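A plain NumPy version of the node-level binary cross-entropy might read as follows; the per-node probability input and the clipping constant are assumptions of the sketch.

```python
import numpy as np

def bce_loss(pred, y, eps=1e-8):
    """Binary cross-entropy averaged over the n graph nodes.

    pred : per-node building probabilities in [0, 1] (an assumed
           sigmoid/softmax output of the final graph-convolution layer).
    y    : per-node binary labels.
    Clipping keeps log() finite at the boundaries.
    """
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(y * np.log(pred) + (1.0 - y) * np.log(1.0 - pred)))
```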
And S5, inputting the graph structure to be processed into the trained graph convolutional neural network to obtain a characteristic vector of the node, and multiplying a super pixel block classification result obtained after the characteristic vector of the node is processed by a pixel-super pixel mapping matrix to map the characteristic of the node back to a pixel characteristic space, so that a pixel classification result of the high-resolution remote sensing image is obtained.
Step S5 is specifically as follows:
s51: inputting the picture to be predicted into the trained superpixel segmentation network to obtain a pixel-superpixel mapping matrix Q̂, and obtaining the graph structure Ĝ according to step S3; inputting the graph structure Ĝ into the trained graph convolutional neural network to obtain the predicted node feature vectors;
s52: multiplying the superpixel classification result obtained from the predicted node feature vectors by the pixel-superpixel mapping matrix Q̂ to obtain the predicted pixel features; the calculation formula is as follows:
Ŷ = Q̂ · argmax(H^(L))
where argmax(H^(L)) represents the superpixel prediction obtained by the graph convolutional neural network, i.e., the output after performing the argmax operation on the node features produced by the L layers of graph convolution, and Ŷ represents the pixel classification result of the predicted picture. The pixel classification result of the picture comprises two classes: building and non-building.
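The mapping from superpixel predictions back to pixel space can be sketched as follows; representing the classified nodes as one-hot vectors before multiplying by the mapping matrix is one possible reading of the formula above.

```python
import numpy as np

def superpixels_to_pixels(Q, node_logits):
    """Map node-level predictions back to pixels (step S52).

    Q           : (n_pixels, n_superpixels) pixel-superpixel mapping matrix.
    node_logits : (n_superpixels, n_classes) graph-convolution output.
    Each superpixel's argmax class (building / non-building) is broadcast
    to its member pixels via Q.
    """
    sp_class = node_logits.argmax(axis=1)            # per-superpixel label
    onehot = np.eye(node_logits.shape[1])[sp_class]  # (n_sp, n_classes)
    pixel_scores = Q @ onehot                        # (n_pixels, n_classes)
    return pixel_scores.argmax(axis=1)               # per-pixel label
```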
As an example, the model was evaluated on the WHU data set; a sample image is shown in fig. 3. The pictures in the data set are 512 × 512 px with a spatial resolution of 0.3 m/pixel; the training set contains 4736 pictures, the verification set 1036 pictures, and the test set 2517 pictures. In the experiment, the model parameters were optimized with the Adam optimizer, with an initial learning rate of 0.001, and the learning rate was reduced to 60% of its current value after every 10 epochs. The training phase ran for 50 epochs with a batch size of 32. The final test results of the model are shown in table 1:
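The step-decay schedule described in the experiment (learning rate reduced to 60% of its current value every 10 epochs, starting from 0.001) can be written in closed form; the function name is illustrative.

```python
def learning_rate(epoch, base_lr=0.001, decay=0.6, step=10):
    """Step-decay schedule from the experiment: the learning rate is
    multiplied by 0.6 after every 10 completed epochs."""
    return base_lr * decay ** (epoch // step)
```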
Table 1: final results of model on WHU test set
The beneficial effects of the invention are: by pre-training a learnable superpixel segmentation method on the training data, the superpixel blocks generated by the model conform better to the boundaries of actual objects, and the mixing of different classes within a superpixel block is reduced.
In addition, by performing spatial context information interaction at different scales on the high-resolution remote sensing image through the topological graph convolutional neural network, the method overcomes the inefficient use of context information in the prior art; the full use of spatial context information effectively improves the pixel classification precision of the high-resolution remote sensing image, and solves the technical problems of approaches that rely on untrained superpixel preprocessing and cannot fully exploit the spatial context;
Meanwhile, the method can accurately identify the buildings in high-resolution remote sensing images and support reasonable planning for the scientific development of cities.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, which is intended to cover any modifications, equivalents, improvements, etc. within the spirit and scope of the present invention.
Claims (10)
1. A high-resolution remote sensing image building extraction method based on superpixel and graph convolution is characterized by comprising the following steps: the method comprises the following steps:
s1: acquiring a high-resolution remote sensing image and a corresponding label image, and utilizing image preprocessing to obtain a processed high-resolution remote sensing image and a corresponding label image; the high-resolution remote sensing image refers to an image with resolution exceeding a certain preset value;
s2: inputting the processed high-resolution remote sensing image into a trained super-pixel segmentation network to obtain a pixel-super-pixel mapping matrix, and mapping the original pixel characteristics into a super-pixel characteristic space through the mapping matrix to obtain super-pixel characteristics;
s3: constructing node characteristics of the graph according to the obtained super-pixel characteristics, obtaining edges between the nodes according to the adjacency relation, and constructing the graph according to the node characteristics of the graph and the edges of the nodes:
s4, sending the constructed graph into a topological graph convolution neural network to be trained for training to obtain a trained topological graph convolution neural network;
and S5, inputting the graph structure to be processed into the trained graph convolutional neural network to obtain the characteristic vector of the node, multiplying the super pixel block classification result obtained after the node characteristic vector is processed by the pixel-super pixel mapping matrix to map the node characteristic back to the pixel characteristic space, and thus obtaining the pixel classification result of the high-resolution remote sensing image.
2. The method for extracting the high-resolution remote sensing image building based on superpixel and graph convolution as claimed in claim 1, characterized in that: the image preprocessing in step S1 specifically refers to: random cropping, scaling, or mirror flipping, or a random combination of the three, so that the preprocessed image matches the required input size.
3. The method for extracting the high-resolution remote sensing image building based on superpixel and graph convolution as claimed in claim 1, characterized in that: step S2 specifically comprises the following steps:
s21: inputting the processed high-resolution remote sensing image and the corresponding label image into a superpixel segmentation network to be trained for training to obtain a trained superpixel segmentation network;
s22, inputting the processed high-resolution remote sensing images into a trained superpixel segmentation network, wherein the trained superpixel segmentation network outputs a pixel-superpixel matrix Q for each processed high-resolution remote sensing image;
and S23, multiplying the pixel-super pixel matrix Q with the original pixel characteristic matrix to obtain super pixel characteristics.
4. The method for extracting the high-resolution remote sensing image building based on superpixel and graph convolution as claimed in claim 3, characterized in that: the structure of the superpixel segmentation network comprises five down-sampling layers, four up-sampling layers, and a SoftMax output layer connected in sequence, with a LeakyReLU activation layer added after each sampling layer except the final SoftMax layer.
5. The method for extracting the high-resolution remote sensing image building based on superpixel and graph convolution as claimed in claim 3, characterized in that: the loss function of the superpixel segmentation network is:
L = Σ_{x,y} ||Î_xy − I_xy||_2 + CE(Ŷ, Y)
where Î_xy represents the reconstructed pixel feature, I_xy represents the pixel feature at the original position, and CE(·,·) represents the cross entropy between the predicted label map Ŷ and the ground-truth label map Y.
6. The method for extracting the high-resolution remote sensing image building based on superpixel and graph convolution as claimed in claim 3, characterized in that: the calculation formula of the superpixel feature in step S23 is as follows:
S = Q^T P
wherein Q represents the pixel-superpixel mapping matrix, P represents the original pixel feature matrix with position information, and S represents the computed superpixel feature matrix.
7. The method for extracting the high-resolution remote sensing image building based on superpixel and graph convolution as claimed in claim 6, characterized in that: step S3 is specifically as follows:
s31: obtaining the node features V_f of the graph from the obtained superpixel features; the calculation formula is as follows:
H = [v_f1, v_f2, ..., v_fn]^T = S = Q^T P   (3)
where H represents the node feature matrix and v_fn represents the initial feature vector of the n-th node; each node's initial feature vector corresponds to the features of a superpixel block S_i;
s32: obtaining a binary adjacency matrix A from the superpixel segmentation result, where A_ij = 1 indicates that the i-th and j-th nodes are adjacent and A_ij = 0 indicates that they are not; the adjacency matrix represents the original graph structure, and the calculation formula is as follows:
A_ij = 1 if superpixel blocks S_i and S_j are adjacent, and A_ij = 0 otherwise.
8. The method for extracting the high-resolution remote sensing image building based on superpixel and graph convolution as claimed in claim 1, characterized in that: step S4 is specifically as follows:
s41: constructing the high-resolution remote sensing image into a graph G = (V, E), where V and E respectively represent the vertex set and the edge set of the graph; at the same time, the corresponding label image is constructed into a graph G_Y by the same segmentation; both G and G_Y are unweighted undirected graphs, i.e., the connecting edges in the graph have neither direction nor weight attributes;
s42: training the topological graph convolutional neural network on the graph G, using the label data of the graph G_Y for graph node classification to perform loss calculation, back propagation, and gradient descent, obtaining the trained topological graph convolutional neural network.
9. The method for extracting the high-resolution remote sensing image building based on superpixel and graph convolution as claimed in claim 8, characterized in that: step S42 specifically comprises: performing L layers of convolution operations on the nodes of the graph with the topological graph convolutional neural network to be trained, and obtaining the trained graph convolutional neural network after the iterations are completed.
10. The method for extracting the high-resolution remote sensing image building based on superpixel and graph convolution as claimed in claim 1, characterized in that: step S5 is specifically as follows:
s51: inputting the picture to be predicted into the trained superpixel segmentation network to obtain a pixel-superpixel mapping matrix Q̂, and obtaining the graph structure Ĝ according to step S3; inputting the graph structure Ĝ into the trained graph convolutional neural network to obtain the predicted node feature vectors;
s52: multiplying the superpixel classification result obtained from the predicted node feature vectors by the pixel-superpixel mapping matrix Q̂ to obtain the predicted pixel features; the calculation formula is as follows:
Ŷ = Q̂ · argmax(H^(L))
where argmax(H^(L)) represents the superpixel prediction obtained by the graph convolutional neural network, i.e., the output after performing the argmax operation on the node features produced by the L layers of graph convolution, and Ŷ represents the pixel classification result of the predicted picture.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211473667.4A CN115810149A (en) | 2022-11-22 | 2022-11-22 | High-resolution remote sensing image building extraction method based on superpixel and image convolution |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211473667.4A CN115810149A (en) | 2022-11-22 | 2022-11-22 | High-resolution remote sensing image building extraction method based on superpixel and image convolution |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115810149A true CN115810149A (en) | 2023-03-17 |
Family
ID=85483900
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211473667.4A Pending CN115810149A (en) | 2022-11-22 | 2022-11-22 | High-resolution remote sensing image building extraction method based on superpixel and image convolution |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115810149A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116524369A (en) * | 2023-04-18 | 2023-08-01 | 中国地质大学(武汉) | Remote sensing image segmentation model construction method and device and remote sensing image interpretation method |
CN116524369B (en) * | 2023-04-18 | 2023-11-17 | 中国地质大学(武汉) | Remote sensing image segmentation model construction method and device and remote sensing image interpretation method |
CN116205928A (en) * | 2023-05-06 | 2023-06-02 | 南方医科大学珠江医院 | Image segmentation processing method, device and equipment for laparoscopic surgery video and medium |
CN118072138A (en) * | 2024-04-24 | 2024-05-24 | 中国地质大学(武汉) | Land cover characteristic extraction method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111986099B (en) | Tillage monitoring method and system based on convolutional neural network with residual error correction fused | |
CN107092870B (en) | A kind of high resolution image Semantic features extraction method | |
CN115810149A (en) | High-resolution remote sensing image building extraction method based on superpixel and image convolution | |
CN110516539A (en) | Remote sensing image building extracting method, system, storage medium and equipment based on confrontation network | |
CN104462494B (en) | A kind of remote sensing image retrieval method and system based on unsupervised feature learning | |
CN110322445B (en) | Semantic segmentation method based on maximum prediction and inter-label correlation loss function | |
CN107066916B (en) | Scene semantic segmentation method based on deconvolution neural network | |
CN112347970B (en) | Remote sensing image ground object identification method based on graph convolution neural network | |
CN107506792B (en) | Semi-supervised salient object detection method | |
CN112489164B (en) | Image coloring method based on improved depth separable convolutional neural network | |
CN113627472A (en) | Intelligent garden defoliating pest identification method based on layered deep learning model | |
CN111274964B (en) | Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle | |
CN114387270B (en) | Image processing method, image processing device, computer equipment and storage medium | |
CN112906813A (en) | Flotation condition identification method based on density clustering and capsule neural network | |
CN103049340A (en) | Image super-resolution reconstruction method of visual vocabularies and based on texture context constraint | |
CN106874862A (en) | People counting method based on submodule technology and semi-supervised learning | |
CN106157330A (en) | A kind of visual tracking method based on target associating display model | |
CN114863266A (en) | Land use classification method based on deep space-time mode interactive network | |
CN110135435B (en) | Saliency detection method and device based on breadth learning system | |
CN114998373A (en) | Improved U-Net cloud picture segmentation method based on multi-scale loss function | |
CN115953330B (en) | Texture optimization method, device, equipment and storage medium for virtual scene image | |
CN115082778B (en) | Multi-branch learning-based homestead identification method and system | |
CN116310832A (en) | Remote sensing image processing method, device, equipment, medium and product | |
CN113486967B (en) | SAR image classification algorithm combining graph convolution network and Markov random field | |
CN112528803B (en) | Road feature extraction method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||