CN113673369A - Remote sensing image scene planning method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN113673369A CN113673369A CN202110875474.0A CN202110875474A CN113673369A CN 113673369 A CN113673369 A CN 113673369A CN 202110875474 A CN202110875474 A CN 202110875474A CN 113673369 A CN113673369 A CN 113673369A
- Authority
- CN
- China
- Prior art keywords
- image
- remote sensing
- planning
- segmentation
- scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The invention provides a remote sensing image scene planning method and device, electronic equipment, and a storage medium. The method comprises: performing target segmentation on a remote sensing image of a scene to be planned to obtain a segmented image; making planning modifications based on the segmented image to obtain a modified segmented image; and inputting the modified segmented image into a planning effect generation model to obtain a scene effect image output by the planning effect generation model. The planning effect generation model is obtained by training a conditional generative adversarial network based on sample remote sensing images and sample segmented images. Because the remote sensing image of the scene to be planned is first reduced to a segmented image by target segmentation, scene planning can be performed conveniently and quickly on the segmented image, and the planning effect generation model then produces the planned scene effect image, so a realistic planning result can be obtained quickly and efficiently.
Description
Technical Field
The invention relates to the technical field of remote sensing image processing, and in particular to a remote sensing image scene planning method and device, electronic equipment, and a storage medium.
Background
With the continuous improvement of remote sensing imaging equipment, acquiring various types of high-resolution remote sensing images has become increasingly convenient, and how to apply such data has attracted extensive research, for example on the classification, detection, and tracking of remote sensing targets. However, little research has addressed how to plan the scene in a remote sensing image simply and efficiently. Moreover, because remote sensing scenes are complicated and changeable, planning directly on them is difficult.
Therefore, how to perform simple and effective scene planning when the scene in a remote sensing image is complicated and cluttered remains an urgent problem in the remote sensing field.
Disclosure of Invention
The invention provides a remote sensing image scene planning method and device, electronic equipment, and a storage medium, so as to realize simple and effective planning of remote sensing image scenes.
The invention provides a remote sensing image scene planning method, comprising:
performing target segmentation on a remote sensing image of a scene to be planned to obtain a segmented image;
making planning modifications based on the segmented image to obtain a modified segmented image;
inputting the modified segmented image into a planning effect generation model to obtain a scene effect image output by the planning effect generation model;
wherein the planning effect generation model is obtained by training a conditional generative adversarial network based on sample remote sensing images and sample segmented images.
According to the remote sensing image scene planning method provided by the invention, performing target segmentation on the remote sensing image of the scene to be planned to obtain a segmented image comprises:
inputting the remote sensing image into a target segmentation model to obtain the segmented image;
wherein the target segmentation model is obtained by training a fully convolutional neural network based on sample remote sensing images.
According to the remote sensing image scene planning method provided by the invention, the target segmentation model comprises a backbone network, a multi-depth convolution module, and an adaptive feature integration module;
inputting the remote sensing image into the target segmentation model to obtain the segmented image comprises:
inputting the remote sensing image into the backbone network to obtain shallow features;
inputting the shallow features into the multi-depth convolution module to obtain deep features;
and fusing the shallow features and the deep features with the adaptive feature integration module, the segmented image being obtained from the fusion result.
According to the remote sensing image scene planning method provided by the invention, the multi-depth convolution module comprises a plurality of multi-depth convolution branches and a global pooling branch; the multi-depth convolution branches comprise different numbers of convolution layers, the convolution kernels of these layers all having the same size; and the global pooling branch comprises a global average pooling layer and a convolution layer.
According to the remote sensing image scene planning method provided by the invention, the planning effect generation model is trained as follows:
alternately training the generator and the discriminator of the conditional generative adversarial network based on the sample segmented images, the trained generator serving as the planning effect generation model;
wherein the input of the discriminator comprises a sample segmented image together with either the corresponding sample remote sensing image or a sample effect image, the sample effect image being obtained by inputting the sample segmented image into the generator.
According to the remote sensing image scene planning method provided by the invention, making planning modifications based on the segmented image comprises:
modifying, adding, or deleting color blocks on the segmented image based on a preset correspondence between targets in the scene to be planned and color blocks in the segmented image.
According to the remote sensing image scene planning method provided by the invention, the generator of the conditional generative adversarial network comprises an encoder, the encoder being a convolutional network with eight downsampling stages.
The invention also provides a remote sensing image scene planning device, comprising:
a segmentation module for performing target segmentation on a remote sensing image of a scene to be planned to obtain a segmented image;
a planning module for making planning modifications based on the segmented image to obtain a modified segmented image;
and a generation module for inputting the modified segmented image into a planning effect generation model to obtain a scene effect image output by the planning effect generation model;
wherein the planning effect generation model is obtained by training a conditional generative adversarial network based on sample remote sensing images and sample segmented images.
The invention also provides electronic equipment comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, the processor executing the program to implement the steps of the remote sensing image scene planning method.
The invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method for remote sensing image scene planning as described in any of the above.
According to the remote sensing image scene planning method and device, the electronic equipment, and the storage medium provided by the invention, target segmentation of the remote sensing image of the scene to be planned yields a segmented image on which scene planning can be performed conveniently and quickly, and the planning effect generation model then generates the planned scene effect image, so a realistic planning result can be obtained quickly and efficiently.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a remote sensing image scene planning method provided by the invention;
FIG. 2 is a schematic diagram of a network structure of a target segmentation model provided by the present invention;
FIG. 3 is a schematic diagram of a network structure of the generator and the discriminator provided by the present invention;
FIG. 4 is a second schematic flow chart of the method for planning a remote sensing image scene according to the present invention;
FIG. 5 is a schematic structural diagram of a remote sensing image scene planning device provided by the invention;
fig. 6 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An embodiment of the invention provides a remote sensing image scene planning method. Fig. 1 is a schematic flow chart of the remote sensing image scene planning method provided by the invention; as shown in Fig. 1, the method comprises the following steps:
Step 110, performing target segmentation on a remote sensing image of a scene to be planned to obtain a segmented image.
Step 120, making planning modifications based on the segmented image to obtain a modified segmented image.
Specifically, the scene to be planned is a scene that requires scene planning, where the planning may, for example, assign buildings, grassland, roads, and so on to different areas of the scene. Because scenes in remote sensing images are complex and cluttered, planning directly on the image is difficult; segmenting the targets first with an image segmentation method and then planning them object by object greatly reduces the difficulty of the subsequent planning.
Accordingly, after the remote sensing image of the scene to be planned is acquired, the embodiment of the invention first performs target segmentation on it, obtaining a segmented image divided according to the different target categories in the scene. On this basis, planning modifications can be made to the target objects in the scene one by one on the segmented image, finally yielding the modified segmented image.
Here, the number of target categories in the scene to be planned may be set arbitrarily according to requirements; understandably, the more target categories are set, the more detailed the subsequent planning will be. Different target categories may be labeled on the segmented image with different colors, different gray levels, or any other labeling that is easy to distinguish.
In addition, the embodiment does not restrict how the segmented image is obtained: a conventional image segmentation algorithm may be applied directly to the remote sensing image, or the remote sensing image may be input into the target segmentation model of the invention to obtain the segmented image it outputs.
Step 130, inputting the modified segmented image into a planning effect generation model to obtain a scene effect image output by the planning effect generation model; the planning effect generation model is obtained by training a conditional generative adversarial network based on sample remote sensing images and sample segmented images.
Specifically, to obtain a realistic planned-scene effect quickly and efficiently, the embodiment inputs the modified segmented image into the planning effect generation model and obtains the planned scene effect image it outputs; the scene effect image may be a simulated remote sensing image representing the appearance of the scene to be planned after planning.
Before step 130 is executed, the planning effect generation model must be trained in advance, which may proceed as follows:
First, a large number of sample remote sensing images are collected and target-segmented to obtain their sample segmented images. Generative adversarial networks are the usual choice for producing realistic images, but a plain GAN cannot precisely control what it generates, whereas a conditional GAN can be steered to output the desired data through a condition label. The embodiment therefore builds an initial model on a conditional generative adversarial network and trains it with the sample remote sensing images and sample segmented images, obtaining the planning effect generation model.
According to the method of this embodiment, target segmentation of the remote sensing image of the scene to be planned yields a segmented image on which scene planning can be performed conveniently and quickly, and the planning effect generation model then generates the planned scene effect image, so a realistic planning result is obtained quickly and efficiently.
Based on any of the above embodiments, step 110 includes:
inputting the remote sensing image into a target segmentation model to obtain the segmented image;
wherein the target segmentation model is obtained by training a fully convolutional neural network based on sample remote sensing images.
Specifically, deep learning can automatically learn useful features from large amounts of data through back propagation; compared with traditionally hand-designed feature extractors, these learned features are more reliable and easier to use, which strongly supports the segmentation of remote sensing images with complex scenes. The target segmentation adopted by the embodiment therefore inputs the remote sensing image into a target segmentation model and takes the segmented image output by the model.
Here, the target segmentation model is trained in advance as follows: first, a large number of sample remote sensing images are collected; then an initial target segmentation model is constructed on a fully convolutional neural network and trained with the sample remote sensing images, yielding a target segmentation model that can classify the remote sensing image pixel by pixel.
The sample remote sensing images may be collected by photographing the ground in good weather with an optical imaging instrument carried by a satellite or space shuttle. On this basis, the pixels of every captured sample image are divided into classes according to the meaning of the ground target objects, each class is labeled pixel by pixel and stored as a segmentation label image, and the sample remote sensing images with their corresponding segmentation label images are split into a training set and a validation set for training the target segmentation model.
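The dataset preparation described above, pairing each sample remote sensing image with its pixel-wise segmentation label image and splitting the pairs into a training set and a validation set, can be sketched as follows; the 80/20 ratio and the file names are illustrative assumptions, since the patent does not specify them:

```python
import random

def split_dataset(image_label_pairs, train_ratio=0.8, seed=0):
    """Shuffle (remote sensing image, segmentation label) pairs and
    split them into a training set and a validation set.
    train_ratio is an assumed value not given in the patent."""
    pairs = list(image_label_pairs)
    random.Random(seed).shuffle(pairs)
    cut = int(len(pairs) * train_ratio)
    return pairs[:cut], pairs[cut:]

# Hypothetical file names standing in for sample remote sensing images
# and their pixel-by-pixel segmentation label images.
pairs = [(f"rs_{i}.png", f"label_{i}.png") for i in range(10)]
train_set, val_set = split_dataset(pairs)
print(len(train_set), len(val_set))  # 8 2
```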
Based on any of the above embodiments, the target segmentation model comprises a backbone network, a multi-depth convolution module, and an adaptive feature integration module;
inputting the remote sensing image into the target segmentation model to obtain the segmented image comprises:
inputting the remote sensing image into the backbone network to obtain shallow features;
inputting the shallow features into the multi-depth convolution module to obtain deep features;
and fusing the shallow features and the deep features with the adaptive feature integration module, the segmented image being obtained from the fusion result.
Specifically, because remote sensing images are complex and changeable, accurate segmentation of targets with similar textures is required. The target segmentation model of the embodiment therefore comprises a backbone network, a multi-depth convolution module, and an adaptive feature integration module, where the adaptive feature integration module adaptively fuses the shallow and deep features of the network, obtaining more discriminative features so that objects of different classes but similar appearance are segmented better.
On this basis, the remote sensing image is input into the target segmentation model: the backbone network extracts shallow features from the input image, the shallow features are passed through the multi-depth convolution module to obtain deep features, and the adaptive feature integration module fuses the two and generates the segmented image of the remote sensing image from the fused features, for the scene planning to be carried out on.
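The patent does not disclose the internal form of the adaptive feature integration module; one common instantiation, shown here only as an assumed sketch, fuses shallow and deep feature maps with softmax-normalized weights:

```python
import numpy as np

def adaptive_fuse(shallow, deep, w_shallow, w_deep):
    """Fuse same-shape shallow and deep feature maps with
    softmax-normalized scalar weights. This is an assumed
    instantiation; the patent only states the fusion is adaptive."""
    logits = np.array([w_shallow, w_deep], dtype=float)
    a = np.exp(logits - logits.max())
    a /= a.sum()
    return a[0] * shallow + a[1] * deep

shallow = np.ones((4, 4))   # stand-in shallow feature map
deep = np.zeros((4, 4))     # stand-in deep feature map
fused = adaptive_fuse(shallow, deep, 0.0, 0.0)  # equal weights
print(fused[0, 0])  # 0.5
```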
Based on any of the above embodiments, the multi-depth convolution module comprises a plurality of multi-depth convolution branches and a global pooling branch; the branches contain different numbers of convolution layers whose kernels all have the same size, and the global pooling branch comprises a global average pooling layer and a convolution layer.
Specifically, to segment targets of different sizes accurately, the target segmentation model uses not only the visual information of the image but also fuses its multi-scale context information, which is embodied in the multi-depth convolution module of the model. The module consists of two parts: a multi-depth convolution branch set composed of several multi-depth convolution branches, each containing a different number of convolution layers with kernels of the same size, and a global pooling branch comprising a global average pooling layer and a convolution layer.
Understandably, because every branch in the multi-depth convolution branch set uses convolutions of the same kernel size but a different depth, the branches output features at the same resolution while the neurons of their last convolution layers have different receptive field sizes; the multi-depth branches therefore capture multi-scale context information, and the global pooling branch learns global context information.
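The claim that branches with the same kernel size but different depths yield different receptive fields can be checked with the standard receptive-field computation (stride-1 convolutions assumed, as the patent gives no strides):

```python
def receptive_field(num_layers, kernel_size, stride=1):
    """Receptive field of a stack of identical conv layers.
    With stride 1, each extra layer grows the field by (k - 1)."""
    rf, jump = 1, 1
    for _ in range(num_layers):
        rf += (kernel_size - 1) * jump
        jump *= stride
    return rf

# Three multi-depth branches with the same 3x3 kernels but 1, 2, 3 layers:
print([receptive_field(n, 3) for n in (1, 2, 3)])  # [3, 5, 7]
```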
Based on any of the above embodiments, the planning effect generation model is trained as follows:
alternately training the generator and the discriminator of the conditional generative adversarial network based on the sample segmented images, the trained generator serving as the planning effect generation model;
wherein the input of the discriminator comprises a sample segmented image together with either the corresponding sample remote sensing image or a sample effect image, the sample effect image being obtained by inputting the sample segmented image into the generator.
Specifically, before step 130 is executed, the embodiment trains the planning effect generation model as follows:
First, a large number of sample remote sensing images are collected and their corresponding sample segmented images obtained. Then, to optimize the conditional generative adversarial network, its generator and discriminator are trained alternately on the sample segmented images, and the generator resulting from the alternating iterative training is taken as the planning effect generation model. During the alternating training, an input sample of the discriminator consists either of a sample segmented image and its corresponding sample remote sensing image, or of a sample segmented image and the sample effect image obtained by feeding that segmented image to the generator.
Understandably, as the conditional GAN is trained iteratively, the discriminator's judgment grows stronger, and to fool it the scene effect images produced by the generator become more and more realistic, so a high-performing generator is obtained as the planning effect generation model. Feeding the modified segmented image into this model thus yields a correspondingly realistic planned scene effect image.
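The alternating scheme above, with the discriminator judging a sample segmented image paired either with its real remote sensing image or with the generator's output, can be sketched structurally; the stub generator and the logging functions below are placeholders, not the patent's networks:

```python
log = []  # records the order of updates for inspection

def update_discriminator(seg, real, generate):
    """Discriminator step: a positive pair (segmentation, real remote
    sensing image) and a negative pair (segmentation, generated image)."""
    fake = generate(seg)
    log.append(("D", seg, real, fake))

def update_generator(seg, generate):
    """Generator step: try to make generate(seg) fool the discriminator."""
    log.append(("G", seg, generate(seg)))

def train_alternately(pairs, epochs, generate):
    """Alternate D and G updates over (segmented, remote sensing) pairs."""
    for _ in range(epochs):
        for seg, real in pairs:
            update_discriminator(seg, real, generate)
            update_generator(seg, generate)

generate = lambda seg: seg[::-1]  # stub standing in for the U-Net generator
pairs = [("seg_a", "img_a"), ("seg_b", "img_b")]
train_alternately(pairs, epochs=2, generate=generate)
print([step[0] for step in log])  # ['D', 'G', 'D', 'G', 'D', 'G', 'D', 'G']
```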
Further, the training details of the planning effect generation model may be set as follows: the generator is trained with mini-batch stochastic gradient descent (SGD) at a learning rate of 2×10⁻⁴ and a mini-batch size of 4; the discriminator uses the Adam optimizer, also with a learning rate of 2×10⁻⁴ and a mini-batch size of 4; the momentum terms of the generator and the discriminator are set to 0.5 and 0.999 respectively. For a conditional generative adversarial network, both the generator loss and the discriminator loss must be optimized: the generator is optimized with the adversarial loss and the L1 loss simultaneously, the L1 loss being responsible for low-frequency information, while the discriminator uses the two-class cross-entropy loss.
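The loss combination in the passage above (adversarial loss plus an L1 term for the generator, two-class cross entropy for the discriminator) follows the usual pix2pix-style formulation; a numeric sketch, with the L1 weight `lam` as an assumed hyperparameter not given in the patent:

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Two-class cross entropy on discriminator outputs in (0, 1)."""
    p = np.clip(pred, eps, 1 - eps)
    return float(-(target * np.log(p) + (1 - target) * np.log(1 - p)).mean())

def generator_loss(d_on_fake, fake, real, lam=100.0):
    """Adversarial term (push the discriminator toward predicting 1)
    plus a lam-weighted L1 term carrying low-frequency information.
    lam=100 is the common pix2pix default, assumed here."""
    adv = bce(d_on_fake, np.ones_like(d_on_fake))
    l1 = float(np.abs(fake - real).mean())
    return adv + lam * l1

def discriminator_loss(d_on_real, d_on_fake):
    """Cross entropy: real pairs labeled 1, generated pairs labeled 0."""
    return (bce(d_on_real, np.ones_like(d_on_real))
            + bce(d_on_fake, np.zeros_like(d_on_fake)))

fake = np.full((2, 2), 0.4)   # stand-in generated effect image
real = np.full((2, 2), 0.5)   # stand-in real remote sensing image
g = generator_loss(np.array([0.5]), fake, real)
d = discriminator_loss(np.array([0.9]), np.array([0.1]))
print(round(g, 3), round(d, 3))  # 10.693 0.211
```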
Based on any of the above embodiments, step 120 includes:
modifying, adding, or deleting color blocks on the segmented image based on a preset correspondence between targets in the scene to be planned and color blocks in the segmented image.
Specifically, given the complexity of a remote sensing scene, planning directly on the original image, for example trying to remove a certain building in an area, is very difficult. The embodiment therefore first segments the different targets of the scene to be planned with a segmentation algorithm; the planning is then completed merely by removing, modifying, or adding the segmentation color blocks that represent the different targets in the segmented image.
So that the distribution of each target in the scene can be seen at a glance in the segmented image, and the scene planned conveniently and quickly, the embodiment labels the remote sensing image with different colors according to the meaning of the different targets during target segmentation, producing a segmented image divided into color blocks. The correspondence between targets and color blocks may be set arbitrarily according to requirements; for example, a building target may be marked in the segmented image with a blue or a brown color block.
On this basis, modifying, adding, or deleting the color blocks that represent the different targets, guided by this correspondence, completes the planning of the scene, which is very simple to operate. For example, if a building in the scene is to be replaced with grass, the blue color block representing the building is changed to the cyan color block representing grass. These color-block operations may be performed with an ordinary drawing tool.
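The color-block editing step can be sketched as a recolor operation on the segmentation map; the blue-for-building and cyan-for-grass encoding follows the example in the text, while the exact RGB values are assumptions:

```python
import numpy as np

# Assumed RGB encoding: blue marks buildings, cyan marks grass.
BUILDING = (0, 0, 255)
GRASS = (0, 255, 255)

def replace_target(seg_map, old_color, new_color):
    """Replan a target by recoloring all its pixels in the segmented
    image, e.g. turning a building color block into a grass block."""
    out = seg_map.copy()
    mask = np.all(seg_map == np.array(old_color), axis=-1)
    out[mask] = new_color
    return out

seg = np.zeros((4, 4, 3), dtype=np.uint8)
seg[1:3, 1:3] = BUILDING                          # a 2x2 building block
planned = replace_target(seg, BUILDING, GRASS)    # plan it as grass
print(int(np.all(planned[1:3, 1:3] == np.array(GRASS))))  # 1
```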
After the modified segmented image is obtained, it is input into the planning effect generation model, which generates a realistic remote sensing image back from the color blocks of the different categories; in the example above, the blue mark of the building has been changed to cyan at the corresponding position, so that area in the generated scene effect image becomes grassland.
According to the method of this embodiment, the scene planning is completed merely by modifying, adding, or deleting color blocks on the segmented image; no manual planning in the complex original remote sensing image is needed, and the whole planning process is divided into several simple steps, so a largely automated scene planning can be completed conveniently and quickly.
Based on any of the above embodiments, the generator in the condition generation countermeasure network includes an encoder, and the encoder is a convolutional network with 8 times of downsampling.
Specifically, in consideration of the fact that a shallow high-resolution layer in the U-Net network is used for solving the problem of pixel positioning, and a deeper layer is used for solving the problem of pixel classification, so that the segmentation of the image semantic level can be realized, the generator in the condition generation countermeasure network in the embodiment of the invention adopts the U-Net network. In a U-Net network, an encoder acquires abstract semantic features through downsampling convolution, and a decoder performs the process of combining upsampling of each layer of features with shallow features of the network for starting several downsampling times to restore detail information. And the U-Net adopts gradual up-sampling, and shallow layer characteristics are added in each up-sampling process to restore details, so that the whole network forms a symmetrical structure.
The original conditional generator uses a U-Net with 4 downsamplings. Considering that remote sensing images are large and cover a wide area, and in order to obtain more global high-level semantic information and hence a more realistic planned scene effect image, the invention improves U-Net. The specific improvement is to increase the number of downsamplings: a convolutional network with 8 downsamplings is used as the encoder, 8 upsamplings then restore the original resolution, and the shallow features of the network are symmetrically fused in during upsampling.
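As a quick sanity check on this design (a plain-Python sketch, not code from the patent), the spatial sizes of the encoder feature maps can be tracked through the downsampling steps; with 8 downsamplings a 1024 × 1024 crop shrinks to a 4 × 4 bottleneck, which is what gives the generator its more global view:

```python
def encoder_feature_sizes(input_size, n_down):
    """Spatial size of the encoder feature map after each 2x downsampling."""
    sizes = [input_size]
    for _ in range(n_down):
        sizes.append(sizes[-1] // 2)
    return sizes

# 8 downsamplings take 1024 down to 4, so each bottleneck feature aggregates
# far more context than the 64x64 bottleneck left by the original 4 downsamplings.
print(encoder_feature_sizes(1024, 8))  # → [1024, 512, 256, 128, 64, 32, 16, 8, 4]
print(encoder_feature_sizes(1024, 4))  # → [1024, 512, 256, 128, 64]
```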
Based on any of the above embodiments, and aimed at the complex and changeable character of remote sensing images, the embodiment of the invention proposes, on the basis of a fully convolutional neural network, a multi-depth convolutional network that fuses deep and shallow features. The network can effectively fuse multi-scale context to label targets of different sizes, and at the same time combines shallow and deep features to label targets with similar textures. On this basis, the multi-depth convolutional network is trained on the sample remote sensing images, finally yielding the target segmentation model.
In order to ensure the accuracy of the target segmentation model while saving training time, the embodiment of the invention adopts a ResNet-101 network as the backbone network of the target segmentation model and additionally designs two new modules: a multi-depth convolution module and an adaptive feature integration module. Fig. 2 is a schematic diagram of the network structure of the target segmentation model provided by the present invention; as shown in Fig. 2, the ResNet-101 network, the multi-depth convolution module and the adaptive feature integration module together form the encoder-decoder structure of the target segmentation model. The ResNet-101 network consists of residual modules and max-pooling layers arranged alternately.
The multi-depth convolution module (MDCM in the figure) comprises several multi-depth convolution branches and a global pooling branch. Each multi-depth convolution branch consists of a different number of 3 × 3 convolutional layers, preceded by a 1 × 1 convolutional layer; the global pooling branch comprises a global average pooling layer and a 1 × 1 convolutional layer. Further, since the resolution of the global pooling branch's output may differ from that of the multi-depth convolution branches, the global pooling branch's output may be upsampled by bilinear interpolation to the same resolution as the other branches; on this basis, the outputs of the two parts of the module are concatenated and passed through a 3 × 3 convolutional layer before being output. A ReLU activation function may be used after each convolutional layer.
The adaptive feature integration module (AFIM in the figure) integrates deep and shallow features; its inputs include the features from the multi-depth convolution module and the features from each downsampling stage of ResNet-101. To balance the contribution of the two kinds of features, the features from ResNet-101 first have their channel count reduced by a 1 × 1 convolutional layer so that the shallow and deep features have the same number of channels. The two are then concatenated; after the concatenated features are obtained, a 3 × 3 convolutional layer first performs a preliminary fusion, the initial fused features are then sent to the proposed channel re-weighting module for refinement, and finally the refined features and the initial fused features are added through an element-wise addition to form a residual structure.
Here, the channel re-weighting module first processes the features through a 3 × 3 convolutional layer. Then, considering that features from different channels generally have different importance for the semantic segmentation task, the module compresses the feature map into a D-dimensional weight vector (D is the number of channels) using a global pooling branch. Finally, all features of each channel are multiplied by the corresponding weight in the weight vector through an element-wise multiplication, which strengthens the features of the more important channels with larger weights.
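The channel re-weighting step can be sketched in NumPy as follows. This is an illustrative reconstruction, not the patent's code: the global pooling and per-channel element-wise multiplication follow the description above, while the sigmoid squashing of the weights is an assumed (squeeze-and-excitation-style) choice that the text does not specify:

```python
import numpy as np

def channel_reweight(features):
    """Channel re-weighting: compress each channel to a scalar by global
    average pooling, squash it to (0, 1), and scale every channel of the
    feature map by its weight via element-wise multiplication."""
    # features: (D, H, W) feature map, D = number of channels
    weights = features.mean(axis=(1, 2))        # D-dimensional weight vector
    weights = 1.0 / (1.0 + np.exp(-weights))    # assumed sigmoid squashing
    return features * weights[:, None, None]    # broadcast multiply per channel

fmap = np.random.randn(4, 8, 8)
out = channel_reweight(fmap)
print(out.shape)  # → (4, 8, 8)
```

Channels with larger pooled responses receive weights closer to 1 and are therefore preserved more strongly, which is the "enhance more important channels" behavior described above.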
Based on any of the above embodiments, the training details of the target segmentation model may be set as follows: the proposed segmentation network is implemented in the PyTorch framework, and the backbone network is a ResNet-101 pre-trained on the ImageNet dataset; the network uses cross entropy as the loss function and batch normalization to ensure convergence of the model; as the optimization strategy, the network is trained with mini-batch stochastic gradient descent (SGD), with the mini-batch size set to 12, the base learning rate set to 10⁻² and the momentum set to 0.9; furthermore, the learning rate can be adjusted automatically using cosine annealing.
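The cosine-annealing schedule mentioned above decays the learning rate from the base value along a half cosine. A minimal sketch of the formula, matching the usual CosineAnnealingLR definition (the total number of epochs and the minimum rate below are illustrative assumptions, not values from the text):

```python
import math

def cosine_annealed_lr(epoch, total_epochs, base_lr=1e-2, min_lr=0.0):
    """Cosine annealing: the learning rate follows half a cosine wave from
    base_lr down to min_lr over total_epochs."""
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * epoch / total_epochs))

print(cosine_annealed_lr(0, 100))              # → 0.01 (the base rate)
print(round(cosine_annealed_lr(50, 100), 6))   # → 0.005 (halved at midpoint)
print(round(cosine_annealed_lr(100, 100), 6))  # → 0.0
```

In PyTorch this corresponds to wrapping the SGD optimizer in `torch.optim.lr_scheduler.CosineAnnealingLR`.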
On this basis, the remote sensing images in the test set can be automatically segmented using the trained target segmentation model. At this point, the different objects in different areas of the remote sensing image are distinguished, and the remote sensing image is divided into different color blocks.
Based on any of the above embodiments, the embodiment of the present invention uses an improved version of Pix2Pix as the planning effect generation model. Pix2Pix is a conditional generative adversarial network. Unlike a general generative adversarial network, which accepts noise as input, the generator of a conditional generative adversarial network is a fully convolutional network that accepts condition information as input and generates an image; the discriminator is a classification network that accepts the condition input together with a generated image or a real image and discriminates between real and fake.
In the embodiment of the invention, the generator is an encoder-decoder structure that receives the segmented image of a sample remote sensing image as the condition input and can reversely generate a sample scene effect image from it. The pair of the segmented image and the generated scene effect image, and the pair of the segmented image and the real scene image (i.e., the sample remote sensing image), are then used respectively as inputs to the discriminator, which judges them real or fake.
The discriminator uses a PatchGAN (patch-based Generative Adversarial Network), which outputs a prediction probability for each region of the input image. Specifically, through several successive downsampling convolutions the discriminator condenses each N × N region of the original image into a single point, then classifies that point using the downsampled feature map to judge the region real or fake. The discriminator is therefore mainly concerned with the high-frequency information of the image.
Fig. 3 is a schematic diagram of the network structure of the generator and the discriminator provided by the present invention. As shown in Fig. 3, the encoder in the generator is a convolutional network with 8 downsamplings, composed of alternating residual modules and max-pooling layers; the decoder is a convolutional network with 8 upsamplings, using 3 × 3 convolutional layers with ReLU activations. The discriminator downsamples with three groups of 3 × 3 convolutional layers with a stride of 2, at which point each feature point corresponds to 8 × 8 pixels, and then uses a 1 × 1 convolutional layer to reduce the number of channels for the final classification that judges whether the input image is real or fake.
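The 8 × 8 correspondence for the discriminator follows from the three stride-2 layers (2³ = 8). As a side note, the true receptive field of the stacked 3 × 3 kernels is somewhat larger; a small sketch of the standard receptive-field recurrence (an illustration, not code from the patent) shows both numbers:

```python
def receptive_field(layers):
    """Receptive field of a stack of conv layers, each given as (kernel, stride).
    Standard recurrence: rf grows by (k - 1) * jump, and jump scales by the stride."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Three 3x3, stride-2 convs: downsampling factor 2**3 = 8 (the 8x8
# correspondence in the text), while the true receptive field is larger:
print(receptive_field([(3, 2)] * 3))  # → 15
```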
Based on any of the above embodiments, the embodiment of the present invention uses the ISPRS Potsdam dataset as the demonstration dataset. This dataset was collected entirely over the scene of the German city of Potsdam and consists of 38 images of 6000 × 6000 pixels, each containing infrared, red, green and blue bands; every pixel is labeled with one of 6 categories: impervious surfaces, buildings, low vegetation, trees, cars and clutter. In the embodiment of the invention, a sample remote sensing image set is constructed using only the infrared, red and green channels of the Potsdam dataset. The set is divided into a training set and a test set containing 24 and 14 images respectively; each image is divided into several classes according to the meaning of the ground targets, and the different classes are marked as color blocks of different colors, thereby obtaining the segmentation label image.
Considering that remote sensing images are usually very large and that limited computer memory makes them impossible to use directly for network training and testing, the embodiment of the invention automatically cuts all sample remote sensing images and the corresponding segmentation label images into a set of small 1024 × 1024 images using a sliding-window technique, with a sliding step of 384 pixels. Data enhancement operations such as horizontal and vertical flipping and random rotation are then performed on the small images cut from the training set to generate more training samples; the training set contains 600 training samples in total.
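The sliding-window cropping can be sketched as follows (a minimal plain-Python sketch; the patent does not state how the image border is handled, so the border-aligned final window is an assumption):

```python
def sliding_window_coords(image_size, window=1024, stride=384):
    """Top-left corners of sliding-window crops along one axis, clamping the
    last window so it ends exactly at the image border (assumed convention)."""
    coords = []
    pos = 0
    while True:
        if pos + window >= image_size:
            coords.append(image_size - window)  # final, border-aligned window
            break
        coords.append(pos)
        pos += stride
    return coords

# Crop positions along one axis of a 6000-pixel Potsdam tile:
xs = sliding_window_coords(6000)
print(len(xs), xs[0], xs[-1])  # → 14 0 4976
```

Pairing the x and y coordinate lists gives the full 2-D grid of 1024 × 1024 crops for one tile.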
Based on any of the above embodiments, in order to address the difficulty of planning a scene directly, caused by the complex and changeable scenes of remote sensing images, the embodiment of the invention discloses a remote sensing image scene planning method based on a segmentation network and a generative adversarial network. Fig. 4 is a second schematic flow chart of the remote sensing image scene planning method provided by the present invention. As shown in Fig. 4, the method specifically includes the following steps:
step 1: collecting a large number of sample remote sensing images of a certain scene to construct a sample remote sensing image set, dividing targets in the sample remote sensing images into a plurality of classes, marking pixels belonging to each class with different colors, and dividing marked data into a training set and a test set;
step 2: on the basis of a full convolution neural network, a multi-depth convolution network fusing deep and shallow layer characteristics is provided, a target segmentation model is constructed on the basis of the network, and the model can be used for automatically classifying and marking remote sensing scenes pixel by pixel through training and learning;
and step 3: generating an antagonistic network construction planning effect generation model based on conditions, wherein the model comprises a generator and a discriminator, and abstract global information is obtained by using improved U-Net as the generator;
and 4, step 4: and training a target segmentation model and a planning effect generation model by using the sample remote sensing image set to obtain parameters. Carrying out target segmentation on the remote sensing image by using the trained target segmentation model to obtain a segmented image;
and 5: and modifying and planning the segmentation image by using a drawing tool, and generating a planned scene effect image by using a planning effect generation model.
Through the steps, the remote sensing image scene planning can be quickly, efficiently and conveniently realized.
The remote sensing image scene planning device provided by the invention is described below, and the remote sensing image scene planning device described below and the remote sensing image scene planning method described above can be referred to correspondingly.
Fig. 5 is a schematic structural diagram of a remote sensing image scene planning device provided by the present invention, and as shown in fig. 5, the device includes:
the segmentation module 510 is configured to perform target segmentation on the remote sensing image of the scene to be planned to obtain a segmented image;
a planning module 520, configured to perform planning modification based on the segmented image to obtain a modified segmented image;
a generating module 530, configured to input the modified segmented image to the planning effect generating model, and obtain a scene effect image output by the planning effect generating model;
the planning effect generation model is obtained by training a conditional generative adversarial network based on sample remote sensing images and sample segmentation images.
According to the device provided by the embodiment of the invention, target segmentation of the remote sensing image of the scene to be planned yields a segmented image on which scene planning can be carried out conveniently and quickly, and the planning effect generation model then generates the planned scene effect image, so that a realistic planning result is obtained quickly and efficiently.
Based on any of the above embodiments, the segmentation module 510 includes:
the segmentation unit is used for inputting the remote sensing image into the target segmentation model to obtain a segmentation image;
the target segmentation model is obtained by training a full convolution neural network based on a sample remote sensing image.
Based on any one of the embodiments, the target segmentation model comprises a backbone network, a multi-depth convolution module and an adaptive feature integration module;
the dividing unit is used for:
inputting the remote sensing image into a backbone network to obtain shallow layer characteristics;
inputting the shallow features into a multi-depth convolution module to obtain deep features;
and fusing the shallow features and the deep features based on the adaptive feature integration module, and obtaining the segmented image based on the fusion result.
Based on any of the above embodiments, the multi-depth convolution module includes a plurality of multi-depth convolution branches and a global pooling branch; each multi-depth convolution branch comprises convolution layers with different layers, and the convolution kernels of the convolution layers have the same size; the global pooling branch includes a global average pooling layer and a convolutional layer.
Based on any of the above embodiments, the planning effect generation model is obtained by training based on the following method:
alternately training the generator and the discriminator in the conditional generative adversarial network based on the sample segmentation images, and taking the trained generator as the planning effect generation model;
the input of the discriminator comprises a sample segmentation image and a sample remote sensing image or a sample effect image corresponding to the sample segmentation image, wherein the sample effect image is obtained by inputting the sample segmentation image to the generator.
Based on any of the above embodiments, the planning module 520 is configured to:
and modifying, adding or deleting color blocks on the segmentation images based on the preset corresponding relation between the target in the scene to be planned and the color blocks in the segmentation images.
Based on any of the above embodiments, the generator in the conditional generative adversarial network includes an encoder, and the encoder is a convolutional network with 8 downsamplings.
Fig. 6 illustrates a schematic diagram of the physical structure of an electronic device, which, as shown in Fig. 6, may include: a processor (processor) 610, a communications interface (Communications Interface) 620, a memory (memory) 630 and a communication bus 640, wherein the processor 610, the communication interface 620 and the memory 630 communicate with each other via the communication bus 640. The processor 610 may invoke logic instructions in the memory 630 to perform a remote sensing image scene planning method, the method comprising: performing target segmentation on the remote sensing image of a scene to be planned to obtain a segmented image; performing planning modification based on the segmented image to obtain a modified segmented image; and inputting the modified segmented image into a planning effect generation model to obtain the scene effect image output by the planning effect generation model; wherein the planning effect generation model is obtained by training a conditional generative adversarial network based on sample remote sensing images and sample segmentation images.
In addition, the logic instructions in the memory 630 may be implemented in the form of software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the remote sensing image scene planning method provided above, the method comprising: performing target segmentation on the remote sensing image of a scene to be planned to obtain a segmented image; performing planning modification based on the segmented image to obtain a modified segmented image; and inputting the modified segmented image into a planning effect generation model to obtain the scene effect image output by the planning effect generation model; wherein the planning effect generation model is obtained by training a conditional generative adversarial network based on sample remote sensing images and sample segmentation images.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the remote sensing image scene planning method provided above, the method comprising: performing target segmentation on the remote sensing image of a scene to be planned to obtain a segmented image; performing planning modification based on the segmented image to obtain a modified segmented image; and inputting the modified segmented image into a planning effect generation model to obtain the scene effect image output by the planning effect generation model; wherein the planning effect generation model is obtained by training a conditional generative adversarial network based on sample remote sensing images and sample segmentation images.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A remote sensing image scene planning method is characterized by comprising the following steps:
carrying out target segmentation on the remote sensing image of the scene to be planned to obtain a segmented image;
planning and modifying based on the segmentation image to obtain a modified segmentation image;
inputting the modified segmentation image into a planning effect generation model to obtain a scene effect image output by the planning effect generation model;
the planning effect generation model is obtained by training a conditional generative adversarial network based on a sample remote sensing image and a sample segmentation image.
2. The remote sensing image scene planning method of claim 1, wherein the target segmentation of the remote sensing image of the scene to be planned to obtain a segmented image comprises:
inputting the remote sensing image into a target segmentation model to obtain a segmentation image;
the target segmentation model is obtained by training a full convolution neural network based on a sample remote sensing image.
3. The remote sensing image scene planning method of claim 2, wherein the target segmentation model comprises a backbone network, a multi-depth convolution module and an adaptive feature integration module;
inputting the remote sensing image into a target segmentation model to obtain the segmentation image, wherein the step of obtaining the segmentation image comprises the following steps:
inputting the remote sensing image into the backbone network to obtain shallow layer characteristics;
inputting the shallow features to the multi-depth convolution module to obtain deep features;
and fusing the shallow features and the deep features based on the adaptive feature integration module, and obtaining the segmentation image based on a fusion result.
4. The remote sensing image scene planning method of claim 3, wherein the multi-depth convolution module includes a plurality of multi-depth convolution branches and a global pooling branch; each multi-depth convolution branch comprises convolution layers with different layers, and the convolution kernels of the convolution layers have the same size; the global pooling branch includes a global average pooling layer and a convolutional layer.
5. The remote sensing image scene planning method of claim 1, wherein the planning effect generation model is trained based on the following method:
alternately training a generator and a discriminator in the conditional generative adversarial network based on the sample segmentation image, and taking the generator obtained by training as the planning effect generation model;
the input of the discriminator comprises the sample segmentation image and the sample remote sensing image or the sample effect image corresponding to the sample segmentation image, and the sample effect image is obtained by inputting the sample segmentation image to the generator.
6. The remote sensing image scene planning method of any one of claims 1-5, wherein the planning modification based on the segmented image comprises:
and modifying, adding or deleting color blocks on the segmented images based on the preset corresponding relationship between the target in the scene to be planned and the color blocks in the segmented images.
7. The remote sensing image scene planning method of any one of claims 1-5, wherein the generator in the conditional generative adversarial network comprises an encoder, and the encoder is a convolutional network with 8 downsamplings.
8. A remote sensing image scene planning apparatus, comprising:
the segmentation module is used for carrying out target segmentation on the remote sensing image of the scene to be planned to obtain a segmented image;
the planning module is used for planning and modifying based on the segmentation image to obtain a modified segmentation image;
the generating module is used for inputting the modified segmentation image into a planning effect generating model to obtain a scene effect image output by the planning effect generating model;
the planning effect generation model is obtained by training a conditional generative adversarial network based on a sample remote sensing image and a sample segmentation image.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method for remote sensing image scene planning according to any of claims 1 to 7 are implemented by the processor when executing the program.
10. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for planning a scene in remote sensing images according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110875474.0A CN113673369A (en) | 2021-07-30 | 2021-07-30 | Remote sensing image scene planning method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113673369A true CN113673369A (en) | 2021-11-19 |
Family
ID=78540931
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110875474.0A Pending CN113673369A (en) | 2021-07-30 | 2021-07-30 | Remote sensing image scene planning method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113673369A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116030363A (en) * | 2023-02-20 | 2023-04-28 | 北京数慧时空信息技术有限公司 | Remote sensing image class activation mapping chart optimizing method |
CN116311023A (en) * | 2022-12-27 | 2023-06-23 | 广东长盈科技股份有限公司 | Equipment inspection method and system based on 5G communication and virtual reality |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108537801A (en) * | 2018-03-29 | 2018-09-14 | 山东大学 | Based on the retinal angiomatous image partition method for generating confrontation network |
CN108830209A (en) * | 2018-06-08 | 2018-11-16 | 西安电子科技大学 | Based on the remote sensing images method for extracting roads for generating confrontation network |
CN109784283A (en) * | 2019-01-21 | 2019-05-21 | 陕西师范大学 | Based on the Remote Sensing Target extracting method under scene Recognition task |
CN111080645A (en) * | 2019-11-12 | 2020-04-28 | 中国矿业大学 | Remote sensing image semi-supervised semantic segmentation method based on generating type countermeasure network |
WO2020143323A1 (en) * | 2019-01-08 | 2020-07-16 | 平安科技(深圳)有限公司 | Remote sensing image segmentation method and device, and storage medium and server |
CN111738908A (en) * | 2020-06-11 | 2020-10-02 | 山东大学 | Scene conversion method and system for generating countermeasure network by combining instance segmentation and circulation |
WO2021097845A1 (en) * | 2019-11-22 | 2021-05-27 | 驭势(上海)汽车科技有限公司 | Simulation scene image generation method, electronic device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||