CN113052121A - Multi-level network map intelligent generation method based on remote sensing image


Info

Publication number
CN113052121A
CN113052121A (application CN202110377329.XA; granted publication CN113052121B)
Authority
CN
China
Prior art keywords
remote sensing
network map
map
sensing image
level
Prior art date
Legal status
Granted
Application number
CN202110377329.XA
Other languages
Chinese (zh)
Other versions
CN113052121B (en)
Inventor
付莹 (Fu Ying)
方政 (Fang Zheng)
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology (BIT)
Priority to CN202110377329.XA
Publication of CN113052121A
Application granted
Publication of CN113052121B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/13 Satellite images
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 29/00 Maps; Plans; Charts; Diagrams, e.g. route diagram
    • G09B 29/003 Maps


Abstract

The invention discloses an intelligent multi-level network map generation method based on remote sensing images, belonging to the technical field of computer vision. The method uses a preliminary generation algorithm model to generate a preliminary network map; this model expands the level number into a normalized, image-sized identifier that helps it learn the drawing characteristics of maps at different levels, so that it can accurately generate network maps with the drawing characteristics proper to each level from remote sensing images of similar content at different levels, producing a multi-level network map with detailed and reasonable inter-level differences. A map improvement algorithm model then generates the refined network map; this model uses the refined network map of a higher level to assist the generation of the lower-level map, so that it learns the consistency among maps of different levels and guarantees consistency among the network maps of corresponding regions across levels. With this method, network map generation is completed automatically from the input images using only aerial or satellite remote sensing imagery, without manually collecting ground vector data.

Description

Multi-level network map intelligent generation method based on remote sensing image
Technical Field
The invention relates to an intelligent generation method for multi-level network maps, and in particular to an intelligent method for generating multi-level network maps from remote sensing images, belonging to the technical field of computer vision.
Background
Remote sensing is a non-contact, long-range detection technology; the films or photographs it produces, which record the intensity of the electromagnetic radiation of various ground objects, are called remote sensing images. Remote sensing images can be collected by high-altitude platforms such as unmanned aerial vehicles, aircraft and satellites; collection is fast to update, relatively low in cost, and unaffected by ground conditions, giving the technology good adaptability. A remote sensing image contains a large amount of ground electromagnetic information and reflects the distribution of ground objects such as roads, water areas and buildings, from which the information required for map drawing can be extracted.
The multi-level network map is a network map that adopts a multi-resolution hierarchical model; it is flexible to use and convenient to transmit, and is adopted by more and more network map services. A multi-level network map generally uses a tile-map pyramid structure: from the bottom layer (layer k) to the top layer (layer 0) of the tile pyramid the resolution decreases, while the geographic range represented stays unchanged. Specifically, the ground-distance-to-pixel ratio of the layer-k tile map is half that of layer k−1, so layer k has a larger scale and can display finer content. In a multi-level network map, the maps of different levels contain consistent geographic elements but show them at different levels of detail, so the map exhibits both consistency and difference between levels.
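As an illustration of this halving relationship, the following Python sketch computes the distance-to-pixel ratio per level, assuming 256 × 256 tiles and a Web-Mercator-style extent (neither constant is fixed by the method):

```python
# A minimal sketch of tile-pyramid geometry; the constants are common
# web-map conventions and are assumptions, not values from the patent.
EXTENT_M = 40_075_016.686  # equatorial circumference in meters (assumption)
TILE_PX = 256              # tile side length in pixels (assumption)

def meters_per_pixel(level: int) -> float:
    """Ground distance represented by one pixel at a given pyramid level."""
    tiles_per_side = 2 ** level  # layer 0 covers the whole extent in one tile
    return EXTENT_M / (tiles_per_side * TILE_PX)

for k in range(4):
    # each step toward the bottom of the pyramid halves the ratio
    print(k, meters_per_pixel(k), meters_per_pixel(k) / meters_per_pixel(k + 1))
```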
In a traditional multi-level network map, the map images of every layer are rendered from map vector data according to fixed drawing standards, and the vector data are usually collected manually in the field, which greatly limits efficiency and raises cost. Given that remote sensing images are fast to acquire and cheap to collect, automatically generating the network map from remote sensing images becomes a feasible solution. Existing methods typically treat this task either as image semantic segmentation or as image translation. Image semantic segmentation aims to classify each pixel of an image by the class of the object it belongs to; with such techniques the pixels of a remote sensing image can be classified by ground object class and marked with different colors to form a network map. Image translation generally refers to converting an image of one style into an image of another style while preserving its structural information; with such techniques a remote-sensing-style image can be converted into a network-map-style image.
However, these methods consider only the generation of a single-level map. If they are applied level by level to generate several different levels, the consistency and difference between the levels cannot be captured, and it is therefore difficult to generate a multi-level network map whose information is expressed accurately and consistently and whose visual quality is good.
Disclosure of Invention
The invention aims to provide a multi-level network map generation method based on remote sensing images, motivated by the requirements of existing network map services for multi-level network maps, and addressing two technical problems: the traditional multi-level network map production process is costly, inefficient and hard to update in real time in emergencies; and existing intelligent network map generation methods consider only single-level maps and cannot capture the consistency and difference among maps of different levels. The method offers strong adaptability, high generation speed, accurate results, good visual quality, and good consistency and difference between levels.
The innovations of the invention are as follows. The method consists of a training stage and a use stage. In the training stage, the pixel color values of the network maps in the paired remote sensing image-network map training data set are first clustered, and the ground object class mask corresponding to each network map is derived. Then a preliminary generation algorithm model is trained on a mixture of the remote sensing images of all levels, the corresponding level information, the corresponding real ground object class masks and the corresponding real network maps. All remote sensing images in the training set, together with their level numbers, are then input into the trained preliminary generation algorithm model to generate and store the preliminary network map of each level. Finally, a map improvement algorithm model is trained using, in order from high level to low level, the remote sensing images, the preliminary network maps, the real ground object class masks and the real network maps.
In the use stage, if the acquired remote sensing images form a single level, they are first expanded into multiple levels. The remote sensing images of each level and the corresponding level information are then input in turn into the trained preliminary generation algorithm model to generate and store the corresponding preliminary network maps. Next, based on the remote sensing images of all levels and the preliminary network maps, the trained map improvement algorithm model generates the refined network map of each level in turn from high to low. Finally, the generated refined network maps of all levels are stitched by number into the multi-level network map.
The technical scheme adopted by the invention is as follows:
a multi-level network map generation method based on remote sensing images comprises the following steps:
step 1: and (5) a training stage.
Specifically, the method comprises the following steps:
step 1.1: and clustering the pixel color values of the network map in the training data set of the remote sensing image-network map pairing, and solving the ground feature type mask corresponding to the network map.
The specific method comprises the following steps:
First, use a clustering algorithm to cluster all pixel points of the real network map data in the training set, obtain the class number of each pixel, and associate each class number with the ground object semantic class it expresses.
Then, restore each pixel's semantic class to the pixel's original position in the network map, generating real ground object class masks in one-to-one correspondence with the real network maps, and save them.
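A minimal sketch of this step, assuming scikit-learn's KMeans as the clustering algorithm (the method does not prescribe a particular one), might look as follows:

```python
# Cluster the pixel colors of a rendered network map into ground object
# classes; the algorithm and the cluster count are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def map_to_class_mask(map_rgb: np.ndarray, n_classes: int) -> np.ndarray:
    """map_rgb: (H, W, 3) RGB network map; returns an (H, W) integer mask."""
    h, w, _ = map_rgb.shape
    pixels = map_rgb.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(pixels)
    # each cluster id must still be mapped by hand to its semantic class
    # (road, water, building, ...); the raw cluster ids are kept here
    return labels.reshape(h, w)
```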
Step 1.2: train the preliminary generation algorithm model on a mixture of the remote sensing images of all levels, the corresponding level information, the corresponding real ground object class masks and the corresponding real network maps.
The specific method comprises the following steps:
Randomly select a remote sensing image from the training data set, normalize its level number by dividing it by the total number of levels K, and input the image together with the normalized level number into the preliminary generation algorithm model. The model outputs a prediction of the ground object class mask and a prediction of the network map.
The predicted ground object class mask has the same size as the input remote sensing image; the solution space of each pixel is the integers in [0, n−1], where each integer represents one ground object class and n is the total number of ground object classes. The predicted network map is an RGB picture of the same size as the input remote sensing image. The predicted mask and the predicted network map are compared with the real ground object class mask and the real network map, respectively, the loss function is computed, the loss value is backpropagated, and the parameters of the preliminary generation algorithm model are updated. This process is repeated until the set number of iterations is reached, and the network structure and model parameters are saved, yielding the trained structure and parameters of the preliminary generation algorithm model.
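One such training iteration can be sketched in PyTorch as follows, assuming the model returns mask logits and an RGB map and using a simple L1 term for the image-to-image part (the adversarial terms introduced below are omitted); all names are illustrative:

```python
# One illustrative training step of the preliminary generation model.
import torch
import torch.nn.functional as F

def train_step(model, optimizer, image, level_k, K, true_mask, true_map):
    # image: (B, C, H, W); true_mask: (B, H, W) long; true_map: (B, 3, H, W)
    k_norm = torch.tensor(level_k / K, dtype=torch.float32)  # normalized level
    mask_logits, pred_map = model(image, k_norm)
    # segmentation loss on the ground object class mask plus a simple
    # reconstruction loss on the generated map (adversarial terms omitted)
    loss = F.cross_entropy(mask_logits, true_mask) + F.l1_loss(pred_map, true_map)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```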
The preliminary generation algorithm model comprises two modules: a first semantic extraction module and a first map drawing module.
When a remote sensing image is input into the preliminary generation algorithm model, it first passes through the first semantic extraction module, which is a fully convolutional network. The first semantic extraction module can be optimized with a cross-entropy loss function; the minimized cross-entropy loss is:
min_θ L_CE(θ) = −∑_{i=1}^{C} s_i log F_θ(x)_i    (1)

where θ is the model parameter of the first semantic extraction module F, whose output is the segmentation result together with the feature map preceding it; x ∈ R^{N×H×W} is the input remote sensing image, with N, H and W the number of image channels, the height and the width, respectively; s ∈ R^{C×H×W} is the semantic segmentation ground truth, with C, H and W its number of channels, height and width; s_i is the ground truth for the i-th class of interest object, a value of 1 marking a pixel of that class and 0 a pixel that is not; and F_θ(x)_i is the prediction confidence of the first semantic extraction module for the i-th class of interest object.
In addition, depending on the specific details of the model and the training data set, other loss functions such as the Focal loss or the Lovász loss can be chosen.
Then the first map drawing module simultaneously receives the output of the first semantic extraction module (the mask and the feature map), the original remote sensing image and the corresponding level number, and generates a preliminary network map in RGB format. The level number is normalized by dividing it by the total number of levels K and is then broadcast into a tensor of size 1 × H × W, where H and W are the height and width of the input remote sensing image; this tensor carries the level information.
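A minimal sketch of this level-information expansion, assuming a channels-first tensor layout (the patent fixes only the 1 × H × W size):

```python
# Broadcast the normalized level number into a 1 x H x W plane and
# concatenate it with the image channels (the layout is an assumption).
import torch

def level_plane(k: int, K: int, h: int, w: int) -> torch.Tensor:
    return torch.full((1, h, w), k / K)  # normalized by the total level count

image = torch.rand(3, 256, 256)          # one remote sensing tile (C, H, W)
conditioned = torch.cat([image, level_plane(5, 18, 256, 256)], dim=0)  # 4 x H x W
```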
The first map drawing module is a conditional generative adversarial network that performs supervised learning against the ground-truth results of the target domain. It comprises a generator and a discriminator that are trained adversarially: the generator synthesizes data under a given condition, and the discriminator distinguishes the generated data from real data. The generator tries to produce data as close to real as possible, and the discriminator accordingly tries to distinguish real data from generated data perfectly. In this process the discriminator acts as a loss function learned from the image data and guides the generator in producing images. The base loss function used by this module is:
min_φ max_ψ L_cGAN(φ, ψ) = E_{x,y∼p_data}[log D_ψ(x, y)] + E_{x∼p_data}[log(1 − D_ψ(x, G_φ(x, F_θ(x), k)))]    (2)

where φ and ψ are the parameters of the generator G and the discriminator D, respectively; x, y ∈ R^{C×H×W}, x being the remote sensing image and y the real network map, with C, H and W the number of image channels, the height and the width; p_data(x) and p_data(y) are the data distributions of the remote sensing images and the real network maps; k is the zoom level number of the remote sensing image, which is expanded into level information after being input to the model; F_θ(x) is the mask and feature map output by the first semantic extraction module; and E denotes the mathematical expectation.
In addition, depending on the specific details of the model and the training data set, other loss functions such as a reconstruction loss, a feature matching loss, a perceptual loss or a multi-scale discriminator loss can be chosen.
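As an illustration, the adversarial part of Eq. (2) can be written in its familiar binary-cross-entropy form roughly as follows (G and D stand for the first map drawing module's generator and discriminator; all names are illustrative):

```python
# Illustrative cGAN losses corresponding to Eq. (2); D is conditioned on the
# remote sensing image x, and y_fake = G(x, F_theta(x), k).
import torch
import torch.nn.functional as F

def discriminator_loss(D, x, y_real, y_fake):
    real_logits = D(x, y_real)
    fake_logits = D(x, y_fake.detach())  # do not backprop into the generator
    return (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
            + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))

def generator_loss(D, x, y_fake):
    fake_logits = D(x, y_fake)
    return F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
```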
Step 1.3: input all remote sensing images in the training set, together with their corresponding level numbers, into the trained preliminary generation algorithm model, and generate the preliminary network map of each level for storage and later use.
The specific method comprises the following steps:
Create a preliminary generation algorithm model from the structure and parameters saved in step 1.2, input the remote sensing images and the corresponding level information into the model, and save the preliminary network maps it outputs. The preliminary network map is generated by the following formula:
y′ = G_φ(x, F_θ(x), k)    (3)

where y′ is the preliminary network map; x is the remote sensing image; k is the zoom level number of the remote sensing image, which is expanded into level information after being input to the model; F_θ(x) is the mask and feature map output by the first semantic extraction module; G denotes the generator; and φ denotes the generator parameters.
Step 1.4: train the map improvement algorithm model using, in order from high level to low level, the remote sensing images, the preliminary network maps, the real ground object class masks and the real network maps.
The specific method comprises the following steps:
For a multi-level network map training data set containing K levels, the levels are numbered with the integers in {0, 1, …, K−1}. First, the preliminary network map of layer K−1 is used directly as the refined network map of that layer. Then, with k = K−1, a layer k−1 remote sensing image is randomly selected and input into the map improvement algorithm model together with its corresponding layer k−1 preliminary network map and the 4 refined network maps of layer k covering the same area; the model generates a layer k−1 ground object class mask prediction and a layer k−1 network map prediction, which are compared with the real ground object class mask and the real network map, respectively, to compute the loss function and update the parameters of the map improvement algorithm model.
Repeat the previous step until the set number of iterations is reached, then use the current map improvement algorithm model to generate and store the corresponding layer k−1 refined network maps for all layer k−1 remote sensing images. Next, let k take every integer in {1, 2, …, K−2} from large to small, repeating the above training procedure for each value of k; this completes the training of the map improvement algorithm model.
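A minimal sketch of how the 4 layer-k refined tiles covering a layer k−1 tile can be stitched and downsampled into the model's conditioning input (bilinear resampling is an assumption; the patent only requires matching the remote sensing image size):

```python
# Stitch four (C, H, W) layer-k refined tiles 2x2 and downsample the result
# back to (C, H, W); the resampling method is an illustrative choice.
import torch
import torch.nn.functional as F

def stitch_and_downsample(tl, tr, bl, br):
    top = torch.cat([tl, tr], dim=2)        # concatenate along width
    bottom = torch.cat([bl, br], dim=2)
    full = torch.cat([top, bottom], dim=1)  # (C, 2H, 2W)
    return F.interpolate(full.unsqueeze(0), scale_factor=0.5,
                         mode="bilinear", align_corners=False).squeeze(0)
```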
The map improvement algorithm model comprises a second semantic extraction module and a second map drawing module.
When a layer k−1 remote sensing image is input into the map improvement algorithm model, it first passes through the second semantic extraction module, which is a fully convolutional network. The second semantic extraction module can be optimized with a cross-entropy loss function; the minimized cross-entropy loss is:
min_{θ′} L_CE(θ′) = −∑_{i=1}^{C} s_i log F′_{θ′}(x)_i    (4)

where θ′ is the model parameter of the second semantic extraction module F′, whose output is the segmentation result together with the feature map preceding it; x ∈ R^{N×H×W} is the input remote sensing image, with N, H and W the number of image channels, the height and the width, respectively; s ∈ R^{C×H×W} is the semantic segmentation ground truth, with C, H and W its number of channels, height and width; s_i is the ground truth for the i-th class of interest object, a value of 1 marking a pixel of that class and 0 a pixel that is not; and F′_{θ′}(x)_i is the prediction confidence of the second semantic extraction module for the i-th class of interest object.
In addition, depending on the specific details of the model and the training data set, other loss functions such as the Focal loss or the Lovász loss can be chosen.
Then the second map drawing module simultaneously receives the output of the second semantic extraction module (the mask and the feature map), the layer k−1 preliminary network map corresponding to the remote sensing image and the 4 refined network maps of layer k, and generates the refined network map corresponding to the remote sensing image.
The second map drawing module is a conditional generative adversarial network that performs supervised learning against the ground-truth results of the target domain. It comprises a generator and a discriminator that are trained adversarially: the generator synthesizes data under a given condition, and the discriminator distinguishes the generated data from real data. The generator tries to produce data as close to real as possible, and the discriminator accordingly tries to distinguish real data from generated data perfectly. In this process the discriminator acts as a loss function learned from the image data and guides the generator in producing images. Through this minimax game between the generator and the discriminator, the generator eventually produces generated data that meets the quality requirements. The base loss function used by this module is:
min_{φ′} max_{ψ′} L_cGAN(φ′, ψ′) = E[log D_{ψ′}(x_{k−1}, y_{k−1})] + E[log(1 − D_{ψ′}(x_{k−1}, G_{φ′}(x_{k−1}, F′_{θ′}(x_{k−1}), y′_{k−1}, ŷ_k^↓)))]    (5)

where φ′ and ψ′ are the parameters of the generator G and the discriminator D, respectively; x, y, y′ ∈ R^{C×H×W}, x being the remote sensing image, y the real network map and y′ the preliminary network map, with C, H and W the number of image channels, the height and the width; the subscripts k−1 and k denote the zoom level of the map; p_data(x), p_data(y) and p_data(y′) are the data distributions of the remote sensing images, the real network maps and the preliminary network maps; ŷ_k^↓ denotes the image obtained by stitching the refined network maps of layer k and downsampling them to the same size as the remote sensing image, and ŷ_k^↓, y′_{k−1}, y_{k−1} and x_{k−1} all represent the same geographical area in position and extent; F′_{θ′}(x_{k−1}) is the mask and feature map output by the second semantic extraction module; and E denotes the mathematical expectation.
In addition, depending on the specific details of the model and the training data set, other loss functions such as a reconstruction loss, a feature matching loss, a perceptual loss or a multi-scale discriminator loss can be chosen.
Step 2: use stage.
Specifically, the method comprises the following steps:
step 2.1: if the collected remote sensing image is a single-level image, the remote sensing image is expanded into a multi-level image.
The specific method comprises the following steps:
Treat the collected single-level remote sensing images as layer k and number all tiles. Stitch each group of adjacent 2 × 2 remote sensing tiles and resample the result, for example by interpolation, down to the size of a single original tile; processing all remote sensing tiles of this layer in this way yields the layer k−1 remote sensing images.
Repeat this step to iteratively generate layers k−2, k−3 and so on, until the number of images in a layer is small enough (for example, no more than 20 tiles) or the layer has only one row or one column of tiles. After all layers have been generated, take the lowest-resolution layer as layer 0 and renumber the layers.
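A minimal sketch of this expansion, reusing stitch_and_downsample from the earlier sketch and assuming an even number of tile rows and columns at each step:

```python
# Build coarser pyramid levels from a single-level tile grid.
def build_pyramid(tiles, max_tiles=20):
    """tiles: dict mapping (row, col) -> (C, H, W) tensor of the finest level.
    Returns the list of levels, finest first; the last one becomes layer 0."""
    levels = [tiles]
    while len(tiles) > max_tiles:
        rows = 1 + max(r for r, _ in tiles)
        cols = 1 + max(c for _, c in tiles)
        if rows == 1 or cols == 1:
            break                            # a single row or column: stop
        tiles = {(r, c): stitch_and_downsample(
                     tiles[(2 * r, 2 * c)], tiles[(2 * r, 2 * c + 1)],
                     tiles[(2 * r + 1, 2 * c)], tiles[(2 * r + 1, 2 * c + 1)])
                 for r in range(rows // 2) for c in range(cols // 2)}
        levels.append(tiles)
    return levels
```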
Step 2.2: input the remote sensing images of each level and the corresponding level information in turn into the trained preliminary generation algorithm model, and generate and store the corresponding preliminary network maps.
The specific method comprises the following steps:
Create a network model from the structure and parameters of the preliminary generation algorithm model saved in the training stage, and input the remote sensing image and the level information into the model. The model predicts through the first semantic extraction module and the first map drawing module in turn, and the preliminary network map finally generated by the first map drawing module is saved automatically; the network map is an RGB image whose size is consistent with that of the input remote sensing tile.
The preliminary network map generation formula is as follows:
y′ = G_φ(x, F_θ(x), k)    (6)

where y′ is the preliminary network map; x is the remote sensing image; k is the zoom level number of the remote sensing image, which is expanded into level information after being input to the model; F_θ(x) is the mask and feature map output by the first semantic extraction module; G denotes the generator module; and φ denotes the generator module parameters.
Step 2.3: based on the remote sensing images of all levels and the preliminary network maps, use the trained map improvement algorithm model to generate the refined network map of each level in turn from high to low.
For a multi-level remote sensing image data set containing K levels, the levels are numbered with the integers in {0, 1, …, K−1}. First, the preliminary network map of layer K−1 is used as the refined network map of that layer. Then a network model is created from the structure and parameters of the map improvement algorithm model saved in the training stage; with k = K−1, the layer k−1 remote sensing images, the layer k−1 preliminary network maps and the layer k refined network maps are respectively input into the model, and the corresponding layer k−1 refined network maps are generated and stored. Next, k takes every integer in {1, 2, …, K−2} from large to small; after the above process is repeated for each value of k, the generation of the refined network maps of all levels is complete.
The generation formula of the refined network map is as follows:
ŷ_{k−1} = G_{φ′}(x_{k−1}, F′_{θ′}(x_{k−1}), y′_{k−1}, ŷ_k^↓)    (7)

where φ′ is the parameter of the generator module G; x, y, y′ ∈ R^{C×H×W}, x being the remote sensing image, y the real network map and y′ the preliminary network map, with C, H and W the number of image channels, the height and the width; the subscripts k and k−1 denote the zoom level of the map; ŷ_{k−1} is the layer k−1 refined network map produced by the generator; ŷ_k^↓ denotes the image obtained by stitching the refined network maps of layer k and downsampling them to the same size as the remote sensing image, with ŷ_k^↓, y′_{k−1} and x_{k−1} representing the same area in position and extent; and F′_{θ′}(x_{k−1}) is the mask and feature map output by the second semantic extraction module.
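The coarse-to-fine order of step 2.3 can be sketched as follows (refine_tile stands for the trained map improvement model; all names are illustrative, and stitch_and_downsample is reused from the earlier sketch):

```python
# Generate refined maps level by level, from the top layer K-1 downward.
def generate_refined(images, preliminary, K, refine_tile):
    """images[k][(r, c)] and preliminary[k][(r, c)] hold the layer-k tiles."""
    refined = {K - 1: preliminary[K - 1]}  # top layer: preliminary map is final
    for k in range(K - 1, 0, -1):          # k = K-1, K-2, ..., 1
        refined[k - 1] = {}
        for (r, c), x in images[k - 1].items():
            # condition on the 4 refined layer-k tiles covering this tile
            cond = stitch_and_downsample(
                refined[k][(2 * r, 2 * c)], refined[k][(2 * r, 2 * c + 1)],
                refined[k][(2 * r + 1, 2 * c)], refined[k][(2 * r + 1, 2 * c + 1)])
            refined[k - 1][(r, c)] = refine_tile(x, preliminary[k - 1][(r, c)], cond)
    return refined
```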
Step 2.4: after each layer of the network map has been generated tile by tile, stitch the generated tiles according to their numbers to obtain the complete multi-level network map.
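Stitching one layer from its numbered tiles can be sketched as:

```python
# Assemble a layer's (row, col)-numbered tiles into one image.
import torch

def assemble_layer(tiles):
    rows = 1 + max(r for r, _ in tiles)
    cols = 1 + max(c for _, c in tiles)
    row_images = [torch.cat([tiles[(r, c)] for c in range(cols)], dim=2)
                  for r in range(rows)]
    return torch.cat(row_images, dim=1)  # (C, rows * H, cols * W)
```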
The method can generate a multi-level network map at any scale.
Advantageous effects
Compared with the prior art, the method of the invention has the following advantages:
1. The method uses a preliminary generation algorithm model to generate the preliminary network map. The model expands the level information into a normalized, image-sized identifier that helps it learn the drawing characteristics of maps at different levels, so that it can accurately generate network maps with the drawing characteristics proper to each level from remote sensing images of similar content at different levels, producing a multi-level network map with detailed and reasonable inter-level differences.
2. The method uses a map improvement algorithm model to generate the refined network map. The model uses the refined network map of a higher level to assist the generation of the lower-level map, so that it learns the consistency among maps of different levels and guarantees consistency among the network maps of corresponding regions across levels.
3. The method reduces the cost of network map generation. Compared with traditional multi-level network map production, it requires no manual collection of ground vector data; using only aerial or satellite remote sensing imagery, the generation of the network map is completed automatically from the input images.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of multi-level network map generation by the method of the present invention.
FIG. 3 is a schematic diagram of the internal details of multi-level network map generation by the core algorithm models of the method of the present invention.
Detailed Description
For a better understanding of the objects and advantages of the present invention, reference should be made to the following detailed description taken in conjunction with the accompanying drawings and examples.
Examples
A multi-level network map generation method based on remote sensing images comprises two stages: training and use.
Step 1: training stage.
In the training stage: cluster the pixel color values of the network maps in the paired remote sensing image-network map training data set and derive the ground object class mask corresponding to each network map; train a preliminary generation algorithm model on a mixture of the remote sensing images of all levels, the corresponding level information, the corresponding real ground object class masks and the corresponding real network maps; input all remote sensing images in the training set, together with their level numbers, into the trained preliminary generation algorithm model to generate and store the preliminary network map of each level; and train the map improvement algorithm model using, in order from high level to low level, the remote sensing images, the preliminary network maps, the real ground object class masks and the real network maps.
Specifically, step 1 comprises the steps of:
step 1.1: and clustering the pixel color values of the network map in the training data set of the remote sensing image-network map pairing, and solving the ground feature type mask corresponding to the network map.
The specific method comprises the following steps:
Cluster all pixel points of the real network map data in the training set with a clustering algorithm, obtain the class number of each pixel, and associate each class number with the ground object semantic class it expresses; then restore each pixel's semantic class to the pixel's original position in the network map, generating real ground object class masks in one-to-one correspondence with the real network maps, and save them.
Step 1.2: train the preliminary generation algorithm model on a mixture of the remote sensing images of all levels, the corresponding level information, the corresponding real ground object class masks and the corresponding real network maps.
The specific method comprises the following steps:
Randomly select a remote sensing image from the training data set, normalize its level number by dividing it by K (the total number of levels), and input the image together with the normalized level information into the preliminary generation algorithm model. The model outputs a prediction of the ground object class mask and a prediction of the network map. The predicted ground object class mask has the same size as the input remote sensing image; the solution space of each pixel is the integers in [0, n−1], where each integer represents one ground object class and n is the total number of ground object classes. The predicted network map is an RGB picture of the same size as the input remote sensing image. The predicted mask and the predicted network map are compared with the real ground object class mask and the real network map, respectively, the loss function is computed, the loss value is backpropagated, and the parameters of the preliminary generation algorithm model are updated. This process is repeated until the set number of iterations is reached, and the network structure and model parameters are saved, yielding the trained structure and parameters of the preliminary generation algorithm model.
The preliminary generation algorithm model comprises two modules: a first semantic extraction module and a first map drawing module.
When a remote sensing image is input into the preliminary generation algorithm model, it first passes through the first semantic extraction module, which is a fully convolutional network. The first semantic extraction module can be optimized with a cross-entropy loss function; the minimized cross-entropy loss is:
min_θ L_CE(θ) = −∑_{i=1}^{C} s_i log F_θ(x)_i    (1)

where θ is the model parameter of the first semantic extraction module F, whose output is the segmentation result together with the feature map preceding it; x ∈ R^{N×H×W} is the input remote sensing image, with N, H and W the number of image channels, the height and the width, respectively; s ∈ R^{C×H×W} is the semantic segmentation ground truth, with C, H and W its number of channels, height and width; s_i is the ground truth for the i-th class of interest object, a value of 1 marking a pixel of that class and 0 a pixel that is not; and F_θ(x)_i is the prediction confidence of the first semantic extraction module for the i-th class of interest object.
In addition, depending on the specific details of the model and the training data set, other loss functions such as the Focal loss or the Lovász loss can be chosen.
Then the first map drawing module simultaneously receives the output of the first semantic extraction module (the mask and the feature map), the original remote sensing image and the corresponding level number, and generates a preliminary network map in RGB format. The level number of the remote sensing image is normalized by dividing it by K (the total number of levels) and is then broadcast into a tensor of size 1 × H × W, where H and W are respectively the height and width of the input remote sensing image; this tensor carries the level information.
The first map drawing module is a conditional generative adversarial network that performs supervised learning against the ground-truth results of the target domain. It comprises a generator and a discriminator that are trained adversarially: the generator synthesizes data under a given condition, and the discriminator distinguishes the generated data from real data. The generator tries to produce data as close to real as possible, and the discriminator accordingly tries to distinguish real data from generated data perfectly. In this process the discriminator acts as a loss function learned from the image data and guides the generator in producing images. The base loss function used by this module is:
min_φ max_ψ L_cGAN(φ, ψ) = E_{x,y∼p_data}[log D_ψ(x, y)] + E_{x∼p_data}[log(1 − D_ψ(x, G_φ(x, F_θ(x), k)))]    (2)

where φ and ψ are the parameters of the generator G and the discriminator D, respectively; x, y ∈ R^{C×H×W}, x being the remote sensing image and y the real network map, with C, H and W the number of image channels, the height and the width; p_data(x) and p_data(y) are the data distributions of the remote sensing images and the real network maps; k is the zoom level number of the remote sensing image, which is expanded into level information after being input to the model; F_θ(x) is the mask and feature map output by the first semantic extraction module; and E denotes the mathematical expectation.
In addition, depending on the specific details of the model and the training data set, other loss functions such as a reconstruction loss, a feature matching loss, a perceptual loss or a multi-scale discriminator loss can be chosen.
Step 1.3: input all remote sensing images in the training set, together with their corresponding level numbers, into the trained preliminary generation algorithm model, and generate the preliminary network map of each level for storage and later use.
The specific method comprises the following steps:
Create a preliminary generation algorithm model from the structure and parameters saved in step 1.2, input the remote sensing images and the corresponding level information into the model, and save the preliminary network maps it outputs. The preliminary network map is generated by the following formula:
y′ = G_φ(x, F_θ(x), k)    (3)

where y′ is the preliminary network map; x is the remote sensing image; k is the zoom level number of the remote sensing image, which is expanded into level information after being input to the model; F_θ(x) is the mask and feature map output by the first semantic extraction module; G denotes the generator; and φ denotes the generator parameters.
Step 1.4: train the map improvement algorithm model using, in order from high level to low level, the remote sensing images, the preliminary network maps, the real ground object class masks and the real network maps.
The specific method comprises the following steps:
For a multi-level network map training data set containing K levels, the levels are numbered with the integers in {0, 1, …, K−1}. First, the preliminary network map of layer K−1 is used as the refined network map of that layer. Then, with k = K−1, a layer k−1 remote sensing image is randomly selected and input into the map improvement algorithm model together with its corresponding layer k−1 preliminary network map and the 4 refined network maps of layer k covering the same area; the model generates a layer k−1 ground object class mask prediction and a layer k−1 network map prediction, which are compared with the real ground object class mask and the real network map respectively to compute the loss function and update the parameters of the map improvement algorithm model. This step is repeated until the set number of iterations is met, and the current map improvement algorithm model is used to generate and store the corresponding layer k−1 refined network maps for all layer k−1 remote sensing images. Then k takes, from high to low, every integer in {1, 2, …, K−2}; after the training process is repeated for each value of k, the training of the map improvement algorithm model is completed.
The map improvement algorithm model includes two modules: the second semantic extraction module and the second map drawing module.
When a layer k−1 remote sensing image is input into the map improvement algorithm model, it first passes through the second semantic extraction module, which is a fully convolutional network. The second semantic extraction module can be optimized with a cross-entropy loss function; the minimized cross-entropy loss is:
min_{θ′} L_CE(θ′) = −∑_{i=1}^{C} s_i log F′_{θ′}(x)_i    (4)

where θ′ is the model parameter of the second semantic extraction module F′, whose output is the segmentation result together with the feature map preceding it; x ∈ R^{N×H×W} is the input remote sensing image, with N, H and W the number of image channels, the height and the width, respectively; s ∈ R^{C×H×W} is the semantic segmentation ground truth, with C, H and W its number of channels, height and width; s_i is the ground truth for the i-th class of interest object, a value of 1 marking a pixel of that class and 0 a pixel that is not; and F′_{θ′}(x)_i is the prediction confidence of the second semantic extraction module for the i-th class of interest object.
In addition, depending on the specific details of the model and the training data set, other loss functions such as the Focal loss or the Lovász loss can be chosen.
Then the second map drawing module simultaneously receives the output of the second semantic extraction module (the mask and the feature map), the layer k−1 preliminary network map corresponding to the remote sensing image and the 4 refined network maps of layer k, and generates the refined network map corresponding to the remote sensing image.
The second map drawing module is a conditional generative adversarial network that performs supervised learning against the ground-truth results of the target domain. It comprises a generator and a discriminator that are trained adversarially: the generator synthesizes data under a given condition, and the discriminator distinguishes the generated data from real data. The generator tries to produce data as close to real as possible, and the discriminator accordingly tries to distinguish real data from generated data perfectly. In this process the discriminator acts as a loss function learned from the image data and guides the generator in producing images. Through this minimax game between the generator and the discriminator, the generator eventually produces generated data that meets the quality requirements. The base loss function used by this module is:
min_{φ′} max_{ψ′} L_cGAN(φ′, ψ′) = E[log D_{ψ′}(x_{k−1}, y_{k−1})] + E[log(1 − D_{ψ′}(x_{k−1}, G_{φ′}(x_{k−1}, F′_{θ′}(x_{k−1}), y′_{k−1}, ŷ_k^↓)))]    (5)

where φ′ and ψ′ are the parameters of the generator G and the discriminator D, respectively; x, y, y′ ∈ R^{C×H×W}, x being the remote sensing image, y the real network map and y′ the preliminary network map, with C, H and W the number of image channels, the height and the width; the subscripts k−1 and k denote the zoom level of the map; p_data(x), p_data(y) and p_data(y′) are the data distributions of the remote sensing images, the real network maps and the preliminary network maps; ŷ_k^↓ denotes the image obtained by stitching the refined network maps of layer k and downsampling them to the same size as the remote sensing image, and ŷ_k^↓, y′_{k−1}, y_{k−1} and x_{k−1} all represent the same geographical area in position and extent; F′_{θ′}(x_{k−1}) is the mask and feature map output by the second semantic extraction module; and E denotes the mathematical expectation.
In addition, depending on the specific details of the model and the training data set, other loss functions such as a reconstruction loss, a feature matching loss, a perceptual loss or a multi-scale discriminator loss can be chosen.
Step 2: use stage.
In the use stage: if the acquired remote sensing images form a single level, first expand them into multiple levels; then input the remote sensing images of each level and the corresponding level information in turn into the trained preliminary generation algorithm model to generate and store the corresponding preliminary network maps; next, based on the remote sensing images of all levels and the preliminary network maps, use the trained map improvement algorithm model to generate the refined network map of each level in turn from high to low; finally, stitch the generated refined network maps of all levels by number into the multi-level network map.
Specifically, step 2 comprises the steps of:
step 2.1: if the acquired remote sensing image is a single-level image, the acquired remote sensing image needs to be expanded into a multi-level image.
The specific method comprises the following steps:
Treat the collected single-level remote sensing images as layer k and number all tiles. Stitch each group of adjacent 2 × 2 remote sensing tiles and resample the result, for example by interpolation, down to the size of a single original tile; processing all remote sensing tiles of this layer in this way yields the layer k−1 remote sensing images. Repeat this step to iteratively generate layers k−2, k−3 and so on, until the number of images in a layer is small enough (for example, fewer than 20 tiles) or the layer has only one row or one column of tiles. After all layers have been generated, take the lowest-resolution layer as layer 0 and renumber the layers.
Step 2.2: input the remote sensing images of each level and the corresponding level information in turn into the trained preliminary generation algorithm model, and generate and store the corresponding preliminary network maps.
The specific method comprises the following steps:
Create a network model from the structure and parameters of the preliminary generation algorithm model saved in the training stage, and input the remote sensing image and the level information into the model. The model predicts through the first semantic extraction module and the first map drawing module in turn, and the preliminary network map finally generated by the first map drawing module is saved automatically; the network map is an RGB image whose size is consistent with that of the input remote sensing tile. The preliminary network map is generated by the following formula:
y′ = G_φ(x, F_θ(x), k)    (6)

where y′ is the preliminary network map; x is the remote sensing image; k is the zoom level number of the remote sensing image, which is expanded into level information after being input to the model; F_θ(x) is the mask and feature map output by the first semantic extraction module; G denotes the generator module; and φ denotes the generator module parameters.
Step 2.3: based on the remote sensing images of each level and the preliminary network maps, use the trained map improvement algorithm model to generate the refined network map of each level in turn from high to low.
For a multi-level remote sensing image data set containing K levels, the levels are numbered with the integers in {0, 1, …, K−1}. First, the preliminary network map of layer K−1 is taken as the refined network map of that layer. Then a network model is created from the structure and parameters of the map improvement algorithm model saved in the training stage; with k = K−1, the layer k−1 remote sensing images, the layer k−1 preliminary network maps and the layer k refined network maps are input into the model, and the corresponding layer k−1 refined network maps are generated and stored. Next, k takes every integer in {1, 2, …, K−2} from high to low; after the above process is repeated for each value of k, the generation of the refined network maps of all levels is complete.
The generation formula of the refined network map is as follows:
ŷ_{k−1} = G_{φ′}(x_{k−1}, F′_{θ′}(x_{k−1}), y′_{k−1}, ŷ_k^↓)    (7)

where φ′ is the parameter of the generator module G; x, y, y′ ∈ R^{C×H×W}, x being the remote sensing image, y the real network map and y′ the preliminary network map, with C, H and W the number of image channels, the height and the width; the subscripts k and k−1 denote the zoom level of the map; ŷ_{k−1} is the layer k−1 refined network map produced by the generator; ŷ_k^↓ denotes the image obtained by stitching the refined network maps of layer k and downsampling them to the same size as the remote sensing image, with ŷ_k^↓, y′_{k−1} and x_{k−1} representing the same area in position and extent; and F′_{θ′}(x_{k−1}) is the mask and feature map output by the second semantic extraction module.
Step 2.4: after each layer of the network map has been generated tile by tile, stitch the generated tiles according to their numbers to obtain the complete multi-level network map. The method can generate a multi-level network map at any scale.

Claims (10)

1. A multi-level network map generation method based on remote sensing images is characterized by comprising the following steps:
step 1: a training stage;
step 1.1: clustering the pixel color values of the network maps in the paired remote sensing image-network map training data set, and deriving the ground object class mask corresponding to each network map;
step 1.2: training a preliminary generation algorithm model by mixedly using the remote sensing images of all levels, the corresponding level information, the corresponding real ground object class masks and the corresponding real network maps, comprising the following steps:
randomly selecting a remote sensing image from a training data set, normalizing the corresponding level number by dividing the corresponding level number by a total level number K, and inputting the remote sensing image and normalized level information into a preliminary generation algorithm model; the model outputs a prediction result of a ground object type mask and a prediction result of a network map;
the size of the prediction result of the ground object class mask is consistent with that of the input remote sensing image, the solution space of each pixel is all integers in [0, n−1], each integer represents one ground object class, and n is the total number of ground object classes; the prediction result of the network map is a network map picture in RGB format, and its size is consistent with that of the input remote sensing image; the ground object class mask prediction result and the network map prediction result output by the model are compared with the real ground object class mask and the real network map respectively, a loss function is calculated, the loss value is backpropagated, and the parameters of the preliminary generation algorithm model are updated; this process is repeated continuously until the set number of iterations is met, and the structure and model parameters of the network are saved, obtaining the trained structure and parameters of the preliminary generation algorithm model;
the preliminary generation algorithm model comprises two modules: a first semantic extraction module and a first map drawing module;
when the remote sensing image is input into the preliminary generation algorithm model, it first passes through the first semantic extraction module, which is a fully convolutional network; then the first map drawing module simultaneously receives the output information of the first semantic extraction module, the original remote sensing image and the corresponding level information, and generates a preliminary network map in RGB format, wherein the level information is a tensor of size 1 × H × W obtained after the level number of the remote sensing image is normalized by dividing it by the total number of levels K, H and W being respectively the height and the width of the input remote sensing image;
the first map drawing module is a conditional generative adversarial network which performs supervised learning by using the ground-truth results of the target domain and comprises a generator and a discriminator that undergo adversarial training: the generator generates synthetic data according to a given condition, and the discriminator distinguishes the generated data of the generator from real data; the generator tries to produce data as close to real as possible, and accordingly the discriminator tries to perfectly distinguish the real data from the generated data; in this process, the discriminator serves as a loss function learned from the image data and guides the generator to generate images;
step 1.3: respectively inputting all remote sensing images and corresponding level numbers in the training set into the trained preliminary generation algorithm model, and generating the preliminary network map of each level for storage and later use;
step 1.4: training the map improvement algorithm model by sequentially using, from high level to low level, the remote sensing images, the preliminary network maps, the real ground object class masks and the real network maps, comprising the following steps:
for a multi-level network map training data set containing K levels, the data levels are numbered with all integers in {0, 1, …, K−1}; firstly, the preliminary network map of layer K−1 is taken as the refined network map of that layer; then, with k = K−1, a layer k−1 remote sensing image is randomly selected and input into the map improvement algorithm model together with its corresponding layer k−1 preliminary network map and the 4 refined network maps of layer k covering the same area, a layer k−1 ground object class mask prediction result and a layer k−1 network map prediction result are generated and compared with the real ground object class mask and the real network map respectively, and a loss function is calculated and the parameters in the map improvement algorithm model are updated according to it;
the previous step is repeated until the set number of iterations is reached, and the current map improvement algorithm model is used to generate and store the corresponding level k−1 refined network maps for all level k−1 remote sensing images; then k takes each integer in {1, 2, …, K−2} in descending order, and once this training process has been repeated for every value of k, training of the map improvement algorithm model is complete;
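Schematically, the coarse-to-fine training order just described could be orchestrated as below; `train_one_level` and `generate_level` are hypothetical placeholders for the per-level training loop and the map-generation pass:

```python
# Placeholder orchestration of the coarse-to-fine training order:
# `preliminary[k]` holds the stored preliminary maps of level k.
def train_refinement(K, preliminary, train_one_level, generate_level):
    # level K-1: its preliminary maps double as its refined maps
    refined = {K - 1: preliminary[K - 1]}
    for k in range(K - 1, 0, -1):           # k = K-1, K-2, ..., 1
        train_one_level(k - 1, refined[k])  # fit level k-1 against level-k tiles
        refined[k - 1] = generate_level(k - 1, refined[k])
    return refined
```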
the map improvement algorithm model comprises a second semantic extraction module and a second map drawing module;
when a level k−1 remote sensing image is input into the map improvement algorithm model, it first passes through the second semantic extraction module, which is a full convolution network;
the second map drawing module then simultaneously receives the output of the second semantic extraction module, the level k−1 preliminary network map corresponding to the remote sensing image and the corresponding level k refined network map, and generates the refined network map corresponding to the remote sensing image;
the second map drawing module is a conditional generative adversarial network that performs supervised learning using ground-truth results of the target domain; it comprises a generator and a discriminator that are trained adversarially: the generator synthesizes data according to given conditions, and the discriminator distinguishes the generated data from real data; the generator tries to produce data as close to reality as possible, while the discriminator tries to perfectly separate real data from generated data; in this process the discriminator acts as a loss function learned from image data that guides the generator in producing images; through this mutual game between generator and discriminator, the generator eventually produces generated data that meets the quality requirement;
step 2: the use stage;
step 2.1: if the acquired remote sensing image is single-level, expanding it into a multi-level image;
step 2.2: sequentially inputting the remote sensing image of each level, together with its level information, into the trained preliminary generation algorithm model, and generating and storing the corresponding preliminary network maps;
step 2.3: based on the remote sensing images of each level and the preliminary network maps, using the trained map improvement algorithm model to generate the refined network map of each level in turn, from high level to low;
step 2.4: stitching the generated refined network maps of all levels, by tile number, into a multi-level network map.
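Putting steps 2.1 to 2.4 together, the use stage can be sketched as the following pipeline; every callable here is a placeholder standing in for the trained models and tile utilities, not an API defined by the patent:

```python
# Sketch of the whole use stage; all callables are placeholders.
def generate_multilevel_map(tiles, k, expand, preliminary_model,
                            refine_model, stitch):
    levels = expand(tiles, k)          # step 2.1: dict {level: images}
    K = len(levels)
    prelim = {j: preliminary_model(levels[j], j) for j in levels}  # step 2.2
    refined = {K - 1: prelim[K - 1]}
    for j in range(K - 1, 0, -1):      # step 2.3: high level to low
        refined[j - 1] = refine_model(levels[j - 1], prelim[j - 1], refined[j])
    return stitch(refined)             # step 2.4: assemble by tile number
```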
2. The method for generating the multi-level network map based on the remote sensing image as claimed in claim 1, wherein, in the training stage, the pixel color values of the network maps in the remote sensing image-network map pair training data set are clustered, and the ground object class mask corresponding to each network map is obtained as follows:
first, all pixel points of the real network map data in the training set are clustered with a clustering algorithm, the class number of each pixel is obtained, and each class number is mapped to the ground object semantic class it expresses;
then, the semantic class of each pixel is restored to the pixel's original position in the network map, generating and storing real ground object class masks in one-to-one correspondence with the real network maps.
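A minimal sketch of this mask construction, assuming k-means as the clustering algorithm (claim 2 does not fix a particular one) and a single H×W×3 RGB tile; in practice the clustering would be fitted over the pixels of all training tiles:

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster pixel colors into n_classes groups, then restore each pixel's
# class number to its original position to form an H x W class mask.
def map_to_mask(map_rgb: np.ndarray, n_classes: int) -> np.ndarray:
    pixels = map_rgb.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(pixels)
    return labels.reshape(map_rgb.shape[:2])
```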
3. The method for generating the multi-level network map based on the remote sensing image as claimed in claim 1, wherein the first semantic extraction module is optimized with a cross entropy loss function, the minimized cross entropy loss function being:
L(θ) = −Σ_i s_i log F_θ(x)_i  (1)

where θ is the model parameter of the first semantic extraction module F, whose output is the segmentation result together with the feature map preceding it; x ∈ R^(N×H×W) is the input remote sensing image, with N, H and W the number of image channels, the height and the width; s ∈ R^(C×H×W) is the semantic segmentation ground truth, with C, H and W its number of channels, height and width; s_i is the ground truth for the i-th class of interest target, taking 1 at positions belonging to that class and 0 elsewhere; F_θ(x)_i is the confidence predicted by the semantic extraction module for the i-th class of interest target.
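Read numerically, equation (1) amounts to the following sketch, assuming a one-hot ground truth s and per-class confidences of shape C×H×W; averaging over pixels is an assumption of this sketch, not stated in the claim:

```python
import torch

# Numerical reading of equation (1): the sum runs over the class
# dimension; eps is a numerical safeguard, not part of the formula.
def cross_entropy_loss(s: torch.Tensor, confidences: torch.Tensor) -> torch.Tensor:
    eps = 1e-8
    return -(s * torch.log(confidences + eps)).sum(dim=0).mean()
```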
4. The method for generating a multi-level network map based on remote sensing images as claimed in claim 1, wherein the first map drawing module uses the base loss function:
min_φ max_ψ E_{y~p_data(y)}[log D_ψ(y)] + E_{x~p_data(x)}[log(1 − D_ψ(G_φ(x, F_θ(x), k)))]  (2)

where φ and ψ are the parameters of the generator G and the discriminator D respectively; x, y ∈ R^(C×H×W), x being the remote sensing image and y the real network map, with C, H and W the number of image channels, height and width; p_data(x) and p_data(y) denote the data distributions of the remote sensing images and the real network maps; k is the zoom level number of the remote sensing image, expanded into level information once input into the model; F_θ(x) is the mask and feature map output by the first semantic extraction module; E denotes the mathematical expectation.
5. The method for generating the multi-level network map based on the remote sensing image as claimed in claim 1, wherein in the training stage, the method for generating the preliminary network map of each level is as follows:
y′ = G_φ(x, F_θ(x), k)  (3)
where y′ is the preliminary network map, x is the remote sensing image, k is the zoom level number of the remote sensing image, expanded into level information once input into the model, F_θ(x) is the mask and feature map output by the first semantic extraction module, G denotes the generator, and φ denotes the generator parameters.
6. The method for generating the multi-level network map based on the remote sensing image as claimed in claim 1, wherein the second semantic extraction module is optimized with a cross entropy loss function, the minimized cross entropy loss function being:
L(θ′) = −Σ_i s_i log F′_θ′(x)_i  (4)

where θ′ is the model parameter of the second semantic extraction module F′, whose output is the segmentation result together with the feature map preceding it; x ∈ R^(N×H×W) is the input remote sensing image, with N, H and W the number of image channels, height and width; s ∈ R^(C×H×W) is the semantic segmentation ground truth, with C, H and W its number of channels, height and width; s_i is the ground truth for the i-th class of interest target, taking 1 at positions belonging to that class and 0 elsewhere; F′_θ′(x)_i is the confidence predicted by the second semantic extraction module for the i-th class of interest target.
7. The method for generating a multi-level network map based on remote sensing images as claimed in claim 1, wherein the second map drawing module uses the base loss function:

min_φ′ max_ψ′ E_{y_{k−1}~p_data(y)}[log D_ψ′(y_{k−1})] + E_{x_{k−1}~p_data(x)}[log(1 − D_ψ′(G_φ′(x_{k−1}, F′_θ′(x_{k−1}), y′_{k−1}, ŷ_k↓)))]  (5)

where φ′ and ψ′ are the parameters of the generator G and the discriminator D respectively; x, y, y′ ∈ R^(C×H×W), x being the remote sensing image, y the real network map and y′ the preliminary network map, with C, H and W the number of image channels, height and width, and the subscripts k−1 and k denoting the zoom level of the map; p_data(x), p_data(y) and p_data(y′) denote the data distributions of the remote sensing images, the real network maps and the preliminary network maps; ŷ_k↓ denotes the image obtained by stitching the refined network maps of level k and downsampling them to the same size as the remote sensing image; y′_{k−1}, y_{k−1} and x_{k−1} represent actual geographical areas of the same location and size; F′_θ′(x_{k−1}) is the mask and feature map output by the second semantic extraction module; E denotes the mathematical expectation.
8. The method for generating the multi-level network map based on the remote sensing image as claimed in claim 1, wherein, in the use stage, the acquired single-level remote sensing image is expanded into a multi-level image as follows:
the acquired single-level remote sensing image is taken as level k and all tiles are numbered; each adjacent 2×2 group of remote sensing image tiles is stitched and downsampled, by interpolation or a similar method, to the size of a single original tile; processing all remote sensing image tiles of the level in this way yields the level k−1 remote sensing image;
this step is repeated to iteratively generate the images of levels k−2, k−3 and so on, until a level's image has only one row or only one column; after the remote sensing image of every level has been generated, the lowest level image is taken as level 0 and the levels are renumbered.
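A minimal sketch of this 2×2 stitch-and-downsample expansion, using Pillow and assuming an even grid of equally sized tiles keyed by (row, column); handling of odd-sized grids is omitted for brevity:

```python
from PIL import Image

# Stitch each adjacent 2x2 group of tiles of one level and downsample the
# result back to a single tile's size, producing the parent level's tiles.
def build_parent_level(tiles: dict) -> dict:
    rows = 1 + max(r for r, _ in tiles)
    cols = 1 + max(c for _, c in tiles)
    w, h = next(iter(tiles.values())).size
    parents = {}
    for r in range(0, rows, 2):
        for c in range(0, cols, 2):
            canvas = Image.new("RGB", (2 * w, 2 * h))
            for dr in (0, 1):
                for dc in (0, 1):
                    canvas.paste(tiles[(r + dr, c + dc)], (dc * w, dr * h))
            # downsample back to a single tile's size by interpolation
            parents[(r // 2, c // 2)] = canvas.resize((w, h), Image.BILINEAR)
    return parents
```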
9. The method for generating a multi-level network map based on remote sensing images as claimed in claim 1, wherein, in the use stage, the corresponding preliminary network map is generated as follows:
y′ = G_φ(x, F_θ(x), k)  (6)
where y′ is the preliminary network map, x is the remote sensing image, k is the zoom level number of the remote sensing image, expanded into level information once input into the model, F_θ(x) is the mask and feature map output by the first semantic extraction module, G denotes the generator module, and φ denotes the generator module parameters.
10. The method for generating the multi-level network map based on the remote sensing image as claimed in claim 1, wherein, in the use stage, based on the multi-level remote sensing images and the preliminary network maps, the trained map improvement algorithm model is used to generate the refined network map of each level in turn, from high level to low, as follows:
for a multi-level remote sensing image data set containing K levels, the levels are numbered with the integers in {0, 1, …, K−1}; first, the preliminary network map of level K−1 is taken as the refined network map of that level; then a network model is built from the map improvement algorithm model structure and parameters saved in the training stage; with k = K−1, the level k−1 remote sensing images, the level k−1 preliminary network maps and the level k refined network maps are input into the model, and the corresponding level k−1 refined network maps are generated and stored; k then takes each integer in {1, 2, …, K−2} in descending order, the above process is repeated for each value of k, and the refined network maps of all levels are generated;
the refined network map is generated according to:

ŷ_{k−1} = G_φ′(x_{k−1}, F′_θ′(x_{k−1}), y′_{k−1}, ŷ_k↓)  (7)

where φ′ is the parameter of the generator module G; x, y, y′ ∈ R^(C×H×W), x being the remote sensing image, y the real network map and y′ the preliminary network map, with C, H and W the number of image channels, height and width, and the subscripts k and k−1 denoting the zoom level of the map; ŷ_{k−1} is the level k−1 refined network map produced by the generator; ŷ_k↓ denotes the image obtained by stitching the refined network maps of level k and downsampling them to the same size as the remote sensing image; y′_{k−1} and x_{k−1} represent actual areas of the same location and size; F′_θ′(x_{k−1}) is the mask and feature map output by the second semantic extraction module.
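The use-stage refinement pass implied by equation (7) can be sketched as follows; the callables and tile containers are placeholders, with `stitch_down` standing in for the stitch-and-downsample step that produces ŷ_k↓:

```python
# Sketch of one refinement pass over a level: each level k-1 tile is
# generated from its remote sensing image, its preliminary map, and the
# stitched-and-downsampled refined tiles of level k.
def refine_level(G, F_sem, x_tiles, prelim_tiles, refined_above, stitch_down):
    refined = {}
    for idx, x in x_tiles.items():
        y_hat_down = stitch_down(refined_above, idx)  # 2x2 level-k tiles -> one image
        refined[idx] = G(x, F_sem(x), prelim_tiles[idx], y_hat_down)
    return refined
```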
CN202110377329.XA 2021-04-08 2021-04-08 Multi-level network map intelligent generation method based on remote sensing image Active CN113052121B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110377329.XA CN113052121B (en) 2021-04-08 2021-04-08 Multi-level network map intelligent generation method based on remote sensing image

Publications (2)

Publication Number Publication Date
CN113052121A true CN113052121A (en) 2021-06-29
CN113052121B CN113052121B (en) 2022-09-06

Family

ID=76519072

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110377329.XA Active CN113052121B (en) 2021-04-08 2021-04-08 Multi-level network map intelligent generation method based on remote sensing image

Country Status (1)

Country Link
CN (1) CN113052121B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170161584A1 (en) * 2015-12-07 2017-06-08 The Climate Corporation Cloud detection on remote sensing imagery
US20200364507A1 (en) * 2019-05-14 2020-11-19 Here Global B.V. Method, apparatus, and system for providing map emedding analytics
CN110516539A (en) * 2019-07-17 2019-11-29 苏州中科天启遥感科技有限公司 Remote sensing image building extracting method, system, storage medium and equipment based on confrontation network
CN111625608A (en) * 2020-04-20 2020-09-04 中国地质大学(武汉) Method and system for generating electronic map according to remote sensing image based on GAN model
CN111626947A (en) * 2020-04-27 2020-09-04 国家电网有限公司 Map vectorization sample enhancement method and system based on generation of countermeasure network
CN112580654A (en) * 2020-12-25 2021-03-30 西南电子技术研究所(中国电子科技集团公司第十研究所) Semantic segmentation method for ground objects of remote sensing image

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
SIJUN DONG et al.: "A Multi-Level Feature Fusion Network for Remote Sensing", https://www.mdpi.com/journal/sensors *
XIAOLIANG QIAN et al.: "Object Detection in Remote Sensing Images Based on Improved Bounding Box Regression and Multi-Level Features Fusion", https://www.mdpi.com/journal/remotesensing *
YU Shuai et al.: "Remote Sensing Image Segmentation Method Based on Multi-Level Channel Attention", Laser & Optoelectronics Progress *
BI Xiaojun et al.: "Super-Resolution Reconstruction of Airborne Remote Sensing Images Based on Generative Adversarial Networks", CAAI Transactions on Intelligent Systems *
WANG Gang et al.: "Remote Sensing Target Detection and Feature Extraction Based on Deep Neural Networks", Radio Engineering *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114418005A (en) * 2022-01-21 2022-04-29 杭州碧游信息技术有限公司 Game map automatic generation method, device, medium and equipment based on GAN network
CN114882139A (en) * 2022-04-12 2022-08-09 北京理工大学 End-to-end intelligent generation method and system for multi-level map
CN114882139B (en) * 2022-04-12 2024-06-07 北京理工大学 End-to-end intelligent generation method and system for multi-level map

Also Published As

Publication number Publication date
CN113052121B (en) 2022-09-06

Similar Documents

Publication Publication Date Title
CN106909902B (en) Remote sensing target detection method based on improved hierarchical significant model
CN112149547B (en) Remote sensing image water body identification method based on image pyramid guidance and pixel pair matching
CN112052783A (en) High-resolution image weak supervision building extraction method combining pixel semantic association and boundary attention
CN111985376A (en) Remote sensing image ship contour extraction method based on deep learning
CN111178316A (en) High-resolution remote sensing image land cover classification method based on automatic search of depth architecture
CN114155481A (en) Method and device for recognizing unstructured field road scene based on semantic segmentation
CN113052121B (en) Multi-level network map intelligent generation method based on remote sensing image
CN111414954B (en) Rock image retrieval method and system
CN112347970A (en) Remote sensing image ground object identification method based on graph convolution neural network
CN111640116B (en) Aerial photography graph building segmentation method and device based on deep convolutional residual error network
CN111753677A (en) Multi-angle remote sensing ship image target detection method based on characteristic pyramid structure
CN112950780B (en) Intelligent network map generation method and system based on remote sensing image
CN112633140A (en) Multi-spectral remote sensing image urban village multi-category building semantic segmentation method and system
CN109657082B (en) Remote sensing image multi-label retrieval method and system based on full convolution neural network
CN114820655A (en) Weak supervision building segmentation method taking reliable area as attention mechanism supervision
CN111986193A (en) Remote sensing image change detection method, electronic equipment and storage medium
CN113378897A (en) Neural network-based remote sensing image classification method, computing device and storage medium
Li et al. An aerial image segmentation approach based on enhanced multi-scale convolutional neural network
CN104408731A (en) Region graph and statistic similarity coding-based SAR (synthetic aperture radar) image segmentation method
CN106056609A (en) Method based on DBNMI model for realizing automatic annotation of remote sensing image
CN115393690A (en) Light neural network air-to-ground observation multi-target identification method
CN114187506A (en) Remote sensing image scene classification method of viewpoint-aware dynamic routing capsule network
CN114863266A (en) Land use classification method based on deep space-time mode interactive network
CN114494699A (en) Image semantic segmentation method and system based on semantic propagation and foreground and background perception
CN112668662B (en) Outdoor mountain forest environment target detection method based on improved YOLOv3 network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant