CN112950780A - Intelligent network map generation method and system based on remote sensing image - Google Patents

Intelligent network map generation method and system based on remote sensing image

Info

Publication number
CN112950780A
CN112950780A (application number CN202110269420.XA; granted as CN112950780B)
Authority
CN
China
Prior art keywords
network map
remote sensing
subsystem
sensing image
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110269420.XA
Other languages
Chinese (zh)
Other versions
CN112950780B (en)
Inventor
付莹 (Fu Ying)
梁帅哲 (Liang Shuaizhe)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202110269420.XA priority Critical patent/CN112950780B/en
Publication of CN112950780A publication Critical patent/CN112950780A/en
Application granted granted Critical
Publication of CN112950780B publication Critical patent/CN112950780B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205Re-meshing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Graphics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to an intelligent network map generation method and system based on remote sensing images, and belongs to the technical field of computer vision. The method extracts semantic categories from the acquired network maps and redraws them, so that the semantics contained in a network map can be corrected explicitly during model training and the color values in the network map are unified and standardized. Through the two-module design of a ground-feature semantic information prediction module and a network map generation module, each module can be upgraded independently as semantic segmentation and image generation technology advances: the ground-feature semantics of the remote sensing image are understood and corrected explicitly, while the image style, drawing specifications, and detail features of the network map are learned adaptively through adversarial generation. The method improves the accuracy and perceptual quality of maps generated from complex real remote sensing images while requiring no special equipment and maintaining a high generation speed.

Description

Intelligent network map generation method and system based on remote sensing image
Technical Field
The invention relates to an intelligent network map generation method and system, in particular to an intelligent network map generation method and system based on remote sensing images, and belongs to the technical field of computer vision.
Background
Remote sensing technology collects electromagnetic radiation information of ground targets from satellites, aircraft, unmanned aerial vehicles, or other platforms to determine information about the earth's surface environment and resources. In recent years, continuous improvement of satellite launch technology has rapidly increased both the quantity and quality of the artificial satellites available to humanity, greatly promoting the development of remote sensing.
Remote sensing detects and recognizes targets by sensing, from a long distance, the electromagnetic waves, visible light, or infrared radiation they reflect or emit; when a target is observed in visible light and recorded in RGB color format, the resulting picture is generally called a remote sensing image.
A network map is a map that is stored digitally and consulted using computer technology. The main product of current network map services is the tile map: vector digital maps at different scales are processed and matched to an image specification, then sliced at each zoom level, on a specific mathematical basis and in a specified size and format, and pre-rendered into a library of map pictures organized in rows and columns.
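As an illustration of how a tile map addresses its pre-rendered pictures by zoom level, row, and column, the sketch below uses the standard Web-Mercator "slippy map" scheme adopted by mainstream tile services; this particular scheme is an illustrative assumption, as the patent does not fix a projection:

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Web-Mercator (slippy map) tile indices for a coordinate.

    At zoom level z the world is covered by a 2^z x 2^z grid of tiles;
    returns the (column, row) of the tile containing the coordinate.
    """
    lat = math.radians(lat_deg)
    n = 2 ** zoom                     # n x n tiles at this zoom level
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0 * n)
    return x, y
```

At zoom 0 the whole world is one tile; every additional zoom level quadruples the number of tiles, which is why tile maps are pre-rendered and cached as picture libraries.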
With the development of information technology and the arrival of the mobile internet, high-precision network map services have become an indispensable part of modern daily life, greatly improving the convenience of independent travel and promoting the rapid growth of emerging businesses such as ride-hailing and food delivery. Under extreme natural disasters such as earthquakes, floods, and debris flows, a rapidly updated network map can even provide critical information for rescue workers and powerfully assist rescue operations.
In the traditional network map generation process, a large amount of manpower is needed for manual field surveying to obtain vector data; maps are then drawn from the vector data according to a drawing specification, and finally sliced by certain rules and imported into computers to form, together with a network map service system, a complete service. This process is not only complicated and lengthy, consuming large amounts of manpower and material resources, but also strongly affected by local ground conditions, so it cannot respond and update rapidly in emergencies.
Remote sensing images, by contrast, are easier to obtain, faster, cheaper, unaffected by ground conditions, and more adaptable; generating network maps from remote sensing images has therefore come into view.
Before deep learning methods developed rapidly, traditional methods could only use low-level information in the remote sensing image, such as color, brightness, and boundaries, judged by manually designed prior rules. They therefore struggled with the complex problem of automatic network map generation and were easily disturbed by imaging factors such as shadow, weather, and season.
In recent years, the rapid development of deep learning has made it possible to automatically generate good network maps from remote sensing images. Deep learning is a research direction in machine learning that typically uses deep nonlinear neural networks to approximate complex functions; trained with suitable data and loss functions, the parameters stored in the network change, statistical rules are learned from large amounts of known data, and unknown cases can then be predicted. Deep learning has developed rapidly since its breakthrough on image classification in 2012; with the continuous emergence of new algorithms and ideas, machines have gained ever stronger understanding of image information, and automatically generating network maps from remote sensing images has gradually become solvable.
At present, the prior art generally treats generating a network map from remote sensing images as a conditional image generation problem (conditioned on the remote sensing image) or an image translation problem. Image content translation, which preserves the structural information of an image while transforming its content, has received increasing attention in recent years, mainly following the "pix2pix" framework proposed by the University of California, Berkeley in 2017, which uses a conditional generative adversarial network to learn a mapping function from the input to the generated image. This method conveniently learns mappings between images in different domains and thus provides a possible learning framework for converting remote sensing images into network maps.
However, directly applying this generic framework yields distorted map elements, an inconsistent global style, and blurred local details. The network maps it generates therefore still leave substantial room for improvement, both in the information they provide and in their visual style.
Therefore, to improve the accuracy and perceptual quality of network maps generated automatically from remote sensing images, reduce the cost of map acquisition for network map services, and improve the update speed and timeliness of network maps, an intelligent generation method and system are urgently needed that can automatically produce network maps with an appropriate image style and accurate map information, and that still perform well on real remote sensing images under complex conditions.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and creatively provides an intelligent network map generation method and system based on remote sensing images. The method improves the accuracy and perceptual quality of maps generated from complex real remote sensing images while requiring no special equipment and maintaining a high generation speed.
The innovations of the invention are as follows. Intelligent network map generation based on remote sensing images is divided into a training stage and a use stage. In the training stage, for a training data set of paired remote sensing images and network maps, the network maps in RGB color space are clustered by pixel tristimulus values to obtain each map's mapping mask in ground-feature category space; the network map is then redrawn according to this mask, so that the color values in the network map are unified and standardized and the information disturbance introduced during map drawing is reduced. The intelligent generation algorithm model produces a predicted ground-feature category mask and a generated network map from the input remote sensing image, compares them with the real category mask and the real network map respectively, computes the corresponding loss functions, and updates the model parameters accordingly; the previous step is repeated until a preset termination condition is met, after which the network structure and model parameters are saved. In the use stage, a network model is created from the structure and parameters saved during training, the acquired remote sensing images are input into the model one by one, and the network maps it generates are saved. If a remote sensing image has a corresponding real network map, the generated and real maps can be compared with the evaluation method built into the system to judge how well the model generates network maps.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a network map intelligent generation method based on remote sensing images comprises two stages of training and using. The method specifically comprises the following steps:
step 1: and (5) a training stage.
The parameter dictionary of the algorithm model is iteratively modified using the remote sensing image-network map paired data set.
Specifically, step 1 comprises the steps of:
step 1.1: and clustering the pixel tristimulus values of the network map in the RGB color space to obtain the mapping of the network map in the ground feature type space.
The specific method comprises the following steps:
and clustering all pixel points in the acquired network map data by using a clustering algorithm (preferably using a K-means clustering algorithm), and converting each color in the network map into a category number representing the ground feature semantics expressed by the color in a mode of controlling the initial central point of each category. And then, restoring according to the spatial position of each pixel in the network map to form a category mask with the size consistent with that of the original network map. Finally, all masks will be stored in grayscale image format for training.
Step 1.2: and redrawing the network map according to the mapping mask of the network map in the category space. The method aims to unify and standardize color values in the network map and reduce information disturbance brought by the network map drawing process.
The specific method comprises the following steps:
and (3) selecting a standard color value for each category according to the category mask obtained in the step 1.1, and restoring the mask to the network map of the RGB space again according to the category-standard color value comparison table.
The standard color values are chosen so that there are obvious differences between the colors of different ground-feature categories (they can be selected from the mainstream color schemes of existing network maps). In addition, with reference to conventional network map drawing practice, the edges of ground features such as roads and buildings may be stroked in a slightly darker color to enhance the visual effect.
Finally, all redrawn network maps are saved for training.
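A minimal sketch of the redrawing in step 1.2; the category-to-standard-color table and the 0.7 darkening factor used for edge stroking are hypothetical values:

```python
import numpy as np

# Hypothetical category -> standard colour table (row index = category id).
STANDARD_COLORS = np.array([
    [255, 255, 255],  # 0: background
    [255, 235, 161],  # 1: road fill
    [217, 208, 201],  # 2: building
    [170, 218, 255],  # 3: water
], dtype=np.uint8)

def redraw_map(mask, colors=STANDARD_COLORS, stroke=True):
    """Re-render a network map from its category mask (step 1.2).

    mask : (H, W) integer category mask from step 1.1.
    Returns an (H, W, 3) uint8 RGB image with unified standard colours; if
    `stroke` is set, boundary pixels between categories are darkened to
    imitate the outlining used in conventional map drawing.
    """
    rgb = colors[mask]
    if stroke:
        # A pixel is a boundary pixel if any 4-neighbour has another category.
        edge = np.zeros_like(mask, dtype=bool)
        edge[:-1, :] |= mask[:-1, :] != mask[1:, :]
        edge[1:, :] |= mask[1:, :] != mask[:-1, :]
        edge[:, :-1] |= mask[:, :-1] != mask[:, 1:]
        edge[:, 1:] |= mask[:, 1:] != mask[:, :-1]
        rgb = rgb.astype(np.int16)
        rgb[edge] = (rgb[edge] * 0.7).astype(np.int16)  # slightly darker stroke
        rgb = rgb.clip(0, 255).astype(np.uint8)
    return rgb
```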
Step 1.3: the intelligent generation algorithm model generates a prediction result of the ground feature type mask and a generated network map according to the input remote sensing image, compares the prediction result with the real ground feature type mask and the generated network map respectively, calculates a loss function of the loss function and updates parameters in the model according to the loss function.
The specific method comprises the following steps:
and randomly selecting a plurality of remote sensing images from the training data set to input into an algorithm model. The model outputs a prediction mask of the ground object type of the remote sensing image for each remote sensing image and generates a network map predicted by the model. Wherein, the width and the height of the prediction mask of the ground feature class are consistent with the input remote sensing image. The solution space is all integers on [0, (n-1) ], each integer represents a ground object type, and n is the total number of ground object types. The network map predicted by the model is a network map picture in an RGB format, and the size of the network map picture is consistent with that of the input remote sensing image.
The intelligent generative algorithm model comprises two modules: the system comprises a ground object semantic information prediction module and a network map generation module.
After a remote sensing image is input into the algorithm model, it first passes through the ground-feature semantic information prediction module, a fully convolutional network built on translation invariance: its convolution layers, pooling layers, and activation functions operate on local input regions and depend only on the corresponding spatial coordinates. Each layer of the module is a three-dimensional array of size h x w x d, where h and w are spatial dimensions and d is the number of channels. The first layer is the image itself, with size h x w and d color channels.
Let x_{ij} denote the vector at position (i, j) in a given layer of the ground-feature semantic information prediction module, and y_{ij} the output computed from it:

y_{ij} = f_{ks}({x_{si+δi, sj+δj}}, 0 ≤ δi, δj ≤ k)    (1)

where f_{ks} determines the layer type, covering convolution, mean pooling, max pooling, and activation functions; x_{si+δi, sj+δj} denotes the vector at position (si+δi, sj+δj) of the feature map x that participates in the computation; k is the convolution kernel size; s is the stride of the convolution operation; and δi and δj take values in [0, k], so the set denotes all vectors inside the kernel window of side length k whose top-left vertex is (si, sj). The basic loss function used by this module is the sum of the cross-entropy losses over every pixel of the last layer, where the cross-entropy loss function is:

H(p, q) = −Σ_i p(x_i) log q(x_i)    (2)

where x_i denotes the data, p denotes the true probability distribution, and q denotes the predicted probability distribution. Minimizing the cross-entropy loss drives the predicted probability distribution as close as possible to the true distribution.
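A numpy sketch of the summed per-pixel cross-entropy loss described above. With one-hot true distributions, H(p, q) reduces to −log of the predicted probability at the true class; a real implementation would use a deep learning framework's built-in loss:

```python
import numpy as np

def pixelwise_cross_entropy(logits, target):
    """Sum of per-pixel cross-entropy losses over a predicted category map.

    logits : (H, W, n) raw scores for n land-cover categories.
    target : (H, W) integer ground-truth category mask.
    """
    # Softmax over the category axis -> predicted distribution q.
    z = logits - logits.max(axis=-1, keepdims=True)   # numerical stability
    q = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    h, w = target.shape
    # True distribution p is one-hot, so H(p, q) = -log q at the true class.
    return -np.log(q[np.arange(h)[:, None], np.arange(w)[None, :], target]).sum()
```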
In addition, the module can select different loss functions, such as the Focal loss or the Lovász loss, according to the specific details of the model and the training data set.
Next, the network map generation module simultaneously receives the output of the ground-feature semantic information prediction module (a mask or feature map) and the original remote sensing image, and generates the network map in RGB format.
The network map generation module is a conditional generative adversarial network, which performs supervised learning with ground-truth results from the target domain and comprises a generator and a discriminator trained adversarially: the generator synthesizes data under the given condition, while the discriminator distinguishes the generated data from real data. The generator tries to produce data as close to real as possible, and the discriminator correspondingly tries to separate real data from generated data perfectly. In this process the discriminator acts as a loss function learned from the image data, guiding the generator. Through this mutual game between generator and discriminator, the generator can eventually produce generated data that meets the quality requirements. The basic loss function used by this module is:
L_GAN(G_AM, D_M) = E_{m ~ p(M)}[log D_M(m)] + E_{a ~ p(A)}[log(1 − D_M(G_AM(a)))]    (3)

where A denotes the remote sensing (aerial) image set and M denotes the network map set; E denotes mathematical expectation; a ~ p(A) and m ~ p(M) indicate that a and m follow the probability distributions of the remote sensing image set and the network map set respectively; D_M is the discriminator; and G_AM is the generator.
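Given discriminator scores, the adversarial objective above can be evaluated numerically as follows; this is a minimal sketch of the loss computation, not the patent's training code:

```python
import numpy as np

def gan_losses(d_real, d_fake):
    """Generator / discriminator losses for the conditional GAN objective.

    d_real : discriminator scores in (0, 1) on real network maps, D_M(m).
    d_fake : discriminator scores in (0, 1) on generated maps, D_M(G_AM(a)).
    """
    eps = 1e-12
    # Discriminator maximises E[log D(m)] + E[log(1 - D(G(a)))];
    # the negated objective is returned as a loss to minimise.
    d_loss = -(np.log(d_real + eps).mean() + np.log(1 - d_fake + eps).mean())
    # Generator: non-saturating variant, maximise E[log D(G(a))].
    g_loss = -np.log(d_fake + eps).mean()
    return d_loss, g_loss
```

A confident discriminator (high scores on real maps, low on generated ones) yields a small discriminator loss; a generator that fools the discriminator yields a small generator loss.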
In addition, different loss functions, such as a reconstruction loss, a feature matching loss, a perceptual loss, or a multi-scale discriminator loss, can be selected according to the specific details of the model and the training data set.
Step 1.4: and (4) repeating the step 1.3 until the set termination condition is met, and storing the structure and the model parameters of the network.
During training, remote sensing images are repeatedly input into the model, and the parameters in the model are dynamically adjusted according to the loss functions until a preset condition is met (for example, the number of training rounds reaches a certain amount, or some evaluation index becomes better than a preset value); training then stops and the model parameters are saved.
Step 2: and (4) a use stage.
Using the model parameters obtained in the training stage of step 1, network maps are generated from the remote sensing images and saved. Further, if a real network map corresponding to a remote sensing image exists, the generated and real network maps are evaluated to judge how well the model generates network maps.
Specifically, step 2 comprises the steps of:
step 2.1: and establishing a network model according to the model structure and parameters saved in the training stage, respectively inputting the acquired remote sensing image images into the model for operation, and storing the network map generated by the model.
Currently, the remote sensing images used by network map services are usually in RGB format and are spliced together from multiple tiles of the same size. If a collected remote sensing image is not in this format, the complete image is first cut into tiles and stored. A network model is then created from the model structure and parameters saved in the training stage, and the remote sensing tiles are input into the model, whose predictions pass through the ground-feature semantic information prediction module and the network map generation module in turn; the network map finally produced by the generation module is saved automatically. It is in RGB image format, with the same size as the input remote sensing tile.
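Cutting a large remote sensing image into fixed-size tiles, as step 2.1 requires, can be sketched as below; the 256-pixel tile size is a common convention assumed here, and edge remainders smaller than one tile are simply dropped:

```python
import numpy as np

def split_into_tiles(image, tile_size=256):
    """Cut a large RGB remote-sensing image into fixed-size square tiles,
    matching the tiled format used by network map services (step 2.1).
    """
    h, w, _ = image.shape
    tiles = []
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            tiles.append(image[y:y + tile_size, x:x + tile_size])
    return np.stack(tiles)  # (n_tiles, tile_size, tile_size, 3)
```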
Step 2.2: and if the used remote sensing image has a corresponding real network map, comparing and evaluating the generated network map and the real network map so as to judge the quality of the effect of generating the network map by the model.
To judge the effect of the model-generated network map objectively, objective evaluation indices for generated images can be computed, based on FID (Fréchet Inception Distance), WD (Wasserstein Distance), Kernel MMD (Kernel Maximum Mean Discrepancy), and the like, by directly comparing the generated network map with the real network map. The calculation formula of FID is:
FID(r, g) = ||μ_r − μ_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^(1/2))    (4)

where μ denotes a mean value, with subscript r denoting real picture data and g denoting generated picture data; Tr denotes the trace of a matrix; and Σ denotes a covariance matrix.
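A numpy sketch of the FID computation above. It uses the identity Tr((Σ_r Σ_g)^(1/2)) = Tr((Σ_g^(1/2) Σ_r Σ_g^(1/2))^(1/2)) so that only symmetric matrices need square roots; in practice the features would come from an Inception network, which is assumed rather than shown here:

```python
import numpy as np

def _sqrtm_psd(mat):
    """Matrix square root of a symmetric positive semi-definite matrix."""
    vals, vecs = np.linalg.eigh(mat)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.T

def fid(feat_real, feat_gen):
    """Frechet Inception Distance between two feature sets (rows = samples)."""
    mu_r, mu_g = feat_real.mean(axis=0), feat_gen.mean(axis=0)
    cov_r = np.cov(feat_real, rowvar=False)
    cov_g = np.cov(feat_gen, rowvar=False)
    s = _sqrtm_psd(cov_g)
    covmean = _sqrtm_psd(s @ cov_r @ s)   # same trace as (cov_r cov_g)^(1/2)
    return float(((mu_r - mu_g) ** 2).sum()
                 + np.trace(cov_r + cov_g - 2 * covmean))
```

Identical feature sets give an FID of zero; shifting every feature by a constant leaves the covariance term at zero and FID equal to the squared mean shift.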
WD is calculated as:

WD(p_r, p_g) = inf_{γ ∈ Γ(p_r, p_g)} E_{(x_r, x_g) ~ γ}[d(x_r, x_g)]    (5)

where p_r and p_g denote the probability distributions of the real and generated picture features respectively; x_r and x_g denote a real picture sample and a generated picture sample; inf denotes the infimum (lower bound); Γ(p_r, p_g) denotes the set of joint probability distributions whose marginals are p_r and p_g; γ denotes such a joint distribution of real and generated pictures; E denotes mathematical expectation; and d denotes a distance function.
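The infimum in the WD formula is intractable in general; for one-dimensional samples of equal size it is attained by the monotone (sorted) pairing, which the sketch below uses. Treating features one dimension at a time is an illustrative simplification of how WD is evaluated in practice:

```python
import numpy as np

def wasserstein_1d(samples_r, samples_g):
    """Empirical 1-D Wasserstein-1 distance between two equal-size sample
    sets: with both sets sorted, the optimal coupling is the monotone
    pairing, so WD reduces to the mean absolute difference of sorted values.
    """
    return float(np.abs(np.sort(samples_r) - np.sort(samples_g)).mean())
```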
The formula for Kernel MMD is:

MMD²(p_r, p_g) = E_{x_r, x'_r ~ p_r; x_g, x'_g ~ p_g}[k(x_r, x'_r) − 2 k(x_r, x_g) + k(x_g, x'_g)]    (6)

where p_r and p_g denote the probability distributions of the real and generated picture features respectively; E denotes mathematical expectation; x_r and x'_r denote two different real picture samples, and x_g and x'_g denote two different generated picture samples; and k is a Gaussian kernel function:

k(x, y) = exp(−||x − y||² / (2σ²))    (7)

where σ is the width hyperparameter of the function, and x and y are its two arguments.
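A sketch of a (biased) sample estimate of the squared Kernel MMD with the Gaussian kernel above:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)),
    evaluated pairwise between the rows of x and y."""
    d = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d / (2 * sigma ** 2))

def kernel_mmd2(feat_r, feat_g, sigma=1.0):
    """Biased estimate of MMD^2 = E[k(xr, x'r)] - 2 E[k(xr, xg)] + E[k(xg, x'g)]."""
    k_rr = gaussian_kernel(feat_r, feat_r, sigma).mean()
    k_rg = gaussian_kernel(feat_r, feat_g, sigma).mean()
    k_gg = gaussian_kernel(feat_g, feat_g, sigma).mean()
    return float(k_rr - 2 * k_rg + k_gg)
```

Identical feature sets give an MMD of zero; the farther apart the two distributions, the closer the estimate gets to its maximum.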
Based on the method, the invention provides a network map intelligent generation system based on remote sensing images, which comprises the following steps: the system comprises a remote sensing image preprocessing subsystem, a remote sensing image semantic feature extraction subsystem, a network map generation subsystem, a real network map preprocessing subsystem, a loss function calculation subsystem and a generated result evaluation subsystem.
The remote sensing image preprocessing subsystem comprises a remote sensing image augmentation module and a remote sensing image formatting module. The augmentation module randomly crops, rotates, and flips the remote sensing images to enlarge the data set and diversify it as much as possible, so that the trained model generalizes better. The formatting module converts remote sensing images stored in bitmap format into tensor format and normalizes them, so that they meet the input format requirements of the remote sensing image semantic feature extraction subsystem;
the remote sensing image semantic feature extraction subsystem comprises a convolution layer, a pooling layer, an activation layer and a cross-layer connection structure and is used for carrying out layer-by-layer feature extraction on the remote sensing image, judging the ground feature type corresponding to each pixel point in the remote sensing image and providing important information for generating a network map;
the network map generation subsystem comprises a network map generator and a network map discriminator. The network map generator adopts a symmetrical encoder-decoder structure and is used for generating a corresponding network map according to the remote sensing image and the remote sensing semantic features; the network map discriminator adopts a down-sampling encoder structure for distinguishing a real network map and generating the network map, so that the network map generator generates the network map which can be more false and truer;
the real network map preprocessing subsystem comprises a real network map augmentation module, a real network map ground object type extraction module and a real network map redrawing module. The real network map augmentation module is used for cooperating with the remote sensing image augmentation module to produce a real network map corresponding to the spatial position of the processed remote sensing image; the real network map ground object type extraction module adopts a clustering algorithm of a preset initial state to extract the real ground object type of each pixel, namely semantic information of each pixel, and generates a real semantic mask; the real network map redrawing module redraws the real network map according to the real semantic mask, so that color values in the network map are unified and standardized, and the network map generation subsystem is more favorable for learning;
the loss function calculation subsystem comprises a ground object semantic loss function calculation module and a network map loss function calculation module. The semantic feature loss function calculation module is used for calculating semantic feature loss according to the real semantic mask and the semantic mask extracted by the remote sensing image semantic feature extraction subsystem, wherein the loss is propagated reversely and used for updating parameters of each neural network layer in the remote sensing image semantic feature extraction subsystem and correcting the semantic features extracted by the remote sensing image semantic feature extraction subsystem; the network map loss function calculation module calculates the loss of the network map according to the redrawn real network map and the network map generated by the network map generation subsystem, the loss is propagated reversely to update the parameters of each neural network layer in the network map generation subsystem, the parameters in the remote sensing image semantic feature extraction subsystem are indirectly influenced by the network map generation subsystem, and the network map finally generated by the system is corrected;
The generated result evaluation subsystem compares the generated network map with the real network map and automatically calculates objective evaluation indices for generated images, such as FID, WD, and Kernel MMD, complementing the user's subjective impression and helping the user quantitatively assess the quality of the network maps generated by the system.
The connection relationship among the above-mentioned component systems is:
the remote sensing image preprocessing subsystem is connected with the remote sensing image semantic feature extraction subsystem and the network map generation subsystem and provides processed remote sensing image pictures meeting the requirements to the remote sensing image semantic feature extraction subsystem and the network map generation subsystem;
the remote sensing image semantic feature extraction subsystem is connected with the remote sensing image preprocessing subsystem, the network map generation subsystem and the loss function calculation subsystem, provides extracted semantic features for the network map generation subsystem, and receives a loss value returned by the network map generation subsystem to train the system;
the network map generation subsystem is respectively connected with the remote sensing image preprocessing subsystem, the remote sensing image semantic feature extraction subsystem, the loss function calculation subsystem and the generation result evaluation subsystem, and generates a network map according to the remote sensing image and the semantic features;
the real network map preprocessing subsystem is connected with the loss function calculation subsystem and the generated result evaluation subsystem, processes the original network map, generates a real semantic mask according to the original network map, redraws the real network map according to the real semantic mask, and provides the generated real semantic mask and the real network map to other subsystems;
the loss function calculation subsystem is connected with the remote sensing image semantic feature extraction subsystem, the network map generation subsystem and the real network map preprocessing subsystem. It obtains the real network map and the real semantic mask from the real network map preprocessing subsystem and the generated semantic mask from the remote sensing image semantic feature extraction subsystem, computes a loss value by comparing the generated semantic mask with the real semantic mask, and passes that loss value back to the remote sensing image semantic feature extraction subsystem for training. Meanwhile, it obtains the generated network map and the binary discrimination result output by the discriminator from the network map generation subsystem, computes a loss value by comparing them with the real network map, and passes that loss value back to the network map generation subsystem for training;
the generated result evaluation subsystem is connected with the real network map preprocessing subsystem and the network map generating subsystem, acquires the real network map from the real network map preprocessing subsystem, acquires the generated network map from the network map generating subsystem, and calculates evaluation indexes through comparison so as to help a user to know the quality of the network map generated by the system in a quantitative manner.
The system is used as follows:
Step 1: input the remote sensing images into the remote sensing image preprocessing subsystem, perform data augmentation, normalization and formatting, and store the processed remote sensing images in memory;
Step 2: input the network maps into the real network map preprocessing subsystem, extract the ground feature categories, redraw the maps and augment the data, ensuring that the augmented regions correspond to the regions reserved for data augmentation in step 1; store the processed real network maps and real semantic masks for later use;
Step 3: the remote sensing image semantic feature extraction subsystem reads a remote sensing image, processes it through its neural network layers, and generates a semantic feature map and a semantic mask; the semantic feature map is passed to the network map generation subsystem and the semantic mask is stored for later use;
Step 4: the network map generation subsystem reads the remote sensing image and the semantic feature map; its network map generator produces a network map through its neural network layers, the generated map is then input into the network map discriminator for real/fake binary discrimination, and the discrimination result is used to compute the loss function; the generated network map is stored for later use;
Step 5: the ground feature semantic loss function calculation part of the loss function calculation subsystem reads the real and generated semantic masks, computes a loss value by comparison, and passes it back to the remote sensing image semantic feature extraction subsystem for training; the network map loss function calculation part reads the real network map, the generated network map and the binary discrimination result output by the discriminator, computes a loss value by comparison, and passes it back to the network map generation subsystem for training;
Step 6: each neural network layer in the network map generation subsystem updates its parameters according to the loss value returned by the loss function calculation subsystem, completing one training iteration, and the loss value at the first layer is passed back to the remote sensing image semantic feature extraction subsystem;
Step 7: each neural network layer in the remote sensing image semantic feature extraction subsystem updates its parameters according to the loss values returned by the loss function calculation subsystem and the network map generation subsystem, completing one training iteration;
Step 8: repeat steps 1 to 7; after all the training data have been traversed once, send the real and generated network maps to the generation result evaluation subsystem to calculate the indexes; if an index meets a preset condition or the total number of iterations reaches a preset upper limit, stop training and execute step 9;
Step 9: execute steps 1 to 4 to generate the final network map, and send the real and generated network maps to the generation result evaluation subsystem to calculate the final indexes.
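The nine steps above amount to an iterative train-evaluate loop. The sketch below compresses it to a toy one-parameter model; every function is a hypothetical stand-in for a subsystem, not the patent's actual implementation.

```python
# High-level sketch of usage steps 1-9 as a training loop with loss-based
# early stopping. All numeric stand-ins are illustrative only.

def preprocess(x):            # steps 1-2: augmentation/normalization stand-in
    return x / 255.0

def generate_map(x, weight):  # steps 3-4: feature extraction + generation stand-in
    return weight * x

def train(images, targets, lr=0.5, max_epochs=20, tol=1e-3):
    """Steps 5-8: compare outputs with ground truth, update one 'parameter'
    by gradient descent on a squared loss, stop when the loss index is small."""
    weight = 0.0
    for _ in range(max_epochs):
        loss, grad = 0.0, 0.0
        for img, tgt in zip(images, targets):
            pred = generate_map(preprocess(img), weight)
            loss += (pred - tgt) ** 2
            grad += 2.0 * (pred - tgt) * preprocess(img)
        weight -= lr * grad / len(images)
        if loss / len(images) < tol:   # step 8: evaluation-based stopping
            break
    # Step 9: final generation pass with the trained parameter.
    return weight, [generate_map(preprocess(i), weight) for i in images]

w, maps = train([255.0, 127.5], [1.0, 0.5])
```

The loop converges toward the weight that maps each preprocessed image onto its target, mirroring how the real system iterates until an evaluation index meets its preset condition.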
Advantageous effects
Compared with the prior art, the method and system of the invention have the following advantages:
1. The collected network maps undergo semantic category extraction, so the correctness of the semantics contained in the network map can be explicitly corrected during model training. This meets the stringent requirement, particular to converting remote sensing images into network maps, that every pixel of the generated image express the same semantics as the original image, and makes the generated network maps more accurate than those of prior-art methods for automatically generating network maps from remote sensing images.
2. The collected network maps are redrawn so that their color values are unified and standardized, reducing the interference that information disturbances introduced during map drawing and transmission cause to neural network training; the generated network maps are accordingly more accurate than those of prior-art methods.
3. The design combining a ground feature semantic prediction module with a network map generation module not only explicitly understands and corrects the ground feature semantics of the remote sensing image, but also adaptively learns, through adversarial generation, the image style, drawing conventions and detail characteristics embodied in network maps; the generated maps therefore surpass prior-art methods in both accuracy and visual effect.
4. In this two-module design, each module can be upgraded independently with the latest or future semantic segmentation and image generation techniques, and the combined generation effect exceeds that of either module used alone; the proposed network model structure therefore has good upgradability and extensibility.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a schematic diagram of the core algorithm model of the method of the present invention being trained.
FIG. 3 is a schematic diagram of the core algorithm model of the method of the present invention for use and result evaluation.
FIG. 4 is a schematic diagram of the system of the present invention.
Detailed Description
For a better understanding of the objects and advantages of the present invention, reference is made to the following description taken in conjunction with the accompanying drawings and examples.
Examples
The embodiment discloses a network map intelligent generation method based on remote sensing images, which comprises a training phase and a using phase.
In the training stage, for each remote sensing image-network map pair in the training data set, the pixel tristimulus values of the network map are clustered in the RGB color space to obtain the map's mapping mask in the ground feature category space; the network map is then redrawn according to this mask so that its color values are unified and standardized, reducing the information disturbance introduced during map drawing. The intelligent generation algorithm model takes a remote sensing image as input and produces a predicted ground feature category mask and a generated network map; these are compared with the real category mask and the real network map, the loss functions are calculated, and the model parameters are updated accordingly. This step is repeated until a preset termination condition is met, after which the network structure and model parameters are saved. In the use stage, a network model is created from the structure and parameters saved during training, the collected remote sensing images are fed into the model one by one, and the network maps it generates are stored. If a remote sensing image has a corresponding real network map, the generated and real maps can be compared with the evaluation methods built into the system to judge the quality of the model's output. The flow chart of this embodiment is shown in fig. 1.
The traditional process of producing a network map from remote sensing imagery involves acquiring remote sensing images of the relevant area, recognizing and extracting features from them, updating vector data, cartographic production, and generating and serving tile maps; the process is complex and lengthy and consumes large amounts of manpower and material resources. In recent years, advances in deep learning have made it possible to generate network maps from remote sensing images automatically. Existing algorithms fall into two main categories. One treats the problem as cross-domain image-to-image conversion and directly translates the remote sensing image into a network map with image translation techniques. The other treats it as a per-pixel classification problem, judging the ground feature category of every pixel with semantic segmentation and coloring the result according to a specification to form the network map. Both approaches, however, have problems with the accuracy or style of the generated maps, limiting their application. The method and system disclosed in this embodiment provide a dual-module core algorithm comprising a ground feature semantic prediction module and a network map generation module, fusing the two so that the whole network can be trained by fully exploiting the information in the remote sensing image; the generated map thus attains higher accuracy and a suitable image style. A schematic diagram of training the dual-module network model for generating a network map from a remote sensing image is shown in fig. 2.
The method for intelligently generating the network map based on the remote sensing image is an end-to-end solution, and can directly generate the network map according to the input remote sensing image in a use stage without explicit intermediate steps and manual intervention. A schematic flow chart of the remote sensing image based intelligent generation network map algorithm in the use stage is shown in fig. 3.
The details of the present embodiment are as follows.
A network map intelligent generation method based on remote sensing images is divided into two stages of training and using, and comprises the following steps:
step 1: in the training stage, the processed remote sensing image-network map pairing data set is used for iteratively modifying a parameter dictionary of the algorithm model and storing the parameter dictionary:
step 1.1: and clustering the pixel tristimulus values of the network map in the RGB color space to obtain the mapping of the network map in the ground feature type space.
For all pixel points in the collected network map data, a clustering algorithm is applied, preferably K-means (see Hartigan J A, Wong M A. Algorithm AS 136: A K-means clustering algorithm [J]. Journal of the Royal Statistical Society. Series C (Applied Statistics), 1979, 28(1): 100-108). By controlling the initial center point of each category, each color in the network map is converted into a category number representing the ground feature semantics that the color expresses, and the numbers are arranged according to each pixel's spatial position to form a category mask with the same size as the original network map. All masks are stored in grayscale image format, ready for training.
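The clustering of step 1.1 can be sketched in miniature as follows. The two classes, the toy colors, and the fixed initial centers are illustrative assumptions, not the patent's actual configuration.

```python
# Minimal K-means sketch for step 1.1: cluster pixel RGB triples, starting
# from fixed per-class centers, and emit a category mask the size of the image.

def kmeans_mask(pixels, centers, iters=10):
    """pixels: list of rows of (r, g, b); centers: initial class centers."""
    centers = [list(c) for c in centers]
    flat = [p for row in pixels for p in row]
    for _ in range(iters):
        # Assign each pixel to its nearest center (squared Euclidean distance).
        labels = [min(range(len(centers)),
                      key=lambda k: sum((a - b) ** 2
                                        for a, b in zip(p, centers[k])))
                  for p in flat]
        # Recompute each center as the mean of its assigned pixels.
        for k in range(len(centers)):
            members = [p for p, l in zip(flat, labels) if l == k]
            if members:
                centers[k] = [sum(c) / len(members) for c in zip(*members)]
    w = len(pixels[0])
    return [labels[i * w:(i + 1) * w] for i in range(len(pixels))]

# Two illustrative classes: roads (near-white) and water (near-blue).
img = [[(250, 250, 250), (40, 60, 230)],
       [(245, 248, 252), (35, 55, 225)]]
mask = kmeans_mask(img, centers=[(255, 255, 255), (0, 0, 255)])
```

Because the initial centers are chosen per ground feature category, the resulting label for each pixel directly names the category, exactly as the text describes.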
Step 1.2: and redrawing the network map according to the mapping mask of the network map in the category space, so that the color values in the network map are unified and standardized, and the information disturbance brought by the network map drawing process is reduced.
According to the category mask obtained in step 1.1, a standard color value is selected for each category, and the mask is restored to an RGB-space network map using a category-to-standard-color lookup table. The standard colors are chosen so that different ground feature categories are clearly distinguishable, and they are usually taken from the mainstream color schemes of existing network maps; in addition, following mainstream map drawing practice, the edges of ground objects such as roads and buildings are usually drawn in a dark color to enhance the visual effect. All redrawn network maps are stored, ready for training.
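The redraw of step 1.2 reduces to a lookup from category id to standard color. The palette below is a hypothetical choice for illustration, not the patent's color table.

```python
# Sketch of step 1.2: replace every pixel of a class with that class's
# standard color. The palette entries are illustrative assumptions.

PALETTE = {0: (255, 255, 255),   # roads: white (hypothetical choice)
           1: (170, 218, 255),   # water: light blue (hypothetical choice)
           2: (200, 250, 204)}   # vegetation: light green (hypothetical choice)

def redraw(mask):
    """mask: 2-D list of class ids -> RGB image with one standard color per class."""
    return [[PALETTE[c] for c in row] for row in mask]

img = redraw([[0, 1], [2, 1]])
```

After this step every pixel of a given category carries exactly one color value, which is what removes the drawing and transmission disturbances mentioned above.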
Step 1.3: the intelligent generation algorithm model produces a predicted ground feature category mask and a generated network map from the input remote sensing image, compares them with the real ground feature category mask and the real network map respectively, calculates the loss functions, and updates the parameters in the model accordingly.
Several remote sensing images are randomly selected from the training data set and input into the algorithm model; for each image, the model outputs a prediction mask of its ground feature categories and a predicted network map. The prediction mask has the same width and height as the input remote sensing image, and its value space is the set of integers in [0, n-1], each integer representing one ground feature category, where n is the total number of categories. The predicted network map is an RGB picture whose size matches the input remote sensing image.
The intelligent generation algorithm model consists of two modules: a ground feature semantic prediction module and a network map generation module. After a remote sensing image is input into the model, it first passes through the ground feature semantic prediction module, whose main idea is a fully convolutional network. The module comprises several basic components with translation invariance; the data of each layer is a three-dimensional array of size $h \times w \times d$, where $h$ and $w$ are spatial dimensions and $d$ is the number of channels. The first layer is the image itself, of size $h \times w$ with $d$ color channels. Its basic components (convolution layers, pooling layers and activation functions) are computed over local input regions and depend only on relative spatial coordinates. Writing $x_{ij}$ for the data vector at location $(i, j)$ in a given layer and $y_{ij}$ for the corresponding output, each layer computes

$$y_{ij} = f_{ks}\left(\{x_{si+\delta i,\, sj+\delta j}\}_{0 \le \delta i,\, \delta j \le k}\right) \tag{1}$$

where $k$ is the convolution kernel size, $s$ is the stride, and $f_{ks}$ determines the layer type: convolution, average pooling, max pooling, activation function, and so on. The basic loss function used by this module is the sum of the cross-entropy losses of all pixels in the last layer, where the cross-entropy loss is

$$H(p, q) = -\sum_{i} p(x_i) \log q(x_i) \tag{2}$$

in which $x_i$ denotes the data, $p$ the true probability distribution and $q$ the predicted probability distribution. Minimizing the cross-entropy loss drives the predicted distribution as close as possible to the true one. Depending on the specific details of the model and the training data set, other loss functions such as the Focal loss or the Lovász loss may also be selected.
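As a concrete illustration, the summed per-pixel cross-entropy described above can be sketched in a few lines; `pixel_cross_entropy` and its toy inputs are hypothetical names for illustration, not the patent's code.

```python
# Sketch of the per-pixel cross-entropy loss: the ground-truth distribution p
# is one-hot, so each pixel contributes -log q(true class); the total loss is
# the sum over all pixels.

import math

def pixel_cross_entropy(true_mask, pred_probs, eps=1e-12):
    """true_mask: 2-D class ids; pred_probs: per-pixel class probability lists."""
    total = 0.0
    for mask_row, prob_row in zip(true_mask, pred_probs):
        for t, q in zip(mask_row, prob_row):
            total += -math.log(max(q[t], eps))  # p is one-hot: only q[t] counts
    return total

# One row of two pixels, true classes 0 and 1, with soft predictions.
loss = pixel_cross_entropy([[0, 1]], [[[0.9, 0.1], [0.2, 0.8]]])
```

A perfectly confident, correct prediction contributes zero loss; increasingly wrong predictions contribute ever larger terms.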
Next, the network map generation module takes as input the output of the ground feature semantic prediction module (a mask or feature map) together with the original remote sensing image, and generates the network map in RGB format. The main idea of this module is a conditional generative adversarial network, which uses ground-truth results from the target domain for supervised learning. Its body consists of a generator and a discriminator trained adversarially: the generator synthesizes data under a given condition, while the discriminator distinguishes the generated data from real data. The generator tries to produce data ever closer to real, and the discriminator correspondingly tries to separate real from generated data perfectly; in this process the discriminator acts as a loss function learned from a large amount of image data, guiding the generator. Through this mutual game, the generator eventually produces generated data of high quality. The basic loss function used by this module is:
$$\mathcal{L}_{GAN}(G_{AM}, D_M) = \mathbb{E}_{m \sim p(M)}\left[\log D_M(m)\right] + \mathbb{E}_{a \sim p(A)}\left[\log\left(1 - D_M(G_{AM}(a))\right)\right] \tag{3}$$

where $A$ denotes the remote sensing (aerial) image set, $M$ the network map set and $\mathbb{E}$ the mathematical expectation; $a \sim p(A)$ and $m \sim p(M)$ indicate that $a$ and $m$ follow the probability distributions of the remote sensing image set and the network map set respectively; $D_M$ is the discriminator and $G_{AM}$ is the generator.
In addition, different loss functions such as a reconstruction loss function, a feature matching loss function, a perception loss function, a multi-size discriminator loss function and the like can be selected according to different specific details of the model and different training data sets.
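The adversarial objective above can be illustrated numerically. `gan_loss` and the score values below are illustrative stand-ins for batches of discriminator outputs, not the patent's implementation.

```python
# Sketch of the minimax GAN objective: the discriminator scores real maps as
# D(m) and generated maps as D(G(a)) in (0, 1); the objective averages
# log D(m) + log(1 - D(G(a))) over the batch.

import math

def gan_loss(d_real_scores, d_fake_scores):
    """Value of the minimax objective for batches of discriminator outputs."""
    real_term = sum(math.log(s) for s in d_real_scores) / len(d_real_scores)
    fake_term = sum(math.log(1.0 - s) for s in d_fake_scores) / len(d_fake_scores)
    return real_term + fake_term

# At the theoretical equilibrium the discriminator outputs 1/2 everywhere,
# giving the objective the value log(1/4).
balanced = gan_loss([0.5, 0.5], [0.5, 0.5])
```

The discriminator is trained to maximize this value while the generator is trained to minimize it, which is the mutual game described in the text.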
Step 1.4: and (4) repeating the step 1.3 until a certain preset termination condition is met, and storing the structure and the model parameters of the network.
The training process repeatedly feeds remote sensing images into the model and adjusts the model parameters according to the loss functions. When a preset condition is met, for example a set number of training epochs is reached or an evaluation index surpasses a preset value, training stops and the model parameters are saved.
Step 2: in the using stage, a network map is generated and stored according to the remote sensing image by using the model parameters obtained in the step 1; in addition, if a real network map corresponding to the remote sensing image exists, the generated network map and the real network map can be evaluated, and the effect of generating the network map by the model can be judged.
Step 2.1: and establishing a network model according to the model structure and parameters saved in the training stage, respectively inputting the acquired remote sensing image images into the model for operation, and storing the network map generated by the model.
The remote sensing images used by current network map services are usually in RGB format and spliced together from multiple tiles of the same size; if the collected remote sensing images are not in this format, the complete image must first be cut into tiles and stored. The network map intelligent generation system then creates a network model from the structure and parameters saved in the training stage; the remote sensing image tiles are input into the model, which predicts through the ground feature semantic prediction module and the network map generation module in turn, and the network map produced by the generation module is automatically stored as an RGB image whose size matches the input remote sensing image tile.
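The tile-cutting described above can be sketched as a row-major crop of a large image into fixed-size tiles. The tile size and toy "image" are illustrative only.

```python
# Sketch of cutting a large H x W image into fixed-size tiles so that each
# tile matches the model's expected input size. Trailing rows/columns that do
# not fill a whole tile are dropped in this simplified version.

def split_tiles(image, tile):
    """image: 2-D list (H x W) of pixels; returns row-major list of tile x tile crops."""
    h, w = len(image), len(image[0])
    tiles = []
    for top in range(0, h - h % tile, tile):
        for left in range(0, w - w % tile, tile):
            tiles.append([row[left:left + tile] for row in image[top:top + tile]])
    return tiles

img = [[r * 4 + c for c in range(4)] for r in range(4)]   # 4x4 toy "image"
tiles = split_tiles(img, 2)
```

A real pipeline would typically pad instead of dropping the remainder, but the indexing pattern is the same.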
Step 2.2: if the used remote sensing image has a corresponding real network map, the system can also use a built-in evaluation method to compare and evaluate the generated network map and the real network map so as to judge the quality of the effect of the model generated network map.
In order to judge the model's map generation objectively, the system has several built-in objective evaluation indexes for generated images, such as FID (Fréchet Inception Distance), WD (Wasserstein Distance) and Kernel MMD (Kernel Maximum Mean Discrepancy), and supports directly comparing the generated network map with the real network map and computing these indexes. The FID is calculated as:
$$\mathrm{FID} = \left\|\mu_r - \mu_g\right\|^2 + \mathrm{Tr}\left(\Sigma_r + \Sigma_g - 2\left(\Sigma_r \Sigma_g\right)^{1/2}\right) \tag{4}$$

where $\mu$ is the mean value, the subscript $r$ denotes real picture data and $g$ generated picture data; $\mathrm{Tr}$ denotes the trace of a matrix; and $\Sigma$ is the covariance matrix.
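For diagonal covariance matrices the FID formula above reduces to a per-dimension expression, which can be sketched without a matrix square root; a full implementation needs one (e.g. from SciPy). The sketch below, under that diagonal-covariance assumption, is illustrative only.

```python
# Simplified FID sketch for diagonal covariances: the trace term
# Tr(Sr + Sg - 2(Sr Sg)^(1/2)) becomes a sum of sr + sg - 2*sqrt(sr*sg)
# over dimensions, where sr, sg are per-dimension variances.

import math

def fid_diagonal(mu_r, var_r, mu_g, var_g):
    """FID between two Gaussians given mean vectors and variance vectors."""
    mean_term = sum((a - b) ** 2 for a, b in zip(mu_r, mu_g))
    cov_term = sum(vr + vg - 2.0 * math.sqrt(vr * vg)
                   for vr, vg in zip(var_r, var_g))
    return mean_term + cov_term

# Identical variances, means shifted by 1 in one dimension -> FID = 1.
score = fid_diagonal([0.0, 0.0], [1.0, 1.0], [1.0, 0.0], [1.0, 1.0])
```

Lower scores indicate that the generated feature distribution is closer to the real one; identical distributions score zero.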
WD is calculated as:
$$\mathrm{WD}(p_r, p_g) = \inf_{\gamma \in \Gamma(p_r, p_g)} \mathbb{E}_{(x_r, x_g) \sim \gamma}\left[d(x_r, x_g)\right] \tag{5}$$

where $p_r$ and $p_g$ denote the probability distributions of real and generated picture features respectively; $x_r$ and $x_g$ denote a real picture sample and a generated picture sample; $\inf$ denotes the infimum (greatest lower bound); $\Gamma(p_r, p_g)$ is the set of joint distributions whose marginals are $p_r$ and $p_g$, with $\gamma$ one such joint distribution of real and generated pictures; $\mathbb{E}$ denotes the mathematical expectation; and $d$ is a distance function.
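For 1-D empirical samples of equal size the Wasserstein distance above has a closed form: the optimal coupling pairs sorted values, so the distance is the mean absolute difference of the sorted samples. A minimal sketch of that special case:

```python
# Empirical 1-D Wasserstein-1 distance between equal-size samples: sort both
# samples and average the absolute differences of matched order statistics.

def wasserstein_1d(xs, ys):
    """Assumes len(xs) == len(ys); higher-dimensional WD needs an OT solver."""
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

# Shifting every sample by 1 gives distance exactly 1.
d = wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0])
```

General feature distributions require an optimal-transport solver or the dual (Kantorovich-Rubinstein) formulation, but this 1-D case conveys the idea of the infimum over couplings.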
The formula for Kernel MMD is:
$$\mathrm{MMD}^2(p_r, p_g) = \mathbb{E}_{x_r, x'_r \sim p_r;\; x_g, x'_g \sim p_g}\left[k(x_r, x'_r) - 2\,k(x_r, x_g) + k(x_g, x'_g)\right] \tag{6}$$

where $p_r$ and $p_g$ denote the probability distributions of the real and generated picture features; $\mathbb{E}$ denotes the mathematical expectation; $x_r$ and $x'_r$ are two different real picture samples, and $x_g$ and $x'_g$ two different generated picture samples; and $k$ is a Gaussian kernel function:
$$k(x, y) = \exp\left(-\frac{\|x - y\|^2}{2\sigma^2}\right) \tag{7}$$

where $\sigma$ is the bandwidth hyper-parameter of the kernel and $x$ and $y$ are its two arguments.
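Combining the MMD definition with the Gaussian kernel above, the biased empirical estimator can be sketched for 1-D samples as follows; the function names are illustrative.

```python
# Sketch of the squared Kernel MMD with a Gaussian kernel, using the biased
# empirical estimator over two 1-D sample sets.

import math

def gaussian_kernel(x, y, sigma=1.0):
    return math.exp(-((x - y) ** 2) / (2.0 * sigma ** 2))

def mmd_squared(xs, ys, sigma=1.0):
    """Biased empirical MMD^2 between samples xs (real) and ys (generated)."""
    k = lambda a, b: gaussian_kernel(a, b, sigma)
    exx = sum(k(a, b) for a in xs for b in xs) / len(xs) ** 2
    eyy = sum(k(a, b) for a in ys for b in ys) / len(ys) ** 2
    exy = sum(k(a, b) for a in xs for b in ys) / (len(xs) * len(ys))
    return exx + eyy - 2.0 * exy

# Identical samples give MMD^2 = 0; separated samples give a positive value.
same = mmd_squared([0.0, 1.0], [0.0, 1.0])
```

Like FID and WD, a smaller value means the generated map features are statistically closer to the real ones.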
In addition, for better illustrating the objects and advantages of the present invention, the inventive system is further described below with reference to the accompanying drawings.
Examples
The system for intelligently generating the network map based on the remote sensing image disclosed by the embodiment specifically comprises the following components: the system comprises a remote sensing image preprocessing subsystem, a remote sensing image semantic feature extraction subsystem, a network map generation subsystem, a real network map preprocessing subsystem, a loss function calculation subsystem and a generation result evaluation subsystem.
The remote sensing image preprocessing subsystem comprises a remote sensing image augmentation subsystem and a remote sensing image formatting subsystem. The augmentation subsystem performs operations such as random cropping, rotation and flipping on the remote sensing images, enlarging the remote sensing image data set and making it as diverse as possible so that the trained model generalizes better; the formatting subsystem converts remote sensing images stored in bitmap format into tensor format and normalizes them to meet the input format requirements of the remote sensing image semantic feature extraction subsystem;
the remote sensing image semantic feature extraction subsystem comprises a plurality of convolution layers, a pooling layer, an activation layer and a cross-layer connection structure and is used for carrying out layer-by-layer feature extraction on the remote sensing image, judging the ground feature type corresponding to each pixel point in the remote sensing image and providing important information for generating a network map;
the network map generation subsystem consists of a network map generator and a network map discriminator. The generator adopts a symmetric encoder-decoder structure and generates the corresponding network map from the remote sensing image and its semantic features; the discriminator adopts an encoder structure with multiple downsampling stages to distinguish real network maps from generated ones, prompting the generator to produce network maps that are increasingly difficult to tell apart from real ones;
the real network map preprocessing subsystem comprises a real network map augmentation subsystem, a real network map ground object type extraction subsystem and a real network map redrawing subsystem. The real network map augmentation subsystem is used for cooperating with the remote sensing image augmentation subsystem to produce a real network map corresponding to the spatial position of the processed remote sensing image; the real network map ground object class extraction subsystem extracts the real ground object class of each pixel, namely semantic information of each pixel, by adopting a clustering algorithm in a preset initial state and generates a real semantic mask; the real network map redrawing subsystem redraws the real network map according to the real semantic mask, so that color values in the network map are unified and standardized, and the network map generation subsystem is more favorable for learning;
the loss function calculation subsystem comprises a ground feature semantic loss function calculation subsystem and a network map loss function calculation subsystem. The ground feature semantic loss function calculation subsystem computes the semantic loss from the real semantic mask and the semantic mask extracted by the remote sensing image semantic feature extraction subsystem; this loss is back-propagated to update the parameters of each neural network layer in that subsystem and to correct the semantic features it extracts. The network map loss function calculation subsystem computes the network map loss from the redrawn real network map and the network map generated by the network map generation subsystem; this loss is back-propagated to update the parameters of each neural network layer in the network map generation subsystem, indirectly influences the parameters of the remote sensing image semantic feature extraction subsystem through it, and corrects the network map finally generated by the system;
the generation result evaluation subsystem can compare the network map generated by the system with the real network map and automatically calculate objective evaluation indexes for generated images such as FID, WD and Kernel MMD, supplementing the user's subjective impression and helping the user quantitatively assess the quality of the network maps generated by the system.
The connection relationship among the above-mentioned component systems is:
the remote sensing image preprocessing subsystem is connected with the remote sensing image semantic feature extraction subsystem and the network map generation subsystem and provides processed remote sensing image pictures meeting the requirements to the remote sensing image semantic feature extraction subsystem and the network map generation subsystem;
the remote sensing image semantic feature extraction subsystem is connected with the remote sensing image preprocessing subsystem, the network map generation subsystem and the loss function calculation subsystem, provides extracted semantic features for the network map generation subsystem, and receives a loss value returned by the network map generation subsystem to train the system;
the network map generation subsystem is connected with the remote sensing image preprocessing subsystem, the remote sensing image semantic feature extraction subsystem, the loss function calculation subsystem and the generation result evaluation subsystem, and generates a network map according to the remote sensing image and the semantic features;
the real network map preprocessing subsystem is connected with the loss function calculation subsystem and the generated result evaluation subsystem, processes the original network map, generates a real semantic mask according to the original network map, redraws the real network map according to the real semantic mask, and provides the generated real semantic mask and the real network map to other subsystems;
the loss function calculation subsystem is connected with the remote sensing image semantic feature extraction subsystem, the network map generation subsystem and the real network map preprocessing subsystem. It obtains the real network map and the real semantic mask from the real network map preprocessing subsystem and the generated semantic mask from the remote sensing image semantic feature extraction subsystem, computes a loss value by comparing the generated semantic mask with the real semantic mask, and passes that loss value back to the remote sensing image semantic feature extraction subsystem for training. Meanwhile, it obtains the generated network map and the binary discrimination result output by the discriminator from the network map generation subsystem, computes a loss value by comparing them with the real network map, and passes that loss value back to the network map generation subsystem for training;
the generated result evaluation subsystem is connected with the real network map preprocessing subsystem and the network map generating subsystem, acquires the real network map from the real network map preprocessing subsystem, acquires the generated network map from the network map generating subsystem, and calculates various evaluation indexes through comparison so as to help a user to know the quality of the network map generated by the system in a quantitative manner.
The above system is used as follows:
Step 1: inputting the remote sensing image into a remote sensing image preprocessing subsystem, performing data augmentation, normalization and formatting, and storing the processed remote sensing image in a memory for use;
step 2: inputting the network map into a real network map preprocessing subsystem, extracting the ground feature types, redrawing and amplifying data, and ensuring that the amplified area corresponds to the area reserved for data amplification in the step 1; the processed real network map and the real semantic mask are stored for use;
Step 3: the remote sensing image semantic feature extraction subsystem reads a remote sensing image, processes it through its neural network layers, and generates a semantic feature map and a semantic mask; the semantic feature map is transmitted to the network map generation subsystem, and the semantic mask is stored in memory for later use;
Step 4: the network map generation subsystem reads the remote sensing image and the semantic feature map; the network map generator in the subsystem generates a network map through several neural network layers, then the generated network map is input into the network map discriminator in the subsystem for true/false binary discrimination, and the discrimination result is stored for calculating the loss function; the generated network map is stored in memory for later use;
Step 5: the ground feature semantic loss function calculation part in the loss function calculation subsystem reads the real semantic mask and the generated semantic mask, compares them to calculate a loss value, and transmits the loss value back to the remote sensing image semantic feature extraction subsystem for training; the network map loss function calculation part in the loss function calculation subsystem reads the real network map, the generated network map and the binary discrimination result output by the discriminator in the network map generation subsystem, compares them to calculate a loss value, and then transmits the loss value back to the network map generation subsystem for training;
Step 6: each neural network layer in the network map generation subsystem modifies its parameters according to the loss value returned by the loss function calculation subsystem, completing one training iteration, and returns the loss value of its first layer to the remote sensing image semantic feature extraction subsystem;
Step 7: each neural network layer in the remote sensing image semantic feature extraction subsystem modifies its parameters according to the loss values returned by the loss function calculation subsystem and the network map generation subsystem, completing one training iteration;
Step 8: steps 1 to 7 are executed repeatedly; after all the training data have been traversed once, the real network map and the generated network map are sent to the generated result evaluation subsystem to calculate the indexes; if the indexes meet the preset condition or the total iteration number reaches the preset upper limit, training stops and step 9 is executed;
Step 9: steps 1 to 4 are executed to generate the final network map, and the real network map and the generated network map are sent to the generated result evaluation subsystem to calculate the final indexes.
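The flow of steps 1 to 9 can be summarized as a control-flow skeleton. The stub functions below merely stand in for the real subsystems; all names and computations are illustrative placeholders, not the patented networks, and the parameter-update details of steps 6 and 7 are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stub subsystems standing in for the real neural networks, purely to
# show the control flow of steps 1-9 (all names are illustrative).
def preprocess(img):        return img / 255.0                # step 1
def extract_semantics(img): return img.mean(axis=-1)          # step 3
def generate_map(img, sem): return np.stack([sem] * 3, -1)    # step 4
def loss(pred, target):     return float(np.mean((pred - target) ** 2))

def train(images, real_maps, real_masks, epochs=2):
    history = []
    for _ in range(epochs):                                   # step 8: repeat
        epoch_loss = 0.0
        for img, rmap, rmask in zip(images, real_maps, real_masks):
            x = preprocess(img)
            sem = extract_semantics(x)                        # step 3
            gmap = generate_map(x, sem)                       # step 4
            # step 5: semantic loss + network map loss (the back-propagation
            # of steps 6-7 is omitted in this stub)
            epoch_loss += loss(sem, rmask) + loss(gmap, rmap)
        history.append(epoch_loss)
    return history

imgs = [rng.random((4, 4, 3)) * 255 for _ in range(2)]
maps_ = [np.ones((4, 4, 3)) * 0.5 for _ in range(2)]
masks = [np.ones((4, 4)) * 0.5 for _ in range(2)]
history = train(imgs, maps_, masks)
print(len(history))  # 2: one accumulated loss value per epoch
```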
The above detailed description is intended to illustrate the objects, aspects and advantages of the present invention, and it should be understood that the above detailed description is only exemplary of the present invention and is not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. A network map intelligent generation method based on remote sensing images, comprising a training stage and a using stage; the method specifically comprises the following steps:
step 1: in the training stage, a remote sensing image-network map matched data set is used for iteratively modifying a parameter dictionary of the algorithm model;
step 1.1: clustering the pixel tristimulus values of the network map in the RGB color space to obtain the mapping of the network map in the ground feature class space, and forming a class mask with the same size as the original network map;
step 1.2: according to the mapping mask of the network map in the category space, redrawing the network map:
selecting a standard color value for each category according to the category mask obtained in the step 1.1, and restoring the mask to a network map of an RGB space again according to a category-standard color value comparison table; storing all the redrawn network maps for training;
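As a minimal sketch of this redrawing step: the lookup table below uses a hypothetical category-to-standard-color mapping (the actual classes and standard color values are implementation choices not specified here):

```python
import numpy as np

# Hypothetical category -> standard color value table (RGB); the real
# table and number of ground object classes are implementation choices.
STANDARD_COLORS = np.array([
    [239, 238, 236],  # class 0: background / other (assumed)
    [170, 218, 255],  # class 1: water (assumed)
    [255, 255, 255],  # class 2: road (assumed)
], dtype=np.uint8)

def redraw_map(category_mask: np.ndarray) -> np.ndarray:
    """Restore an (H, W) integer category mask to an (H, W, 3) RGB
    network map via the category -> standard color lookup table."""
    return STANDARD_COLORS[category_mask]

mask = np.array([[0, 1], [2, 1]])
rgb = redraw_map(mask)
print(rgb.shape)           # (2, 2, 3)
print(rgb[0, 1].tolist())  # [170, 218, 255]: standard color of class 1
```

Because every pixel of the same class is repainted with one standard color value, the redrawn maps have unified, standardized colors, which is exactly what the later clustering and loss computations rely on.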
step 1.3: the intelligent generation algorithm model produces, from an input remote sensing image, a predicted ground object type mask and a generated network map, compares them with the real ground object type mask and the real network map respectively, calculates the corresponding loss functions, and updates the parameters in the model accordingly; the specific method is as follows:
randomly selecting several remote sensing images from the training data set and inputting them into the algorithm model; for each remote sensing image, the model outputs a prediction mask of its ground object categories and the network map predicted by the model; the width and height of the ground object category prediction mask are consistent with the input remote sensing image, and its values are the integers in [0, n-1], where each integer represents a ground object type and n is the total number of ground object types; the network map predicted by the model is a network map picture in RGB format whose size is consistent with the input remote sensing image;
the intelligent generative algorithm model comprises two modules: the system comprises a ground object semantic information prediction module and a network map generation module;
after the remote sensing image is input into the algorithm model, it first passes through the ground object semantic information prediction module, which is a fully convolutional network built on translational invariance: its convolution layers, pooling layers and activation functions operate on local input regions and depend only on the relevant spatial coordinates; the data at each layer of the module is a three-dimensional array of size h × w × d, where h and w are spatial dimensions and d is the number of channels; the first layer is the image itself, with size h × w and d color channels;
let the vector at position (i, j) in a given layer of the ground object semantic information prediction module be x_ij, and the corresponding output vector be y_ij; then:

y_ij = f_ks({x_(si+δi, sj+δj)}, 0 ≤ δi, δj ≤ k)    (1)

where f_ks determines the layer type, including convolution, mean pooling, maximum pooling and activation functions; x_(si+δi, sj+δj) denotes the vector at position (si+δi, sj+δj) in the feature map X that participates in the calculation; k is the convolution kernel size; s is the stride of the convolution operation; δi and δj take all integer values in [0, k], so the vectors participating in the calculation are exactly those within a convolution kernel window with (si, sj) as its top-left vertex and side length k;
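Equation (1) can be illustrated numerically. The sketch below takes f_ks to be sum pooling over each k × k window (with δi, δj ranging over 0 ≤ δ < k, as in the usual implementation convention); this is an illustration of the windowed-operator form, not the patented network:

```python
import numpy as np

def layer_op(X, k, s, f):
    """Apply y_ij = f({X[s*i+di, s*j+dj] : 0 <= di, dj < k}) at every
    valid output position, i.e. f over each k-by-k window with stride s."""
    H, W = X.shape
    out_h = (H - k) // s + 1
    out_w = (W - k) // s + 1
    Y = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            Y[i, j] = f(X[s * i : s * i + k, s * j : s * j + k])
    return Y

X = np.arange(9.0).reshape(3, 3)     # [[0,1,2],[3,4,5],[6,7,8]]
Y = layer_op(X, k=2, s=1, f=np.sum)  # sum over each 2x2 window
print(Y)  # [[ 8. 12.], [20. 24.]]
```

Swapping `np.sum` for `np.max` or `np.mean` gives the maximum- and mean-pooling cases of f_ks without changing the window arithmetic.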
then, the network map generation module receives the output information of the ground feature semantic information prediction module and the original remote sensing image at the same time, and generates a network map in an RGB format;
the network map generation module is a conditional generative adversarial network, which performs supervised learning using ground-truth results from the target domain and comprises a generator and a discriminator trained adversarially: the generator synthesizes data according to the given condition, while the discriminator distinguishes the generator's output from real data; the generator tries to produce data as close to real as possible, and the discriminator correspondingly tries to perfectly separate real data from generated data; in this process the discriminator acts as a loss function learned from the image data, guiding the generator's image generation; through this mutual game between generator and discriminator, the generator can finally produce generated data that meets the required quality;
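This adversarial objective can be sketched numerically. Given discriminator scores D(m) for real maps and D(G(a)) for generated maps, the standard minimax losses are computed as below; this is a simplified illustration of the loss values only, not the patented training procedure:

```python
import math

def d_loss(d_real, d_fake):
    """Discriminator loss: maximize log D(m) + log(1 - D(G(a))),
    i.e. minimize the negated batch average of that sum."""
    n = len(d_real)
    return (-sum(math.log(p) for p in d_real) / n
            - sum(math.log(1 - p) for p in d_fake) / n)

def g_loss(d_fake):
    """Generator (non-saturating) loss: maximize log D(G(a))."""
    return -sum(math.log(p) for p in d_fake) / len(d_fake)

# A confident discriminator (real -> 0.99, fake -> 0.01) has near-zero loss;
# the generator's loss shrinks as it fools the discriminator.
print(round(d_loss([0.99, 0.99], [0.01, 0.01]), 3))  # ~0.02
print(round(g_loss([0.5, 0.5]), 3))                  # 0.693 (= ln 2)
```

The opposing signs of the two losses are the "mutual game": improving one player's objective worsens the other's, until the generator's outputs are hard to tell from real maps.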
step 1.4: repeating the step 1.3 until the set termination condition is met, and storing the structure and the model parameters of the network;
in the training process, continuously and repeatedly inputting the remote sensing image into the model, dynamically adjusting parameters in the model according to the loss function until a preset condition is met, stopping the training process, and storing the model parameters;
step 2: the using stage: using the model parameters obtained in the training stage of step 1, generating a network map from the remote sensing image and storing it; if a real network map corresponding to the remote sensing image exists, evaluating the generated network map against the real network map to judge the quality of the network map generated by the model;
step 2.1: creating a network model according to the model structure and parameters saved in the training stage, inputting the acquired remote sensing images into the model for inference, and storing the network map generated by the model;
if the collected remote sensing image does not match the required format, the complete remote sensing image is first cut into tiles and stored; then a network model is created according to the model structure and parameters saved in the training stage, the remote sensing image tiles are input into the model and processed successively by the ground feature semantic information prediction module and the network map generation module, and the network map generated by the network map generation module is automatically stored; this network map is in RGB image format with a size consistent with the input remote sensing image tile;
step 2.2: and if the used remote sensing image has a corresponding real network map, comparing and evaluating the generated network map and the real network map so as to judge the quality of the effect of generating the network map by the model.
2. The intelligent network map generation method based on remote sensing images as claimed in claim 1, wherein the specific implementation method of step 1.1 is as follows:
clustering all pixel points in the collected network map data by using a clustering algorithm, and converting each color in the network map into a category number representing the ground feature semantics expressed by the color by controlling the initial central point of each category;
then, restoring according to the spatial position of each pixel in the network map to form a category mask with the size consistent with that of the original network map;
finally, all masks are stored in grayscale image format for training.
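A minimal sketch of the clustering described in claims 2 and 3: plain K-means on pixel RGB values with controlled initial centers, so that each cluster index is pinned to a known ground object class. The color values and class assignments below are illustrative assumptions, not the ones used in the patent:

```python
import numpy as np

def kmeans_labels(pixels, centers, iters=10):
    """Plain K-means on RGB pixel values. Fixing the initial centers
    (as claim 2 describes) pins each cluster index to a known class."""
    centers = centers.astype(float).copy()
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # assign each pixel to its nearest center
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update centers (keep the old center if a cluster is empty)
        for c in range(len(centers)):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers

# Hypothetical map colors: near-white road pixels vs near-blue water pixels.
pixels = np.array([[250, 250, 250], [245, 248, 250],
                   [160, 210, 250], [165, 215, 255]], dtype=float)
init = np.array([[255, 255, 255], [170, 218, 255]])  # assumed class colors
labels, _ = kmeans_labels(pixels, init)
print(labels.tolist())  # [0, 0, 1, 1]: roads -> class 0, water -> class 1
```

Reshaping the label vector back to the image's (H, W) layout yields the category mask, which is then stored in grayscale image format as the claim states.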
3. The method for intelligently generating the network map based on the remote sensing image as claimed in claim 2, wherein a K-means clustering algorithm is used for clustering all the pixel points.
4. The intelligent network map generation method based on remote sensing images as claimed in claim 1, wherein the basic loss function used by the ground object semantic information prediction module is the sum over all pixels of the last layer of the cross entropy loss, where the cross entropy loss function is:

H(p, q) = -Σ_i p(x_i) log q(x_i)

where x_i represents the data, p represents the true probability distribution, and q represents the predicted probability distribution; minimizing the cross entropy loss drives the predicted probability distribution maximally close to the true distribution.
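A minimal numeric check of the cross entropy loss above (the probability values are illustrative only):

```python
import math

def cross_entropy(p, q):
    """H(p, q) = -sum_i p(x_i) * log q(x_i): the per-pixel loss that the
    ground object semantic prediction module sums over all pixels."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

# One-hot true distribution: the pixel's real class is class 0 of three.
p_true = [1.0, 0.0, 0.0]
print(round(cross_entropy(p_true, [0.7, 0.2, 0.1]), 4))  # 0.3567 (= -ln 0.7)
print(round(cross_entropy(p_true, [0.1, 0.7, 0.2]), 4))  # 2.3026 (= -ln 0.1)
# The closer q is to p, the smaller the loss, as stated in the claim.
```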
5. The intelligent network map generation method based on remote sensing images as claimed in claim 1, wherein the basic loss function used by the network map generation module is:

L(G_AM, D_M) = E_(m~p_M)[log D_M(m)] + E_(a~p_A)[log(1 - D_M(G_AM(a)))]

where A represents the remote sensing image set (aerial images), M represents the network map set, and E represents the mathematical expectation; a~p_A and m~p_M indicate that a and m obey the probability distributions of the remote sensing image set and the network map set respectively; D_M is the discriminator; G_AM is the generator.
6. The intelligent network map generation method based on remote sensing images as claimed in claim 1, characterized in that, to objectively judge the effect of the network map generated by the model, objective image evaluation indexes based on strategies including FID, WD and Kernel MMD are used: the generated network map is compared with the real network map and the indexes are calculated;
wherein the calculation formula of FID is:

FID = ||μ_r - μ_g||² + Tr(Σ_r + Σ_g - 2(Σ_r Σ_g)^(1/2))

where μ represents the mean value, the subscript r denotes real picture data and g denotes generated picture data; Tr represents the trace of a matrix; Σ is the covariance matrix;
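The FID formula can be sketched directly from Gaussian feature statistics (in practice μ and Σ come from a feature extractor such as an Inception network; here they are supplied as plain arrays). The helper assumes symmetric positive semi-definite covariances and uses the trace identity Tr((Σ_r Σ_g)^(1/2)) = Tr((Σ_g^(1/2) Σ_r Σ_g^(1/2))^(1/2)) so that only symmetric matrix square roots are needed:

```python
import numpy as np

def _sqrtm_psd(A):
    """Matrix square root of a symmetric positive semi-definite matrix
    via eigendecomposition (avoids a SciPy dependency)."""
    w, v = np.linalg.eigh(A)
    return v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.T

def fid(mu_r, sigma_r, mu_g, sigma_g):
    """FID = ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 (S_r S_g)^(1/2)),
    with the cross term evaluated through its symmetric equivalent."""
    s = _sqrtm_psd(sigma_g)
    covmean = _sqrtm_psd(s @ sigma_r @ s)
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2 * covmean))

mu = np.zeros(2)
sigma = np.eye(2)
print(round(fid(mu, sigma, mu, sigma), 6))        # 0.0 for identical statistics
print(round(fid(mu, sigma, mu + 1.0, sigma), 6))  # 2.0: pure mean shift of 1 per axis
```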
the calculation formula of WD is:

WD(p_r, p_g) = inf_(γ∈Π(p_r, p_g)) E_((x_r, x_g)~γ)[d(x_r, x_g)]

where p_r and p_g represent the probability distributions of the real picture features and the generated picture features respectively; x_r and x_g represent a real picture sample and a generated picture sample respectively; inf denotes the infimum (greatest lower bound); γ ranges over the joint distributions of the real pictures and the generated pictures; E represents the mathematical expectation; d represents a distance function;
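The infimum over joint distributions γ is intractable in general; for two equal-size one-dimensional empirical samples, however, the optimal coupling simply pairs sorted values, which gives a minimal sketch of the idea (a simplified special case, not how WD is computed for full image features):

```python
def wasserstein_1d(xs, ys):
    """W1 distance between two equal-size 1-D empirical samples:
    the optimal coupling pairs the sorted values of each sample."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

print(wasserstein_1d([0.0, 1.0, 2.0], [0.0, 1.0, 2.0]))  # 0.0: identical samples
print(wasserstein_1d([0.0, 1.0], [1.0, 2.0]))            # 1.0: each point moved by 1
```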
the calculation formula of Kernel MMD is:

MMD²(p_r, p_g) = E_(x_r, x'_r ~ p_r)[k(x_r, x'_r)] + E_(x_g, x'_g ~ p_g)[k(x_g, x'_g)] - 2E_(x_r ~ p_r, x_g ~ p_g)[k(x_r, x_g)]

where p_r and p_g represent the probability distributions of the real picture features and the generated picture features respectively; E represents the mathematical expectation; x_r and x'_r represent two different real picture samples, and x_g and x'_g represent two different generated picture samples; k is a Gaussian kernel function:

k(x, y) = exp(-||x - y||² / (2σ²))

where σ is the width hyper-parameter of the function, and x and y are the two arguments of the kernel function.
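A direct (biased) estimator of the Kernel MMD above, with the Gaussian kernel written out for scalar inputs; a sketch for illustration, since real usage would apply it to extracted image features:

```python
import math

def gauss_k(x, y, sigma=1.0):
    """Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for scalars."""
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

def mmd2(xs, ys, sigma=1.0):
    """Biased squared-MMD estimate:
    E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)], all expectations empirical."""
    kxx = sum(gauss_k(a, b, sigma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(gauss_k(a, b, sigma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(gauss_k(a, b, sigma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

same = [0.0, 0.5, 1.0]
far = [5.0, 5.5, 6.0]
print(round(mmd2(same, same), 6))                 # 0.0 for identical samples
print(mmd2(same, far) > mmd2(same, [0.1, 0.6, 1.1]))  # True: farther -> larger MMD
```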
7. A network map intelligent generation system based on remote sensing images is characterized by comprising a remote sensing image preprocessing subsystem, a remote sensing image semantic feature extraction subsystem, a network map generation subsystem, a real network map preprocessing subsystem, a loss function calculation subsystem and a generation result evaluation subsystem;
the remote sensing image preprocessing subsystem comprises a remote sensing image amplification module and a remote sensing image formatting module; the remote sensing image augmentation module is used for carrying out operations including random cutting, rotation and overturning on the remote sensing image; the remote sensing image formatting module is used for converting the remote sensing image stored in the bitmap format into a tensor format and normalizing the tensor format;
the remote sensing image semantic feature extraction subsystem comprises a convolution layer, a pooling layer, an activation layer and a cross-layer connection structure and is used for carrying out layer-by-layer feature extraction on the remote sensing image, judging the ground feature type corresponding to each pixel point in the remote sensing image and providing important information for generating a network map;
the network map generation subsystem comprises a network map generator and a network map discriminator; the network map generator adopts a symmetrical encoder-decoder structure for generating the corresponding network map from the remote sensing image and the remote sensing semantic features; the network map discriminator adopts a down-sampling encoder structure for distinguishing the real network map from the generated network map, pushing the network map generator to produce network maps that are increasingly indistinguishable from real ones;
the real network map preprocessing subsystem comprises a real network map augmentation module, a real network map ground object type extraction module and a real network map redrawing module; the real network map augmentation module is used for matching with the remote sensing image augmentation module to manufacture a real network map corresponding to the spatial position of the processed remote sensing image; the real network map ground object type extraction module adopts a clustering algorithm of a preset initial state to extract the real ground object type of each pixel, namely semantic information of each pixel, and generates a real semantic mask; the real network map redrawing module redraws the real network map according to the real semantic mask so as to unify and standardize color values in the network map;
the loss function calculation subsystem comprises a ground feature semantic loss function calculation module and a network map loss function calculation module; the ground feature semantic loss function calculation module calculates the semantic feature loss from the real semantic mask and the semantic mask extracted by the remote sensing image semantic feature extraction subsystem; this loss is back-propagated to update the parameters of each neural network layer in the remote sensing image semantic feature extraction subsystem and to correct the semantic features it extracts; the network map loss function calculation module calculates the network map loss from the redrawn real network map and the network map generated by the network map generation subsystem; this loss is back-propagated to update the parameters of each neural network layer in the network map generation subsystem, indirectly influences the parameters of the remote sensing image semantic feature extraction subsystem through the network map generation subsystem, and corrects the network map finally generated by the system;
the generated result evaluation subsystem is used for comparing the generated network map with a real network map, automatically calculating and generating an objective image evaluation index, serving as a supplement for subjective feeling of a user and helping the user to quantitatively know the quality of the network map generated by the system;
the connection relationship among the above-mentioned component systems is:
the remote sensing image preprocessing subsystem is connected with the remote sensing image semantic feature extraction subsystem and the network map generation subsystem and provides processed remote sensing image pictures meeting the requirements to the remote sensing image semantic feature extraction subsystem and the network map generation subsystem;
the remote sensing image semantic feature extraction subsystem is connected with the remote sensing image preprocessing subsystem, the network map generation subsystem and the loss function calculation subsystem, provides extracted semantic features for the network map generation subsystem, and receives a loss value returned by the network map generation subsystem to train the system;
the network map generation subsystem is respectively connected with the remote sensing image preprocessing subsystem, the remote sensing image semantic feature extraction subsystem, the loss function calculation subsystem and the generation result evaluation subsystem, and generates a network map according to the remote sensing image and the semantic features;
the real network map preprocessing subsystem is connected with the loss function calculation subsystem and the generated result evaluation subsystem, processes the original network map, generates a real semantic mask according to the original network map, redraws the real network map according to the real semantic mask, and provides the generated real semantic mask and the real network map to other subsystems;
the loss function calculation subsystem is connected with the remote sensing image semantic feature extraction subsystem, the network map generation subsystem and the real network map preprocessing subsystem; it obtains the real network map and the real semantic mask from the real network map preprocessing subsystem, obtains the generated semantic mask from the remote sensing image semantic feature extraction subsystem, calculates a loss value by comparing the generated semantic mask with the real semantic mask, and then transmits the loss value back to the remote sensing image semantic feature extraction subsystem for training; meanwhile, the loss function calculation subsystem acquires the generated network map and the binary discrimination result output by the network map discriminator from the network map generation subsystem, compares them with the real network map to calculate a loss value, and then transmits the loss value back to the network map generation subsystem for training;
the generated result evaluation subsystem is connected with the real network map preprocessing subsystem and the network map generating subsystem, acquires the real network map from the real network map preprocessing subsystem, acquires the generated network map from the network map generating subsystem, and calculates evaluation indexes through comparison so as to help a user to know the quality of the network map generated by the system in a quantitative manner.
8. The system for intelligently generating the network map based on the remote sensing image as claimed in claim 7, wherein the using method of the system is as follows:
step 1: inputting the remote sensing image into a remote sensing image preprocessing subsystem, performing data augmentation, normalization and formatting, and storing the processed remote sensing image in a memory;
step 2: inputting the network map into a real network map preprocessing subsystem, extracting the ground feature types, redrawing and amplifying data, and ensuring that the amplified area corresponds to the area reserved for data amplification in the step 1; storing the processed real network map and the real semantic mask for later use;
step 3: the remote sensing image semantic feature extraction subsystem reads the remote sensing image, processes it through its neural network layers, and generates a semantic feature map and a semantic mask; the semantic feature map is transmitted to the network map generation subsystem, and the semantic mask is stored for later use;
step 4: the network map generation subsystem reads the remote sensing image and the semantic feature map; the network map generator in the subsystem generates a network map through its neural network layers, then the generated network map is input into the network map discriminator in the subsystem for true/false binary discrimination, and the discrimination result is used for calculating the loss function; the generated network map is stored for later use;
step 5: the ground feature semantic loss function calculation part in the loss function calculation subsystem reads the real semantic mask and the generated semantic mask, compares them to calculate a loss value, and transmits the loss value back to the remote sensing image semantic feature extraction subsystem for training; the network map loss function calculation part in the loss function calculation subsystem reads the real network map, the generated network map and the binary discrimination result output by the discriminator in the network map generation subsystem, compares them to calculate a loss value, and then transmits the loss value back to the network map generation subsystem for training;
step 6: each neural network layer in the network map generation subsystem modifies its parameters according to the loss value returned by the loss function calculation subsystem, completing one training iteration, and returns the loss value of its first layer to the remote sensing image semantic feature extraction subsystem;
step 7: each neural network layer in the remote sensing image semantic feature extraction subsystem modifies its parameters according to the loss values returned by the loss function calculation subsystem and the network map generation subsystem, completing one training iteration;
step 8: steps 1 to 7 are executed repeatedly; after all the training data have been traversed once, the real network map and the generated network map are sent to the generated result evaluation subsystem to calculate the indexes; if the indexes meet the preset condition or the total iteration number reaches the preset upper limit, training stops and step 9 is executed;
step 9: steps 1 to 4 are executed to generate the final network map, and the real network map and the generated network map are sent to the generated result evaluation subsystem to calculate the final indexes.
CN202110269420.XA 2021-03-12 2021-03-12 Intelligent network map generation method and system based on remote sensing image Active CN112950780B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110269420.XA CN112950780B (en) 2021-03-12 2021-03-12 Intelligent network map generation method and system based on remote sensing image

Publications (2)

Publication Number Publication Date
CN112950780A true CN112950780A (en) 2021-06-11
CN112950780B CN112950780B (en) 2022-09-06

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113704372A (en) * 2021-08-18 2021-11-26 中国人民解放军国防科技大学 Remote sensing image conversion map migration method and device based on depth countermeasure network
CN114265431A (en) * 2021-12-22 2022-04-01 中国地质大学(武汉) Ground disaster monitoring method and system based on distributed coverage control
CN117173353A (en) * 2023-09-04 2023-12-05 广东省核工业地质局测绘院 Geological mapping system based on remote sensing image
CN117422787A (en) * 2023-12-18 2024-01-19 中国人民解放军国防科技大学 Remote sensing image map conversion method integrating discriminant and generative model
CN118015287A (en) * 2024-04-09 2024-05-10 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Domain correction adaptive device-based cross-domain small sample segmentation method
CN118298317A (en) * 2024-06-06 2024-07-05 南京工业大学 Automatic labeling method and system based on online map training set

Citations (8)

Publication number Priority date Publication date Assignee Title
CN110378293A (en) * 2019-07-22 2019-10-25 泰瑞数创科技(北京)有限公司 A method of high-precision map is produced based on outdoor scene threedimensional model
CN110517575A (en) * 2019-08-21 2019-11-29 华北水利水电大学 A kind of surface water body drafting method and device
CN110516539A (en) * 2019-07-17 2019-11-29 苏州中科天启遥感科技有限公司 Remote sensing image building extracting method, system, storage medium and equipment based on confrontation network
CN111325788A (en) * 2020-02-07 2020-06-23 北京科技大学 Building height determination method based on street view picture
CN111553303A (en) * 2020-05-07 2020-08-18 武汉大势智慧科技有限公司 Remote sensing ortho image dense building extraction method based on convolutional neural network
CN111597283A (en) * 2020-04-28 2020-08-28 杨军 Map tile updating method, device, equipment and storage medium
CN111598101A (en) * 2020-05-25 2020-08-28 中国测绘科学研究院 Urban area intelligent extraction method, system and equipment based on remote sensing image scene segmentation
WO2020244261A1 (en) * 2019-06-05 2020-12-10 中国科学院长春光学精密机械与物理研究所 Scene recognition system for high-resolution remote sensing image, and model generation method

Similar Documents

Publication Publication Date Title
CN112950780B (en) Intelligent network map generation method and system based on remote sensing image
CN110135267B (en) Large-scene SAR image fine target detection method
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
CN108154192B (en) High-resolution SAR terrain classification method based on multi-scale convolution and feature fusion
CN112966684B (en) Cooperative learning character recognition method under attention mechanism
CN111986099B (en) Tillage monitoring method and system based on convolutional neural network with residual error correction fused
CN108573276B (en) Change detection method based on high-resolution remote sensing image
CN112734775B (en) Image labeling, image semantic segmentation and model training methods and devices
CN112149547B (en) Remote sensing image water body identification method based on image pyramid guidance and pixel pair matching
CN105741267B (en) Multi-source image change detection method based on cluster-guided deep neural network classification
CN113160062B (en) Infrared image target detection method, device, equipment and storage medium
CN108549893A (en) End-to-end recognition method for scene text of arbitrary shape
CN105825502B (en) Weakly supervised image parsing method based on saliency-guided dictionary learning
CN112464911A (en) Traffic sign detection and recognition method based on improved YOLOv3-tiny
CN114943876A (en) Cloud and cloud shadow detection method and device for multi-level semantic fusion and storage medium
CN106408030A (en) SAR image classification method based on mid-level semantic attributes and convolutional neural networks
CN114694038A (en) High-resolution remote sensing image classification method and system based on deep learning
CN110060273A (en) Remote sensing image landslide mapping method based on deep neural networks
CN113312993B (en) Remote sensing data land cover classification method based on PSPNet
CN111079807B (en) Ground object classification method and device
CN113657414B (en) Object identification method
Chen et al. Agricultural remote sensing image cultivated land extraction technology based on deep learning
CN110334656A (en) Multi-source remote sensing image water body extraction method and device based on information source probability weighting
CN114332473A (en) Object detection method, object detection device, computer equipment, storage medium and program product
CN106991411A (en) Refined extraction method for remote sensing targets based on deep shape priors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant