CN114882139B - End-to-end intelligent generation method and system for multi-level map - Google Patents


Info

Publication number
CN114882139B
CN114882139B (application CN202210383509.3A)
Authority
CN
China
Prior art keywords
map
layer
network
level
remote sensing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210383509.3A
Other languages
Chinese (zh)
Other versions
CN114882139A (en)
Inventor
付莹 (Fu Ying)
方政 (Fang Zheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202210383509.3A priority Critical patent/CN114882139B/en
Publication of CN114882139A publication Critical patent/CN114882139A/en
Application granted granted Critical
Publication of CN114882139B publication Critical patent/CN114882139B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/20 Drawing from basic elements, e.g. lines or circles
    • G06T11/206 Drawing of charts or graphs
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques with fixed number of clusters, e.g. K-means clustering
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V10/40 Extraction of image or video features
    • G06V10/70 Arrangements using pattern recognition or machine learning
    • G06V10/762 Arrangements using clustering, e.g. of similar faces in social networks
    • G06V10/763 Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06V10/764 Arrangements using classification, e.g. of video objects
    • G06V10/82 Arrangements using neural networks
    • G06V20/00 Scenes; scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; image merging
    • G06T2207/30 Subject of image; context of image processing
    • G06T2207/30181 Earth observation


Abstract

The invention relates to an end-to-end intelligent generation method and system for multi-level maps, and belongs to the technical field of computer vision. A multi-level map generation network, built from deep neural networks, is designed to generate a multi-level map from remote sensing images. A multi-level network map generation dataset is then established and used to train the generation network, yielding optimized generation network model parameters. Finally, the remote sensing image is sampled to obtain multi-level remote sensing images, and the collected images are input into the generation network for processing to generate a multi-level network map. Because the invention generates the multi-level network map directly from remote sensing images through the multi-level network map generation network, no manual participation is required, generation speed is high, and cost is low.

Description

End-to-end intelligent generation method and system for multi-level map
Technical Field
The invention relates to an end-to-end intelligent generation method and system for multi-level maps, and belongs to the technical field of computer vision.
Background
Map images at each level of a traditional network map are usually rendered from map vector data according to fixed drawing standards. However, acquiring map vector data generally requires manual field surveys, which imposes severe limitations on efficiency and cost. Given that remote sensing images are fast to acquire and cheap to collect, automatically generating network maps from remote sensing images is a feasible alternative. Existing methods, however, do not comprehensively address the difficulties of multi-level network map generation: the same geographic elements appear at multiple levels of a multi-level map, the elements are displayed with different degrees of detail at different levels, and the remote sensing pixel space differs substantially from the map pixel space. It is therefore difficult to generate a multi-level network map whose information is expressed accurately and consistently and which has a good visual effect.
A multi-level network map generally adopts a tile-map pyramid model. From the highest level (level K) of the tile pyramid down to the lowest level (level 0), the resolution decreases, while the geographic area represented stays the same. Specifically, the ground-distance-per-pixel ratio of the level-K tile map is half that of level K-1, so the level-K tile map has higher spatial resolution and can display finer content. Across levels, the geographic elements contained in the maps are consistent while the displayed degree of detail differs, so the levels of a multi-level network map exhibit both consistency and difference.
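The halving relation between adjacent pyramid levels can be sketched as follows; the default base value is the common Web-Mercator convention, used purely for illustration and not taken from the patent:

```python
def ground_resolution(level: int, level0_res: float = 156543.03) -> float:
    """Metres per pixel at a given tile-pyramid level, assuming the
    distance-per-pixel ratio halves at each successive level
    (level 0 = coarsest, covering the full extent)."""
    return level0_res / (2 ** level)

# The level-K tile map has half the distance-per-pixel ratio of level K-1:
assert ground_resolution(18) == ground_resolution(17) / 2
```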
A remote sensing image is obtained by photographing the ground from high-altitude platforms such as drones, aircraft, and satellites. Compared with map vector data, it updates quickly and is relatively cheap to collect. However, because remote sensing images are captured from the actual environment, they differ markedly from manually beautified maps, which makes generating a map from a remote sensing image a challenging task.
Disclosure of Invention
The invention aims to design an effective deep neural network that converts the remote sensing pixel space into the map pixel space, and to exploit the fact that the levels of a multi-level map share the same geographic extents, so that the generated multi-level map exhibits inter-level consistency and difference. To this end, an end-to-end intelligent generation method and system for multi-level maps are creatively provided. The invention generates a multi-level network map from remote sensing images without manual drawing, reducing labor cost and accelerating map generation.
An end-to-end intelligent generation method of a multi-level map comprises the following steps:
Step 1: and designing a multi-level map generation network for generating a multi-level map from the remote sensing image according to the deep neural network.
The multi-level map generation network is an end-to-end network and is used for generating multi-level map images from remote sensing images.
Specifically, the multi-level map generation network includes:
A hierarchy classifier, used to judge the level to which the preliminary map generated by the rendering generator belongs. The hierarchy classifier includes a batch normalization layer and a convolution layer.
A map element extractor, used to extract geographic element features from the remote sensing image. The map element extractor includes semantic segmentation networks based on full convolution, on Transformers, and on encoder-decoder structures.
A rendering generator, used to generate a preliminary map from the geographic element features, the remote sensing image, and the level identifier. The rendering generator includes generators based on conditional generative adversarial networks and generators based on cycle-consistency generative adversarial networks.
A multi-layer fusion generator, used to generate a refined map from the multi-layer preliminary maps. The multi-layer fusion generator includes a batch normalization layer, a resizing layer, generators based on conditional generative adversarial networks, and generators based on cycle-consistency generative adversarial networks.
A discriminator, used to judge whether the preliminary map and the refined map are real. The discriminators include discriminators based on conditional generative adversarial networks and discriminators based on cycle-consistency generative adversarial networks.
Step 2: a multi-level network map generation dataset is established.
Each item of data in the dataset comprises a remote sensing image, a map image, a geographic element label, and a level identifier.
The remote sensing image is sampled at the resolution of the highest-level map and then repeatedly downsampled by a factor of 2 according to the number of levels, yielding multi-level remote sensing images; that is, the resolution of the level-k remote sensing image is one half that of level k+1.
Step 3: training the generated network by using the data set to obtain optimized generated network model parameters.
Step 4: and (3) sampling the remote sensing image to obtain a multi-level remote sensing image, and inputting the collected image into a generating network for processing to generate a multi-level network map.
The sampling method comprises bilinear interpolation, bicubic interpolation and median interpolation.
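As a sketch of the sampling step, the following builds a multi-level image stack by repeated 2x downsampling; 2x2 average pooling stands in here for the interpolation kernels named above:

```python
import numpy as np

def build_pyramid(image: np.ndarray, num_levels: int) -> list[np.ndarray]:
    """Return [level K, level K-1, ...]: each entry halves the previous
    resolution via 2x2 average pooling (a simple stand-in for bilinear or
    bicubic resampling). `image` is (H, W, C) with H and W divisible by
    2**(num_levels - 1)."""
    levels = [image.astype(float)]
    for _ in range(num_levels - 1):
        h, w, c = levels[-1].shape
        pooled = levels[-1].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
        levels.append(pooled)
    return levels
```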
An end-to-end intelligent generation system of a multi-level map comprises a training module and a generation module.
The training module is used for establishing a multi-level network map generation data set and training the multi-level map generation network according to the multi-level network map generation data set.
And the generation module is used for inputting the multi-level remote sensing image into a pre-trained multi-level map generation network for processing to generate a multi-level network map.
Advantageous effects
Compared with the prior art, the invention has the following advantages:
By generating the multi-level network map from remote sensing images through the multi-level network map generation network, the invention produces multi-level network maps without manual participation, with high generation speed and low cost.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of an end-to-end intelligent generation network training of a multi-level map;
FIG. 3 is a schematic diagram of an end-to-end intelligent generation network test of a multi-level map;
fig. 4 is a schematic diagram of the system of the present invention.
Detailed Description
The invention and embodiments are described in further detail below with reference to the accompanying drawings.
Examples
An end-to-end intelligent generation method of a multi-level map, as shown in fig. 1, comprises the following steps:
Step 101: According to deep neural networks, design an end-to-end intelligent generation network for the multi-level map.
The deep neural network is an end-to-end network. Training the deep neural network according to the multi-level network map generation data set to obtain optimized multi-level network map generation model parameters.
Step 102: and establishing a multi-level network map generation data set which comprises remote sensing images, map images, geographic element labels and level identifications of different cities and different levels.
Each item in the multi-level network map generation dataset comprises a remote sensing image, a map image, a geographic element label, and a level identifier. Taking Beijing level-18 map data as an example, the remote sensing image is a photograph of the ground containing surface information, the map is a picture drawn manually by experts from vector data, the geographic element labels are obtained by K-means clustering of the map, and the level is recorded as the corresponding level number, 18.
Paired remote sensing images and maps are collected, K-means clustering is applied to the maps to obtain geographic element labels, and the level is recorded as an identifier, thereby forming the dataset.
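A minimal K-means over map pixel colours, illustrating how geographic-element labels could be derived from a rendered map; initialisation from the first k unique colours is a simplification of ours, since the patent does not specify one:

```python
import numpy as np

def kmeans_labels(pixels: np.ndarray, k: int, iters: int = 10) -> np.ndarray:
    """Assign each (R, G, B) map pixel in the (N, 3) array `pixels` to one of
    k colour clusters. Centres are initialised from the first k unique
    colours (assumes the map contains at least k distinct colours)."""
    centers = np.unique(pixels, axis=0)[:k].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = pixels[labels == j]
            if len(members):  # keep empty clusters at their old centre
                centers[j] = members.mean(axis=0)
    return labels
```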
Step 103: and training the multi-level network map generation network according to the multi-level network map generation data set.
In this embodiment, the multi-level map generation network includes: a hierarchical classifier, a map element extractor, a rendering generator, a multi-layer fusion generator, and a discriminant.
The hierarchy classifier is used to judge the level to which the preliminary map generated by the rendering generator belongs. The hierarchy classifier includes a batch normalization layer (BN) and a convolution layer (Conv).
The map element extractor is used to extract geographic element features from the remote sensing image. The map element extractor includes fully convolutional semantic segmentation networks (FCN, DeepLabv3+, etc.), Transformer-based semantic segmentation networks (Swin Transformer, SegFormer, Segmenter, etc.), and encoder-decoder semantic segmentation networks (U-Net, PSPNet, etc.).
The rendering generator is used to generate a preliminary map from the geographic element features, the remote sensing image, and the level identifier. The rendering generator includes generators based on conditional generative adversarial networks (Pix2Pix, Pix2PixHD, TSIT, SelectionGAN, etc.) and generators based on cycle-consistency generative adversarial networks (CycleGAN, SMAPGAN, etc.).
The multi-layer fusion generator is used to generate a refined map from the multi-layer preliminary maps. The multi-layer fusion generator includes a batch normalization layer (BN), a resizing layer (Resize), generators based on conditional generative adversarial networks (Pix2Pix, Pix2PixHD, TSIT, SelectionGAN, etc.), and generators based on cycle-consistency generative adversarial networks (CycleGAN, SMAPGAN, etc.).
The discriminator is used to judge whether the preliminary map and the refined map are real. The discriminators include discriminators based on conditional generative adversarial networks (Pix2Pix, Pix2PixHD, TSIT, SelectionGAN, etc.) and discriminators based on cycle-consistency generative adversarial networks (CycleGAN, SMAPGAN, etc.).
In this embodiment, the remote sensing image is sampled at the resolution of the highest-level map and repeatedly downsampled by a factor of 2 according to the number of levels, yielding multi-level remote sensing images, where each level's resolution is one half that of the level above. The sampling may be bilinear interpolation, bicubic interpolation, or median interpolation.
Based on the above steps, a multi-level network map is generated from the obtained multi-level remote sensing images.
Further, as shown in fig. 2, the training process will be described in detail.
In this embodiment, performing end-to-end training on a multi-level network map generation network includes the following steps:
Specifically, first, the level-K and level-(K-1) remote sensing images are each fed into the map element extractor to extract map elements:
M = F_θ(x_K)  (1)
wherein F_θ is the map element extractor, x_K is the level-K remote sensing image, and M denotes the map elements.
Then, the remote sensing image is concatenated with the map elements and the level identifier and fed into the rendering generator to generate a preliminary map:
y′_K = G′_φ′(x_K, F_θ(x_K), l_K)  (2)
wherein y′_K is the level-K preliminary map, x_K is the level-K remote sensing image, l_K represents the level information, F_θ(x_K) represents the map elements output by the map element extractor, G′ represents the rendering generator, and φ′ represents the rendering generator parameters.
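The splicing that feeds the rendering generator can be sketched as channel-wise concatenation, with the level identifier encoded as a one-hot plane; the encoding and channel layout are assumptions of this sketch, not fixed by the patent:

```python
import numpy as np

def render_input(x_k: np.ndarray, elements: np.ndarray, level: int,
                 num_levels: int = 19) -> np.ndarray:
    """Stack the level-K remote sensing image (H, W, 3), the extracted
    element features (H, W, C), and a one-hot level plane (H, W, num_levels)
    along the channel axis -- the concatenated tensor fed to the rendering
    generator G'."""
    h, w = x_k.shape[:2]
    level_plane = np.zeros((h, w, num_levels), dtype=x_k.dtype)
    level_plane[..., level] = 1.0
    return np.concatenate([x_k, elements, level_plane], axis=-1)
```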
Then, the obtained level-K and level-(K-1) preliminary maps are input into the multi-layer fusion generator to complete generation of the level-K refined map:
y″_K = G″_φ″(y′_K, y′_{K-1})  (3)
wherein y″_K is the level-K refined map, G″ represents the multi-layer fusion generator, φ″ represents the multi-layer fusion generator parameters, and y′_{K-1} is the level-(K-1) preliminary map.
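Before fusion, the level-(K-1) preliminary map must be brought to level K's resolution; a sketch in which nearest-neighbour 2x upsampling stands in for the resizing layer:

```python
import numpy as np

def fusion_input(prelim_k: np.ndarray, prelim_km1: np.ndarray) -> np.ndarray:
    """Upsample the level-(K-1) preliminary map (H/2, W/2, C) by 2x
    (nearest neighbour, standing in for the resizing layer) and concatenate
    it with the level-K preliminary map (H, W, C) as input to the fusion
    generator G''."""
    upsampled = prelim_km1.repeat(2, axis=0).repeat(2, axis=1)
    return np.concatenate([prelim_k, upsampled], axis=-1)
```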
Then, the level-K preliminary map is fed into the hierarchy classifier, which judges the level to which the generated preliminary map belongs, optimized with a cross-entropy loss:
L_cls = −Σ_{i=1}^{N} s_i log C_θ(y′_K)_i  (4)
wherein θ is the model parameter of the hierarchy classifier C, and N represents the number of levels; s_i is the ground truth for level i, where the position of the 1 in s marks the map's true level and 0 marks the others; C_θ(y′_K)_i is the predicted confidence that the level-K preliminary map belongs to level i.
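With a one-hot target s, the cross-entropy above reduces to the negative log-confidence of the true level; a numeric sketch:

```python
import numpy as np

def level_loss(confidences: np.ndarray, true_level: int) -> float:
    """Cross-entropy of the hierarchy classifier: `confidences` holds
    C_theta(y'_K)_i over the N levels (assumed softmax-normalised); only
    the true level's term survives the one-hot target s."""
    return float(-np.log(confidences[true_level]))
```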
Finally, the level-K preliminary map with the level-K real map, and the level-K refined map with the level-K real map, are each fed into the discriminator, which outputs whether the map is real, and the network parameters are updated:
min_G max_D V(D, G) = E_{y∼p_data(y)}[log D_ψ(y)] + E_{x∼p_data(x)}[log(1 − D_ψ(G(x)))]  (5)
wherein D is the discriminator, ψ is the discriminator parameter, and p_data(x) and p_data(y) represent the data distributions of the remote sensing images and the actual network maps, respectively. V(·) is the objective function and E is the expectation over the distribution.
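A numeric sketch of the adversarial objective just described: the discriminator maximises log D(y) + log(1 − D(G(x))). The non-saturating generator loss below is a common practical substitute for minimising log(1 − D(G(x))), not something the patent specifies:

```python
import numpy as np

def discriminator_loss(d_real: float, d_fake: float) -> float:
    """Negative of the discriminator's objective, so minimising it
    maximises log D(y) + log(1 - D(G(x)))."""
    return float(-(np.log(d_real) + np.log(1.0 - d_fake)))

def generator_loss(d_fake: float) -> float:
    """Non-saturating generator loss -log D(G(x))."""
    return float(-np.log(d_fake))
```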
Step 104: and (3) sampling the remote sensing image to obtain a multi-level remote sensing image, and inputting the collected image into a generating network for processing to generate a multi-level network map.
As shown in fig. 3, the obtained remote sensing image is first sampled to obtain a multi-layer remote sensing image. And respectively sending the K-layer remote sensing image and the K-1 layer remote sensing image into a map element extractor to extract map elements. And splicing the remote sensing images with the map elements and the level marks, and sending the spliced remote sensing images and the map elements and the level marks into a rendering generator to generate a preliminary map. And finally, inputting the obtained K-layer preliminary map and the K-1-layer preliminary map into a multi-layer fusion generator to finish the generation of the K-layer refined map.
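The test-time flow just described can be summarised with placeholder callables for the three trained components; the handling of the lowest level, which has no K-1 neighbour, is an assumption of this sketch:

```python
def generate_multilevel_map(images: dict, extractor, renderer, fuser) -> dict:
    """Inference sketch. `images` maps level number -> remote sensing image.
    Each level's preliminary map comes from its own image, extracted
    elements, and level id; the refined map at level k then fuses the
    preliminary maps of levels k and k-1 (the lowest level reuses its own
    preliminary map)."""
    prelim = {k: renderer(x, extractor(x), k) for k, x in images.items()}
    return {k: fuser(prelim[k], prelim.get(k - 1, prelim[k]))
            for k in sorted(prelim)}
```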
In order to implement the above embodiment, the present embodiment further proposes an end-to-end intelligent generation system of a multi-level map, as shown in fig. 4, including a training module 10 and a generation module 20.
The training module 10 is configured to establish a multi-level network map generation data set, train the multi-level map generation network according to the multi-level network map generation data set, and the multi-level network map generation data set includes a remote sensing image, a map image, a geographic element tag and a hierarchical identifier.
The generating module 20 is configured to input the multi-level remote sensing image into a pre-trained multi-level map generating network for processing, and generate a multi-level network map.
The output of training module 10 is connected to the input of generating module 20.

Claims (2)

1. The end-to-end intelligent generation method of the multi-level map is characterized by comprising the following steps of:
Step 1: according to the deep neural network, designing a remote sensing image to generate a multi-level map generation network;
the multi-level map generation network is an end-to-end network, and is used for generating a multi-level map image from a remote sensing image, and comprises the following steps:
the hierarchy classifier is used for judging the level to which the preliminary map generated by the rendering generator belongs; the hierarchy classifier comprises a batch normalization layer and a convolution layer;
the map element extractor is used for extracting geographic element features from the remote sensing image; the map element extractor comprises a semantic segmentation network based on full convolution, a semantic segmentation network based on a Transformer, and a semantic segmentation network based on an encoder-decoder structure;
the rendering generator is used for generating a preliminary map according to the geographic element features, the remote sensing image and the level identifier; the rendering generator comprises a generator based on a conditional generative adversarial network and a generator based on a cycle-consistency generative adversarial network;
the multi-layer fusion generator is used for generating a refined map according to the multi-layer preliminary maps; the multi-layer fusion generator comprises a batch normalization layer, a resizing layer, a generator based on a conditional generative adversarial network, and a generator based on a cycle-consistency generative adversarial network;
the discriminator is used for judging whether the preliminary map and the refined map are real; the discriminators comprise a discriminator based on a conditional generative adversarial network and a discriminator based on a cycle-consistency generative adversarial network;
step 2: establishing a multi-level network map generation data set;
each type of data in the data set comprises a remote sensing image, a map image, a geographic element tag and a hierarchical identifier;
Step 3: training the generated network by using the data set to obtain optimized generated network model parameters;
firstly, the level-K and level-(K-1) remote sensing images are each fed into the map element extractor to extract map elements:
M = F_θ(x_K)  (1)
wherein F_θ is the map element extractor, x_K is the level-K remote sensing image, and M denotes the map elements;
then, the remote sensing image is concatenated with the map elements and the level identifier and fed into the rendering generator to generate a preliminary map:
y′_K = G′_φ′(x_K, F_θ(x_K), l_K)  (2)
wherein y′_K is the level-K preliminary map, x_K is the level-K remote sensing image, l_K represents the level information, F_θ(x_K) represents the map elements output by the map element extractor, G′ represents the rendering generator, and φ′ represents the rendering generator parameters;
then, the obtained level-K and level-(K-1) preliminary maps are input into the multi-layer fusion generator to complete generation of the level-K refined map:
y″_K = G″_φ″(y′_K, y′_{K-1})  (3)
wherein y″_K is the level-K refined map, G″ represents the multi-layer fusion generator, φ″ represents the multi-layer fusion generator parameters, and y′_{K-1} is the level-(K-1) preliminary map;
then, the level-K preliminary map is fed into the hierarchy classifier, which judges the level to which the generated preliminary map belongs, optimized with a cross-entropy loss:
L_cls = −Σ_{i=1}^{N} s_i log C_θ(y′_K)_i  (4)
wherein θ is the model parameter of the hierarchy classifier C, and N represents the number of levels; s_i is the ground truth for level i, where the position of the 1 in s marks the map's true level and 0 marks the others; C_θ(y′_K)_i is the predicted confidence that the level-K preliminary map belongs to level i;
finally, the level-K preliminary map with the level-K real map, and the level-K refined map with the level-K real map, are each fed into the discriminator, which outputs whether the map is real, and the network parameters are updated:
min_G max_D V(D, G) = E_{y∼p_data(y)}[log D_ψ(y)] + E_{x∼p_data(x)}[log(1 − D_ψ(G(x)))]  (5)
wherein D is the discriminator, ψ is the discriminator parameter, and p_data(x) and p_data(y) respectively represent the data distributions of the remote sensing images and the actual network maps; V(·) is the objective function, and E is the expectation over the distribution;
step 4: sampling the remote sensing image to obtain a multi-level remote sensing image, inputting the collected image into a generating network for processing, and generating a multi-level network map;
the sampling method comprises bilinear interpolation, bicubic interpolation and median interpolation.
2. The method of claim 1, wherein in step 2, the remote sensing image is sampled at the resolution of the highest-level map and downsampled by a factor of 2 according to the number of levels, so as to obtain multi-level remote sensing images, wherein the resolution of each level's remote sensing image is one half that of the next higher level.
CN202210383509.3A 2022-04-12 2022-04-12 End-to-end intelligent generation method and system for multi-level map Active CN114882139B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210383509.3A CN114882139B (en) 2022-04-12 2022-04-12 End-to-end intelligent generation method and system for multi-level map


Publications (2)

Publication Number Publication Date
CN114882139A CN114882139A (en) 2022-08-09
CN114882139B true CN114882139B (en) 2024-06-07

Family

ID=82670143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210383509.3A Active CN114882139B (en) 2022-04-12 2022-04-12 End-to-end intelligent generation method and system for multi-level map

Country Status (1)

Country Link
CN (1) CN114882139B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117422787B (en) * 2023-12-18 2024-03-08 中国人民解放军国防科技大学 Remote sensing image map conversion method integrating discriminant and generative model

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019153245A1 (en) * 2018-02-09 2019-08-15 Baidu.Com Times Technology (Beijing) Co., Ltd. Systems and methods for deep localization and segmentation with 3d semantic map
CN113052121A (en) * 2021-04-08 2021-06-29 北京理工大学 Multi-level network map intelligent generation method based on remote sensing image
WO2021184891A1 (en) * 2020-03-20 2021-09-23 中国科学院深圳先进技术研究院 Remotely-sensed image-based terrain classification method, and system
CN113449594A (en) * 2021-05-25 2021-09-28 湖南省国土资源规划院 Multilayer network combined remote sensing image ground semantic segmentation and area calculation method


Also Published As

Publication number Publication date
CN114882139A (en) 2022-08-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant