CN114882139A - End-to-end intelligent generation method and system for multi-level map - Google Patents

End-to-end intelligent generation method and system for multi-level map

Info

Publication number
CN114882139A
Authority
CN
China
Prior art keywords
map
network
level
layer
remote sensing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210383509.3A
Other languages
Chinese (zh)
Inventor
付莹
方政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202210383509.3A priority Critical patent/CN114882139A/en
Publication of CN114882139A publication Critical patent/CN114882139A/en
Pending legal-status Critical Current

Classifications

    • G06T 11/206: Drawing of charts or graphs (G06T 11/00 2D [Two Dimensional] image generation; G06T 11/20 drawing from basic elements, e.g. lines or circles)
    • G06F 18/23213: Clustering with a fixed number of clusters, e.g. K-means clustering (G06F 18/00 pattern recognition; G06F 18/23 clustering techniques, non-hierarchical, using statistics or function optimisation)
    • G06N 3/045: Combinations of networks (G06N 3/02 neural networks; G06N 3/04 architecture, e.g. interconnection topology)
    • G06N 3/08: Learning methods (G06N 3/02 neural networks)
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06V 10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/40: Extraction of image or video features
    • G06V 10/763: Recognition using clustering, non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06V 10/764: Recognition using classification, e.g. of video objects
    • G06V 10/82: Recognition using neural networks
    • G06V 20/10: Terrestrial scenes (G06V 20/00 scenes; scene-specific elements)
    • G06T 2207/20221: Image fusion; image merging (indexing scheme G06T 2207/20, image combination)
    • G06T 2207/30181: Earth observation (indexing scheme G06T 2207/30, subject or context of image processing)

Abstract

The invention relates to an end-to-end intelligent generation method and system for multi-level maps, belonging to the technical field of computer vision. A multi-level map generation network that produces a multi-level map from remote sensing images is designed on the basis of a deep neural network; a multi-level network-map generation dataset is then established, and the generation network is trained on this dataset to obtain optimized model parameters. Finally, the remote sensing image is sampled to obtain a multi-level remote sensing image stack, which is fed into the trained generation network to produce a multi-level network map. Because the multi-level network map is generated directly from remote sensing images through the generation network, no manual participation is required, the generation speed is high, and the cost is low.

Description

End-to-end intelligent generation method and system for multi-level map
Technical Field
The invention relates to an end-to-end intelligent generation method and system for multi-level maps and belongs to the technical field of computer vision.
Background
Map images at each level of a traditional network map are usually rendered from map vector data according to established cartographic standards. However, acquiring map vector data usually requires manual field surveys, which severely limits efficiency and raises cost. Because remote sensing images can be acquired quickly and collected at low cost, automatically generating network maps from remote sensing images has become a feasible alternative. Existing methods, however, do not fully address the difficulties of generating multi-level network maps: the same geographic elements appear at different levels of a multi-level map, their displayed detail differs slightly between levels, and the remote sensing pixel space differs substantially from the map pixel space. As a result, it is difficult to generate a multi-level network map whose information is expressed accurately and consistently and whose visual quality is good.
A multi-level network map usually adopts the tile-map pyramid model. From the highest level (layer K) down to the lowest level (layer 0) of the tile pyramid, the resolution decreases while the geographic area represented stays the same. Specifically, the distance-per-pixel value of the layer-K tile map is half that of layer K-1, so layer K has higher spatial resolution and can display finer content. In a multi-level network map, the geographic elements contained in maps at different levels are consistent, but the degree of displayed detail differs; the map therefore exhibits both inter-level consistency and inter-level difference (the level-to-resolution relationship is sketched below).
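To make the level-to-resolution relationship concrete, the short sketch below computes metres per pixel for a tile pyramid. It assumes the common Web Mercator scheme with 256-pixel tiles and an equatorial circumference of about 40,075,017 m, none of which is specified in the patent; it only illustrates that the distance-per-pixel value halves with each additional level.

```python
import math

def ground_resolution(level: int, latitude_deg: float = 0.0, tile_size: int = 256) -> float:
    """Metres per pixel at a given pyramid level (Web Mercator assumption).

    Level 0 covers the whole equator with one tile; each further level
    doubles the tiles per axis, so metres per pixel halves level by level.
    """
    earth_circumference_m = 40_075_016.686
    return (earth_circumference_m * math.cos(math.radians(latitude_deg))
            / (tile_size * 2 ** level))

for z in (16, 17, 18):
    print(f"level {z}: {ground_resolution(z, latitude_deg=39.9):.3f} m/px")
# Consecutive levels differ by exactly a factor of 2, matching the text:
# the layer-K distance-per-pixel value is half that of layer K-1.
```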
A remote sensing image is an image captured by high-altitude platforms such as unmanned aerial vehicles, aircraft, and satellites. Compared with map vector data, it is updated quickly and collected at relatively low cost. However, because remote sensing images are captured from the real environment, they differ significantly from manually beautified maps, which makes map generation from remote sensing images a challenging task.
Disclosure of Invention
The invention aims to design an effective deep neural network that converts the remote sensing pixel space into the map pixel space and, by exploiting the fact that different levels of a multi-level map cover the same geographic extent, to ensure that the generated multi-level map is consistent and appropriately differentiated across levels. To this end, an end-to-end intelligent generation method and system for multi-level maps is creatively provided. The invention generates a multi-level network map from remote sensing images without manual drawing, which reduces labor cost and accelerates map generation.
An end-to-end intelligent generation method of a multi-level map comprises the following steps:
Step 1: design, based on a deep neural network, a multi-level map generation network that generates a multi-level map from remote sensing images.
The multi-level map generation network is an end-to-end network used for generating multi-level map images from remote sensing images.
Specifically, the multi-level map generation network includes:
A hierarchy classifier, which judges the level to which the preliminary map generated by the rendering generator belongs. The hierarchy classifier comprises batch normalization layers and convolution layers (a minimal sketch of this classifier follows the list).
A map element extractor, which extracts geographic element features from the remote sensing image. The map element extractor may be a fully convolutional semantic segmentation network, a Transformer-based semantic segmentation network, or an encoder-decoder semantic segmentation network.
A rendering generator, which produces a preliminary map from the geographic element features, the remote sensing image, and the level identifier. The rendering generator may be the generator of a conditional generative adversarial network or of a cycle-consistency generative adversarial network.
A multi-layer fusion generator, which produces a refined map from the preliminary maps of multiple layers. The multi-layer fusion generator comprises a batch normalization layer, a resizing layer, and the generator of a conditional or cycle-consistency generative adversarial network.
A discriminator, which judges whether the preliminary map and the refined map are real. The discriminator may be that of a conditional or cycle-consistency generative adversarial network.
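As an illustration of the most concretely specified component, the following minimal PyTorch sketch builds a hierarchy classifier from convolution and batch normalization layers. The depth, channel width, pooling choice, and the names (HierarchyClassifier, num_levels, and so on) are assumptions for illustration; the patent only states which layer types the classifier contains and what it predicts.

```python
import torch
import torch.nn as nn

class HierarchyClassifier(nn.Module):
    """Predicts which pyramid level a (preliminary) map image belongs to."""

    def __init__(self, num_levels: int, in_channels: int = 3, width: int = 64):
        super().__init__()
        blocks = []
        channels = in_channels
        for _ in range(4):  # four strided Conv + BN + ReLU stages (illustrative)
            blocks += [
                nn.Conv2d(channels, width, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm2d(width),
                nn.ReLU(inplace=True),
            ]
            channels = width
        self.features = nn.Sequential(*blocks)
        self.head = nn.Linear(width, num_levels)  # one logit per level

    def forward(self, map_image: torch.Tensor) -> torch.Tensor:
        feats = self.features(map_image)
        feats = feats.mean(dim=(2, 3))  # global average pooling
        return self.head(feats)         # level logits

# Example: score a batch of 256x256 preliminary maps against 19 levels (0..18).
logits = HierarchyClassifier(num_levels=19)(torch.rand(4, 3, 256, 256))
print(logits.shape)  # torch.Size([4, 19])
```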
Step 2: and establishing a multi-level network map generation data set.
Each data sample in the dataset comprises a remote sensing image, a map image, a geographic element label, and a level identifier.
The remote sensing image is sampled at the resolution of the highest-level map and repeatedly downsampled by a factor of 2 according to the number of levels to obtain a multi-layer remote sensing image, where the resolution of the layer-k image is half that of layer k+1 (see the sketch below).
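A sketch of this sampling step, assuming PyTorch's bilinear (or bicubic) interpolation is chosen; a median-based downsampler would need a separate implementation, and the function name build_pyramid is only illustrative.

```python
import torch
import torch.nn.functional as F

def build_pyramid(top_image: torch.Tensor, num_levels: int, mode: str = "bilinear") -> list:
    """Start from the remote sensing image sampled at the highest-level
    resolution and downsample by 2x per level, so layer k has half the
    resolution of layer k+1."""
    pyramid = [top_image]              # layer K (finest)
    current = top_image
    for _ in range(num_levels - 1):
        current = F.interpolate(current, scale_factor=0.5, mode=mode,
                                align_corners=False)
        pyramid.append(current)        # layers K-1, K-2, ...
    return pyramid

sizes = [t.shape[-1] for t in build_pyramid(torch.rand(1, 3, 1024, 1024), 3)]
print(sizes)  # [1024, 512, 256]
```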
Step 3: train the generation network with the dataset to obtain optimized generation-network model parameters.
Step 4: sample the remote sensing image to obtain a multi-level remote sensing image, input the sampled images into the generation network for processing, and generate the multi-level network map.
The sampling method may be bilinear interpolation, bicubic interpolation, or median interpolation.
An end-to-end intelligent generation system of a multi-level map comprises a training module and a generation module.
The training module is used for establishing a multi-level network-map generation dataset and training the multi-level map generation network on that dataset.
The generation module is used for inputting the multi-level remote sensing image into the pre-trained multi-level map generation network for processing to generate the multi-level network map.
Advantageous effects
Compared with the prior art, the invention has the following advantages:
The invention generates a multi-level network map from remote sensing images through the multi-level network-map generation network; the map can therefore be generated without manual participation, at high speed and at low cost.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a schematic diagram of training the end-to-end multi-level map generation network;
FIG. 3 is a schematic diagram of testing the end-to-end multi-level map generation network;
FIG. 4 is a schematic diagram of the system of the present invention.
Detailed Description
The present invention and embodiments are described in further detail below with reference to the accompanying drawings.
Examples
An end-to-end intelligent generation method of a multi-level map, as shown in fig. 1, includes the following steps:
step 101: according to the deep neural network, an end-to-end intelligent generation network of a multi-level map is designed.
The deep neural network is an end-to-end network; it is trained on the multi-level network-map generation dataset to obtain optimized multi-level network-map generation model parameters.
Step 102: and establishing a multi-level network map generation data set which comprises remote sensing images, map images, geographic element labels and level identifications of different cities and different levels.
Each data sample in the multi-level network-map generation dataset comprises a remote sensing image, a map image, a geographic element label, and a level identifier. Taking the level-18 map data of Beijing as an example: the remote sensing image is a photograph of the ground containing ground information, the map is an image drawn manually by experts from vector data, the geographic element labels are obtained by K-means clustering of the map, and the level identifier is the corresponding level number, 18.
The dataset is formed by collecting paired remote sensing images and maps, applying K-means clustering to each map to obtain its geographic element labels, and recording the level to which each pair belongs as the identifier (a clustering sketch follows).
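A sketch of how the geographic element labels might be produced, assuming K-means is run over the map's pixel colours with scikit-learn; the patent does not fix the feature space, the number of clusters, or any of the names used here.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_element_labels(map_rgb: np.ndarray, num_clusters: int = 6) -> np.ndarray:
    """Cluster a rendered map image into per-pixel geographic-element labels.

    map_rgb: H x W x 3 array of the expert-drawn map tile.
    Returns an H x W integer label mask (cluster ids, not semantic names).
    """
    h, w, _ = map_rgb.shape
    pixels = map_rgb.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=num_clusters, n_init=10, random_state=0).fit_predict(pixels)
    return labels.reshape(h, w)

# Each dataset sample then stores (remote sensing image, map image,
# element-label mask, level identifier), e.g. level 18 for the Beijing example.
```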
Step 103: train the multi-level network-map generation network on the multi-level network-map generation dataset.
In this embodiment, the multi-level map generation network includes: the system comprises a hierarchy classifier, a map element extractor, a rendering generator, a multi-layer fusion generator and a discriminator.
The hierarchy classifier judges the level to which the preliminary map generated by the rendering generator belongs. It comprises batch normalization (BN) layers and convolution (Conv) layers.
The map element extractor extracts geographic element features from the remote sensing image. It may be a fully convolutional semantic segmentation network (FCN, DeepLabv3+, etc.), a Transformer-based semantic segmentation network (Swin Transformer, SegFormer, Segmenter, etc.), or an encoder-decoder semantic segmentation network (UNet, PSPNet, etc.).
The rendering generator produces a preliminary map from the geographic element features, the remote sensing image, and the level identifier. It may be the generator of a conditional generative adversarial network (Pix2Pix, Pix2PixHD, TSIT, SelectionGAN, etc.) or of a cycle-consistency generative adversarial network (CycleGAN, SMAPGAN, etc.).
The multi-layer fusion generator produces a refined map from the multi-layer preliminary maps. It comprises a batch normalization (BN) layer, a resize layer, and the generator of a conditional generative adversarial network (Pix2Pix, Pix2PixHD, TSIT, SelectionGAN, etc.) or of a cycle-consistency generative adversarial network (CycleGAN, SMAPGAN, etc.).
The discriminator judges whether the preliminary map and the refined map are real. It may be the discriminator of a conditional generative adversarial network (Pix2Pix, Pix2PixHD, TSIT, SelectionGAN, etc.) or of a cycle-consistency generative adversarial network (CycleGAN, SMAPGAN, etc.). A sketch of how these components are composed follows.
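The way these components fit together can be sketched as follows. The extractor, generator, and fusion modules are passed in as interchangeable networks (for example a UNet extractor and Pix2Pix-style generators), and the level identifier is encoded as one extra constant-valued input channel; that encoding, the class name MultiLevelMapNet, and the argument names are assumptions, since the patent only states that the image, the map elements, and the level identifier are concatenated.

```python
import torch
import torch.nn as nn

class MultiLevelMapNet(nn.Module):
    """Wires the map element extractor, rendering generator and multi-layer
    fusion generator together for one pair of adjacent levels, following
    equations (1)-(3) given below."""

    def __init__(self, extractor: nn.Module, renderer: nn.Module, fuser: nn.Module):
        super().__init__()
        self.extractor = extractor  # F_theta: remote sensing image -> map elements
        self.renderer = renderer    # G': (image, elements, level) -> preliminary map
        self.fuser = fuser          # G'': (prelim K, prelim K-1) -> refined map K

    @staticmethod
    def _level_channel(level_id: torch.Tensor, like: torch.Tensor) -> torch.Tensor:
        # Broadcast the scalar level identifier to one constant channel (assumption).
        return level_id.view(-1, 1, 1, 1).expand(-1, 1, *like.shape[-2:]).to(like.dtype)

    def forward(self, x_k, x_km1, level_k, level_km1):
        m_k = self.extractor(x_k)                              # Eq. (1), layer K
        m_km1 = self.extractor(x_km1)                          # Eq. (1), layer K-1
        prelim_k = self.renderer(torch.cat(                    # Eq. (2), layer K
            [x_k, m_k, self._level_channel(level_k, x_k)], dim=1))
        prelim_km1 = self.renderer(torch.cat(                  # Eq. (2), layer K-1
            [x_km1, m_km1, self._level_channel(level_km1, x_km1)], dim=1))
        refined_k = self.fuser(prelim_k, prelim_km1)           # Eq. (3)
        return prelim_k, prelim_km1, refined_k
```

The fusion module is expected to bridge the resolution gap between the two preliminary maps internally, which is what the resize layer of the multi-layer fusion generator is for.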
In this embodiment, the remote sensing image is sampled at the resolution of the highest-level map and repeatedly downsampled by a factor of 2 according to the number of levels, yielding a multi-layer remote sensing image in which each layer has half the resolution of the layer above. The sampling mode may be bilinear interpolation, bicubic interpolation, or median interpolation.
Based on the above steps, the obtained multi-layer remote sensing images are used to generate the multi-level network map.
The training process is explained in detail below with reference to fig. 2.
In this embodiment, the end-to-end training of the multi-level network-map generation network proceeds as follows.
First, the layer-K and layer-(K-1) remote sensing images are separately fed into the map element extractor to extract map elements:
M = F_θ(x_K)    (1)
where F_θ is the map element extractor, x_K is the layer-K remote sensing image, and M denotes the extracted map elements.
Then, the remote sensing image, the map elements, and the level identifier are concatenated and fed into the rendering generator to produce a preliminary map:
y'_K = G'_{φ'}(x_K, l_K, F_θ(x_K))    (2)
where y'_K is the layer-K preliminary map, x_K is the layer-K remote sensing image, l_K denotes the level information of layer K, F_θ(x_K) is the map elements output by the map element extractor, G' denotes the rendering generator, and φ' denotes the rendering generator parameters.
The resulting layer-K and layer-(K-1) preliminary maps are then input into the multi-layer fusion generator to produce the layer-K refined map:
y''_K = G''_{φ''}(y'_K, y'_{K-1})    (3)
where y''_K is the layer-K refined map, G'' denotes the multi-layer fusion generator, φ'' denotes the multi-layer fusion generator parameters, and y'_{K-1} is the layer-(K-1) preliminary map.
Next, the layer-K preliminary map is fed into the hierarchy classifier, which judges the level to which the generated layer-K preliminary map belongs and is optimized with a cross-entropy loss:
L_cls = -Σ_{i=1}^{N} s_i log C_θ(y'_K)_i    (4)
where θ is the model parameter of the hierarchy classifier C and N is the number of levels; s_i is the ground-truth indicator for level i, taking the value 1 if the map belongs to that level and 0 otherwise; and C_θ(y'_K)_i is the predicted confidence that the layer-K preliminary map belongs to level i.
Finally, the layer-K preliminary map together with the layer-K actual map, and the layer-K refined map together with the layer-K actual map, are respectively fed into the discriminator, which outputs whether each map is real, and the network parameters are updated with the adversarial objectives:
min_{G'} max_{D} V(D, G') = E_{y~p_data(y)}[log D_ψ(y_K)] + E_{x~p_data(x)}[log(1 - D_ψ(y'_K))]    (5)
min_{G''} max_{D} V(D, G'') = E_{y~p_data(y)}[log D_ψ(y_K)] + E_{x~p_data(x)}[log(1 - D_ψ(y''_K))]    (6)
where D is the discriminator, ψ denotes the discriminator parameters, p_data(x) and p_data(y) denote the data distributions of the remote sensing images and the actual network maps respectively, V(·) is the objective function, and E is the expected value over the corresponding distribution. A compact sketch of this training step follows.
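The training described by equations (1) to (6) can be condensed into a single generator-side update, sketched below. Discriminator updates, optimizers, and loss weights are omitted; the non-saturating GAN form, the use of torch's built-in losses, and all function and argument names are assumptions layered on top of the MultiLevelMapNet sketch above.

```python
import torch
import torch.nn.functional as F

def generator_step(net, classifier, d_prelim, d_refined,
                   x_k, x_km1, level_k, level_km1):
    """One generator-side training step for a pair of adjacent levels.

    level_k / level_km1: integer level indices in [0, N), reused both as
    the extra input channel and as the class targets for Eq. (4).
    """
    prelim_k, prelim_km1, refined_k = net(x_k, x_km1,
                                          level_k.float(), level_km1.float())

    # Eq. (4): cross-entropy between the classifier's prediction for the
    # layer-K preliminary map and its true level.
    loss_cls = F.cross_entropy(classifier(prelim_k), level_k.long())

    # Eqs. (5)-(6): the generators try to make the discriminators rate the
    # preliminary and refined maps as real.
    d_p = d_prelim(prelim_k)
    d_r = d_refined(refined_k)
    loss_adv = (F.binary_cross_entropy_with_logits(d_p, torch.ones_like(d_p))
                + F.binary_cross_entropy_with_logits(d_r, torch.ones_like(d_r)))

    return loss_cls + loss_adv
```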
Step 104: sampling the remote sensing image to obtain a multi-level remote sensing image, inputting the collected image into a generation network for processing, and generating a multi-level network map.
As shown in fig. 3, the acquired remote sensing image is sampled to obtain a multi-layer remote sensing image. The layer-K and layer-(K-1) remote sensing images are separately fed into the map element extractor to extract map elements. The remote sensing image, the map elements, and the level identifier are concatenated and fed into the rendering generator to produce a preliminary map. Finally, the resulting layer-K and layer-(K-1) preliminary maps are input into the multi-layer fusion generator to produce the layer-K refined map (see the inference sketch below).
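A minimal inference sketch, assuming the sampled pyramid list is ordered from the finest layer K downward and that each image is paired with its level identifier tensor; the function name and the interface of net follow the earlier sketches rather than the patent itself.

```python
import torch

@torch.no_grad()
def generate_multilevel_map(net, pyramid_images, level_ids):
    """Produce a refined map for every layer that has a coarser neighbour.

    pyramid_images[i] is the layer-(K-i) remote sensing image;
    level_ids[i] is the matching level identifier tensor.
    """
    refined = []
    for i in range(len(pyramid_images) - 1):
        _, _, refined_k = net(pyramid_images[i], pyramid_images[i + 1],
                              level_ids[i], level_ids[i + 1])
        refined.append(refined_k)
    return refined
```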
To implement the foregoing method, this embodiment further provides an end-to-end intelligent generation system for a multi-level map, as shown in fig. 4, comprising a training module 10 and a generation module 20.
The training module 10 is configured to establish the multi-level network-map generation dataset and to train the multi-level network-map generation network on it; the dataset contains remote sensing images, map images, geographic element labels, and level identifiers.
The generation module 20 is configured to input the multi-level remote sensing image into the pre-trained multi-level map generation network for processing to generate the multi-level network map.
An output of the training module 10 is connected to an input of the generation module 20.

Claims (5)

1. An end-to-end intelligent generation method of a multi-level map, characterized by comprising the following steps:
step 1: designing, based on a deep neural network, a multi-level map generation network for generating a multi-level map from remote sensing images;
the multi-level map generation network being an end-to-end network used for generating multi-level map images from remote sensing images;
step 2: establishing a multi-level network-map generation dataset;
each data sample in the dataset comprising a remote sensing image, a map image, a geographic element label, and a level identifier;
step 3: training the generation network with the dataset to obtain optimized generation-network model parameters;
step 4: sampling the remote sensing image to obtain a multi-level remote sensing image, inputting the sampled images into the generation network for processing, and generating a multi-level network map;
the sampling method comprising bilinear interpolation, bicubic interpolation, or median interpolation.
2. The method according to claim 1, wherein in step 1 the multi-level map generation network comprises:
a hierarchy classifier for judging the level to which the preliminary map generated by the rendering generator belongs, the hierarchy classifier comprising a batch normalization layer and a convolution layer;
a map element extractor for extracting geographic element features from the remote sensing image, the map element extractor comprising a fully convolutional semantic segmentation network, a Transformer-based semantic segmentation network, or an encoder-decoder semantic segmentation network;
a rendering generator for generating a preliminary map from the geographic element features, the remote sensing image, and the level identifier, the rendering generator comprising the generator of a conditional generative adversarial network or of a cycle-consistency generative adversarial network;
a multi-layer fusion generator for generating a refined map from the multi-layer preliminary maps, the multi-layer fusion generator comprising a batch normalization layer, a resizing layer, and the generator of a conditional or cycle-consistency generative adversarial network;
a discriminator for judging whether the preliminary map and the refined map are real, the discriminator comprising the discriminator of a conditional generative adversarial network or of a cycle-consistency generative adversarial network.
3. The method according to claim 1, wherein in step 2 the remote sensing image is sampled at the resolution of the highest-level map and repeatedly downsampled by a factor of 2 according to the number of levels to obtain a multi-level remote sensing image, the resolution of each layer being half that of the layer above.
4. The end-to-end intelligent generation method of a multi-level map according to claim 1, wherein the training of step 3 is as follows:
first, the layer-K and layer-(K-1) remote sensing images are separately fed into the map element extractor to extract map elements:
M = F_θ(x_K)    (1)
where F_θ is the map element extractor, x_K is the layer-K remote sensing image, and M denotes the map elements;
then, the remote sensing image, the map elements, and the level identifier are concatenated and fed into the rendering generator to produce a preliminary map:
y'_K = G'_{φ'}(x_K, l_K, F_θ(x_K))    (2)
where y'_K is the layer-K preliminary map, x_K is the layer-K remote sensing image, l_K denotes the level information of layer K, F_θ(x_K) is the map elements output by the map element extractor, G' denotes the rendering generator, and φ' denotes the rendering generator parameters;
the resulting layer-K and layer-(K-1) preliminary maps are then input into the multi-layer fusion generator to produce the layer-K refined map:
y''_K = G''_{φ''}(y'_K, y'_{K-1})    (3)
where y''_K is the layer-K refined map, G'' denotes the multi-layer fusion generator, φ'' denotes the multi-layer fusion generator parameters, and y'_{K-1} is the layer-(K-1) preliminary map;
next, the layer-K preliminary map is fed into the hierarchy classifier, which judges the level to which the generated layer-K preliminary map belongs and is optimized with a cross-entropy loss:
L_cls = -Σ_{i=1}^{N} s_i log C_θ(y'_K)_i    (4)
where θ is the model parameter of the hierarchy classifier C, N is the number of levels, s_i is the ground-truth indicator for level i (1 if the map belongs to that level, 0 otherwise), and C_θ(y'_K)_i is the predicted confidence that the layer-K preliminary map belongs to level i;
finally, the layer-K preliminary map and the layer-K actual map, and the layer-K refined map and the layer-K actual map, are respectively fed into the discriminator, which outputs whether each map is real, and the network parameters are updated with the adversarial objectives:
min_{G'} max_{D} V(D, G') = E_{y~p_data(y)}[log D_ψ(y_K)] + E_{x~p_data(x)}[log(1 - D_ψ(y'_K))]    (5)
min_{G''} max_{D} V(D, G'') = E_{y~p_data(y)}[log D_ψ(y_K)] + E_{x~p_data(x)}[log(1 - D_ψ(y''_K))]    (6)
where D is the discriminator, ψ denotes the discriminator parameters, p_data(x) and p_data(y) denote the data distributions of the remote sensing images and the actual network maps respectively, V(·) is the objective function, and E is the expected value over the corresponding distribution.
5. An end-to-end intelligent generation system of a multi-level map, characterized by comprising a training module and a generation module;
the training module is used for establishing a multi-level network map generation data set and training a multi-level map generation network according to the multi-level network map generation data set;
the generating module is used for inputting the multi-level remote sensing image into a pre-trained multi-level map generating network for processing to generate a multi-level network map;
the output end of the training module is connected with the input end of the generating module.
CN202210383509.3A (priority date 2022-04-12, filing date 2022-04-12): End-to-end intelligent generation method and system for multi-level map. Status: Pending. Publication: CN114882139A.

Priority Applications (1)

Application CN202210383509.3A, priority date 2022-04-12, filing date 2022-04-12: End-to-end intelligent generation method and system for multi-level map

Applications Claiming Priority (1)

Application CN202210383509.3A, priority date 2022-04-12, filing date 2022-04-12: End-to-end intelligent generation method and system for multi-level map

Publications (1)

Publication CN114882139A, published 2022-08-09

Family

ID=82670143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210383509.3A Pending CN114882139A (en) 2022-04-12 2022-04-12 End-to-end intelligent generation method and system for multi-level map

Country Status (1)

Country Link
CN (1) CN114882139A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019153245A1 (en) * 2018-02-09 2019-08-15 Baidu.Com Times Technology (Beijing) Co., Ltd. Systems and methods for deep localization and segmentation with 3d semantic map
WO2021184891A1 (en) * 2020-03-20 2021-09-23 中国科学院深圳先进技术研究院 Remotely-sensed image-based terrain classification method, and system
CN113052121A (en) * 2021-04-08 2021-06-29 北京理工大学 Multi-level network map intelligent generation method based on remote sensing image
CN113449594A (en) * 2021-05-25 2021-09-28 湖南省国土资源规划院 Multilayer network combined remote sensing image ground semantic segmentation and area calculation method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117422787A (en) * 2023-12-18 2024-01-19 中国人民解放军国防科技大学 Remote sensing image map conversion method integrating discriminant and generative model
CN117422787B (en) * 2023-12-18 2024-03-08 中国人民解放军国防科技大学 Remote sensing image map conversion method integrating discriminant and generative model

Similar Documents

Publication Publication Date Title
Zhang et al. Remote sensing image spatiotemporal fusion using a generative adversarial network
Song et al. Spatiotemporal satellite image fusion using deep convolutional neural networks
CN108537742B (en) Remote sensing image panchromatic sharpening method based on generation countermeasure network
CN109934282B (en) SAGAN sample expansion and auxiliary information-based SAR target classification method
CN111862126B (en) Non-cooperative target relative pose estimation method combining deep learning and geometric algorithm
CN108537191B (en) Three-dimensional face recognition method based on structured light camera
Li et al. Simplified unsupervised image translation for semantic segmentation adaptation
CN110929607B (en) Remote sensing identification method and system for urban building construction progress
CN111144418B (en) Railway track area segmentation and extraction method
CN111461006B (en) Optical remote sensing image tower position detection method based on deep migration learning
CN114820655B (en) Weak supervision building segmentation method taking reliable area as attention mechanism supervision
CN103679740B (en) ROI (Region of Interest) extraction method of ground target of unmanned aerial vehicle
Li et al. An aerial image segmentation approach based on enhanced multi-scale convolutional neural network
CN110633640A (en) Method for identifying complex scene by optimizing PointNet
CN110751271B (en) Image traceability feature characterization method based on deep neural network
CN114882139A (en) End-to-end intelligent generation method and system for multi-level map
CN111640116A (en) Aerial photography graph building segmentation method and device based on deep convolutional residual error network
CN113052121B (en) Multi-level network map intelligent generation method based on remote sensing image
CN104616035B (en) Visual Map fast matching methods based on image overall feature and SURF algorithm
CN110826478A (en) Aerial photography illegal building identification method based on countermeasure network
Li et al. I-gans for infrared image generation
CN111950476A (en) Deep learning-based automatic river channel ship identification method in complex environment
CN113034598B (en) Unmanned aerial vehicle power line inspection method based on deep learning
CN116109682A (en) Image registration method based on image diffusion characteristics
Sun et al. The recognition framework of deep kernel learning for enclosed remote sensing objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination