CN109598771B - Terrain synthesis method of multi-landform feature constraint - Google Patents


Publication number
CN109598771B
CN109598771B (application CN201811430776.1A)
Authority
CN
China
Prior art keywords
layer
network
image
convolution
terrain
Prior art date
Legal status
Active
Application number
CN201811430776.1A
Other languages
Chinese (zh)
Other versions
CN109598771A (en)
Inventor
Quan Hongyan (全红艳)
Zhou Shuangshuang (周双双)
Current Assignee
East China Normal University
Original Assignee
East China Normal University
Priority date
Filing date
Publication date
Application filed by East China Normal University filed Critical East China Normal University
Priority to CN201811430776.1A priority Critical patent/CN109598771B/en
Publication of CN109598771A publication Critical patent/CN109598771A/en
Application granted granted Critical
Publication of CN109598771B publication Critical patent/CN109598771B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/20: Drawing from basic elements, e.g. lines or circles
    • G06T11/206: Drawing of charts or graphs
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks


Abstract

The invention discloses a terrain synthesis method with multi-landform feature constraints. Using terrain data blocks from a Digital Elevation Model (DEM) and a deep learning strategy that combines a conditional variational autoencoder with a generative adversarial network, realistic terrain constrained by multiple landform features can be customized from a sketch input by the user.

Description

Terrain synthesis method of multi-landform feature constraint
Technical Field
The invention relates to the technical field of virtual simulation, and in particular to a terrain synthesis method with multi-landform feature constraints. The method uses terrain data blocks from a digital elevation model (Digital Elevation Model, DEM for short), adopts a conditional variational autoencoder to automatically encode terrain images, and uses a generative adversarial network to learn terrain features. During terrain synthesis, according to the user's hand-drawn sketch and combined with the input elevation data, a pre-trained deep learning network automatically synthesizes customized terrain with the multiple landform features corresponding to the sketch.
Background
Terrain synthesis technology has wide value in practical simulation applications and can improve the user experience in natural disaster prevention and video game creation. Current terrain synthesis techniques can be broadly divided into the following categories: procedural modeling methods, modeling methods based on physical erosion, modeling methods based on user sketches, and synthesis strategies based on deep learning. In recent years, with the development of artificial intelligence, terrain synthesis methods based on convolutional neural networks have appeared, which predict depth information from user-input mountain contour lines and improve the plausibility of the synthesized terrain. However, synthesizing terrain with deep learning still faces two main problems: the network structure is complex, and training of the network parameters is difficult to converge. Both problems persist in current research on intelligent terrain synthesis.
Disclosure of Invention
Aiming at the defects of the prior art and at practical problems in terrain synthesis, the invention provides a terrain synthesis method with multiple landform features. The method is simple and effective, and can synthesize terrain with multiple landform features from the user's hand-drawn sketch.
The specific technical scheme for realizing the aim of the invention is as follows: a terrain synthesis method with multi-landform feature constraints, characterized by comprising the following specific steps:
step 1: constructing a dataset
(1) Preparing elevation data blocks
The elevation data block J in the WGS84 coordinate system is downloaded from the SRTM website (http://srtm.csi.cgiar.org) and stored in TIFF format; its spatial resolution is between 90 m × 90 m and 200 m × 200 m. The height of any point A of J is denoted H_A. A gray image G is built from the height information of J: the highest point maps to white, the lowest point to black, and heights in between are interpolated to gray levels. The resolution of G is N_t × N_t, where N_t is 256, 512, or 1024; the number of pixels in G is K = N_t × N_t.
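The height-to-gray mapping described above can be sketched as follows. This is an illustrative NumPy version, not part of the patent text; the function name and the linear 0-255 quantization are assumptions:

```python
import numpy as np

def dem_to_gray(height_block):
    # Map heights linearly onto [0, 255]: the lowest point becomes black (0),
    # the highest point white (255), and intermediate heights interpolate to gray.
    h = np.asarray(height_block, dtype=np.float64)
    lo, hi = h.min(), h.max()
    if hi == lo:
        # Flat block: no contrast to encode, return all black.
        return np.zeros_like(h, dtype=np.uint8)
    return np.round(255.0 * (h - lo) / (hi - lo)).astype(np.uint8)
```

For example, a block with heights 0, 50, and 100 maps to gray values 0, 128, and 255 respectively.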
(2) Constructing a multi-landform feature skeleton image B containing river networks, ridge lines, and geometric feature points. The three types of landform features of G are computed with the D8 algorithm over a 3 × 3 window. Specifically, ridge lines are computed from the gray values of G with the D8 algorithm; then each pixel gray value of G is subtracted from the maximum pixel gray value in G to obtain an inverted image, on which the D8 algorithm yields the river network markings. Geometric feature points are marked as follows: the absolute height differences between the height of A and its 8 neighborhood points are accumulated, and A is marked as a geometric feature point when the sum exceeds a threshold θ (400 ≤ θ ≤ 1000). Different landform features are marked with different colors: blue marks the river network, red marks ridge lines, green points mark geometric feature points, and the remaining background of B is black. The resolution of B is N_t × N_t, where N_t is 256, 512, or 1024;
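The geometric feature point rule (sum of absolute 8-neighborhood height differences above θ) can be sketched as below. A minimal NumPy version with assumed names; border pixels are left unmarked here, which is an implementation assumption:

```python
import numpy as np

def geometric_feature_points(heights, theta=600):
    # Mark a pixel as a geometric feature point when the accumulated absolute
    # height differences to its 8 neighbours exceed theta (400 <= theta <= 1000).
    g = np.asarray(heights, dtype=np.float64)
    mask = np.zeros(g.shape, dtype=bool)
    for i in range(1, g.shape[0] - 1):
        for j in range(1, g.shape[1] - 1):
            window = g[i - 1:i + 2, j - 1:j + 2]
            # The centre contributes |g[i,j] - g[i,j]| = 0, so summing the whole
            # window equals summing the 8 neighbours.
            if np.abs(window - g[i, j]).sum() > theta:
                mask[i, j] = True
    return mask
```

A sharp 1000 m spike surrounded by flat ground accumulates a difference of 8000 and is marked; flat regions are not.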
(3) B and G form a data pair, and a data set S of Q data pairs is established (1000 ≤ Q ≤ 1500);
step 2: Network topology design
The network topology consists of a conditional variational autoencoder and a generator network, designed as follows:
(1) The conditional variational autoencoder is implemented with U-net style encoding. The input is an image B of shape N_t × N_t × 3, and the output is a high-dimensional feature Z of shape 1 × 1 × 512. A convolutional neural network with L_t layers is adopted (8 ≤ L_t ≤ 10): if N_t is 256, L_t is 8; if N_t is 512, L_t is 9; if N_t is 1024, L_t is 10. The encoder has L_t layer substructures, each a single convolution layer: a 4 × 4 convolution kernel with stride 2, zero padding outside the input image boundary, batch normalization on the output of each convolution layer, and a Leaky ReLU activation function. The number of convolution kernels in layer 1 is T_k: when N_t is 256, T_k is 64; when N_t is 512, T_k is 32; when N_t is 1024, T_k is 16. The number of convolution kernels doubles in each subsequent layer up to layer L_t;
(2) The generator network has L_t layers; each layer substructure consists of a transposed convolution layer and a connection layer. The transposed convolution kernels are 4 × 4 with stride 2; each transposed convolution layer is batch normalized, and the activation function is ReLU. The number of convolution kernels in layer 1 is T_k, halved in each subsequent layer up to layer L_t. After layer L_t, an additional transposed convolution layer restores the input Z to N_t × N_t × 3. The last layer of the generator is followed by a Tanh activation layer, so the network output is a floating-point number between -1 and 1;
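The layer-count arithmetic above follows from the stride-2 convolutions: L_t halvings reduce an N_t × N_t input to a 1 × 1 map, so L_t = log2(N_t), and the stated first-layer kernel counts follow the table in the text. A small helper (names are illustrative) makes this checkable:

```python
import math

def encoder_plan(n_t):
    # Number of stride-2 conv layers needed to shrink n_t x n_t to 1 x 1:
    # L_t = log2(n_t), giving 8/9/10 for 256/512/1024 as in the text.
    l_t = int(math.log2(n_t))
    # First-layer kernel count T_k as tabulated in the text.
    t_k = {256: 64, 512: 32, 1024: 16}[n_t]
    # Spatial size after each of the L_t layers (input size included).
    sizes = [n_t >> i for i in range(l_t + 1)]
    return l_t, t_k, sizes
```

For N_t = 256 this yields 8 layers shrinking 256 → 128 → ... → 1, matching the 1 × 1 × 512 output shape of Z.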
step 3: training of neural networks
During training of the neural network, a supervised method is adopted with data set S for 100 rounds; the discriminator loss term is computed with the NS GAN method. The network loss function l consists of five terms: a discrimination loss term c, a divergence loss term d, a consistency loss e of the generated terrain gray image, an adversarial loss term n, and a skeleton structure loss term g. It is defined as l = c + λ_1 d + λ_2 e + λ_3 n + λ_4 g, where λ_i (i = 1, 2, 3, 4) is the weight of the corresponding loss, with 1.0 ≤ λ_1 ≤ 5.0, 1.0 ≤ λ_2 ≤ 5.0, 0.0001 ≤ λ_3 ≤ 0.01, and 0.1 ≤ λ_4 ≤ 2.0;
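The weighted sum defining l is a plain linear combination; as a sketch (the defaults below are the embodiment's weight values, λ_1 = λ_2 = 2.0, λ_3 = 0.001, λ_4 = 1.0, and the function name is an assumption):

```python
def total_loss(c, d, e, n, g, lam1=2.0, lam2=2.0, lam3=0.001, lam4=1.0):
    # l = c + lam1*d + lam2*e + lam3*n + lam4*g
    return c + lam1 * d + lam2 * e + lam3 * n + lam4 * g
```

With all five terms equal to 1, the embodiment's weights give l = 6.001.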
The loss terms c, d, e, n, and g are defined as follows:
(1) c is defined as c = E_{x∈f(X)}[log(D(x))] + E_{G(z)∈Y}[log(1 - D(G(z)))], where f(X) is the set of multi-landform feature terrain gray images output by the network; x is the terrain supervision gray image of a sample, and D(x) is the probability that sample x is true; z is the high-dimensional vector output by the conditional variational encoder, Y is the data set of sample terrain supervision gray images, G(z) is the terrain gray image generated from z, D(G(z)) is the probability that the generated terrain gray image is false, and E denotes the expectation;
(2) d is the divergence loss of the variational encoder. With μ and ε the mean and covariance of Z, and e the base of the natural logarithm, it takes the standard Kullback-Leibler form d = (1/2) Σ (μ² + ε² - ln(ε²) - 1);
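The divergence term of a conditional variational autoencoder is conventionally the closed-form KL divergence between the encoded Gaussian and a standard normal. The sketch below assumes that standard form (consistent with μ and ε being the mean and covariance of Z); the function name and the log-variance parameterization are assumptions:

```python
import numpy as np

def kl_divergence(mu, log_var):
    # KL( N(mu, exp(log_var)) || N(0, I) ), summed over the latent dimensions.
    mu = np.asarray(mu, dtype=np.float64)
    log_var = np.asarray(log_var, dtype=np.float64)
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```

The term vanishes exactly when the encoder outputs a standard normal (μ = 0, variance 1), which is what drives Z toward a well-behaved latent space.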
(3) e is the consistency loss of the generated terrain gray image: e = ||f_D(x) - f_D(x̂)||_1, where x is the sample terrain supervision gray image, x̂ is the generated terrain gray image, and f_D is the feature map input to the fully connected convolution layer of the discriminator;
(4) The adversarial loss n uses the non-saturating generator form: n = -E[log(D(x̂))], where x̂ is the terrain gray image generated by the network;
(5) The skeleton structure loss g is defined as g = ||B - B̂||_1, where B is the input skeleton image and B̂ is the skeleton extracted from the terrain image generated by the network;
the super parameters used by the neural network include: dropout rate takes 0.5, momentum beta using Adam optimizer 1 =0.5, the number of samples per batch is 1, and the learning rates of the generator network and the arbiter network are all 0.002;
step 4: terrain synthesis
A sketch image V is drawn with drawing software as the hand-drawn sketch and input into the trained convolutional neural network N, which predicts with its trained parameters and outputs the corresponding terrain gray image; the customized terrain synthesis result is then obtained from the maximum and minimum heights of the input terrain.
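Recovering heights from the output gray image via the input terrain's height extremes can be sketched as an assumed linear inverse of the gray mapping (names illustrative, not from the patent):

```python
import numpy as np

def gray_to_terrain(gray, h_min, h_max):
    # Rescale the network's gray output (0..255) back onto the elevation
    # range [h_min, h_max] of the input terrain.
    g = np.asarray(gray, dtype=np.float64)
    return h_min + (h_max - h_min) * g / 255.0
```

Gray value 0 maps back to h_min and 255 to h_max, so the synthesized terrain inherits the elevation range of the input data.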
The invention is simple and practical, and can synthesize user-customized terrain from the user's hand-drawn sketch combined with the input elevation data.
Drawings
FIG. 1 is a graph of the results of the synthetic terrain of the present invention;
fig. 2 is a three-dimensional view of the synthetic terrain of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings and examples.
Examples
This embodiment is implemented on a PC under the Windows 10 64-bit operating system. The hardware configuration is an Intel Core i5-7500 3.4 GHz CPU with 8 GB of memory. The software environment is Matlab 2015b, with Python as the programming language, combined with the open-source library OpenCV 2.4.4 and the open-source geospatial data conversion library GDAL.
The embodiment specifically comprises the following steps:
step 1: constructing a dataset
(1) Preparing elevation data blocks
The elevation data block J in the WGS84 coordinate system is downloaded from the SRTM website (http://srtm.csi.cgiar.org) and stored in TIFF format; its spatial resolution is between 90 m × 90 m and 200 m × 200 m. The height of any point A of J is denoted H_A. A gray image G is built from the height information of J: the highest point maps to white, the lowest point to black, and heights in between are interpolated to gray levels. The resolution of G is N_t × N_t, with N_t = 256; the number of pixels in G is K = N_t × N_t.
(2) Constructing a multi-landform feature skeleton image B containing river networks, ridge lines, and geometric feature points. The three types of landform features of G are computed with the D8 algorithm over a 3 × 3 window. Specifically, ridge lines are computed from the gray values of G with the D8 algorithm; then each pixel gray value of G is subtracted from the maximum pixel gray value in G to obtain an inverted image, on which the D8 algorithm yields the river network markings. Geometric feature points are marked as follows: the absolute height differences between the height of A and its 8 neighborhood points are accumulated, and A is marked as a geometric feature point when the sum exceeds the threshold θ (taken as 600). Different landform features are marked with different colors: blue marks the river network, red marks ridge lines, green marks geometric feature points, and the remaining background of B is black. The resolution of B is N_t × N_t;
(3) B and G form a data pair, and a data set S of 1250 data pairs is established.
Step 2: network topology design
The network topology N consists of a conditional variational autoencoder and a generator network, designed as follows: (1) The conditional variational autoencoder is implemented with U-net style encoding. The input is an image B of shape N_t × N_t × 3, and the output is a high-dimensional feature Z of shape 1 × 1 × 512. A convolutional neural network with L_t layers is adopted, L_t = 8. The encoder has L_t layer substructures, each a single convolution layer: a 4 × 4 convolution kernel with stride 2, zero padding outside the input image boundary, batch normalization on the output of each convolution layer, and a Leaky ReLU activation function. The number of convolution kernels in layer 1 is T_k = 64, doubled in each subsequent layer up to layer L_t. (2) The generator network has L_t layers; each layer substructure consists of a transposed convolution layer and a connection layer. The transposed convolution kernels are 4 × 4 with stride 2; each transposed convolution layer is batch normalized, and the activation function is ReLU. The number of convolution kernels in layer 1 is T_k, halved in each subsequent layer up to layer L_t. After layer L_t, an additional transposed convolution layer restores the input Z to N_t × N_t × 3. The last layer of the generator is followed by a Tanh activation layer, so the network output is a floating-point number between -1 and 1.
Step 3: training of neural networks
During training of network N, a supervised method is adopted with data set S for 100 rounds; the discriminator loss term is computed with the NS GAN method. The network loss function l consists of five terms: a discrimination loss term c, a divergence loss term d, a consistency loss e of the generated terrain gray image, an adversarial loss term n, and a skeleton structure loss term g. It is defined as l = c + λ_1 d + λ_2 e + λ_3 n + λ_4 g, where λ_i (i = 1, 2, 3, 4) is the weight of the corresponding loss, with λ_1 = 2.0, λ_2 = 2.0, λ_3 = 0.001, and λ_4 = 1.0;
The loss terms c, d, e, n, and g are defined as follows:
(1) c is defined as c = E_{x∈f(X)}[log(D(x))] + E_{G(z)∈Y}[log(1 - D(G(z)))], where f(X) is the set of multi-landform feature terrain gray images output by the network; x is the terrain supervision gray image of a sample, and D(x) is the probability that sample x is true; z is the high-dimensional vector output by the conditional variational encoder, Y is the data set of sample terrain supervision gray images, G(z) is the terrain gray image generated from z, D(G(z)) is the probability that the generated terrain gray image is false, and E denotes the expectation;
(2) d is the divergence loss of the variational encoder. With μ and ε the mean and covariance of Z, and e the base of the natural logarithm, it takes the standard Kullback-Leibler form d = (1/2) Σ (μ² + ε² - ln(ε²) - 1);
(3) e is the consistency loss of the generated terrain gray image: e = ||f_D(x) - f_D(x̂)||_1, where x is the sample terrain supervision gray image, x̂ is the generated terrain gray image, and f_D is the feature map input to the fully connected convolution layer of the discriminator;
(4) The adversarial loss n uses the non-saturating generator form: n = -E[log(D(x̂))], where x̂ is the terrain gray image generated by the network;
(5) The skeleton structure loss g is defined as g = ||B - B̂||_1, where B is the input skeleton image and B̂ is the skeleton extracted from the terrain image generated by the network;
the super parameters used by the neural network include: dropout rate takes 0.5, momentum β1=0.5 is used by Adam optimizer, sample number of each batch takes 1, and learning rate of generator network and discriminator network takes 0.002;
step 4: terrain synthesis
A sketch image V is drawn with drawing software as the hand-drawn sketch and input into the trained convolutional neural network N, which predicts with its trained parameters and outputs the corresponding terrain gray image; the customized terrain synthesis result is then obtained from the maximum and minimum heights of the input terrain.
Fig. 1 shows a user sketch and the terrain synthesized from DEM terrain samples; from left to right, the first column is the multi-landform feature sketch, and the second column is the multi-landform feature terrain synthesized by the method of the invention.
Fig. 2 is a three-dimensional view of the synthesized terrain; from left to right, the first column is the user's multi-landform feature sketch, the second column is the multi-landform terrain result synthesized by the invention, and the third column is a three-dimensional view of the synthesized multi-landform feature terrain. The figure shows that the method of the invention can synthesize terrain with ridge and river features with three-dimensional realism.

Claims (1)

1. A terrain synthesis method with multi-landform feature constraints, characterized in that a network architecture combining a conditional variational autoencoder and a generative adversarial network realizes realistic terrain synthesis with multiple landform elements, specifically comprising the following steps:
step 1: constructing a dataset
(1) Preparing elevation data blocks
The elevation data block J in the WGS84 coordinate system is downloaded from the SRTM website (http://srtm.csi.cgiar.org) and stored in TIFF format; its spatial resolution is between 90 m × 90 m and 200 m × 200 m. The height of any point A of J is denoted H_A. A gray image G is built from the height information of J: the highest point maps to white, the lowest point to black, and heights in between are interpolated to gray levels. The resolution of G is N_t × N_t, where N_t is 256, 512, or 1024; the number of pixels in G is K = N_t × N_t.
(2) Constructing a multi-landform feature skeleton image B containing river networks, ridge lines, and geometric feature points. The three types of landform features of G are computed with the D8 algorithm over a 3 × 3 window. Specifically, ridge lines are computed from the gray values of G with the D8 algorithm; then each pixel gray value of G is subtracted from the maximum pixel gray value in G to obtain an inverted image, on which the D8 algorithm yields the river network markings. Geometric feature points are marked as follows: the absolute height differences between the height of A and its 8 neighborhood points are accumulated, and A is marked as a geometric feature point when the sum exceeds a threshold θ, with 400 ≤ θ ≤ 1000. Different landform features are marked with different colors: blue marks the river network, red marks ridge lines, green marks geometric feature points, and the remaining background of B is black. The resolution of B is N_t × N_t, where N_t is 256, 512, or 1024;
(3) B and G form a data pair, and a data set S of Q data pairs is established, where 1000 ≤ Q ≤ 1500;
step 2: network topology design
The network topology consists of a conditional variational autoencoder and a generator network, designed as follows:
(1) The conditional variational autoencoder is implemented with U-net style encoding. The input is an image B of shape N_t × N_t × 3, and the output is a high-dimensional feature Z of shape 1 × 1 × 512. A convolutional neural network with L_t layers is adopted, where 8 ≤ L_t ≤ 10: if N_t is 256, L_t is 8; if N_t is 512, L_t is 9; if N_t is 1024, L_t is 10. The encoder has L_t layer substructures, each a single convolution layer: a 4 × 4 convolution kernel with stride 2, zero padding outside the input image boundary, batch normalization on the output of each convolution layer, and a Leaky ReLU activation function. The number of convolution kernels in layer 1 is T_k: when N_t is 256, T_k is 64; when N_t is 512, T_k is 32; when N_t is 1024, T_k is 16. The number of convolution kernels doubles in each subsequent layer up to layer L_t;
(2) The generator network has L_t layers; each layer substructure consists of a transposed convolution layer and a connection layer. The transposed convolution kernels are 4 × 4 with stride 2; each transposed convolution layer is batch normalized, and the activation function is ReLU. The number of convolution kernels in layer 1 is T_k, halved in each subsequent layer up to layer L_t. After layer L_t, an additional transposed convolution layer restores the input Z to N_t × N_t × 3. The last layer of the generator is followed by a Tanh activation layer, so the network output is a floating-point number between -1 and 1;
step 3: training of neural networks
During training of the neural network, a supervised method is adopted with data set S for 100 rounds; the discriminator loss term is computed with the NS GAN method. The network loss function l consists of five terms: a discrimination loss term c, a divergence loss term d, a consistency loss e of the generated terrain gray image, an adversarial loss term n, and a skeleton structure loss term g. It is defined as l = c + λ_1 d + λ_2 e + λ_3 n + λ_4 g, where λ_i (i = 1, 2, 3, 4) is the weight of the corresponding loss, with 1.0 ≤ λ_1 ≤ 5.0, 1.0 ≤ λ_2 ≤ 5.0, 0.0001 ≤ λ_3 ≤ 0.01, and 0.1 ≤ λ_4 ≤ 2.0;
The loss terms c, d, e, n, and g are defined as follows:
(1) c is defined as c = E_{x∈f(X)}[log(D(x))] + E_{G(z)∈Y}[log(1 - D(G(z)))], where f(X) is the set of multi-landform feature terrain gray images output by the network; x is the terrain supervision gray image of a sample, and D(x) is the probability that sample x is true; z is the high-dimensional vector output by the conditional variational encoder, Y is the data set of sample terrain supervision gray images, G(z) is the terrain gray image generated from z, D(G(z)) is the probability that the generated terrain gray image is false, and E denotes the expectation;
(2) d is the divergence loss of the variational encoder. With μ and ε the mean and covariance of Z, and e the base of the natural logarithm, it takes the standard Kullback-Leibler form d = (1/2) Σ (μ² + ε² - ln(ε²) - 1);
(3) e is the consistency loss of the generated terrain gray image: e = ||f_D(x) - f_D(x̂)||_1, where x is the sample terrain supervision gray image, x̂ is the generated terrain gray image, and f_D is the feature map input to the fully connected convolution layer of the discriminator;
(4) The adversarial loss n uses the non-saturating generator form: n = -E[log(D(x̂))], where x̂ is the terrain gray image generated by the network;
(5) The skeleton structure loss g is defined as g = ||B - B̂||_1, where B is the input skeleton image and B̂ is the skeleton extracted from the terrain image generated by the network;
the super parameters used by the neural network include: dropout rate takes 0.5, momentum beta using Adam optimizer 1 =0.5, the number of samples per batch is 1, and the learning rates of the generator network and the arbiter network are all 0.002;
step 4: terrain synthesis
A sketch image V is drawn with drawing software as the hand-drawn sketch and input into the trained convolutional neural network N, which predicts with its trained parameters and outputs the corresponding terrain gray image; the customized terrain synthesis result is then obtained from the maximum and minimum heights of the input terrain.
CN201811430776.1A 2018-11-28 2018-11-28 Terrain synthesis method of multi-landform feature constraint Active CN109598771B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811430776.1A CN109598771B (en) 2018-11-28 2018-11-28 Terrain synthesis method of multi-landform feature constraint

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811430776.1A CN109598771B (en) 2018-11-28 2018-11-28 Terrain synthesis method of multi-landform feature constraint

Publications (2)

Publication Number Publication Date
CN109598771A CN109598771A (en) 2019-04-09
CN109598771B true CN109598771B (en) 2023-04-25

Family

ID=65959674

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811430776.1A Active CN109598771B (en) 2018-11-28 2018-11-28 Terrain synthesis method of multi-landform feature constraint

Country Status (1)

Country Link
CN (1) CN109598771B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110472746B (en) * 2019-08-16 2021-04-13 北京智能工场科技有限公司 Artificial intelligence-based coding prediction method and system
CN110930472A (en) * 2019-11-14 2020-03-27 三星电子(中国)研发中心 Picture generation method and device
CN111210517B (en) * 2020-01-09 2021-11-19 浙江大学 Multi-grid terrain generation method based on neural network

Citations (2)

Publication number Priority date Publication date Assignee Title
WO2018028255A1 (en) * 2016-08-11 2018-02-15 深圳市未来媒体技术研究院 Image saliency detection method based on adversarial network
CN108830912A (en) * 2018-05-04 2018-11-16 北京航空航天大学 A kind of interactive grayscale image color method of depth characteristic confrontation type study

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
WO2018028255A1 (en) * 2016-08-11 2018-02-15 深圳市未来媒体技术研究院 Image saliency detection method based on adversarial network
CN108830912A (en) * 2018-05-04 2018-11-16 北京航空航天大学 A kind of interactive grayscale image color method of depth characteristic confrontation type study

Non-Patent Citations (1)

Title
Wu Zengwei; Quan Hongyan. Research on efficient modeling and real-time rendering of large-scale terrain in a virtual battlefield environment. 16th China Conference on System Simulation Technology and Its Applications, 2015 (full text). *

Also Published As

Publication number Publication date
CN109598771A (en) 2019-04-09

Similar Documents

Publication Publication Date Title
CN108510532B (en) Optical and SAR image registration method based on deep convolution GAN
CN110503680B (en) Unsupervised convolutional neural network-based monocular scene depth estimation method
CN108648197B (en) Target candidate region extraction method based on image background mask
JP6962263B2 (en) 3D point cloud label learning device, 3D point cloud label estimation device, 3D point cloud label learning method, 3D point cloud label estimation method, and program
CN110390638B (en) High-resolution three-dimensional voxel model reconstruction method
CN109598771B (en) Terrain synthesis method of multi-landform feature constraint
CN111798369B (en) Face aging image synthesis method for generating confrontation network based on circulation condition
CN111161364B (en) Real-time shape completion and attitude estimation method for single-view depth map
CN110706303B (en) Face image generation method based on GANs
CN107507126A (en) A kind of method that 3D scenes are reduced using RGB image
CN113487739A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN110660020A (en) Image super-resolution method of countermeasure generation network based on fusion mutual information
CN115471423A (en) Point cloud denoising method based on generation countermeasure network and self-attention mechanism
CN109658508B (en) Multi-scale detail fusion terrain synthesis method
CN111462274A (en) Human body image synthesis method and system based on SMP L model
CN112560865A (en) Semantic segmentation method for point cloud under outdoor large scene
Durán-Rosal et al. Detection and prediction of segments containing extreme significant wave heights
CN112257496A (en) Deep learning-based power transmission channel surrounding environment classification method and system
CN116844041A (en) Cultivated land extraction method based on bidirectional convolution time self-attention mechanism
CN114663880A (en) Three-dimensional target detection method based on multi-level cross-modal self-attention mechanism
CN114638408A (en) Pedestrian trajectory prediction method based on spatiotemporal information
CN113935899A (en) Ship plate image super-resolution method based on semantic information and gradient supervision
CN109064430B (en) Cloud removing method and system for aerial region cloud-containing image
CN116704596A (en) Human behavior recognition method based on skeleton sequence
CN117093830A (en) User load data restoration method considering local and global

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant