CN109598771A - Terrain synthesis method with multiple geomorphic feature constraints - Google Patents

Terrain synthesis method with multiple geomorphic feature constraints

Info

Publication number
CN109598771A
CN109598771A (application CN201811430776.1A)
Authority
CN
China
Prior art keywords
network
landform
layer
image
loss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811430776.1A
Other languages
Chinese (zh)
Other versions
CN109598771B (en)
Inventor
全红艳
周双双
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
East China Normal University
Original Assignee
East China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by East China Normal University filed Critical East China Normal University
Priority to CN201811430776.1A priority Critical patent/CN109598771B/en
Publication of CN109598771A publication Critical patent/CN109598771A/en
Application granted granted Critical
Publication of CN109598771B publication Critical patent/CN109598771B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/20 Drawing from basic elements, e.g. lines or circles
    • G06T 11/206 Drawing of charts or graphs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a terrain synthesis method with multiple geomorphic feature constraints. The method uses terrain data blocks from a digital elevation model (DEM) together with a deep-learning strategy that combines a conditional variational autoencoder with a generative adversarial network, so that realistic, customized terrain constrained by multiple geomorphic features can be generated from a sketch drawn by the user.

Description

Terrain synthesis method with multiple geomorphic feature constraints
Technical field
The present invention relates to the field of virtual simulation technology, and in particular to a terrain synthesis method with multiple geomorphic feature constraints. The method uses terrain data blocks from a digital elevation model (Digital Elevation Model, DEM), applies a conditional variational autoencoder to auto-encode terrain images, and uses a generative adversarial network to learn terrain features. During terrain synthesis, according to a user-drawn sketch and in combination with input elevation data, a pre-trained deep-learning network automatically synthesizes customized terrain with multiple geomorphic features corresponding to the user sketch.
Background technique
Terrain synthesis has wide practical value in simulation applications: whether in natural-disaster prevention or in film and game creation, realistic terrain improves the user experience. Current terrain synthesis techniques can roughly be divided into the following classes: process-based modeling methods, modeling methods based on physical erosion, modeling methods based on user sketches, and synthesis strategies based on deep learning. In recent years, with the development of artificial intelligence, terrain synthesis methods based on convolutional neural networks have appeared; they predict depth information from user-input mountain contour lines and improve the plausibility of the synthesized terrain. However, when deep learning is used to synthesize terrain, the main remaining problems are that the networks are structurally complex and that their parameter training is difficult to converge; these are the open issues in current intelligent terrain synthesis research.
Summary of the invention
The purpose of the present invention is, in view of the deficiencies of the prior art and the practical problems in terrain synthesis, to propose a terrain synthesis method with multiple geomorphic feature constraints. According to a user-drawn sketch and in combination with input elevation data, the method uses a pre-trained deep-learning network to synthesize customized terrain corresponding to the user sketch. The method is simple and effective, and can synthesize terrain with multiple geomorphic features from a user-drawn sketch.
The specific technical solution for realizing the object of the invention is a terrain synthesis method with multiple geomorphic feature constraints, characterized in that the method comprises the following specific steps:
Step 1: Build the data set
(1) Prepare elevation data blocks
Download an elevation data block J in the WGS84 coordinate system from the SRTM website http://srtm.csi.cgiar.org and store it in TIFF format; the spatial resolution is between 90 m × 90 m and 200 m × 200 m. The height of an arbitrary point A of J is denoted HA. A grayscale image G is built from the elevation information of J: the highest point corresponds to white, the lowest point corresponds to black, and values between the highest and lowest points are interpolated as gray levels. The resolution of G is Nt × Nt, where Nt is 256, 512 or 1024; the number of pixels in G is K = Nt × Nt;
(2) Construct the multi-geomorphic-feature skeleton image B, which includes the drainage network, ridge lines and geometric feature points. The three kinds of geomorphic features of G are computed with the D8 algorithm using a 3 × 3 window. Specifically, the ridge lines are computed from the gray values of G with the D8 algorithm; then each pixel gray value of G is subtracted from the maximum gray value in G to obtain the negated image, and the D8 algorithm is applied again to obtain the labels of the drainage network. The geometric feature points are labeled as follows: for each point A, the absolute height differences between A and its 8 neighboring points are accumulated, and A is labeled as a geometric feature point if the sum is greater than a threshold θ (400 ≤ θ ≤ 1000). The different geomorphic features are marked in B with different colors: blue marks the drainage network, red marks the ridge lines, green points mark the geometric feature points, and the remaining background is shown in black; the resolution of B is Nt × Nt, where Nt is 256, 512 or 1024;
(3) Data pairs are formed from B and G, and a data set S consisting of Q data pairs is established (1000 ≤ Q ≤ 1500);
Step 2: Design of the network topology
The network topology consists of a conditional variational autoencoder and a generator network, designed as follows:
(1) The conditional variational autoencoder is realized with U-net-style encoding. Its input is the image B, of shape Nt × Nt × 3, and its output is a high-dimensional feature code Z of shape 1 × 1 × 512. It uses an Lt-layer convolutional neural network architecture (8 ≤ Lt ≤ 10): if Nt is 256, Lt is 8; if Nt is 512, Lt is 9; if Nt is 1024, Lt is 10. The encoder has Lt sub-structures, each consisting of one convolutional layer with a 4 × 4 convolution kernel and a stride of 2; zero padding is applied outside the input image boundary, the output of each convolutional layer is batch-normalized, and Leaky ReLU is used as the activation function. The number of convolution kernels of the first layer is Tk: when Nt is 256, Tk is 64; when Nt is 512, Tk is 32; when Nt is 1024, Tk is 16. From the second layer on, the number of convolution kernels of each layer doubles, up to layer Lt;
(2) The generator network has Lt sub-structures, each consisting of a transposed convolutional layer and a connection layer. The convolution kernel of each transposed convolutional layer is 4 × 4 with a stride of 2, each transposed convolutional layer is followed by batch normalization, and ReLU is used as the activation function. The number of convolution kernels of the first layer is Tk, and from the second layer on the number of convolution kernels of each layer is successively halved, up to layer Lt. After layer Lt an additional transposed convolutional layer is used to restore the input Z to Nt × Nt × 3. The last layer of the generator is followed by a Tanh activation layer, so that the network output is a floating-point number between -1 and 1;
Step 3: Training of the neural network
The neural network is trained in a supervised manner on the data set S for 100 epochs; during training, the discriminator loss term is computed with the NS-GAN method. The loss function l of the network consists of five terms: the discrimination loss term c, the divergence loss term d, the consistency loss term e for the generated terrain gray image, the adversarial loss term n, and the skeleton structure loss term g. The loss function l is defined as l = c + λ1·d + λ2·e + λ3·n + λ4·g, where λi (i = 1, 2, 3, 4) are the weight parameters of the corresponding losses, with 1.0 ≤ λ1 ≤ 5.0, 1.0 ≤ λ2 ≤ 5.0, 0.0001 ≤ λ3 ≤ 0.01, 0.1 ≤ λ4 ≤ 2.0;
The loss terms c, d, e, n and g are defined as follows:
(1) c is defined as c = E_{x∈f(X)}[log(D(x))] + E_{G(z)∈Y}[log(1 − D(G(z)))], where f(X) denotes the set of gray images of multi-geomorphic-feature terrain output by the network; x denotes a sample terrain supervision gray image, and D(x) denotes the probability that sample x is real; z is the high-dimensional vector output by the conditional variational autoencoder, Y denotes the data set of sample terrain supervision gray images, G(z) denotes the terrain gray image generated from z, D(G(z)) denotes the probability that the generated terrain gray image is fake, and E denotes the energy loss function;
(2) d is the divergence loss term, expressed in terms of μ and ε, the mean and covariance of Z, and e, the base of the natural logarithm;
(3) e is the consistency loss term for the generated terrain gray image, where x denotes the sample terrain supervision gray image, x̂ is the generated terrain gray image, and fD is the feature map input to the fully connected convolutional layer of the discriminator;
(4) n is the adversarial loss term;
(5) g is the skeleton structure loss term, where b is the skeleton image B and b̂ is the skeleton of the terrain image generated by the network;
The hyperparameters used by the neural network are: a dropout rate of 0.5; the Adam optimizer with momentum β1 = 0.5; a batch size of 1; and a learning rate of 0.002 for both the generator network and the discriminator network;
Step 4: Terrain synthesis
A sketch image V is drawn with drawing software and used as the hand-drawn sketch. V is input into the trained convolutional neural network N, and the parameters learned during training are used to predict and output the corresponding terrain gray image; the customized terrain synthesis result is then obtained according to the maximum and minimum values of the input terrain elevation.
The present invention is simple and practical: according to a user-drawn sketch, combined with input elevation data, customized terrain can be synthesized.
Brief description of the drawings
Fig. 1 shows results of terrain synthesized by the present invention;
Fig. 2 shows three-dimensional views of terrain synthesized by the present invention.
Specific embodiment
The present invention is further described below with reference to the accompanying drawings and an embodiment.
Embodiment
This embodiment is implemented on a PC under the 64-bit Windows 10 operating system. The hardware configuration is an Intel® Core™ i5-7500 3.4 GHz CPU with 8 GB of memory. The software environment is Matlab 2015b; programming is done in Python, in combination with the open-source computer vision library OpenCV 2.4.4 and the open-source raster geospatial data library GDAL.
The present embodiment specifically includes the following steps:
Step 1: Build the data set
(1) Prepare elevation data blocks
Download an elevation data block J in the WGS84 coordinate system from the SRTM website http://srtm.csi.cgiar.org and store it in TIFF format; the spatial resolution is between 90 m × 90 m and 200 m × 200 m. The height of an arbitrary point A of J is denoted HA. A grayscale image G is built from the elevation information of J: the highest point corresponds to white, the lowest point corresponds to black, and values between the highest and lowest points are interpolated as gray levels. The resolution of G is Nt × Nt with Nt = 256; the number of pixels in G is K = Nt × Nt.
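A minimal sketch of this preparation step in Python, using the GDAL and OpenCV bindings named above, is shown below; the file name and the linear gray mapping are illustrative assumptions rather than part of the patented method.

```python
import numpy as np
import cv2
from osgeo import gdal

Nt = 256                                           # resolution used in this embodiment

dataset = gdal.Open("srtm_block_J.tif")            # hypothetical file name for the block J
J = dataset.GetRasterBand(1).ReadAsArray().astype(np.float64)

h_min, h_max = J.min(), J.max()                    # lowest and highest points of J
G = (J - h_min) / (h_max - h_min) * 255.0          # lowest -> black, highest -> white, linear in between
G = cv2.resize(G, (Nt, Nt)).astype(np.uint8)       # Nt x Nt gray image, K = Nt * Nt pixels
cv2.imwrite("G.png", G)
```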
(2) Construct the multi-geomorphic-feature skeleton image B, which includes the drainage network, ridge lines and geometric feature points. The three kinds of geomorphic features of G are computed with the D8 algorithm using a 3 × 3 window. Specifically, the ridge lines are computed from the gray values of G with the D8 algorithm; then each pixel gray value of G is subtracted from the maximum gray value in G to obtain the negated image, and the D8 algorithm is applied again to obtain the labels of the drainage network. The geometric feature points are labeled as follows: for each point A, the absolute height differences between A and its 8 neighboring points are accumulated, and A is labeled as a geometric feature point if the sum is greater than the threshold θ (600 in this embodiment). The different geomorphic features are marked in B with different colors: blue marks the drainage network, red marks the ridge lines, green points mark the geometric feature points, and the remaining background is shown in black; the resolution of B is Nt × Nt.
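The feature-point labelling and the colour composition of B can be sketched as follows; the drainage and ridge masks are assumed to come from a separate D8 flow-direction routine that is not shown (the mask names are placeholders), and border handling is simplified.

```python
import numpy as np


def geometric_feature_points(height, theta=600.0):
    """Mark pixels whose accumulated |height difference| to the 8 neighbours exceeds theta."""
    h = height.astype(np.float64)
    acc = np.zeros_like(h)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(h, dy, axis=0), dx, axis=1)  # border pixels wrap around here,
            acc += np.abs(h - shifted)                             # which a padded version would avoid
    return acc > theta


def build_skeleton_image(height, d8_rivers, d8_ridges, theta=600.0):
    """Compose the skeleton image B (black background): blue = drainage network,
    red = ridge lines, green = geometric feature points (OpenCV's BGR channel order)."""
    B = np.zeros((*height.shape, 3), dtype=np.uint8)
    B[d8_rivers] = (255, 0, 0)                                 # blue
    B[d8_ridges] = (0, 0, 255)                                 # red
    B[geometric_feature_points(height, theta)] = (0, 255, 0)   # green
    return B
```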
(3) Data pairs are formed from B and G, and a data set S consisting of 1250 data pairs is established.
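A minimal loader for the data pairs of S might look like the sketch below; the use of PyTorch and the directory layout are assumptions (the text above only states that Python with OpenCV and GDAL is used). Images are scaled to [-1, 1] to match the Tanh output range of the generator described in Step 2 below.

```python
import glob
import cv2
import torch
from torch.utils.data import Dataset


class TerrainPairs(Dataset):
    """Pairs of (skeleton image B, terrain gray image G), scaled to [-1, 1]."""

    def __init__(self, sketch_dir="B", gray_dir="G"):           # hypothetical directory layout
        self.sketches = sorted(glob.glob(f"{sketch_dir}/*.png"))
        self.grays = sorted(glob.glob(f"{gray_dir}/*.png"))

    def __len__(self):
        return len(self.sketches)

    def __getitem__(self, i):
        b = cv2.imread(self.sketches[i], cv2.IMREAD_COLOR)      # B: Nt x Nt x 3
        g = cv2.imread(self.grays[i], cv2.IMREAD_COLOR)         # G read with 3 channels to match
                                                                # the generator's Nt x Nt x 3 output
        to_tensor = lambda a: torch.from_numpy(a).permute(2, 0, 1).float() / 127.5 - 1.0
        return to_tensor(b), to_tensor(g)
```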
Step 2: Design of the network topology
The network topology N consists of a conditional variational autoencoder and a generator network, designed as follows: (1) The conditional variational autoencoder is realized with U-net-style encoding. Its input is the image B, of shape Nt × Nt × 3, and its output is the high-dimensional feature code Z of shape 1 × 1 × 512; it uses an Lt-layer convolutional neural network architecture with Lt = 8. The encoder has Lt sub-structures, each consisting of one convolutional layer with a 4 × 4 convolution kernel and a stride of 2; zero padding is applied outside the input image boundary, the output of each convolutional layer is batch-normalized, and Leaky ReLU is used as the activation function. The number of convolution kernels of the first layer is Tk = 64, and from the second layer on the number of convolution kernels of each layer doubles, up to layer Lt. (2) The generator network has Lt sub-structures, each consisting of a transposed convolutional layer and a connection layer. The convolution kernel of each transposed convolutional layer is 4 × 4 with a stride of 2, each transposed convolutional layer is followed by batch normalization, and ReLU is used as the activation function. The number of convolution kernels of the first layer is Tk, and from the second layer on the number of convolution kernels of each layer is successively halved, up to layer Lt. After layer Lt an additional transposed convolutional layer is used to restore the input Z to Nt × Nt × 3. The last layer of the generator is followed by a Tanh activation layer, so that the network output is a floating-point number between -1 and 1.
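The sketch below expresses this topology for the 256 × 256 embodiment in PyTorch (the framework choice is an assumption). Where the text is ambiguous, the sketch makes labelled simplifications: channel counts are capped at 512 so that Z has the stated 1 × 1 × 512 shape, the generator's channel counts mirror the encoder, batch normalization is skipped on the innermost 1 × 1 encoder layer so that the stated batch size of 1 works, and the connection (skip) layers are omitted.

```python
import torch
import torch.nn as nn

Lt, Tk, Nt = 8, 64, 256   # values for the 256 x 256 embodiment


def enc_block(cin, cout, bn=True):
    # one encoder sub-structure: 4x4 convolution, stride 2, zero padding,
    # batch normalization and Leaky ReLU
    layers = [nn.Conv2d(cin, cout, kernel_size=4, stride=2, padding=1)]
    if bn:
        layers.append(nn.BatchNorm2d(cout))
    layers.append(nn.LeakyReLU(0.2, inplace=True))
    return nn.Sequential(*layers)


def gen_block(cin, cout):
    # one generator sub-structure: 4x4 transposed convolution, stride 2,
    # batch normalization and ReLU (the connection/skip layers are not shown)
    return nn.Sequential(
        nn.ConvTranspose2d(cin, cout, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )


class Encoder(nn.Module):
    """Maps a sketch B of shape (N, 3, 256, 256) to the code Z of shape (N, 512, 1, 1)."""

    def __init__(self):
        super().__init__()
        chans = [3] + [min(Tk * 2 ** i, 512) for i in range(Lt)]   # 64, 128, 256, 512, 512, ...
        # no batch norm on the innermost 1x1 layer, so training with batch size 1 works
        self.body = nn.Sequential(
            *[enc_block(chans[i], chans[i + 1], bn=(i < Lt - 1)) for i in range(Lt)])

    def forward(self, b):
        return self.body(b)


class Generator(nn.Module):
    """Maps Z of shape (N, 512, 1, 1) to a terrain image of shape (N, 3, 256, 256) in [-1, 1]."""

    def __init__(self):
        super().__init__()
        chans = [min(Tk * 2 ** i, 512) for i in range(Lt)][::-1] + [Tk]  # 512, ..., 128, 64, 64
        blocks = [gen_block(chans[i], chans[i + 1]) for i in range(Lt)]
        # the additional layer after layer Lt restores three channels; Tanh keeps the output in [-1, 1]
        blocks += [nn.ConvTranspose2d(Tk, 3, kernel_size=3, stride=1, padding=1), nn.Tanh()]
        self.body = nn.Sequential(*blocks)

    def forward(self, z):
        return self.body(z)


if __name__ == "__main__":
    z = Encoder()(torch.randn(2, 3, Nt, Nt))
    print(z.shape, Generator()(z).shape)   # torch.Size([2, 512, 1, 1]) torch.Size([2, 3, 256, 256])
```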
Step 3: Training of the neural network
The network N is trained in a supervised manner on the data set S for 100 epochs; during training, the discriminator loss term is computed with the NS-GAN method. The loss function l of the network consists of five terms: the discrimination loss term c, the divergence loss term d, the consistency loss term e for the generated terrain gray image, the adversarial loss term n, and the skeleton structure loss term g. The loss function l is defined as l = c + λ1·d + λ2·e + λ3·n + λ4·g, where λi (i = 1, 2, 3, 4) are the weight parameters of the corresponding losses, with λ1 = 2.0, λ2 = 2.0, λ3 = 0.001, λ4 = 1.0;
The loss terms c, d, e, n and g are defined as follows:
(1) c is defined as c = E_{x∈f(X)}[log(D(x))] + E_{G(z)∈Y}[log(1 − D(G(z)))], where f(X) denotes the set of gray images of multi-geomorphic-feature terrain output by the network; x denotes a sample terrain supervision gray image, and D(x) denotes the probability that sample x is real; z is the high-dimensional vector output by the conditional variational autoencoder, Y denotes the data set of sample terrain supervision gray images, G(z) denotes the terrain gray image generated from z, D(G(z)) denotes the probability that the generated terrain gray image is fake, and E denotes the energy loss function;
(2) d is the divergence loss term, expressed in terms of μ and ε, the mean and covariance of Z, and e, the base of the natural logarithm;
(3) e is the consistency loss term for the generated terrain gray image, where x denotes the sample terrain supervision gray image, x̂ is the generated terrain gray image, and fD is the feature map input to the fully connected convolutional layer of the discriminator;
(4) n is the adversarial loss term;
(5) g is the skeleton structure loss term, where b is the skeleton image B and b̂ is the skeleton of the terrain image generated by the network;
The hyperparameters used by the neural network are: a dropout rate of 0.5; the Adam optimizer with momentum β1 = 0.5; a batch size of 1; and a learning rate of 0.002 for both the generator network and the discriminator network.
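A training-step sketch with these hyperparameters is given below; it reuses enc_block, Encoder, Generator and TerrainPairs from the sketches above. Because the formula images for d, e, n and g are not reproduced in this text, standard stand-ins are used: a KL-divergence term for the divergence loss d, L1 feature matching on fD for the consistency loss e, the NS-GAN generator loss for n, and an image-space L1 loss as a differentiable stand-in for the skeleton loss g. The small discriminator and the omission of dropout are likewise assumptions, not the patent's exact design.

```python
# reuses enc_block, Encoder, Generator and TerrainPairs from the sketches above
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader

lam1, lam2, lam3, lam4 = 2.0, 2.0, 0.001, 1.0   # embodiment weights for d, e, n, g


class Discriminator(nn.Module):
    # small convolutional discriminator (its exact architecture is not given in the text above)
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(                       # convolutional part, feature map f_D
            enc_block(3, 64), enc_block(64, 128),
            enc_block(128, 256), enc_block(256, 512))
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(512 * 16 * 16, 1))  # 256 -> 16 after 4 blocks

    def forward(self, img):
        f = self.features(img)
        return torch.sigmoid(self.head(f)), f                # probability "real", feature map f_D


E_net, G_net, D_net = Encoder(), Generator(), Discriminator()
opt_g = torch.optim.Adam(list(E_net.parameters()) + list(G_net.parameters()),
                         lr=0.002, betas=(0.5, 0.999))       # Adam, beta1 = 0.5, lr = 0.002
opt_d = torch.optim.Adam(D_net.parameters(), lr=0.002, betas=(0.5, 0.999))
loader = DataLoader(TerrainPairs(), batch_size=1, shuffle=True)   # batch size 1

for epoch in range(100):                                     # 100 training epochs
    for b_sketch, g_real in loader:
        z = E_net(b_sketch)
        g_fake = G_net(z)

        # discriminator step: maximise c = E[log D(x)] + E[log(1 - D(G(z)))], i.e. minimise -c (NS-GAN)
        d_real, _ = D_net(g_real)
        d_fake, _ = D_net(g_fake.detach())
        neg_c = -(torch.log(d_real + 1e-8) + torch.log(1.0 - d_fake + 1e-8)).mean()
        opt_d.zero_grad(); neg_c.backward(); opt_d.step()

        # generator / encoder step: weighted sum of the remaining four terms
        d_fake, f_fake = D_net(g_fake)
        _, f_real = D_net(g_real)
        mu, var = z.mean(), z.var()
        d_div = 0.5 * (mu ** 2 + var - torch.log(var + 1e-8) - 1.0)  # divergence term d (KL stand-in)
        e_cons = F.l1_loss(f_fake, f_real)                           # consistency term e (feature matching)
        n_adv = -torch.log(d_fake + 1e-8).mean()                     # adversarial term n (NS-GAN)
        g_skel = F.l1_loss(g_fake, g_real)                           # skeleton term g (image-space stand-in)
        loss = lam1 * d_div + lam2 * e_cons + lam3 * n_adv + lam4 * g_skel
        opt_g.zero_grad(); loss.backward(); opt_g.step()
```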
Step 4: Terrain synthesis
A sketch image V is drawn with drawing software and used as the hand-drawn sketch. V is input into the trained convolutional neural network N, and the parameters learned during training are used to predict and output the corresponding terrain gray image; the customized terrain synthesis result is then obtained according to the maximum and minimum values of the input terrain elevation.
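An inference sketch for this step is given below; the checkpoint layout, file names and elevation range are placeholders, and Encoder and Generator are the classes from the topology sketch above.

```python
import cv2
import torch


def synthesize(sketch_path, ckpt_path, h_min, h_max, Nt=256):
    """Run a drawn sketch V through the trained network and rescale the result
    with the minimum and maximum input elevation."""
    E_net, G_net = Encoder(), Generator()                 # classes from the topology sketch above
    state = torch.load(ckpt_path, map_location="cpu")     # hypothetical checkpoint layout
    E_net.load_state_dict(state["encoder"])
    G_net.load_state_dict(state["generator"])
    E_net.eval(); G_net.eval()

    v = cv2.imread(sketch_path, cv2.IMREAD_COLOR)
    v = cv2.resize(v, (Nt, Nt))
    v = torch.from_numpy(v).permute(2, 0, 1).unsqueeze(0).float() / 127.5 - 1.0

    with torch.no_grad():
        gray = G_net(E_net(v))                            # (1, 3, Nt, Nt) in [-1, 1]
    gray = gray[0].mean(0).numpy()                        # collapse channels to a single gray map
    return (gray + 1.0) / 2.0 * (h_max - h_min) + h_min   # height field in the input elevation range


# example call with placeholder values:
# heights = synthesize("user_sketch_V.png", "terrain_net.pt", h_min=200.0, h_max=3200.0)
```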
Fig. 1 shows terrain synthesized from user sketches using DEM terrain samples. In the two columns of the figure, from left to right, the first column shows the multi-geomorphic-feature sketches and the second column shows the multi-geomorphic-feature terrain synthesized with the present invention. As can be seen from the figure, the method of the invention is effective and can synthesize terrain with realistic multi-geomorphic detail features.
Fig. 2 shows three-dimensional views of the synthesized terrain. In the three columns of the figure, from left to right, the first column shows the user sketches with multiple geomorphic features, the second column shows the multi-geomorphic terrain synthesized by the present invention, and the third column shows the three-dimensional views of the synthesized multi-geomorphic-feature terrain. As can be seen from the figure, the method of the invention can synthesize three-dimensionally realistic terrain with ridge and river multi-geomorphic structural features.

Claims (1)

1. A terrain synthesis method with multiple geomorphic feature constraints, characterized in that a network architecture combining a conditional variational autoencoder with a generative adversarial network is used to realize realistic terrain synthesis with multiple geomorphic elements, specifically comprising the following steps:
Step 1: Build the data set
(1) Prepare elevation data blocks
Download an elevation data block J in the WGS84 coordinate system from the SRTM website http://srtm.csi.cgiar.org and store it in TIFF format; the spatial resolution is between 90 m × 90 m and 200 m × 200 m; the height of an arbitrary point A of J is denoted HA; a grayscale image G is built from the elevation information of J: the highest point corresponds to white, the lowest point corresponds to black, and values between the highest and lowest points are interpolated as gray levels; the resolution of G is Nt × Nt, where Nt is 256, 512 or 1024; the number of pixels in G is K = Nt × Nt;
(2) Construct the multi-geomorphic-feature skeleton image B, which includes the drainage network, ridge lines and geometric feature points: the three kinds of geomorphic features of G are computed with the D8 algorithm using a 3 × 3 window; specifically, the ridge lines are computed from the gray values of G with the D8 algorithm; then each pixel gray value of G is subtracted from the maximum gray value in G to obtain the negated image, and the D8 algorithm is applied again to obtain the labels of the drainage network; the geometric feature points are labeled as follows: for each point A, the absolute height differences between A and its 8 neighboring points are accumulated, and A is labeled as a geometric feature point if the sum is greater than a threshold θ (400 ≤ θ ≤ 1000); the different geomorphic features are marked in B with different colors: blue marks the drainage network, red marks the ridge lines, green points mark the geometric feature points, and the remaining background is shown in black; the resolution of B is Nt × Nt, where Nt is 256, 512 or 1024;
(3) Data pairs are formed from B and G, and a data set S consisting of Q data pairs is established (1000 ≤ Q ≤ 1500);
Step 2: Design of the network topology
The network topology consists of a conditional variational autoencoder and a generator network, designed as follows:
(1) the conditional variational autoencoder is realized with U-net-style encoding; its input is the image B, of shape Nt × Nt × 3, and its output is a high-dimensional feature code Z of shape 1 × 1 × 512; it uses an Lt-layer convolutional neural network architecture (8 ≤ Lt ≤ 10): if Nt is 256, Lt is 8; if Nt is 512, Lt is 9; if Nt is 1024, Lt is 10; the encoder has Lt sub-structures, each consisting of one convolutional layer with a 4 × 4 convolution kernel and a stride of 2; zero padding is applied outside the input image boundary, the output of each convolutional layer is batch-normalized, and Leaky ReLU is used as the activation function; the number of convolution kernels of the first layer is Tk: when Nt is 256, Tk is 64; when Nt is 512, Tk is 32; when Nt is 1024, Tk is 16; from the second layer on, the number of convolution kernels of each layer doubles, up to layer Lt;
(2) the generator network has Lt sub-structures, each consisting of a transposed convolutional layer and a connection layer; the convolution kernel of each transposed convolutional layer is 4 × 4 with a stride of 2, each transposed convolutional layer is followed by batch normalization, and ReLU is used as the activation function; the number of convolution kernels of the first layer is Tk, and from the second layer on the number of convolution kernels of each layer is successively halved, up to layer Lt; after layer Lt an additional transposed convolutional layer is used to restore the input Z to Nt × Nt × 3; the last layer of the generator is followed by a Tanh activation layer, so that the network output is a floating-point number between -1 and 1;
Step 3: Training of the neural network
The neural network is trained in a supervised manner on the data set S for 100 epochs; during training, the discriminator loss term is computed with the NS-GAN method; the loss function l of the network consists of five terms: the discrimination loss term c, the divergence loss term d, the consistency loss term e for the generated terrain gray image, the adversarial loss term n, and the skeleton structure loss term g; the loss function l is defined as l = c + λ1·d + λ2·e + λ3·n + λ4·g, where λi (i = 1, 2, 3, 4) are the weight parameters of the corresponding losses, with 1.0 ≤ λ1 ≤ 5.0, 1.0 ≤ λ2 ≤ 5.0, 0.0001 ≤ λ3 ≤ 0.01, 0.1 ≤ λ4 ≤ 2.0;
The loss terms c, d, e, n and g are defined as follows:
(1) c is defined as c = E_{x∈f(X)}[log(D(x))] + E_{G(z)∈Y}[log(1 − D(G(z)))], where f(X) denotes the set of gray images of multi-geomorphic-feature terrain output by the network; x denotes a sample terrain supervision gray image, and D(x) denotes the probability that sample x is real; z is the high-dimensional vector output by the conditional variational autoencoder, Y denotes the data set of sample terrain supervision gray images, G(z) denotes the terrain gray image generated from z, D(G(z)) denotes the probability that the generated terrain gray image is fake, and E denotes the energy loss function;
(2) d is the divergence loss term, expressed in terms of μ and ε, the mean and covariance of Z, and e, the base of the natural logarithm;
(3) e is the consistency loss term for the generated terrain gray image, where x denotes the sample terrain supervision gray image, x̂ is the generated terrain gray image, and fD is the feature map input to the fully connected convolutional layer of the discriminator;
(4) n is the adversarial loss term;
(5) g is the skeleton structure loss term, where b is the skeleton image B and b̂ is the skeleton of the terrain image generated by the network;
The hyperparameters used by the neural network are: a dropout rate of 0.5; the Adam optimizer with momentum β1 = 0.5; a batch size of 1; and a learning rate of 0.002 for both the generator network and the discriminator network;
Step 4: Terrain synthesis
A sketch image V is drawn with drawing software and used as the hand-drawn sketch; V is input into the trained convolutional neural network N, and the parameters learned during training are used to predict and output the corresponding terrain gray image; the customized terrain synthesis result is then obtained according to the maximum and minimum values of the input terrain elevation.
CN201811430776.1A 2018-11-28 2018-11-28 Terrain synthesis method of multi-landform feature constraint Active CN109598771B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811430776.1A CN109598771B (en) 2018-11-28 2018-11-28 Terrain synthesis method of multi-landform feature constraint


Publications (2)

Publication Number Publication Date
CN109598771A (en) 2019-04-09
CN109598771B (en) 2023-04-25

Family

ID=65959674

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811430776.1A Active CN109598771B (en) 2018-11-28 2018-11-28 Terrain synthesis method of multi-landform feature constraint

Country Status (1)

Country Link
CN (1) CN109598771B (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018028255A1 (en) * 2016-08-11 2018-02-15 深圳市未来媒体技术研究院 Image saliency detection method based on adversarial network
CN108830912A (en) * 2018-05-04 2018-11-16 北京航空航天大学 A kind of interactive grayscale image color method of depth characteristic confrontation type study

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
吴增巍; 全红艳: "Research on efficient modeling and real-time rendering technology of large-scale terrain in virtual battlefield environments" *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110472746A (en) * 2019-08-16 2019-11-19 北京智能工场科技有限公司 A kind of coding prediction technique and system based on artificial intelligence
CN110472746B (en) * 2019-08-16 2021-04-13 北京智能工场科技有限公司 Artificial intelligence-based coding prediction method and system
CN110930472A (en) * 2019-11-14 2020-03-27 三星电子(中国)研发中心 Picture generation method and device
CN111210517A (en) * 2020-01-09 2020-05-29 浙江大学 Multi-grid terrain generation method based on neural network
CN111210517B (en) * 2020-01-09 2021-11-19 浙江大学 Multi-grid terrain generation method based on neural network

Also Published As

Publication number Publication date
CN109598771B (en) 2023-04-25

Similar Documents

Publication Publication Date Title
Hepp et al. Learn-to-score: Efficient 3d scene exploration by predicting view utility
CN107229904A (en) A kind of object detection and recognition method based on deep learning
CN109215123B (en) Method, system, storage medium and terminal for generating infinite terrain based on cGAN
CN109377530A (en) A kind of binocular depth estimation method based on deep neural network
CN110533721A (en) A kind of indoor objects object 6D Attitude estimation method based on enhancing self-encoding encoder
CN110419049A (en) Room layout estimation method and technology
CN107358626A (en) A kind of method that confrontation network calculations parallax is generated using condition
CN110097609B (en) Sample domain-based refined embroidery texture migration method
CN109978165A (en) A kind of generation confrontation network method merged from attention mechanism
CN109598771A (en) A kind of landform synthetic method of more geomorphic feature constraints
CN108416266A (en) A kind of video behavior method for quickly identifying extracting moving target using light stream
CN108510504A (en) Image partition method and device
Zamuda et al. Vectorized procedural models for animated trees reconstruction using differential evolution
CN107590515A (en) The hyperspectral image classification method of self-encoding encoder based on entropy rate super-pixel segmentation
KR102343582B1 (en) Artificial intelligence-based metaverse contents making system for using biometric information
JP2021525401A (en) Image generation network training and image processing methods, equipment, electronics, and media
CN113487739A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN114511778A (en) Image processing method and device
CN112991503B (en) Model training method, device, equipment and medium based on skin weight
CN109658508A (en) A kind of landform synthetic method of multiple dimensioned details fusion
US20080129738A1 (en) Method and apparatus for rendering efficient real-time wrinkled skin in character animation
CN116310219A (en) Three-dimensional foot shape generation method based on conditional diffusion model
CN112257496A (en) Deep learning-based power transmission channel surrounding environment classification method and system
CN116563682A (en) Attention scheme and strip convolution semantic line detection method based on depth Hough network
Panagiotou et al. Procedural 3D terrain generation using generative adversarial networks

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant