CN111784592A - Automatic design image generation method based on GAN - Google Patents

Automatic design image generation method based on GAN

Info

Publication number
CN111784592A
CN111784592A
Authority
CN
China
Prior art keywords
image
gan
design image
preset
generation method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010423772.1A
Other languages
Chinese (zh)
Inventor
费棋
曹磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhisheng Shanghai Artificial Intelligence Technology Co ltd
Original Assignee
Zhisheng Shanghai Artificial Intelligence Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhisheng Shanghai Artificial Intelligence Technology Co ltd filed Critical Zhisheng Shanghai Artificial Intelligence Technology Co ltd
Priority to CN202010423772.1A priority Critical patent/CN111784592A/en
Publication of CN111784592A publication Critical patent/CN111784592A/en
Pending legal-status Critical Current


Classifications

    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20032Median filtering

Abstract

The invention discloses a GAN-based automatic design image generation method, which comprises the following steps: step 1: performing blank elimination and size normalization on the images, and converting all images to a preset pixel × preset pixel size; step 2: building and running a GAN deep learning model; step 3: after several rounds of training iterations, assigning the hyperparameters to the generator network to generate new design images; step 4: applying morphological erosion and dilation to the generated design image to reduce its noise points; step 5: smoothing the color edges in the image with median filtering to reduce edge discontinuities. Compared with the traditional GAN method, the method of the invention on the one hand generates design images with smoother edges and more attractive colors, and on the other hand allows the distribution function of the generated images to be changed by adjusting the hyperparameters in the network model, which increases the diversity of the design images.

Description

Automatic design image generation method based on GAN
Technical Field
The invention relates to the technical field of image generation, in particular to a GAN-based automatic design image generation method.
Background
When designing pictures, designers must weigh aesthetics, diversity and similar concerns, and often spend a large amount of time creating and revising designs. Using an artificial intelligence algorithm to generate design pictures quickly can therefore greatly shorten design time, and shifting most low-end customized design work toward mass-market design has great economic value.
There are existing studies and inventions that use a generative adversarial network (GAN) to generate images, such as the University of Science and Technology of China patent "A pedestrian image generation method in any posture" (application No. 201810295994.2) and the University of Electronic Science and Technology of China patent "A clothing matching generation method based on generative adversarial networks" (application No. 201910842802.X).
However, most GAN-based image generation methods use a naive GAN model and suffer from defects such as clashing color combinations, rough edges, uncontrollable details and a single generation style. The invention proposes a new GAN structure that integrates and further improves the design concepts of the StyleGAN model, and can generate diverse design images with attractive colors and smooth edges.
Disclosure of Invention
The purpose of the invention is to provide a GAN-based automatic design image generation method which, compared with the traditional GAN method, on the one hand generates design images with smoother edges and more attractive colors, and on the other hand allows the distribution function of the generated images to be changed by adjusting the hyperparameters in the network model, increasing the diversity of the design images.
In order to achieve this purpose, the invention provides the following technical scheme: a GAN-based automatic design image generation method, comprising the following steps:
step 1: performing blank elimination and size normalization on the images, and converting all images to a preset pixel × preset pixel size;
step 2: building and running a GAN deep learning model;
step 3: after several rounds of training iterations, assigning the hyperparameters to the generator network to generate new design images;
step 4: applying morphological erosion and dilation to the generated design image to reduce its noise points;
step 5: smoothing the color edges in the image with median filtering to reduce edge discontinuities.
Preferably, in step 2: the GAN deep learning model is divided into a generator and a discriminator. In one training iteration, the generator network takes a random vector of dimension preset pixel × 1 as input and generates a preset pixel × preset pixel image; the generated image and a real image are fed to the discriminator, the classification accuracy and the loss function are then calculated, and the calculated losses are fed back to the generator and the discriminator respectively; the generator network and the discriminator network adjust their network parameters according to the result of gradient descent on the loss, and then the next training iteration begins.
Preferably, the generator network can continuously generate preset pixel × preset pixel images.
Preferably, the preset pixel size is 256 × 256 pixels or 1024 × 1024 pixels.
Preferably, in step 1: blank elimination traverses each pixel of the design image, finds the top-left-most and bottom-right-most non-white pixels, keeps the rectangular region between these two pixels, and discards the blank portion outside that region.
Preferably, in step 1: size normalization preserves the aspect ratio of the design image: images whose aspect ratio falls outside the preset image aspect ratio are deleted, and the length and width of the remaining images are scaled to preset pixel × preset pixel.
Preferably, the preset image aspect ratio is 1:1.2.
Preferably, the preset image aspect ratio is 0.83:1.
Preferably, in step 5: the median filter kernel size is at least [3, 3].
Preferably, the median filter kernel size is at most [15, 15].
Compared with the traditional GAN method in the prior art, the design images generated by the method have smoother edges and more attractive colors, and the distribution function of the generated images can be changed by adjusting the hyperparameters in the network model, which increases the diversity of the design images.
Drawings
FIG. 1 is a schematic diagram of the structure of a GAN model of the present invention;
FIG. 2 is a block diagram of a generator of the GAN of the present invention;
FIG. 3 is a schematic structural diagram of a StyleGAN model module AdaIN according to the present invention;
FIG. 4 is a parameter diagram of each convolutional layer in the generator network of the present invention;
FIG. 5 is the final design image generated by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides an automatic image design method based on a GAN neural network, which comprises the following steps:
Step 1: first, collect a large number of design images, perform blank elimination, size normalization and other operations, and convert all trademark images to a preset pixel × preset pixel size.
Blank elimination: traverse each pixel of the design image, find the top-left-most and bottom-right-most non-white pixels, keep the rectangular region between these two pixels, and discard the remaining blank portions.
Size normalization: to preserve the aspect ratio of the design images, images whose aspect ratio is more extreme than 1:1.2 or 0.83:1 are deleted, and the length and width of the remaining images are scaled to preset pixel × preset pixel.
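For illustration, this preprocessing step could be sketched in Python as follows (a minimal sketch, assuming white-background RGB images, a preset pixel size of 256, and aspect-ratio bounds of 0.83–1.2; the function names are ours, not the patent's):

```python
import numpy as np
from PIL import Image

PRESET = 256  # preset pixel size; the patent also mentions 1024

def trim_blank(img: Image.Image, thresh: int = 250) -> Image.Image:
    """Keep the bounding box of non-white pixels; discard the blank border."""
    arr = np.asarray(img.convert("RGB"))
    mask = (arr < thresh).any(axis=2)              # True where a pixel is non-white
    ys, xs = np.where(mask)
    if xs.size == 0:                               # fully blank image: nothing to trim
        return img
    return img.crop((xs.min(), ys.min(), xs.max() + 1, ys.max() + 1))

def normalize_size(img: Image.Image):
    """Drop images with an extreme aspect ratio; scale the rest to PRESET x PRESET."""
    w, h = img.size
    if not (0.83 <= w / h <= 1.2):                 # outside the preset ratio bounds
        return None
    return img.resize((PRESET, PRESET), Image.LANCZOS)
```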
Step 2: build and run the GAN deep learning model. Its structure is similar to that of a conventional GAN and mainly comprises a generator and a discriminator. In one training iteration, the generator network takes a random vector of dimension preset pixel × 1 as input and produces a generated image of preset pixel × preset pixel. The generated image and a real image are fed to the discriminator, the classification accuracy and the loss function are calculated, and the calculated losses are fed back to the generator and the discriminator respectively; the generator network and the discriminator network adjust their network parameters according to the result of gradient descent on the loss, and then the next training iteration begins.
The GAN model is structured as shown in FIG. 1.
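One training iteration of this loop might be sketched in PyTorch as follows (a sketch, not the patent's code: `G` and `D` stand for the generator and discriminator networks described later, and the losses are the logistic losses given below):

```python
import torch
import torch.nn.functional as F

PRESET = 256  # preset pixel size

def train_step(G, D, opt_g, opt_d, real: torch.Tensor):
    """One GAN training iteration as described above."""
    z = torch.randn(real.size(0), PRESET)          # random vector of preset pixel x 1
    fake = G(z)                                    # preset pixel x preset pixel image

    # Discriminator update: logistic loss on the generated and the real image.
    loss_d = F.softplus(D(fake.detach())).mean() + F.softplus(-D(real)).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: non-saturating logistic loss.
    loss_g = F.softplus(-D(fake)).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```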
The overall GAN model framework used in the invention is described in detail later in this description; this part mainly introduces the specific parameters of the framework.
In the generator, the random vector dimension is preset pixel × 1. The normalization layer function is:

norm(x) = (x − μ(x)) / σ(x)

where μ(x) is the mean of the vector x and σ(x) is its standard deviation.
The 6 fully connected layers that generate the intermediate vector W all use preset pixel computation units, and the output dimension of each layer is preset pixel × 1, so the finally output intermediate vector also has dimension preset pixel × 1. Through FC layers, W is converted into two vectors whose length equals the training batch size (batchnum × 1); these are used to scale and offset the upsampling-layer and convolution-layer data, respectively. The noise data is transformed directly through an FC layer to the dimensions of the generated image and, after normalization, added directly on each channel. Finally, the output results of the upsampling layer and the convolution layer are superimposed, and the result is normalized per channel with the normalization layer function.
The parameters of each convolutional layer in the generator network are shown in FIG. 4 and denote, respectively, convolution kernel width × convolution kernel height × number of channels. The pooling kernels of the max-pooling layers are all 2 × 2. The computation units of the two FC layers are 512.
Finally, in terms of loss, both the generator network and the discriminator network use the logistic loss. The generator network loss function is:

loss_G = log(e^(−D(G(z))) + 1)

where G(z) is the generator output and D(x) is the discriminator output. The discriminator network loss function is:

loss_D = log(e^(D(G(z))) + 1) + log(e^(−D(x)) + 1)
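Both formulas are softplus terms (log(1 + e^t)), so a minimal PyTorch rendering is straightforward (a sketch; the tensor names are ours):

```python
import torch.nn.functional as F
from torch import Tensor

def generator_loss(d_fake: Tensor) -> Tensor:
    """loss_G = log(e^(-D(G(z))) + 1) = softplus(-D(G(z)))."""
    return F.softplus(-d_fake).mean()

def discriminator_loss(d_fake: Tensor, d_real: Tensor) -> Tensor:
    """loss_D = log(e^(D(G(z))) + 1) + log(e^(-D(x)) + 1)."""
    return F.softplus(d_fake).mean() + F.softplus(-d_real).mean()
```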
the training super-parameter selection is as follows:
Figure BDA0002497917860000051
where the ReLU activation function is:
f(x)=max(0,x)
the training step size decreases iteratively, where the initial step size is 1024, the training step size is decreased by half each time the number of times of training reaches 2 (e.g. 4, 8, 16) and the like, and the minimum training step size is 16.
Step 3: after multiple rounds of training iterations, the hyperparameters are assigned to the generator network to generate new design images.
Step 4: erosion and dilation are applied to the generated design image to reduce its noise points.
The erosion calculation formula is:

A ⊖ B = { (x, y) | B_(x,y) ⊆ A }

This expression denotes eroding A with structuring element B. Note that an origin must be defined in B: when the origin of B is translated to pixel (x, y) of image A, if B is completely contained in the region of image A it overlaps at (x, y), the corresponding pixel (x, y) of the output image is assigned 1; otherwise it is assigned 0. The erosion operation can be used to eliminate small, meaningless target pixels. The dilation calculation formula is:

A ⊕ B = { (x, y) | B_(x,y) ∩ A ≠ ∅ }

This expression denotes dilating A with structuring element B: the origin of structuring element B is translated to image pixel location (x, y); if the intersection of B and A at (x, y) is not empty, the corresponding pixel (x, y) of the output image is assigned 1, otherwise 0. The dilation operation can be used to fill certain holes in the target region and to eliminate small-particle noise contained in it.
Used together, the erosion and dilation operations remove small noise points while preserving the structure of the original image.
Erosion and dilation are applied to the generated design image to reduce its noise points. The erosion kernel size is selected in the range 1–3, and the dilation kernel size is selected in the range 1–3.
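In practice this step maps directly onto OpenCV's morphology operations; a sketch (the 3 × 3 kernel is one choice inside the 1–3 range above):

```python
import cv2
import numpy as np

def denoise(img: np.ndarray) -> np.ndarray:
    """Erode then dilate (a morphological opening) to remove small noise points."""
    kernel = np.ones((3, 3), np.uint8)
    eroded = cv2.erode(img, kernel, iterations=1)
    return cv2.dilate(eroded, kernel, iterations=1)
```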
Step 5: color edges in the image are smoothed using median filtering to reduce edge discontinuities. Median filtering replaces the value of a point in a digital image with the median of the values in a neighborhood of that point. A neighborhood of a certain size or shape around a point is called a window; for median filtering of two-dimensional images, a 3 × 3 or 5 × 5 window is typically used. Median filtering suppresses impulse noise well; in particular, it can protect signal edges from blurring while filtering out the noise. The median filter kernel size ranges from [3, 3] to [15, 15]. The picture from step 4, after this filtering, is the finally generated design image.
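A corresponding OpenCV call could look like this (a sketch; the kernel size must be odd and is chosen from the [3, 3]–[15, 15] range):

```python
import cv2
import numpy as np

def smooth_edges(img: np.ndarray, ksize: int = 3) -> np.ndarray:
    """Median-filter the image to smooth color edges without blurring them."""
    assert ksize % 2 == 1 and 3 <= ksize <= 15
    return cv2.medianBlur(img, ksize)
```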
finally, a design image is generated as shown in FIG. 5:
the GAN model proposed in the present invention is a modification based on the conventional GAN model. The present invention improves primarily the generator portion of the GAN, as shown in fig. 2.
The improvement in the GAN model of the invention is to add a mapping network composed of 6 fully connected (FC) layers at the input of the generator network; the output vector W of the mapping network has the same size as the input layer, namely preset pixel × 1. The goal of the mapping network is to encode the input vector into an intermediate vector; the intermediate vector is subsequently transformed into 14 control vectors that are passed to the generation network, and these control vectors vary the diversity and style of the generated design image. If only input vectors were used to control the image features, the capability would be very limited, because the input must follow the probability density of the training data; parts of the input (elements of the vector) then cannot be mapped cleanly onto individual features, which causes feature entanglement (controlling feature A of the generated image also changes feature B; for example, controlling the color also changes the line thickness). By using another neural network, however, the model can generate a vector that need not follow the distribution of the training data, and the correlation between features can be reduced.
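A minimal PyTorch sketch of such a mapping network (assuming a preset pixel size of 256; the placement of ReLU between layers is our choice, though the patent names ReLU as its activation function):

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """6 fully connected layers mapping the input vector z to the intermediate vector W."""
    def __init__(self, dim: int = 256):
        super().__init__()
        layers = []
        for _ in range(6):
            layers += [nn.Linear(dim, dim), nn.ReLU()]   # preset pixel units per layer
        self.net = nn.Sequential(*layers)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Normalization layer: (z - mean) / std, per input vector.
        z = (z - z.mean(dim=1, keepdim=True)) / (z.std(dim=1, keepdim=True) + 1e-8)
        return self.net(z)
```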
The GAN of the invention then applies the AdaIN style-module structure used in StyleGAN within the generation network (FIG. 3), and adds noise, control variables and a residual structure (FIG. 2). The generation network transforms the intermediate vector W into style control vectors that participate in and influence the generator's generation flow. Because the generator grows from 4 × 4 to 8 × 8 and finally to preset pixel × preset pixel, it consists of 7 generation stages, and each stage is affected by two control vectors (A), one applied after upsampling (Upsample) and one after convolution (Conv), each through AdaIN (adaptive instance normalization). The intermediate vector W is therefore transformed into a total of 14 control vectors (A) that are passed to the generator.
The affine transformation of the control vector A is performed by a fully connected (FC) layer, which maps the 256 × 1 dimensional W to two transformation factors: the scaling factor y_s,i and the bias factor y_b,i. In the AdaIN module, the two factors are combined with the normalized convolution output, completing the process by which W influences the original output x_i. This mechanism achieves style control mainly because it lets W (that is, the transformed y_s,i and y_b,i) affect the global style information of the picture, while the key structural information of the generated design image remains determined by the upsampling and convolution layers; W can therefore only affect the style information of the picture. The AdaIN transformation is as follows:

AdaIN(x_i, y) = y_s,i · (x_i − μ(x_i)) / σ(x_i) + y_b,i

where x_i is the i-th channel of the output x, μ(x_i) is the mean of x_i, σ(x_i) is its standard deviation, and (x_i − μ(x_i)) / σ(x_i) is the per-channel normalization operation on x_i.
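A sketch of this AdaIN module in PyTorch (assuming N × C × H × W feature maps and per-channel instance statistics, as in StyleGAN):

```python
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    """Adaptive instance normalization: style W rescales per-channel-normalized features."""
    def __init__(self, w_dim: int, channels: int):
        super().__init__()
        self.affine = nn.Linear(w_dim, 2 * channels)    # FC mapping W to y_s and y_b

    def forward(self, x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
        y_s, y_b = self.affine(w).chunk(2, dim=1)       # scaling and bias factors
        mu = x.mean(dim=(2, 3), keepdim=True)           # per-channel mean mu(x_i)
        sigma = x.std(dim=(2, 3), keepdim=True) + 1e-8  # per-channel std sigma(x_i)
        x_norm = (x - mu) / sigma
        return y_s[:, :, None, None] * x_norm + y_b[:, :, None, None]
```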
To further enhance the randomness and diversity of the images generated by the model, the GAN of the invention adds random noise data to the generation process, as shown in FIG. 2. At each generation stage, random noise is added to the data generated at the previous stage and to the data after upsampling, convolution and the two AdaIN operations. After being generated, the random noise is transformed by a fully connected (FC) layer, mapped to the dimensions of the corresponding original data, and superimposed on it. Each stage of the generator also contains a residual connection that superimposes the upsampled data and the convolved data of the two AdaIN branches.
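Putting these pieces together, one generation stage could be sketched as follows (our arrangement of the described components, reusing the `AdaIN` sketch above; the 1 × 1 skip convolution and fixed noise scale are illustrative assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GenStage(nn.Module):
    """One of the 7 stages: upsample -> noise -> AdaIN -> conv -> noise -> AdaIN + residual."""
    def __init__(self, in_ch: int, out_ch: int, w_dim: int = 256):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.skip = nn.Conv2d(in_ch, out_ch, 1)         # residual path
        self.adain1 = AdaIN(w_dim, in_ch)               # first control vector (A)
        self.adain2 = AdaIN(w_dim, out_ch)              # second control vector (A)

    def forward(self, x: torch.Tensor, w: torch.Tensor, noise_scale: float = 0.1):
        x = F.interpolate(x, scale_factor=2)            # Upsample
        x = x + noise_scale * torch.randn_like(x)       # noise injection
        up = self.adain1(x, w)
        y = self.conv(up)
        y = y + noise_scale * torch.randn_like(y)       # noise injection
        y = self.adain2(y, w)
        return y + self.skip(up)                        # residual connection
```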
The discriminator in the GAN is similar to a VGG network: it uses a combined structure of 6 convolutional and pooling layers to extract features and reduce the dimensionality of the image, and finally uses 2 fully connected layers to output the discrimination result (as shown in FIG. 4). In the combined convolution-and-pooling structure, every two convolutional layers are followed by one max-pooling layer (maxpool), and the convolutional layers use padding so that they do not reduce the image dimensions. The number of features increases with the depth of the discriminator, starting from 16 and doubling repeatedly, finally reaching at most 512-dimensional features. The first fully connected layer maps the input 4 × 4 × 512 dimensional data into a 512 × 1 vector, and the second fully connected layer further maps this into a 1 × 1 vector representing whether the input image is real (True, 1) or fake (False, 0).
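A compact PyTorch reading of this discriminator (a sketch; the channel progression 16 → 512 is inferred from the doubling rule for 256 × 256 RGB inputs):

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """VGG-like: 6 blocks of (conv, conv, maxpool), then two fully connected layers."""
    def __init__(self):
        super().__init__()
        blocks, in_ch = [], 3
        for out_ch in [16, 32, 64, 128, 256, 512]:      # feature count doubles per block
            blocks += [
                nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                        # halves the spatial size
            ]
            in_ch = out_ch
        self.features = nn.Sequential(*blocks)          # 256 x 256 -> 4 x 4 x 512
        self.fc1 = nn.Linear(4 * 4 * 512, 512)
        self.fc2 = nn.Linear(512, 1)                    # real (1) / fake (0) logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.fc2(torch.relu(self.fc1(h)))
```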
The preset pixel size of the invention is 256 × 256 pixels or 1024 × 1024 pixels.
The generator network can continuously generate preset pixel × preset pixel images.
Compared with the traditional GAN method, the design images generated by the method have smoother edges and more attractive colors, and the distribution function of the generated images can be changed by adjusting the hyperparameters in the network model, which increases the diversity of the design images.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1. A GAN-based automatic design image generation method, characterized in that it comprises the following steps:
step 1: performing blank elimination and size normalization on the images, and converting all images to a preset pixel × preset pixel size;
step 2: building and running a GAN deep learning model;
step 3: after several rounds of training iterations, assigning the hyperparameters to the generator network to generate new design images;
step 4: applying morphological erosion and dilation to the generated design image to reduce its noise points;
step 5: smoothing the color edges in the image with median filtering to reduce edge discontinuities.
2. The GAN-based automatic design image generation method of claim 1, characterized in that in step 2: the GAN deep learning model is divided into a generator and a discriminator; in one training iteration, the generator network takes a random vector of dimension preset pixel × 1 as input and generates a preset pixel × preset pixel image; the generated image and a real image are fed to the discriminator, the classification accuracy and the loss function are then calculated, and the calculated losses are fed back to the generator and the discriminator respectively; the generator network and the discriminator network adjust their network parameters according to the result of gradient descent on the loss, and then the next training iteration begins.
3. The GAN-based automatic design image generation method of claim 2, characterized in that the generator network can continuously generate preset pixel × preset pixel images.
4. The GAN-based automatic design image generation method of any one of claims 1 to 3, characterized in that the preset pixel size is 256 × 256 pixels or 1024 × 1024 pixels.
5. The GAN-based automatic design image generation method of claim 1, characterized in that in step 1: blank elimination traverses each pixel of the design image, finds the top-left-most and bottom-right-most non-white pixels, keeps the rectangular region between these two pixels, and discards the blank portion outside that region.
6. The GAN-based automatic design image generation method of claim 1, characterized in that in step 1: size normalization preserves the aspect ratio of the design image: images whose aspect ratio falls outside the preset image aspect ratio are deleted, and the length and width of the remaining images are scaled to preset pixel × preset pixel.
7. The GAN-based automatic design image generation method of claim 6, characterized in that the preset image aspect ratio is 1:1.2.
8. The GAN-based automatic design image generation method of claim 7, characterized in that the preset image aspect ratio is 0.83:1.
9. The GAN-based automatic design image generation method of claim 1, characterized in that in step 5: the median filter kernel size is at least [3, 3].
10. The GAN-based automatic design image generation method of claim 9, characterized in that the median filter kernel size is at most [15, 15].
CN202010423772.1A 2020-05-19 2020-05-19 Automatic design image generation method based on GAN Pending CN111784592A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010423772.1A CN111784592A (en) 2020-05-19 2020-05-19 Automatic design image generation method based on GAN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010423772.1A CN111784592A (en) 2020-05-19 2020-05-19 Automatic design image generation method based on GAN

Publications (1)

Publication Number Publication Date
CN111784592A true CN111784592A (en) 2020-10-16

Family

ID=72754180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010423772.1A Pending CN111784592A (en) 2020-05-19 2020-05-19 Automatic design image generation method based on GAN

Country Status (1)

Country Link
CN (1) CN111784592A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106683080A (en) * 2016-12-15 2017-05-17 广西师范大学 Retinal fundus image preprocessing method
US20180286055A1 (en) * 2017-04-04 2018-10-04 General Electric Company Optical flow determination system
CN107392255A (en) * 2017-07-31 2017-11-24 深圳先进技术研究院 Generation method, device, computing device and the storage medium of minority class picture sample
AU2017101166A4 (en) * 2017-08-25 2017-11-02 Lai, Haodong MR A Method For Real-Time Image Style Transfer Based On Conditional Generative Adversarial Networks
CN108564119A (en) * 2018-04-04 2018-09-21 华中科技大学 A kind of any attitude pedestrian Picture Generation Method
CN110276708A (en) * 2019-05-08 2019-09-24 济南浪潮高新科技投资发展有限公司 A kind of image digital watermark generation and identification system and method based on GAN network
CN110659958A (en) * 2019-09-06 2020-01-07 电子科技大学 Clothing matching generation method based on generation of countermeasure network
CN110956079A (en) * 2019-10-12 2020-04-03 深圳壹账通智能科技有限公司 Face recognition model construction method and device, computer equipment and storage medium
CN111161137A (en) * 2019-12-31 2020-05-15 四川大学 Multi-style Chinese painting flower generation method based on neural network

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
CEDRIC OELDORF et al.: "LoGANv2: Conditional Style-Based Logo Generation with Generative Adversarial Networks", arXiv *
IAN J. GOODFELLOW et al.: "Generative Adversarial Nets", arXiv *
TERO KARRAS et al.: "A Style-Based Generator Architecture for Generative Adversarial Networks", arXiv *
YAN WU et al.: "LOGAN: Latent Optimisation for Generative Adversarial Networks", arXiv *
梁培俊 et al.: "Coloring method for hand-drawn comic sketches based on conditional generative adversarial networks", Application Research of Computers *
王坤峰 et al.: "Research progress and prospects of generative adversarial networks (GAN)", Acta Automatica Sinica *
黄真赟 et al.: "Research on automatic generation of anime character images based on generative adversarial networks", Electronic Technology & Software Engineering *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20201016)