CN113112572A - Hidden space search-based image editing method guided by hand-drawn sketch - Google Patents


Info

Publication number
CN113112572A
CN113112572A (application number CN202110393721.3A)
Authority
CN
China
Prior art keywords
image
distance
hidden space
hand
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110393721.3A
Other languages
Chinese (zh)
Other versions
CN113112572B (en)
Inventor
付彦伟
汪成荣
曹辰捷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN202110393721.3A priority Critical patent/CN113112572B/en
Publication of CN113112572A publication Critical patent/CN113112572A/en
Application granted granted Critical
Publication of CN113112572B publication Critical patent/CN113112572B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 — 2D [Two Dimensional] image generation
    • G06T 11/60 — Editing figures and text; Combining figures or text
    • G06T 11/20 — Drawing from basic elements, e.g. lines or circles
    • G06T 11/206 — Drawing of charts or graphs
    • G06T 7/00 — Image analysis
    • G06T 7/10 — Segmentation; Edge detection
    • G06T 7/13 — Edge detection
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 — Special algorithmic details
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/08 — Learning methods
    • G06N 3/084 — Backpropagation, e.g. using gradient descent


Abstract

The invention provides a hand-drawn sketch guided image editing method based on hidden space (latent space) search, used for editing an image to be edited, and characterized by comprising the following steps: step S1, obtaining images for training; step S2, extracting edge maps of the training images and training a neural network; step S3, extracting an edge map of the image to be edited and combining it with the hand-drawn sketch according to the mask; step S4, calculating an initial hidden space vector with the neural network; step S5, generating an image from the initial hidden space vector; step S6, extracting the edge map of the generated image, calculating the distance between its feature maps and those of the hand-drawn sketch within the mask region, and calculating the Euclidean distance and the perceptual distance to the image to be edited in the non-mask region; step S7, continuously reducing the three distances with a gradient descent algorithm; and step S8, fusing the mask region of the final generated image with the non-mask region of the image to be edited to obtain the final editing result.

Description

Hidden space search-based image editing method guided by hand-drawn sketch
Technical Field
The invention belongs to the field of computer image processing, and particularly relates to a hand-drawn sketch guided image editing method based on hidden space (i.e., latent space) search.
Background
Hand-drawn sketch guided image editing techniques aim to modify the masked portion of a provided original image according to a hand-drawn sketch entered by the user. The prior art generally trains a generative adversarial network from scratch for this task; however, such newly trained networks cannot generate content faithfully and stably according to the guiding sketch, and cannot guarantee the quality of the generated image.
Unsupervised image generation technology can produce images of good quality, superior to those of earlier image editing methods. Some methods control the generated content by optimizing a hidden space vector: Image2StyleGAN++, for example, can perform an image inpainting task by optimizing the latent vector, or perform limited editing of an image with colored strokes; PULSE can find a corresponding high-resolution output for a low-resolution image by searching the hidden space.
Although the above hidden space vector optimization methods achieve high generation quality, none of them can perform editing guided by a hand-drawn sketch.
Disclosure of Invention
To solve these problems, the invention provides an image editing method that incorporates a differentiable edge extractor. By comparing the distance between the edge map of the image generated from a hidden space vector and the guiding hand-drawn sketch input by the user, the method obtains a direction for optimizing the hidden space vector, and finds a satisfactory image generation result by searching the hidden space. The method adopts the following technical scheme:
the invention provides an image editing method guided by a hand-drawn sketch based on hidden space search, which is used for editing an image to be edited according to an input image to be edited, a mask of a region to be edited and the hand-drawn sketch for guiding the editing, and is characterized by comprising the following steps of: step S1, randomly sampling a plurality of n-dimensional random vectors from a normal distribution, and inputting the random vectors into a generator which is trained in advance and generates a countermeasure network, thereby obtaining a plurality of pairs at least formed by training images generated by the generator and the corresponding input random vectors; step S2, extracting the edge graph of the training image as the training edge graph by using a preset differentiable edge extraction algorithm, inputting the training edge graph into a neural network, and enabling the neural network to regress the implicit space vector corresponding to the input training edge graph so as to finish the training of the neural network; step S3, using a differentiable edge extraction algorithm to extract an edge graph of the image to be edited as a first edge graph, and combining the first edge graph with the hand-drawn sketch according to a mask to obtain a new edge graph as a second edge graph; step S4, calculating a hidden space vector corresponding to the second edge map by using the trained neural network as an initial hidden space vector, wherein the initial hidden space vector is a starting point of hidden space search; step S5, generating a generated image according to the initial hidden space vector by the generator; step S6, using a differentiable edge extraction algorithm to extract an edge graph of the generated image as a third edge graph, calculating the distance between the third edge graph and a characteristic graph of the hand-drawn sketch in a mask area, and simultaneously calculating the Euclidean distance and the perception 
distance between the generated image and an image to be edited in a non-mask area outside the mask; step S7, transmitting the feature map distance, the Euclidean distance and the perception distance to an initial hidden space vector through backward propagation, and reducing the feature map distance, the Euclidean distance and the perception distance by using a gradient descent algorithm to the initial hidden space vector; and S8, repeating the steps S5 to S7 until the distance of the feature map, the Euclidean distance and the perception distance are smaller than a preset threshold value, and finally fusing the generated mask region of the generated image and the unmasked region of the image to be edited to obtain a final editing result.
The hand-drawn sketch guided image editing method based on hidden space search provided by the invention may also have the technical feature that the feature map distance is computed by the neural network using a network feature-map loss function: the third edge map and the hand-drawn sketch are input into the neural network, their feature maps at specific layers of the network are taken, the distance between each corresponding pair of feature maps is calculated, and the calculated distances are summed to obtain the feature map distance.
The hand-drawn sketch guided image editing method based on hidden space search may also have the technical feature that, in step S7, the feature map distance, the Euclidean distance, and the perceptual distance are backpropagated through the generator to the initial hidden space vector, and the parameters of the initial hidden space vector are updated in the direction that reduces the loss using a gradient descent algorithm.
The hand-drawn sketch guided image editing method based on hidden space search provided by the invention may also have the technical feature that the generative adversarial network is a StyleGAN network, whose generator has a mapping module and a synthesis module; each pair further comprises a hidden space vector, namely the intermediate vector obtained by inputting the random vector into the generator and mapping it through the mapping module.
The hand-drawn sketch guided image editing method based on hidden space search provided by the invention may also have the technical feature that the differentiable edge extraction algorithm is a difference-of-Gaussians method or a pre-trained deep learning method.
The hand-drawn sketch guided image editing method based on hidden space search provided by the invention may also have the technical features that the neural network adopts a VGG network structure, and that in step S2 an L1 loss function is used to regress the output of the neural network toward the hidden space vector corresponding to the edge map.
Action and Effect of the Invention
According to the hand-drawn sketch guided image editing method based on hidden space search, the differentiable edge extraction algorithm is combined with a feature-map loss that measures the distance to the hand-drawn sketch, so that the hidden space vector of the generative adversarial network is updated step by step until the edge extraction result of the corresponding generated image approaches the guiding hand-drawn sketch input by the user, while the generation quality remains high. In this way, high-quality, highly realistic image editing results (such as face images) can be produced in hand-drawn sketch guided image editing tasks; the results are highly tolerant of imperfections in the input sketch while following it faithfully.
Drawings
FIG. 1 is a flowchart of an image editing method guided by a freehand sketch based on hidden space search according to an embodiment of the present invention; and
fig. 2 is a model structure diagram used in the hidden space search-based hand-drawn sketch guided image editing method in the embodiment of the present invention.
Detailed Description
To make the technical means, creative features, objectives, and effects of the present invention easy to understand, the hand-drawn sketch guided image editing method based on hidden space search is described below in detail with reference to the embodiments and the accompanying drawings.
<Embodiment>
Fig. 1 is a flowchart of an image editing method guided by a hand-drawn sketch based on hidden space search in an embodiment of the present invention, and fig. 2 is a model structure diagram used in the image editing method guided by the hand-drawn sketch based on hidden space search in the embodiment of the present invention.
As shown in fig. 1 and 2, the image editing method guided by the hand-drawn sketch based on the hidden space search specifically includes the following steps:
in step S1, a large number of n-dimensional random vectors are randomly sampled from a normal distribution, and then the random vectors are input into a pre-trained generator for generating a countermeasure network, so that the generator generates corresponding images according to the vectors. The number of dimensions n of the random vector is the same as the number of dimensions of the hidden space variable (512 dimensions in this example), and the hidden space vector is used for generating the countermeasure network.
The generative adversarial network used in step S1 may be any trained generative adversarial network; when executing an image editing task of a certain category, a network trained on the corresponding category achieves better results. In this embodiment, the generative adversarial network is a StyleGAN, whose generator comprises a mapping module and a synthesis module, so step S1 yields a large number of pairs of input vectors, corresponding mapped intermediate vectors, and generated images (i.e., training images). The hidden space vector mentioned in this embodiment refers to the intermediate vector produced by the mapping module.
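As a rough illustration of the pair collection in step S1 — a sketch under stated assumptions, not the patented implementation — the loop below samples random vectors, maps them to intermediate latent vectors, and generates images. `mapping` and `synthesis` are hypothetical stand-ins for the pre-trained StyleGAN mapping and synthesis modules:

```python
import numpy as np

rng = np.random.default_rng(0)

N_DIM = 512  # latent dimensionality used in this embodiment
W_MAP = rng.standard_normal((N_DIM, N_DIM)) * 0.01  # placeholder mapping weights

def mapping(z):
    """Stand-in for the mapping module: random vector z -> intermediate vector w."""
    return np.tanh(z @ W_MAP)

def synthesis(w):
    """Stand-in for the synthesis module: w -> a tiny 8x8 'image' (illustration only)."""
    return np.outer(w[:8], w[:8])

def sample_training_pairs(num_pairs):
    """Collect (random vector, hidden space vector, training image) pairs."""
    pairs = []
    for _ in range(num_pairs):
        z = rng.standard_normal(N_DIM)  # n-dimensional sample from N(0, I)
        w = mapping(z)                  # intermediate (hidden space) vector
        pairs.append((z, w, synthesis(w)))
    return pairs

pairs = sample_training_pairs(4)
```

In the actual method, each stored pair links a generated training image to the latent vectors that produced it, which is what makes the supervised regression of step S2 possible.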
Step S2, extracting an edge map of the training image as a training edge map by using a predetermined differentiable edge extraction algorithm, inputting the training edge map into a neural network, and making the neural network regress a hidden space vector corresponding to the training edge map to complete training of the neural network.
An edge map is an image with a white background and black lines, the black lines being the contours of objects in the original image. The neural network of this embodiment adopts a VGG network structure; it differs from a conventional VGG in that the number of parameters in the fully connected layers is reduced, and the final output is a 512-dimensional vector.
In addition, during training this embodiment applies an L1 loss function between the output of the neural network and the hidden space vector corresponding to the edge map; the gradient computed from the loss is backpropagated into the neural network to update its parameters.
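The L1 regression objective of step S2 can be sketched as follows, with a single linear layer standing in for the modified VGG regressor (the layer, dimensions, and learning rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

true_w = rng.standard_normal(16)  # ground-truth hidden space vector for one edge map
x = rng.standard_normal(16)       # stand-in for the edge-map input
W = np.zeros((16, 16))            # regressor parameters (toy linear layer)

def predict(W, x):
    return W @ x

def l1_loss(pred, target):
    """Mean absolute error, as in the embodiment's L1 training loss."""
    return float(np.abs(pred - target).mean())

lr = 0.1
before = l1_loss(predict(W, x), true_w)
for _ in range(500):
    pred = predict(W, x)
    # subgradient of the mean-L1 loss with respect to W
    grad_W = np.outer(np.sign(pred - true_w), x) / pred.size
    W -= lr * grad_W
after = l1_loss(predict(W, x), true_w)
```

After training, `after` is far below `before`: the regressor's output has been pulled toward the latent vector paired with the edge map, which is exactly the property step S4 relies on.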
In step S3, an edge map of the image to be edited is extracted using the differentiable edge extraction algorithm, and the edge map is combined with the guiding hand-drawn sketch according to the input mask.
In this embodiment, the user's input comprises an image to be edited, a mask of the region to be edited, and a hand-drawn sketch guiding the editing. In step S3, the edge map of the image to be edited (i.e., the first edge map) is extracted using a difference-of-Gaussians edge extraction algorithm, and the non-mask region of this edge map is combined with the mask region of the hand-drawn sketch to obtain a new edge map (i.e., the second edge map). The second edge map is input into the VGG network trained in step S2 to calculate the initial hidden space vector, as described in step S4 below.
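The combination in step S3 amounts to a per-pixel selection. A minimal sketch, assuming the convention that mask value 1 marks the region to be edited:

```python
import numpy as np

def combine_edge_maps(first_edge_map, sketch, mask):
    """Second edge map: hand-drawn sketch inside the mask, original edges outside."""
    return np.where(mask > 0, sketch, first_edge_map)

edge = np.full((4, 4), 255.0)    # white-background edge map of the original image
sketch = np.zeros((4, 4))        # black strokes everywhere, for illustration
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0             # region the user wants to edit

second = combine_edge_maps(edge, sketch, mask)
```

The resulting `second` carries the user's strokes where editing is requested and the faithful contours of the original everywhere else.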
Step S4, calculating the hidden space vector corresponding to the second edge map as the initial hidden space vector, using the neural network trained in step S2; this initial hidden space vector is the starting point of the hidden space search.
In step S5, the generator produces an image from the initial hidden space vector calculated in step S4; this is the generated image.
Step S6, extracting the edge map of the generated image as a third edge map using the differentiable edge extraction algorithm, calculating the feature map distance between the third edge map and the guiding hand-drawn sketch in the mask region (the feature map distance is computed by the neural network trained in step S2), and simultaneously calculating the Euclidean distance and the perceptual distance between the generated image and the image to be edited in the region outside the mask.
In this embodiment, the feature map distance uses a network feature-map loss function: the two images are input into the same neural network, their feature maps at specific layers of the network are taken, the distance between each corresponding pair of feature maps is calculated, and the distances are summed to obtain the feature-map loss (i.e., the feature map distance).
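The feature-map loss described above can be sketched with a toy, randomly initialized network in place of the trained regressor; the layer count, layer sizes, and the L2 metric per layer are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
LAYERS = [rng.standard_normal((8, 8)) * 0.1 for _ in range(3)]  # toy layer weights

def features(x):
    """Return the feature maps of x at each 'layer' of the toy network."""
    feats = []
    for W in LAYERS:
        x = np.maximum(W @ x, 0.0)  # linear layer followed by ReLU
        feats.append(x)
    return feats

def feature_map_distance(a, b):
    """Sum of per-layer L2 distances between the feature maps of a and b."""
    return sum(float(np.linalg.norm(fa - fb))
               for fa, fb in zip(features(a), features(b)))
```

Because both inputs pass through the same network, the distance is zero for identical inputs and grows as their intermediate representations diverge, which is what lets it guide the latent search.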
The non-mask region of the image to be edited is used to constrain the corresponding non-mask region of the generated image. The L1 distance (referred to here as the Euclidean distance) and the perceptual distance between the non-mask regions of the two images are calculated in step S6. The perceptual distance is computed in the same way as the feature-map loss above, except that a VGG network pre-trained on the ImageNet classification task is used.
Step S7, backpropagating the calculated feature map distance, Euclidean distance, and perceptual distance to the initial hidden space vector, and applying a gradient descent algorithm to the initial hidden space vector to continuously reduce the three distances.
In this embodiment, the gradients of the three distances are backpropagated through the generator of the generative adversarial network to the hidden space vector, and the parameters of the vector are updated in the direction that reduces the loss using a gradient descent algorithm. After each update of the vector, the generator is used again to produce the corresponding image, its edge map is extracted, and a new loss value is calculated.
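The generate-measure-descend loop of steps S5 to S7 can be sketched as below. The real method backpropagates through the generator and edge extractor; here a stand-in linear "generator" and a quadratic loss with a closed-form gradient are illustrative assumptions:

```python
import numpy as np

target = np.ones(8)          # stand-in for the combined sketch/image constraints

def generate(w):
    """Stand-in generator: maps a latent vector to an 'image'."""
    return 2.0 * w

def loss(w):
    """Stand-in for the summed feature-map, Euclidean, and perceptual distances."""
    return float(np.sum((generate(w) - target) ** 2))

def grad(w):
    """Closed-form gradient of the stand-in loss with respect to the latent."""
    return 4.0 * (generate(w) - target)

w = np.zeros(8)              # initial hidden space vector (search starting point)
lr = 0.05
for _ in range(200):
    w -= lr * grad(w)        # gradient-descent update of the latent vector
```

After the loop the loss is effectively zero: the latent has been pulled to a point whose generated output matches the target, mirroring how the search stops once all three distances fall below their thresholds.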
Step S8, repeating steps S5 to S7 until the three distances are small enough, and then fusing the mask region of the final generated image with the non-mask region of the image to be edited to obtain the final editing result.
When the loss value drops low enough, the generated image has the following characteristics: its non-mask region is close to the input original image, and the edge map extracted from its mask region is close to the guiding hand-drawn sketch. The mask region of the generated image is then fused with the non-mask region of the original image, yielding the hand-drawn sketch guided image editing result.
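The final fusion of step S8 can be sketched as a per-pixel blend (again assuming mask value 1 marks the edited region; a hard binary mask is an illustrative assumption):

```python
import numpy as np

def fuse(generated, original, mask):
    """Masked pixels come from the generated image, the rest from the original."""
    return mask * generated + (1.0 - mask) * original

gen = np.full((4, 4), 10.0)      # final generated image (toy values)
orig = np.full((4, 4), 200.0)    # image to be edited (toy values)
mask = np.zeros((4, 4))
mask[0:2, 0:2] = 1.0             # region that was edited

result = fuse(gen, orig, mask)
```

With a soft (feathered) mask the same formula blends the two sources smoothly at the seam, which in practice hides the boundary between edited and untouched regions.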
Action and Effect of the Embodiment
According to the hand-drawn sketch guided image editing method based on hidden space search provided by this embodiment, the differentiable edge extraction algorithm is combined with a feature-map loss that measures the distance to the hand-drawn sketch, so that the hidden space vector of the generative adversarial network is updated step by step until the edge extraction result of the corresponding generated image approaches the guiding hand-drawn sketch input by the user, while the generation quality remains high. In this way, high-quality, highly realistic image editing results (such as face images) can be produced in hand-drawn sketch guided image editing tasks; the results are highly tolerant of imperfections in the input sketch while following it faithfully.
The above-described embodiments are merely illustrative of specific embodiments of the present invention, and the present invention is not limited to the description of the above-described embodiments.
For example, in the above embodiment the differentiable edge extraction algorithm employs the difference-of-Gaussians method; as an alternative, a pre-trained deep learning method may be used.
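A difference-of-Gaussians edge response is built entirely from differentiable operations (padding, convolution, subtraction), which is what allows gradients to flow through it during the latent search. The sketch below is illustrative; the sigma values and kernel radius are assumptions, not taken from the patent:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur with edge padding; output keeps the input's shape."""
    r = int(3 * sigma)
    k = gaussian_kernel(sigma, r)
    padded = np.pad(img, r, mode="edge")
    # 1-D convolution along rows, then along columns
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="valid"), 0, tmp)

def difference_of_gaussians(img, sigma1=1.0, sigma2=2.0):
    """DoG edge response: difference of a fine and a coarse Gaussian blur."""
    return blur(img, sigma1) - blur(img, sigma2)
```

On a constant image the response is zero everywhere; near an intensity step it is nonzero, which is the band-pass behavior that makes DoG usable as a simple differentiable edge extractor.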

Claims (6)

1. A hidden space search-based hand-drawn sketch guided image editing method is used for editing an image to be edited according to an input image to be edited, a mask of a region to be edited and a hand-drawn sketch for guiding editing, and is characterized by comprising the following steps:
step S1, randomly sampling a plurality of n-dimensional random vectors from a normal distribution and inputting the random vectors into a pre-trained generator of a generative adversarial network, thereby obtaining a plurality of pairs each composed at least of a training image generated by the generator and the corresponding input random vector;
step S2, extracting an edge map of the training image as a training edge map by using a predetermined differentiable edge extraction algorithm, inputting the training edge map to a neural network, and making the neural network regress a hidden space vector corresponding to the inputted training edge map to complete training of the neural network;
step S3, using the differentiable edge extraction algorithm to extract an edge map of the image to be edited as a first edge map, and combining the first edge map and the freehand sketch according to the mask to obtain a new edge map as a second edge map;
step S4, calculating a hidden space vector corresponding to the second edge map by using the trained neural network as an initial hidden space vector, wherein the initial hidden space vector is a starting point of hidden space search;
step S5, generating a generated image according to the initial hidden space vector by the generator;
step S6, using the differentiable edge extraction algorithm to extract the edge map of the generated image as a third edge map, calculating the feature map distance between the third edge map and the hand-drawn sketch in the mask region, and simultaneously calculating the Euclidean distance and the perceptual distance between the generated image and the image to be edited in the non-mask region outside the mask;
step S7, backpropagating the feature map distance, the Euclidean distance, and the perceptual distance to the initial hidden space vector, and applying a gradient descent algorithm to the initial hidden space vector to reduce the feature map distance, the Euclidean distance, and the perceptual distance;
and step S8, repeating steps S5 to S7 until the feature map distance, the Euclidean distance, and the perceptual distance are smaller than preset thresholds, and finally fusing the mask region of the generated image with the non-mask region of the image to be edited to obtain a final editing result.
2. The hidden-space-search-based hand-drawn sketch guided image editing method as claimed in claim 1, wherein:
the feature map distance is obtained through calculation by the neural network, a network feature map loss function is used, the third edge map and the hand-drawn sketch are input into the neural network, feature maps of the third edge map and the hand-drawn sketch at specific network layers are taken, the distance between every two feature maps is calculated, and the calculated distances are further added to obtain the feature map distance.
3. The hidden-space-search-based hand-drawn sketch guided image editing method as claimed in claim 1, wherein:
in step S7, the feature map distance, the Euclidean distance, and the perceptual distance are backpropagated through the generator to the initial hidden space vector, and the parameters of the initial hidden space vector are updated in the direction that reduces the loss using the gradient descent algorithm.
4. The hidden-space-search-based hand-drawn sketch guided image editing method as claimed in claim 1, wherein:
wherein the generative adversarial network is a StyleGAN network, whose generator has a mapping module and a synthesis module,
and each pair further comprises a hidden space vector, namely the intermediate vector obtained by inputting the random vector into the generator and mapping it through the mapping module.
5. The hidden-space-search-based hand-drawn sketch guided image editing method as claimed in claim 1, wherein:
the differentiable edge extraction algorithm is a Gaussian difference method or a deep learning method obtained by pre-training.
6. The hidden-space-search-based hand-drawn sketch guided image editing method as claimed in claim 1, wherein:
wherein the neural network adopts a VGG network structure,
in step S2, an L1 loss function is used to regress the output of the neural network toward the hidden space vector corresponding to the edge map.
CN202110393721.3A 2021-04-13 2021-04-13 Hidden space search-based image editing method guided by hand-drawn sketch Active CN113112572B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110393721.3A CN113112572B (en) 2021-04-13 2021-04-13 Hidden space search-based image editing method guided by hand-drawn sketch


Publications (2)

Publication Number Publication Date
CN113112572A 2021-07-13
CN113112572B 2022-09-06

Family

ID=76716271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110393721.3A Active CN113112572B (en) 2021-04-13 2021-04-13 Hidden space search-based image editing method guided by hand-drawn sketch

Country Status (1)

Country Link
CN (1) CN113112572B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107085716A (en) * 2017-05-24 2017-08-22 复旦大学 Across the visual angle gait recognition method of confrontation network is generated based on multitask
US20190188534A1 (en) * 2017-12-14 2019-06-20 Honda Motor Co., Ltd. Methods and systems for converting a line drawing to a rendered image
US20190287283A1 (en) * 2018-03-15 2019-09-19 Adobe Inc. User-guided image completion with image completion neural networks
CN111489405A (en) * 2020-03-21 2020-08-04 复旦大学 Face sketch synthesis system for generating confrontation network based on condition enhancement
CN111814566A (en) * 2020-06-11 2020-10-23 北京三快在线科技有限公司 Image editing method, image editing device, electronic equipment and storage medium
CN111915693A (en) * 2020-05-22 2020-11-10 中国科学院计算技术研究所 Sketch-based face image generation method and system


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
KAMYAR NAZERI et al.: "EdgeConnect: Structure Guided Image Inpainting using Edge Prediction", 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), 5 March 2020
SHUNXIN XU et al.: "E2I: Generative Inpainting From Edge to Image", 2017 IEEE Visual Communications and Image Processing (VCIP), 1 March 2018
LENG Jiaming et al.: "Research on image translation methods based on conditional generative adversarial networks", Digital World, no. 09, 30 September 2020, pages 9-11
CUI Xiaoman et al.: "Multi-style sketch-to-photo generation based on conditional generative adversarial networks", Laser & Optoelectronics Progress, vol. 57, no. 18, 30 September 2020, pages 197-203
WANG Pengcheng: "Research on sketch-to-real-image translation using GANs with perceptual attention and latent-space regularization", China Master's Theses Full-text Database, Information Science and Technology, no. 08, 15 August 2020

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115496824A (en) * 2022-09-27 2022-12-20 北京航空航天大学 Multi-class object-level natural image generation method based on hand drawing
CN115496824B (en) * 2022-09-27 2023-08-18 北京航空航天大学 Multi-class object-level natural image generation method based on hand drawing

Also Published As

Publication number Publication date
CN113112572B (en) 2022-09-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant