CN109544450A - A kind of confrontation generates network establishing method and device, image reconstructing method and device - Google Patents
- Publication number: CN109544450A
- Application number: CN201811332629.0A
- Authority: CN (China)
- Prior art keywords: network, image, initial, training sample, prior
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS; G06—COMPUTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T3/00—Geometric image transformations in the plane of the image; G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4046—Scaling of whole images or parts thereof using neural networks
- G06T3/4053—Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
Abstract
The present invention provides a method and device for constructing a generative adversarial network, and a method and device for reconstructing an image. The construction method includes: for each training sample, inputting the low-resolution image in the training sample into an initial prior evaluation network to obtain a first prior feature map; performing image fusion on the low-resolution image and the first prior feature map to obtain an initial image; inputting the initial image into an initial generation network to obtain a target image; inputting the target image and the high-resolution image into an initial discrimination network to obtain an evaluation value of the training sample; judging, according to the evaluation value of each training sample, whether to adjust the model parameters; if adjustment is judged to be needed, updating each network with the obtained model optimization parameters and returning to the step of inputting the low-resolution image into the initial prior evaluation network; otherwise, forming the generative adversarial network from the initial prior evaluation network and the initial generation network. By applying the embodiment of the present invention, network accuracy is improved.
Description
Technical Field
The invention relates to the field of image processing, and in particular to a method and a device for constructing a generative adversarial network, and a method and a device for reconstructing an image.
Background
Images have received increasing attention as one of the main sources from which people acquire information. In recent years, image-based applications have grown explosively. High-resolution images have higher pixel density and richer detail than low-resolution images, and can better meet practical application requirements. Image reconstruction techniques were developed to convert low-resolution images into high-resolution images. Image reconstruction can break through the limits of image resolution without changing the hardware, reconstructing a low-resolution image of poor imaging quality into a high-resolution image that is easier to recognize.
At present, the most widely applied image reconstruction technology is super-resolution reconstruction based on deep learning, such as image reconstruction methods based on convolutional neural networks and generative adversarial networks. However, because the network structure is not comprehensive enough and the network accuracy is low, the extracted image feature information is not rich enough. As a result, problems remain of insignificant image quality improvement, insufficient diversity, and unnatural results, and the accuracy of the generative adversarial network used has become an important factor affecting the quality of the reconstructed image.
Therefore, it is necessary to design a new method for constructing a generative adversarial network to overcome the above problems.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a method and a device for constructing a generative adversarial network, and a method and a device for reconstructing an image, so as to improve network accuracy.
The invention is realized by the following steps:
in a first aspect, the present invention provides a method for constructing a generative adversarial network, the method including:
obtaining a training set, and loading a preset initial prior evaluation network, a preset initial generation network and a preset initial discrimination network, wherein the training set comprises a plurality of training samples, and each training sample comprises a low-resolution image and a high-resolution image corresponding to the low-resolution image;
for each training sample in a training set, inputting a low-resolution image in the training sample into the initial prior evaluation network to obtain a first prior feature map of the training sample; carrying out image fusion on the low-resolution image and the first prior feature map to obtain an initial image of the training sample; inputting the initial image into the initial generation network to obtain a target image of the training sample; inputting the target image and the high-resolution image in the training sample into the initial discrimination network to obtain an evaluation value of the training sample; the first prior feature map of the training sample is obtained by extracting the face key feature points and the face contour features of the low-resolution images in the training sample by the initial prior evaluation network;
after the evaluation value of each training sample is obtained, judging whether to adjust the model parameters according to each obtained evaluation value;
if the model parameter is judged to be adjusted, optimizing a preset loss function by using a model optimization algorithm to obtain model optimization parameters, respectively updating the initial prior evaluation network, the initial generation network and the initial discrimination network by using the obtained model optimization parameters, and returning to the step of inputting the low-resolution image in the training sample into the initial prior evaluation network;
and if it is judged that the model parameters are not to be adjusted, forming a generative adversarial network from the initial prior evaluation network and the initial generation network.
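The steps of the first aspect can be summarized as an iterative loop. The sketch below is illustrative only: `prior_net`, `gen_net`, `disc_net`, `fuse`, and `optimize` are hypothetical stand-ins for the initial prior evaluation network, the initial generation network, the initial discrimination network, the image fusion step, and the model optimization algorithm, and the mean-evaluation stopping rule follows the optional judging step described further below.

```python
def build_adversarial_network(train_set, prior_net, gen_net, disc_net,
                              fuse, optimize, threshold):
    """Illustrative construction loop (all callables are stand-ins).

    train_set: iterable of (low_res, high_res) pairs.
    Returns the pair (prior evaluation network, generation network)
    that together form the finished generative adversarial network.
    """
    while True:
        evaluations = []
        for lr, hr in train_set:
            prior_map = prior_net(lr)                 # first prior feature map
            initial = fuse(lr, prior_map)             # initial image
            target = gen_net(initial)                 # target image
            evaluations.append(disc_net(target, hr))  # evaluation value
        # stop adjusting once the mean evaluation value reaches the threshold
        if sum(evaluations) / len(evaluations) >= threshold:
            return prior_net, gen_net
        optimize(prior_net, gen_net, disc_net)        # update all three networks
```

Note that the discrimination network is used only during construction; the returned network pair is what the second aspect reuses at reconstruction time.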
Optionally, the preset initial prior evaluation network is a convolutional neural network, the preset initial generation network is a residual network, and the preset initial discrimination network is a full convolutional network.
Optionally, after obtaining the training set, the method further includes:
cutting each low-resolution image in the training set to a first preset size;
inputting the low-resolution images in the training sample into the initial prior evaluation network, including:
and inputting the low-resolution image which is cut to a first preset size in the training sample into the initial prior evaluation network.
Optionally, after determining to adjust the model parameter, before optimizing the preset loss function with the model optimization algorithm, the method further includes:
for each training sample in the training set, inputting the high-resolution image in the training sample into the initial prior evaluation network to obtain a second prior feature map of the training sample; that is, the second prior feature map of the training sample is obtained by the initial prior evaluation network extracting the face key feature points and face contour features of the high-resolution image in the training sample.
Optionally, the loss function is: L_total = w_p·L_p + w_pixel·L_pixel + w_vgg·L_vgg + w_adv·L_adv;
wherein w_p, w_pixel, w_vgg and w_adv respectively represent preset weights, and N represents the total number of training samples; p̂^(i) and p^(i) respectively represent the first prior feature map and the second prior feature map of the i-th training sample; ŷ^(i) and y^(i) respectively represent the target image of the i-th training sample and the high-resolution image in the i-th training sample; φ_{k,j}(·) denotes the activation feature map of the j-th convolutional layer before the k-th max-pooling layer in the initial discrimination network for a given input; x^(i) represents the initial image of the i-th training sample, with G(x^(i)) equivalent to the target image ŷ^(i) of the i-th training sample; and D(ŷ^(i)) denotes the evaluation value of the i-th training sample.
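Numerically, the total loss is a preset-weight combination of the four component terms. A minimal sketch follows, with placeholder weight values and, as an assumption not spelled out in the text, mean-squared error shown as one common realization of the prior and pixel terms:

```python
import numpy as np

def mse_term(preds, targets):
    """Mean-squared error averaged over the N training samples; shown as
    an assumed realization of L_p (prior maps) or L_pixel (images)."""
    return float(np.mean([np.mean((a - b) ** 2)
                          for a, b in zip(preds, targets)]))

def total_loss(l_p, l_pixel, l_vgg, l_adv,
               w_p=1.0, w_pixel=1.0, w_vgg=1.0, w_adv=1e-3):
    """L_total = w_p*L_p + w_pixel*L_pixel + w_vgg*L_vgg + w_adv*L_adv.
    The default weights are placeholders, not values from the patent."""
    return w_p * l_p + w_pixel * l_pixel + w_vgg * l_vgg + w_adv * l_adv
```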
Optionally, judging whether to adjust the model parameters according to each obtained evaluation value includes:
calculating the average value of the evaluation values, and judging whether the average value is smaller than a preset threshold value;
if the average value is smaller than the preset threshold value, judging that the model parameters are to be adjusted;
and if not, judging that the model parameters are not to be adjusted.
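The threshold test above amounts to a one-line predicate, where `threshold` is the preset threshold value:

```python
def should_adjust(evaluations, threshold):
    """True when the average evaluation value is still below the preset
    threshold, i.e. the model parameters need further adjustment."""
    return sum(evaluations) / len(evaluations) < threshold
```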
In a second aspect, the present invention provides a method of image reconstruction, the method comprising:
obtaining an image to be reconstructed;
loading a generative adversarial network, and inputting the image to be reconstructed into the generative adversarial network to obtain a reconstructed image output by the network; the generative adversarial network is constructed according to any one of the above construction methods, and the reconstructed image it outputs is obtained by:
inputting the image to be reconstructed into the initial prior evaluation network in the generative adversarial network to obtain a target prior feature map output by the initial prior evaluation network;
and performing image fusion on the image to be reconstructed and the target prior feature map, and inputting the image fusion result into the initial generation network in the generative adversarial network to obtain a reconstructed image output by the initial generation network.
Optionally, before the image to be reconstructed is input into the generative adversarial network, the method further includes:
scaling the image to be reconstructed to a first preset size;
inputting the image to be reconstructed into the generative adversarial network then includes:
inputting the image to be reconstructed, scaled to the first preset size, into the generative adversarial network.
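Reconstruction in the second aspect reuses only the first two networks. A hedged sketch, with `prior_net`, `gen_net`, `fuse`, and `scale` as stand-ins for the initial prior evaluation network, the initial generation network, the image fusion step, and the optional scaling step:

```python
def reconstruct(image, prior_net, gen_net, fuse, scale):
    """Scale the input, extract the target prior feature map, fuse it with
    the scaled image, and run the generation network; the initial
    discrimination network is not used at reconstruction time."""
    x = scale(image)                    # to the first preset size
    prior_map = prior_net(x)            # target prior feature map
    return gen_net(fuse(x, prior_map))  # reconstructed image
```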
In a third aspect, the present invention provides a generative adversarial network construction apparatus, including:
a first obtaining module, configured to obtain a training set and to load a preset initial prior evaluation network, a preset initial generation network, and a preset initial discrimination network, wherein the training set comprises a plurality of training samples, and each training sample comprises a low-resolution image and a high-resolution image corresponding to the low-resolution image;
the first input module is used for inputting the low-resolution images in the training samples to the initial prior evaluation network for each training sample in the training set to obtain a first prior feature map of the training sample; carrying out image fusion on the low-resolution image and the first prior feature map to obtain an initial image of the training sample; inputting the initial image into the initial generation network to obtain a target image of the training sample; inputting the target image and the high-resolution image in the training sample into the initial discrimination network to obtain an evaluation value of the training sample; the first prior feature map of the training sample is obtained by extracting the face key feature points and the face contour features of the low-resolution images in the training sample by the initial prior evaluation network;
the judging module is used for judging whether to adjust the model parameters according to each obtained evaluation value after the evaluation value of each training sample is obtained;
the adjusting module is configured to, when the judgment result of the judging module is yes, optimize the preset loss function with a model optimization algorithm to obtain model optimization parameters, update the initial prior evaluation network, the initial generation network, and the initial discrimination network respectively with the obtained model optimization parameters, and return to inputting the low-resolution image in the training sample into the initial prior evaluation network;
and the generating module is configured to, when the judgment result of the judging module is no, form a generative adversarial network from the initial prior evaluation network and the initial generation network.
Optionally, the preset initial prior evaluation network is a convolutional neural network, the preset initial generation network is a residual network, and the preset initial discrimination network is a full convolutional network.
Optionally, the apparatus further includes a clipping module, configured to:
after obtaining a training set, cropping each low-resolution image in the training set to a first preset size;
the first input module inputs the low-resolution images in the training sample to the initial prior evaluation network, specifically:
and inputting the low-resolution image which is cut to a first preset size in the training sample into the initial prior evaluation network.
Optionally, the apparatus further includes a second input module, configured to:
after it is judged that the model parameters are to be adjusted and before the preset loss function is optimized with the model optimization algorithm, for each training sample in the training set, input the high-resolution image in the training sample into the initial prior evaluation network to obtain a second prior feature map of the training sample; that is, the second prior feature map of the training sample is obtained by the initial prior evaluation network extracting the face key feature points and face contour features of the high-resolution image in the training sample.
Optionally, the loss function is: L_total = w_p·L_p + w_pixel·L_pixel + w_vgg·L_vgg + w_adv·L_adv;
wherein w_p, w_pixel, w_vgg and w_adv respectively represent preset weights, and N represents the total number of training samples; p̂^(i) and p^(i) respectively represent the first prior feature map and the second prior feature map of the i-th training sample; ŷ^(i) and y^(i) respectively represent the target image of the i-th training sample and the high-resolution image in the i-th training sample; φ_{k,j}(·) denotes the activation feature map of the j-th convolutional layer before the k-th max-pooling layer in the initial discrimination network for a given input; x^(i) represents the initial image of the i-th training sample, with G(x^(i)) equivalent to the target image ŷ^(i) of the i-th training sample; and D(ŷ^(i)) denotes the evaluation value of the i-th training sample.
Optionally, the determining module determines whether to adjust the model parameters according to each obtained evaluation value, specifically by:
calculating the average value of the evaluation values and judging whether the average value is smaller than a preset threshold value;
if the average value is smaller than the preset threshold value, judging that the model parameters are to be adjusted;
and if not, judging that the model parameters are not to be adjusted.
In a fourth aspect, the present invention provides an image reconstruction apparatus, comprising:
a second obtaining module, configured to obtain an image to be reconstructed;
the loading module is configured to load a generative adversarial network and input the image to be reconstructed into the generative adversarial network to obtain a reconstructed image output by the network; the generative adversarial network is constructed according to any one of the above construction methods, and the reconstructed image it outputs is obtained by:
inputting the image to be reconstructed into the initial prior evaluation network in the generative adversarial network to obtain a target prior feature map output by the initial prior evaluation network;
and performing image fusion on the image to be reconstructed and the target prior feature map, and inputting the image fusion result into the initial generation network in the generative adversarial network to obtain a reconstructed image output by the initial generation network.
Optionally, the apparatus further includes a scaling module, configured to scale the image to be reconstructed to a first preset size before the image to be reconstructed is input into the generative adversarial network;
the loading module inputting the image to be reconstructed into the generative adversarial network then specifically includes:
inputting the image to be reconstructed, scaled to the first preset size, into the generative adversarial network.
The invention has the following beneficial effects: by applying the embodiment of the invention, the training set is input into the initial prior evaluation network, a target image is generated by the initial generation network and evaluated by the initial discrimination network, and whether to adjust the model parameters is judged according to the evaluation values; if adjustment is judged to be needed, the initial prior evaluation network, the initial generation network, and the initial discrimination network are each updated with the model optimization parameters, and if not, a generative adversarial network is formed from the initial prior evaluation network and the initial generation network.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flow chart of a method for constructing a generative adversarial network according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an initial generation network generating a target image according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of an image reconstruction method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a generative adversarial network construction apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image reconstruction apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the generative adversarial network construction method provided by the present invention can be applied to an electronic device; in a specific application, the electronic device may be a computer, a personal computer, a tablet, a mobile phone, or the like.
Referring to fig. 1, an embodiment of the present invention provides a method for constructing a generative adversarial network, where the method includes the following steps:
s101, obtaining a training set, and loading a preset initial prior evaluation network, a preset initial generation network and a preset initial discrimination network, wherein the training set comprises a plurality of training samples, and each training sample comprises a low-resolution image and a high-resolution image corresponding to the low-resolution image;
each training sample may include a low resolution image and a high resolution image corresponding to the low resolution image. The low-resolution image and the high-resolution image corresponding to the low-resolution image have the same content, but the resolution of the images is different, the resolution of the low-resolution (LR) image is lower than that of the high-resolution (HR) image, the high resolution means that the density of pixels in the images is high, more details can be provided, and the details can show the image content more clearly.
The preset initial prior evaluation network may be one of neural networks such as a convolutional neural network, a radial basis function neural network, or a deconvolution network; the preset initial generation network may be one of machine learning networks such as a residual network, a radial basis function neural network, a deconvolution network, or a support vector machine network; and the preset initial discrimination network may be one of neural networks such as a full convolutional network, a radial basis function neural network, or a deconvolution network. A residual network is a deep convolutional neural network, and a full convolutional network is a convolutional neural network in which all layers are convolutional layers.
For convenience of training, each network may be a particular convolutional neural network: specifically, the preset initial prior evaluation network may be a convolutional neural network, the preset initial generation network may be a residual network, and the preset initial discrimination network may be a full convolutional network. The full convolutional network may be a 19-layer VGG model, a 16-layer VGG model, or the like. VGG stands for the Visual Geometry Group of the University of Oxford, whose published network models begin with "VGG".
Model parameters in the preset initial prior evaluation network, the preset initial generation network and the preset initial judgment network are preset initial values, and the networks can be continuously updated by continuously adjusting the model parameters of the networks.
S102, for each training sample in a training set, inputting a low-resolution image in the training sample to an initial prior evaluation network to obtain a first prior feature map of the training sample; carrying out image fusion on the low-resolution image and the first prior feature map to obtain an initial image of the training sample; inputting the initial image into an initial generation network to obtain a target image of the training sample; inputting the target image and the high-resolution image in the training sample into an initial discrimination network to obtain an evaluation value of the training sample; the first prior feature map of the training sample is obtained by extracting human face key feature points and human face contour features of a low-resolution image in the training sample through an initial prior evaluation network;
to simplify the training process, after obtaining the training set, the method may further comprise:
cutting each low-resolution image in the training set to a first preset size;
inputting the low-resolution images in the training sample into the initial prior evaluation network, including:
and inputting the low-resolution image which is cut to a first preset size in the training sample into the initial prior evaluation network.
Each low-resolution image in the training set can be cropped to the first preset size either by random cropping or by cropping sequentially at preset cut positions, so that one low-resolution image can be divided into a plurality of images; this increases the number of training samples and improves the reliability and accuracy of training. The initial prior evaluation network may store a size parameter in advance, namely the first preset size, which may be preset as required, for example 128 x 128, 256 x 256, and so on.
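Sequential cropping at preset cut positions can be sketched as follows; the non-overlapping patch layout and the handling of any border remainder are assumptions, since the patent does not fix the cut positions:

```python
import numpy as np

def sequential_crops(image, size):
    """Cut an H x W x C image into non-overlapping size x size patches at
    preset cut positions, discarding any remainder at the borders."""
    h, w = image.shape[:2]
    return [image[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]
```

Cropping a 256 x 256 image to a first preset size of 128 thus yields four training patches from one low-resolution image.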
In addition, in an embodiment, before cropping each low resolution image in the training set to a first preset size, the method may further include:
amplifying each low-resolution image in the training set to a second preset size by using an interpolation algorithm;
cropping each low-resolution image in the training set to a first preset size, comprising:
and cutting each low-resolution image amplified to a second preset size in the training set to a first preset size.
The image size can be enlarged by adopting interpolation algorithms such as nearest neighbor interpolation, bilinear interpolation or bicubic interpolation, and the like, all the low-resolution images are enlarged to the same size, and then all the enlarged low-resolution images are cut to a first preset size. The second preset size may be larger than the first preset size, and the second preset size may be preset, for example, 170x170, 180x180, and the like.
The size of the low-resolution images is enlarged through an interpolation algorithm, so that more images can be cut, the number of training samples is further increased, and the reliability and accuracy of training can be improved.
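Nearest-neighbour enlargement, the simplest of the interpolation algorithms mentioned above, can be sketched as follows; bilinear or bicubic interpolation would slot into the same place in the pipeline:

```python
import numpy as np

def nearest_resize(image, out_h, out_w):
    """Enlarge an H x W (x C) image to a second preset size by
    nearest-neighbour interpolation (source-index replication)."""
    h, w = image.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return image[rows][:, cols]
```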
After the initial prior evaluation network receives an image, it can extract the face key feature points and face contour features of the image and output a prior feature map. The first prior feature map is obtained by the initial prior evaluation network extracting the face key feature points and face contour features of a low-resolution image; the second prior feature map is obtained by the initial prior evaluation network extracting the face key feature points and face contour features of a high-resolution image. The face key feature points may include key feature points of facial organs such as the corners of the eyes, the tip of the nose, and the corners of the mouth, and the face contour features may include edge features of facial parts such as the nose, eyes, and mouth.
For each training sample, after obtaining a first prior feature map of the training sample, performing image fusion on a low-resolution image of the training sample and the first prior feature map to obtain an initial image of the training sample; further, the initial image may be input to an initial generation network to obtain a target image of the training sample.
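The patent does not specify the fusion operator here; channel-wise concatenation is one common realization and is used below purely as an assumption:

```python
import numpy as np

def fuse(lr_image, prior_map):
    """Fuse an H x W x C low-resolution image with an H x W x K prior
    feature map along the channel axis (assumed fusion operator)."""
    return np.concatenate([lr_image, prior_map], axis=-1)
```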
Illustratively, as shown in fig. 2, the initial generation network is a residual network, which includes an input module 201, a cubic downsampling and convolution processing module 202, a residual block processing module 203, a cubic upsampling and convolution processing module 204, and an output module 205;
the specific process of initially generating a target image of a training sample output by a network is as follows:
the input module 201 obtains an initial image of the training sample; wherein the size of the initial image is 128x 3;
the cubic downsampling and convolution processing module 202 performs cubic downsampling and convolution processing on the initial image, which may specifically be: performing convolution processing on an initial image by k3n32s1p1 (wherein k3n32s1p1 is used for describing convolution kernel parameters and respectively indicates that the convolution kernel k is 3 x 3, the kernel number n is 32, the step size s is 1 and the filling length p is 1) to obtain an image block 1 with the size of 128x 32, and performing convolution processing by using the convolution kernel parameter k3n32s1p1 to obtain an image block 2 with the size of 128x 32; performing first downsampling (the convolution kernel parameter of the first downsampling is k3n64s2p0), and obtaining an image block 3 with the size of 64 x 64; obtaining an image block 4 with the size of 64 × 64 after convolution processing with a convolution kernel parameter k3n64s1p 1; performing a second downsampling (the convolution kernel parameter of the second downsampling is k3n128s2p0) to obtain an image block 5 with the size of 32 x128, and performing convolution processing with the convolution kernel parameter of k3n128s1p1 to obtain an image block 6 with the size of 32 x 128; carrying out third down-sampling (the convolution kernel parameter of the third down-sampling is k3n256s2p0) to obtain an image block with the size of 16 x 256;
the residual block processing module 203 successively uses six residual blocks (7) to perform image feature extraction on the 16 × 16 × 256 image block, and the sixth residual block outputs an image block of size 16 × 16 × 128, thereby completing the encoding of the initial image;
the triple-upsampling and convolution processing module 204 obtains the image block of size 16 × 16 × 128, obtains an image block 8 of size 32 × 32 × 128 through a first upsampling (the upsampling parameter is sw2sh2, respectively indicating a lateral sampling multiple sw of 2 and a longitudinal sampling multiple sh of 2), and obtains image blocks 9 and 10 of size 32 × 32 × 128 through two convolution processings with convolution kernel parameters k3n128s1p1; after convolution processing with convolution kernel parameters k3n64s1p1, an image block 11 of size 32 × 32 × 64 is obtained; after a second upsampling (the upsampling parameter is sw2sh2), an image block 12 of size 64 × 64 × 64 is obtained, and after two convolution processings with parameters k3n64s1p1, image blocks 13 and 14 of size 64 × 64 × 64 are obtained; an image block 15 of size 64 × 64 × 32 is obtained after convolution processing with convolution kernel parameters k3n32s1p1; a third upsampling (the upsampling parameter is sw2sh2) is performed to obtain an image block 16 of size 128 × 128 × 32, and convolution processing with convolution kernel parameters k3n32s1p1 is performed to obtain an image block 17 of size 128 × 128 × 32; after convolution processing with convolution kernel parameters k3n3s1p1, an image block of size 128 × 128 × 3 is obtained, thus yielding the target image.
The output module 205 outputs a target image of size 128 × 128 × 3.
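The encoder–decoder pipeline of modules 202–205 can be summarized as a shape trace (a simplified sketch that records one (spatial size, channels) entry per stage; the stage list is an assumption read off fig. 2 as described above, and the decoder's repeated same-size convolutions are collapsed):

```python
def trace_generator(size=128):
    """Trace (spatial size, channels) through the generator of fig. 2,
    one entry per stage; same-size decoder convolutions are collapsed."""
    shapes = [(size, 3)]                 # fused initial image, 128x128x3
    shapes += [(size, 32), (size, 32)]   # two k3n32s1p1 convolutions
    for ch in (64, 128, 256):            # three stride-2 downsamplings
        size //= 2
        shapes.append((size, ch))
        if ch != 256:                    # k3 s1 p1 conv keeps the size
            shapes.append((size, ch))
    shapes.append((size, 128))           # six residual blocks, last -> 128 ch
    for ch in (128, 64, 32):             # three sw2sh2 upsamplings
        size *= 2
        shapes.append((size, ch))
    shapes.append((size, 3))             # k3n3s1p1 output convolution
    return shapes

stages = trace_generator()
assert stages[0] == (128, 3) and stages[-1] == (128, 3)
assert (16, 256) in stages and (16, 128) in stages
```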
After obtaining the target image of the training sample, the target image and the high-resolution image in the training sample may be input to the initial discrimination network, so as to obtain the evaluation value of the training sample. Preferably, the initial discrimination network may be a full convolution network, which can accept image input of any size; training with a full convolution network is more stable, the model converges faster, and the consumption of computing resources is reduced.
The evaluation value reflects the degree of similarity between the target image and the high-resolution image: the lower the evaluation value, the harder it is to distinguish the target image from the high-resolution image, and the higher their degree of similarity; conversely, the higher the evaluation value, the easier it is to distinguish the target image from the high-resolution image, and the lower their degree of similarity. The evaluation value may range from 0 to 1; for example, an evaluation value of 0 indicates that the target image and the high-resolution image cannot be distinguished at all, while an evaluation value of 1 indicates that they are easy to distinguish.
S103, after the evaluation value of each training sample is obtained, judging whether to adjust the model parameters according to each obtained evaluation value; if the model parameter is judged to be adjusted, executing S104; if the model parameters are not adjusted, executing S105;
in order to improve the reliability of model optimization, in one embodiment, determining whether to adjust the model parameters according to the obtained evaluation values may include:
calculating the average value of each evaluation value, and judging whether the average value is smaller than a preset threshold value or not;
if the average value is smaller than the preset threshold, determining to adjust the model parameters;
and if not, determining not to adjust the model parameters.
The preset threshold may be set in advance, and may be, for example, 0.1, 0.01, or the like. The model parameters may be network parameters such as upsampling parameters, downsampling parameters, convolution kernel parameters, and the like.
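The decision rule of S103 described above can be sketched as follows (a minimal illustration; the threshold default is one of the example values given in the text):

```python
def should_adjust(eval_values, threshold=0.1):
    """S103 decision rule: adjust the model parameters while the average
    evaluation value over the training set is below the preset threshold."""
    return sum(eval_values) / len(eval_values) < threshold

assert should_adjust([0.02, 0.05, 0.08]) is True   # mean 0.05 < 0.1 -> adjust
assert should_adjust([0.5, 0.9, 0.95]) is False    # mean ~0.78 -> form the network
```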
It can be seen that the resolution of the target image generated by the initial generation network may initially be no higher than that of the initial image. By continuously adjusting the model parameters, the resolution of the generated target image becomes higher than that of the initial image and approaches that of the high-resolution image, until the initial discrimination network cannot distinguish the target image from the high-resolution image, at which point the training of the initial generation network is complete. Moreover, adjusting the initial discrimination network makes its discrimination results more accurate, so the fully trained initial generation network is more reliable; and adjusting the initial prior evaluation network improves the resolution of the initial image input to the initial generation network, so that the initial generation network can be trained quickly and the model construction process is accelerated.
A comprehensive evaluation of the training set is obtained through the average value, which makes the evaluation more accurate; in turn, whether the model parameters need to be adjusted can be judged more reliably, further improving the reliability of model optimization.
In other embodiments, it may also be determined whether the smallest of the evaluation values is smaller than the preset threshold; if it is smaller, it is determined to adjust the model parameters; otherwise, it is determined not to adjust the model parameters.
S104, optimizing a preset loss function by using a model optimization algorithm to obtain model optimization parameters, respectively updating an initial prior evaluation network, an initial generation network and an initial discrimination network by using the obtained model optimization parameters, and returning to the step of inputting the low-resolution images in the training sample into the initial prior evaluation network;
the value of the loss function reflects the degree of difference between the target image generated by the model and the real high-resolution image in the training sample: the smaller the value of the loss function, the closer the generated image is to the real high-resolution image, and the higher the accuracy of the model. The loss function may be set empirically in advance and may be, for example, L_total = w_p·L_p + w_pixel·L_pixel + w_vgg·L_vgg + w_adv·L_adv;
wherein w_p, w_pixel, w_vgg and w_adv respectively represent the preset weights, and N represents the total number of training samples; the term L_p is computed from the first prior feature map and the second prior feature map of the i-th training sample; the term L_pixel is computed from the target image of the i-th training sample and the high-resolution image in the i-th training sample; the term L_vgg is computed from the activation feature map of the j-th convolutional layer before the k-th max-pooling layer in the initial discrimination network, the generation network's output for the initial image of the i-th training sample being equivalent to the target image of the i-th training sample; and the term L_adv is computed from the evaluation values of the training samples.
Optionally, w_p, w_pixel, w_vgg and w_adv may take values such as 0.9, 1e-2 and 1e-4, respectively. The values of k and j may be preset; for example, k may be 5 and j may be 4.
L_p measures the pixel-level average distance between the first and second prior feature maps; L_pixel measures the pixel-level average distance between the target image and the high-resolution image in the training sample (which may be referred to as the true high-definition image); L_vgg reflects the correlation between the generated target image and the true high-definition image in feature space; and L_adv reflects the overall evaluation value over the training set. By adopting this loss function, the differences between the actual outputs and the expected outputs of the initial prior evaluation network, the initial generation network and the initial discrimination network are comprehensively considered, realizing improvement and optimization of the loss function, so the model optimization parameters can be obtained more accurately, the countermeasure generation network can be trained more quickly, and the resulting countermeasure generation network has good robustness and high precision.
Alternatively, in other embodiments, the loss function may be L_total = w_pixel·L_pixel + w_vgg·L_vgg + w_adv·L_adv; or L_total = w_p·L_p + w_pixel·L_pixel + w_adv·L_adv, and so on.
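The weighted combination above, including the variants obtained by dropping a term, can be sketched as a single function (the loss values passed in are illustrative numbers, not from the text; the weights 0.9, 1e-2 and 1e-4 are the example values given):

```python
import math

def total_loss(l_p, l_pixel, l_vgg, l_adv, w_p, w_pixel, w_vgg, w_adv):
    """L_total = w_p*L_p + w_pixel*L_pixel + w_vgg*L_vgg + w_adv*L_adv;
    setting a weight to 0 drops the corresponding term, giving the variants above."""
    return w_p * l_p + w_pixel * l_pixel + w_vgg * l_vgg + w_adv * l_adv

# variant without the prior term L_p (w_p = 0); the remaining example weights
# 0.9, 1e-2 and 1e-4 are taken from the text
val = total_loss(2.0, 1.0, 0.5, 0.25, w_p=0.0, w_pixel=0.9, w_vgg=1e-2, w_adv=1e-4)
assert math.isclose(val, 0.9 * 1.0 + 1e-2 * 0.5 + 1e-4 * 0.25)
```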
If the loss function is L_total = w_p·L_p + w_pixel·L_pixel + w_vgg·L_vgg + w_adv·L_adv, then after determining to adjust the model parameters and before optimizing the preset loss function with the model optimization algorithm, the method further includes:
for each training sample in the training set, inputting the high-resolution image in the training sample into the initial prior evaluation network to obtain a second prior feature map of the training sample; the second prior feature map of the training sample is obtained by the initial prior evaluation network extracting the face key feature points and the face contour features of the high-resolution image in the training sample.
In addition, in another embodiment, before inputting the high resolution image in the training sample to the initial prior evaluation network, the method further comprises:
amplifying each high-resolution image in the training set to a second preset size by using an interpolation algorithm;
and cutting each high-resolution image amplified to the second preset size to the first preset size.
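The two preprocessing steps above can be sketched on a plain 2-D array (nearest-neighbour interpolation stands in for the unspecified interpolation algorithm, and the sizes are illustrative; a real implementation would typically use bicubic interpolation on full images):

```python
def resize_nearest(img, new_h, new_w):
    """Enlarge a 2-D image (list of rows) by nearest-neighbour interpolation."""
    h, w = len(img), len(img[0])
    return [[img[i * h // new_h][j * w // new_w] for j in range(new_w)]
            for i in range(new_h)]

def center_crop(img, size):
    """Cut the enlarged image down to a square of the given size."""
    h, w = len(img), len(img[0])
    top, left = (h - size) // 2, (w - size) // 2
    return [row[left:left + size] for row in img[top:top + size]]

# enlarge a 2x2 image to a "second preset size" of 4, then crop to a
# "first preset size" of 2 (sizes here are illustrative, not from the text)
big = resize_nearest([[1, 2], [3, 4]], 4, 4)
assert big == [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
assert center_crop(big, 2) == [[1, 2], [3, 4]]
```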
The model optimization algorithm is one of the gradient descent method, Newton's method, the quasi-Newton method, the conjugate gradient method, the SGD (Stochastic Gradient Descent) algorithm, the Adam (Adaptive Moment Estimation) algorithm, and the like.
By using a model optimization algorithm, model optimization parameters can be obtained, and then the initial prior evaluation network, the initial generation network and the initial discrimination network are adjusted according to the model optimization parameters to obtain an updated initial prior evaluation network, initial generation network and initial discrimination network. The model optimization parameters refer to optimized model parameters.
In addition, in other embodiments, each network may correspond to its own model optimization algorithm. For example, if the initial prior evaluation network, the initial generation network and the initial discrimination network correspond to the gradient descent method, Newton's method and the quasi-Newton method, respectively, then these three algorithms are used to optimize the loss function respectively to obtain respective model optimization parameters, which are then assigned to the corresponding networks so as to obtain the updated initial prior evaluation network, initial generation network and initial discrimination network.
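A minimal sketch of assigning each sub-network its own update rule (the plain gradient-descent step and the learning rates are illustrative assumptions, not values from the text):

```python
def sgd_step(params, grads, lr):
    """One plain gradient-descent update: p <- p - lr * dL/dp."""
    return [p - lr * g for p, g in zip(params, grads)]

# hypothetical mapping: each sub-network gets its own optimizer / learning rate
optimizers = {
    "prior":         lambda p, g: sgd_step(p, g, lr=1e-3),
    "generator":     lambda p, g: sgd_step(p, g, lr=1e-4),
    "discriminator": lambda p, g: sgd_step(p, g, lr=1e-4),
}

assert optimizers["prior"]([1.0], [2.0]) == [1.0 - 1e-3 * 2.0]
```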
And S105, forming a countermeasure generation network by using the initial prior evaluation network and the initial generation network.
Therefore, by applying the embodiment of the invention, the training set is input into the initial prior evaluation network, a target image is generated by the initial generation network, and evaluation is carried out by the initial discrimination network; whether to adjust the model parameters is judged according to the evaluation values. If it is determined to adjust the model parameters, the initial prior evaluation network, the initial generation network and the initial discrimination network are respectively updated with the model optimization parameters; if not, the countermeasure generation network is formed from the initial prior evaluation network and the initial generation network. In this way, the network structure of the countermeasure generation network is more comprehensive and of higher precision, end-to-end training is realized, manual intervention is reduced, and the model is more stable.
In order to solve the problem of low quality of a reconstructed image caused by low model precision in the prior art, the embodiment of the invention also discloses an image reconstruction method and an image reconstruction device.
It should be noted that the image reconstruction method provided by the embodiment of the present invention is applied to an electronic device, wherein in a specific application, the electronic device may be a server or a terminal device, which is reasonable. In addition, the functional software for implementing the image reconstruction method provided by the embodiment of the invention can be special image reconstruction software, and can also be a plug-in the existing image reconstruction software or other software with the image reconstruction function.
Referring to fig. 3, fig. 3 is a schematic flowchart of an image reconstruction method according to an embodiment of the present invention, including the following steps:
s301, obtaining an image to be reconstructed;
the human face image is used as an important individual identity identification medium, and the fact that the individual identity information is confirmed by utilizing the high-resolution human face image has extremely important practical significance. However, due to the limitation of monitoring hardware equipment, imaging environment and other factors, the collected face image often has the problems of low resolution and poor image quality and low identification degree, the construction cost can be greatly increased by unilaterally improving the imaging precision of the hardware equipment, the interference of the imaging environment is difficult to completely solve, and the reconstruction of the face image becomes very important. Therefore, the images in the training sample in the invention can comprise the low-resolution face image and the high-resolution face image corresponding to the low-resolution face image, and the image to be reconstructed can also be the face image.
S302, loading a countermeasure generation network, and inputting the image to be reconstructed into the countermeasure generation network to obtain a reconstructed image output by the countermeasure generation network; the countermeasure generation network is constructed according to the construction method described above, and the reconstructed image output by the countermeasure generation network is obtained in the following manner:
inputting an image to be reconstructed into an initial prior evaluation network in a countermeasure generation network to obtain a target prior feature map output by the initial prior evaluation network;
and carrying out image fusion on the image to be reconstructed and the target prior characteristic image, and inputting an image fusion result to an initial generation network in the countermeasure generation network to obtain a reconstructed image output by the initial generation network.
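The two inference steps above can be sketched as a single function (the stub sub-networks and the additive fusion are placeholders for the trained prior evaluation network, generation network and image-fusion operation, which the text does not specify in detail):

```python
def reconstruct(image, prior_net, gen_net, fuse):
    """Inference path of the trained network: extract prior features from the
    image to be reconstructed, fuse them with the image, then generate."""
    prior = prior_net(image)
    return gen_net(fuse(image, prior))

# stub sub-networks and an additive fusion, standing in for the trained ones
prior_net = lambda img: [v / 2 for v in img]    # placeholder prior network
fuse = lambda img, prior: [a + b for a, b in zip(img, prior)]
gen_net = lambda fused: [2 * v for v in fused]  # placeholder generator

assert reconstruct([1.0, 2.0], prior_net, gen_net, fuse) == [3.0, 6.0]
```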
As can be seen, with the image reconstruction method provided by the embodiment of the present invention, the countermeasure generation network is constructed by the above construction method, so the network has higher precision and the reconstructed image it outputs has higher resolution; the image to be reconstructed is thereby converted into an image of higher resolution, improving the resolution of the reconstructed image.
Optionally, before the image to be reconstructed is input to the countermeasure generation network, the method further includes:
zooming an image to be reconstructed to a first preset size;
inputting an image to be reconstructed into a countermeasure generation network, including:
and inputting the image to be reconstructed scaled to the first preset size into the countermeasure generating network.
Corresponding to the above method embodiment, the embodiment of the present invention further provides a device for constructing a countermeasure generation network.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a countermeasure generation network constructing apparatus according to an embodiment of the present invention, where the apparatus includes:
a first obtaining module 401, configured to obtain a training set, and load a preset initial prior evaluation network, a preset initial generation network, and a preset initial discrimination network, where the training set includes a plurality of training samples, and each training sample includes a low-resolution image and a high-resolution image corresponding to the low-resolution image;
a first input module 402, configured to, for each training sample in a training set, input a low-resolution image in the training sample to an initial prior evaluation network to obtain a first prior feature map of the training sample; carrying out image fusion on the low-resolution image and the first prior feature map to obtain an initial image of the training sample; inputting the initial image into an initial generation network to obtain a target image of the training sample; inputting the target image and the high-resolution image in the training sample into an initial discrimination network to obtain an evaluation value of the training sample; the first prior feature map of the training sample is obtained by extracting human face key feature points and human face contour features of a low-resolution image in the training sample through an initial prior evaluation network;
a determining module 403, configured to determine whether to adjust a model parameter according to each obtained evaluation value after obtaining the evaluation value of each training sample;
an adjusting module 404, configured to optimize a preset loss function by using a model optimization algorithm to obtain model optimization parameters when the determination result of the determining module 403 is yes, update the initial prior evaluation network, the initial generation network, and the initial discrimination network by using the obtained model optimization parameters, and return to execute the input of the low-resolution image in the training sample to the initial prior evaluation network;
a generating module 405, configured to, when the determination result of the determining module 403 is negative, form a countervailing generating network by using the initial prior evaluation network and the initial generating network.
Therefore, by applying the embodiment of the invention, the training set is input into the initial prior evaluation network, a target image is generated by the initial generation network, and evaluation is carried out by the initial discrimination network; whether to adjust the model parameters is judged according to the evaluation values. If it is determined to adjust the model parameters, the initial prior evaluation network, the initial generation network and the initial discrimination network are respectively updated with the model optimization parameters; if not, the countermeasure generation network is formed from the initial prior evaluation network and the initial generation network. In this way, the network structure of the countermeasure generation network is more comprehensive and of higher precision, end-to-end training is realized, manual intervention is reduced, and the model is more stable.
Optionally, the preset initial prior evaluation network is a convolutional neural network, the preset initial generation network is a residual error network, and the preset initial discrimination network is a full convolutional network.
Optionally, the apparatus further comprises a clipping module configured to:
after obtaining the training set, cutting each low-resolution image in the training set to a first preset size;
the first input module inputs the low-resolution images in the training sample to an initial prior evaluation network, specifically:
and inputting the low-resolution image which is cut to a first preset size in the training sample into an initial prior evaluation network.
Optionally, the apparatus further comprises a second input module, configured to:
after it is determined to adjust the model parameters and before the preset loss function is optimized with the model optimization algorithm, for each training sample in the training set, input the high-resolution image in the training sample into the initial prior evaluation network to obtain a second prior feature map of the training sample; the second prior feature map of the training sample is obtained by the initial prior evaluation network extracting the face key feature points and the face contour features of the high-resolution image in the training sample.
Optionally, the loss function is: L_total = w_p·L_p + w_pixel·L_pixel + w_vgg·L_vgg + w_adv·L_adv;
wherein w_p, w_pixel, w_vgg and w_adv respectively represent the preset weights, and N represents the total number of training samples; the term L_p is computed from the first prior feature map and the second prior feature map of the i-th training sample; the term L_pixel is computed from the target image of the i-th training sample and the high-resolution image in the i-th training sample; the term L_vgg is computed from the activation feature map of the j-th convolutional layer before the k-th max-pooling layer in the initial discrimination network, the generation network's output for the initial image of the i-th training sample being equivalent to the target image of the i-th training sample; and the term L_adv is computed from the evaluation values of the training samples.
Optionally, the determining module determines whether to adjust the model parameter according to each obtained evaluation value, specifically:
calculating the average value of each evaluation value, and judging whether the average value is smaller than a preset threshold value or not;
if the average value is smaller than the preset threshold, determining to adjust the model parameters; and if not, determining not to adjust the model parameters.
Corresponding to the above method embodiment, the embodiment of the present invention further provides an image reconstruction apparatus.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an image reconstructing apparatus according to an embodiment of the present invention, where the apparatus includes:
a second obtaining module 501, configured to obtain an image to be reconstructed;
a loading module 502, configured to load the countermeasure generating network, and input the image to be reconstructed into the countermeasure generating network to obtain a reconstructed image output by the countermeasure generating network; the countermeasure generation network is constructed according to any one of the above construction methods of the countermeasure generation network, and a reconstructed image output by the countermeasure generation network is obtained in the following manner:
inputting an image to be reconstructed into an initial prior evaluation network in a countermeasure generation network to obtain a target prior feature map output by the initial prior evaluation network;
and carrying out image fusion on the image to be reconstructed and the target prior characteristic image, and inputting an image fusion result to an initial generation network in the countermeasure generation network to obtain a reconstructed image output by the initial generation network.
Therefore, with the image reconstruction apparatus provided by the embodiment of the present invention, the countermeasure generation network is constructed by the above construction method, so the network has higher precision and the reconstructed image it outputs has higher resolution; the image to be reconstructed is thereby converted into an image of higher resolution, improving the resolution of the reconstructed image.
Optionally, the apparatus further includes a scaling module, configured to scale the image to be reconstructed to a first preset size before the image to be reconstructed is input to the countermeasure generating network;
the loading module 502 inputs the image to be reconstructed into the countermeasure generation network, specifically:
and inputting the image to be reconstructed scaled to the first preset size into the countermeasure generating network.
The present invention is not limited to the above preferred embodiments, and any modifications, equivalent substitutions, improvements, etc. within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A countermeasure generation network construction method, the method comprising:
obtaining a training set, and loading a preset initial prior evaluation network, a preset initial generation network and a preset initial discrimination network, wherein the training set comprises a plurality of training samples, and each training sample comprises a low-resolution image and a high-resolution image corresponding to the low-resolution image;
for each training sample in a training set, inputting a low-resolution image in the training sample into the initial prior evaluation network to obtain a first prior feature map of the training sample; carrying out image fusion on the low-resolution image and the first prior feature map to obtain an initial image of the training sample; inputting the initial image into the initial generation network to obtain a target image of the training sample; inputting the target image and the high-resolution image in the training sample into the initial discrimination network to obtain an evaluation value of the training sample; the first prior feature map of the training sample is obtained by extracting the face key feature points and the face contour features of the low-resolution images in the training sample by the initial prior evaluation network;
after the evaluation value of each training sample is obtained, judging whether to adjust the model parameters according to each obtained evaluation value;
if the model parameter is judged to be adjusted, optimizing a preset loss function by using a model optimization algorithm to obtain model optimization parameters, respectively updating the initial prior evaluation network, the initial generation network and the initial discrimination network by using the obtained model optimization parameters, and returning to the step of inputting the low-resolution image in the training sample into the initial prior evaluation network;
and if it is determined not to adjust the model parameters, forming a countermeasure generation network by using the initial prior evaluation network and the initial generation network.
2. The method of claim 1, wherein the predetermined initial prior evaluation network is a convolutional neural network, the predetermined initial generation network is a residual network, and the predetermined initial discrimination network is a full convolutional network.
3. The method of claim 1 or 2, wherein after obtaining the training set, the method further comprises:
cutting each low-resolution image in the training set to a first preset size;
inputting the low-resolution images in the training sample into the initial prior evaluation network, including:
and inputting the low-resolution image which is cut to a first preset size in the training sample into the initial prior evaluation network.
4. The method of claim 1, wherein after determining to adjust the model parameters, prior to optimizing the preset loss function with the model optimization algorithm, the method further comprises:
for each training sample in the training set, inputting the high-resolution image in the training sample into the initial prior evaluation network to obtain a second prior feature map of the training sample; wherein the second prior feature map of the training sample is obtained by the initial prior evaluation network extracting the face key feature points and the face contour features of the high-resolution image in the training sample.
5. The method of claim 4, wherein the loss function is: L_total = w_p·L_p + w_pixel·L_pixel + w_vgg·L_vgg + w_adv·L_adv;
wherein w_p, w_pixel, w_vgg and w_adv respectively represent the preset weights, and N represents the total number of training samples; the term L_p is computed from the first prior feature map and the second prior feature map of the i-th training sample; the term L_pixel is computed from the target image of the i-th training sample and the high-resolution image in the i-th training sample; the term L_vgg is computed from the activation feature map of the j-th convolutional layer before the k-th max-pooling layer in the initial discrimination network, the generation network's output for the initial image of the i-th training sample being equivalent to the target image of the i-th training sample; and the term L_adv is computed from the evaluation values of the training samples.
6. The method of claim 1, wherein determining whether to adjust the model parameters based on the obtained evaluation values comprises:
calculating the average value of each evaluation value, and judging whether the average value is smaller than a preset threshold value or not;
if the average value is smaller than the preset threshold, determining to adjust the model parameters;
and if not, determining not to adjust the model parameters.
7. A method of image reconstruction, the method comprising:
obtaining an image to be reconstructed;
loading a countermeasure generation network, and inputting the image to be reconstructed into the countermeasure generation network to obtain a reconstructed image output by the countermeasure generation network; wherein the challenge-generating network is constructed according to the method of any one of claims 1 to 6, and the reconstructed image output by the challenge-generating network is obtained by:
inputting an image to be reconstructed into an initial prior evaluation network in a countermeasure generation network to obtain a target prior feature map output by the initial prior evaluation network;
and carrying out image fusion on the image to be reconstructed and the target prior characteristic image, and inputting an image fusion result to an initial generation network in the countermeasure generation network to obtain a reconstructed image output by the initial generation network.
8. The method according to claim 7, wherein before inputting the image to be reconstructed to the countermeasure generating network, the method further comprises:
zooming the image to be reconstructed to a first preset size;
inputting the image to be reconstructed to the countermeasure generation network, including:
and inputting the image to be reconstructed scaled to the first preset size into the countermeasure generating network.
9. A countermeasure generation network construction apparatus, the apparatus comprising:
the device comprises a first obtaining module, a second obtaining module and a third obtaining module, wherein the first obtaining module is used for obtaining a training set and loading a preset initial prior evaluation network, a preset initial generation network and a preset initial discrimination network, the training set comprises a plurality of training samples, and each training sample comprises a low-resolution image and a high-resolution image corresponding to the low-resolution image;
the first input module is used for inputting the low-resolution images in the training samples to the initial prior evaluation network for each training sample in the training set to obtain a first prior feature map of the training sample; carrying out image fusion on the low-resolution image and the first prior feature map to obtain an initial image of the training sample; inputting the initial image into the initial generation network to obtain a target image of the training sample; inputting the target image and the high-resolution image in the training sample into the initial discrimination network to obtain an evaluation value of the training sample; the first prior feature map of the training sample is obtained by extracting the face key feature points and the face contour features of the low-resolution images in the training sample by the initial prior evaluation network;
the judging module is used for judging, after the evaluation value of each training sample is obtained, whether to adjust the model parameters according to the obtained evaluation values;
the adjusting module is used for, when the judgment result of the judging module is yes, optimizing a preset loss function by using a model optimization algorithm to obtain model optimization parameters, updating the initial prior evaluation network, the initial generation network and the initial discrimination network respectively by using the obtained model optimization parameters, and returning to the step of inputting the low-resolution image in the training sample into the initial prior evaluation network;
and the generation module is used for forming a countermeasure generation network from the initial prior evaluation network and the initial generation network when the judgment result of the judging module is negative.
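The module responsibilities of claim 9 amount to a plain training loop. The sketch below is a minimal, framework-free illustration of that flow, not the patented implementation: `prior_net`, `gen_net`, `disc_net` and `adjust` are hypothetical callables standing in for the three networks and the loss-optimization step, and `fuse` assumes fusion by channel-list concatenation, which the claims do not specify.

```python
def fuse(lr_image, prior_map):
    # Image fusion stand-in: concatenate the channel lists of the
    # low-resolution image and its prior feature map.
    return lr_image + prior_map

def construct_gan(training_set, prior_net, gen_net, disc_net, adjust,
                  threshold=0.5, max_rounds=100):
    """Construction flow of claim 9: for every training sample run
    prior -> fuse -> generate -> discriminate, then judge whether the
    model parameters still need adjusting."""
    for _ in range(max_rounds):
        # First input module: one evaluation value per training sample.
        evals = [disc_net(gen_net(fuse(lr, prior_net(lr))), hr)
                 for lr, hr in training_set]
        # Judging module: stop adjusting once the mean evaluation value
        # passes an (assumed) acceptance threshold.
        if sum(evals) / len(evals) >= threshold:
            break
        # Adjusting module: optimize the preset loss function and update
        # all three networks with the resulting model parameters.
        adjust()
    # Generation module: the prior evaluation network together with the
    # generation network forms the countermeasure generation network.
    return prior_net, gen_net
```

Note that the discriminator is only used during construction; the returned pair (prior evaluation network plus generation network) is what claim 10 later loads for reconstruction.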
10. An image reconstruction apparatus, characterized in that the apparatus comprises:
a second obtaining module, configured to obtain an image to be reconstructed;
the loading module is used for loading a countermeasure generation network and inputting the image to be reconstructed into the countermeasure generation network to obtain a reconstructed image output by the countermeasure generation network; wherein the countermeasure generation network is constructed according to the countermeasure generation network construction method of any one of claims 1 to 6, and a reconstructed image output by the countermeasure generation network is obtained by:
inputting the image to be reconstructed into the initial prior evaluation network in the countermeasure generation network to obtain a target prior feature map output by the initial prior evaluation network;
and carrying out image fusion on the image to be reconstructed and the target prior feature map, and inputting the image fusion result into the initial generation network in the countermeasure generation network to obtain a reconstructed image output by the initial generation network.
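Claims 7 and 10 describe the same inference path through the constructed network, which can be sketched in a few lines. As above, this is a hedged illustration: `prior_net` and `gen_net` are hypothetical stand-ins for the prior evaluation and generation networks, and fusion is assumed to be channel-list concatenation.

```python
def reconstruct(image, prior_net, gen_net):
    """Inference path of the constructed countermeasure generation
    network: prior feature extraction, image fusion, then generation."""
    prior_map = prior_net(image)   # target prior feature map
    fused = image + prior_map      # fusion by channel-list concatenation (assumed)
    return gen_net(fused)          # reconstructed image output by the generator
```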
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811332629.0A CN109544450B (en) | 2018-11-09 | 2018-11-09 | Method and device for constructing confrontation generation network and method and device for reconstructing image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109544450A true CN109544450A (en) | 2019-03-29 |
CN109544450B CN109544450B (en) | 2022-08-19 |
Family
ID=65846646
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811332629.0A Active CN109544450B (en) | 2018-11-09 | 2018-11-09 | Method and device for constructing confrontation generation network and method and device for reconstructing image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109544450B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110276399A (en) * | 2019-06-24 | 2019-09-24 | 厦门美图之家科技有限公司 | Image switching network training method, device, computer equipment and storage medium |
CN110706157A (en) * | 2019-09-18 | 2020-01-17 | 中国科学技术大学 | Face super-resolution reconstruction method for generating confrontation network based on identity prior |
CN110853040A (en) * | 2019-11-12 | 2020-02-28 | 北京深境智能科技有限公司 | Image collaborative segmentation method based on super-resolution reconstruction |
CN110866437A (en) * | 2019-09-23 | 2020-03-06 | 平安科技(深圳)有限公司 | Color value determination model optimization method and device, electronic equipment and storage medium |
CN112801281A (en) * | 2021-03-22 | 2021-05-14 | 东南大学 | Countermeasure generation network construction method based on quantization generation model and neural network |
CN113284073A (en) * | 2021-07-08 | 2021-08-20 | 腾讯科技(深圳)有限公司 | Image restoration method, device and storage medium |
CN113554047A (en) * | 2020-04-24 | 2021-10-26 | 京东方科技集团股份有限公司 | Training method of image processing model, image processing method and corresponding device |
CN114298137A (en) * | 2021-11-12 | 2022-04-08 | 广州辰创科技发展有限公司 | Tiny target detection system based on countermeasure generation network |
WO2024187901A1 (en) * | 2023-03-10 | 2024-09-19 | 支付宝(杭州)信息技术有限公司 | Image high-quality harmonization model training and device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107958246A (en) * | 2018-01-17 | 2018-04-24 | 深圳市唯特视科技有限公司 | A kind of image alignment method based on new end-to-end human face super-resolution network |
CN108022213A (en) * | 2017-11-29 | 2018-05-11 | 天津大学 | Video super-resolution algorithm for reconstructing based on generation confrontation network |
US20180144214A1 (en) * | 2016-11-23 | 2018-05-24 | General Electric Company | Deep learning medical systems and methods for image reconstruction and quality evaluation |
Non-Patent Citations (2)
Title |
---|
HUANG BIN et al.: "High-Quality Face Image Super-Resolution Using Conditional Generative Adversarial Networks", arXiv * |
JIA JIE: "Face super-resolution reconstruction and recognition based on generative adversarial networks", China Master's Theses Full-text Database, Information Science and Technology * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109544450B (en) | Method and device for constructing confrontation generation network and method and device for reconstructing image | |
CN111598998B (en) | Three-dimensional virtual model reconstruction method, three-dimensional virtual model reconstruction device, computer equipment and storage medium | |
CN111105352B (en) | Super-resolution image reconstruction method, system, computer equipment and storage medium | |
CN111179177B (en) | Image reconstruction model training method, image reconstruction method, device and medium | |
CN111598779B (en) | Image super-resolution processing method and device, electronic equipment and storage medium | |
CN111507333B (en) | Image correction method and device, electronic equipment and storage medium | |
WO2020165557A1 (en) | 3d face reconstruction system and method | |
CN112541864A (en) | Image restoration method based on multi-scale generation type confrontation network model | |
CN110211045A (en) | Super-resolution face image method based on SRGAN network | |
CN110580680B (en) | Face super-resolution method and device based on combined learning | |
CN113221925B (en) | Target detection method and device based on multi-scale image | |
CN113610087B (en) | Priori super-resolution-based image small target detection method and storage medium | |
CN111914756A (en) | Video data processing method and device | |
CN115578515B (en) | Training method of three-dimensional reconstruction model, three-dimensional scene rendering method and device | |
CN114782864B (en) | Information processing method, device, computer equipment and storage medium | |
US20230115765A1 (en) | Method and apparatus of transferring image, and method and apparatus of training image transfer model | |
CN112907448A (en) | Method, system, equipment and storage medium for super-resolution of any-ratio image | |
CN116977200A (en) | Processing method and device of video denoising model, computer equipment and storage medium | |
CN116977674A (en) | Image matching method, related device, storage medium and program product | |
CN114926734A (en) | Solid waste detection device and method based on feature aggregation and attention fusion | |
CN113096015B (en) | Image super-resolution reconstruction method based on progressive perception and ultra-lightweight network | |
CN113963009A (en) | Local self-attention image processing method and model based on deformable blocks | |
CN113920023A (en) | Image processing method and device, computer readable medium and electronic device | |
CN117593187A (en) | Remote sensing image super-resolution reconstruction method based on meta-learning and transducer | |
CN109871814B (en) | Age estimation method and device, electronic equipment and computer storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||