CN107133934B - Image completion method and device - Google Patents
- Publication number: CN107133934B
- Application number: CN201710354183.0A
- Authority
- CN
- China
- Prior art keywords
- image
- network
- graph
- layer
- generation
- Prior art date
- Legal status: Active (an assumption, not a legal conclusion)
Classifications (all within G06T — image data processing or generation, in general)
- G06T 5/77 — Image enhancement or restoration: retouching; inpainting; scratch removal
- G06T 2207/20021 — Special algorithmic details: dividing image into blocks, subimages or windows
- G06T 2207/20081 — Special algorithmic details: training; learning
- G06T 2207/20084 — Special algorithmic details: artificial neural networks [ANN]
Abstract
The disclosure relates to an image completion method and device. The method comprises the following steps: generating an n-layer Gaussian pyramid from an image to be restored, wherein n is a positive integer; completing the image of each layer of the n-layer Gaussian pyramid by a block matching method to obtain a completed real image for each layer; generating the generation network of a countermeasure network according to the n-th layer image of the pyramid and the real images of all layers; and completing the image to be repaired according to the n-th layer image and the generation network to obtain a completed image. With this technical scheme, the generation network is obtained from the Gaussian pyramid of the image to be restored, the real images of its layers, and the countermeasure network, so that a clear, complete, high-resolution image can be generated quickly: the resolution of the completed image is improved, learning of the generation network is accelerated, and a more realistic image is obtained.
Description
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image completion method and apparatus.
Background
With the development of the times, image completion is applied more and more widely, for example when details are lost after an image is enlarged or when part of an image is damaged. However, current schemes perform completion only through image searching, matching, and filtering, and the completion quality is poor.
Disclosure of Invention
The embodiment of the disclosure provides an image completion method and device. The technical scheme is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided an image completion method, including:
generating n layers of Gaussian pyramids according to an image to be restored, wherein n is a positive integer;
complementing the image of each layer in the n layers of Gaussian pyramids by a block matching method to obtain a complemented real image of each layer;
generating a generation network of a countermeasure network according to the nth layer image of the n layers of Gaussian pyramids and the real images of all the layers;
and completing the image to be repaired according to the nth layer image and the generated network to obtain a completed image.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: the generation network is obtained through the Gaussian pyramid of the image to be restored, the real image of each layer of the Gaussian pyramid and the countermeasure network, so that the clear, complete and high-resolution image can be quickly generated, the resolution of the completed image can be improved, the learning of the generation network can be accelerated, and a more real image can be obtained.
In an embodiment, the completing the image to be repaired according to the nth layer image and the generated network to obtain a completed image includes:
acquiring an i-th generation map, wherein i is an integer running from n down to 1;
inputting the i-th generation map and the i-th layer real map into the generation network to obtain the (i-1)-th generation map;
wherein the 0-th generation map is the completed map, and the n-th generation map is an image obtained by random up-sampling from the n-th layer image with the same resolution as the n-th layer image.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: and increasing image pixels and realizing generation of a network completion image in a circulating mode.
In one embodiment, the generating a generation network of a countermeasure network according to the n-th layer image of the n-layer Gaussian pyramid and the real images of all layers includes:
training a discrimination network of the countermeasure network that meets the requirement of an objective function through the n-th layer image and the real images of all layers;
training the generation network that meets the requirement of the objective function through the n-th layer image, the real images of all layers, and the discrimination network;
wherein the objective function is used for measuring the loss in the completion process.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: and training a generation network by training a discrimination network so as to realize image completion.
In one embodiment, the training of the discrimination network of the countermeasure network that meets the requirement of the objective function through the n-th layer image and the real images of all layers includes:
acquiring a j-th update map and the j-th layer real map, wherein j is an integer from 1 to n;
inputting the j-th layer real map and the j-th update map into an improved discrimination network to obtain a first judgment result;
when the first judgment result indicates that the j-th update map and the j-th layer real map are the same image, keeping the parameters of the improved discrimination network;
when the first judgment result indicates that the j-th update map and the j-th layer real map are not the same image, updating the parameters of the improved discrimination network through a stochastic gradient method and the objective function;
inputting the j-th update map into an initial generation network to obtain the (j-1)-th update map;
wherein, when the parameters of the improved discrimination network are no longer updated, the improved discrimination network is taken as the discrimination network, and the n-th update map is an image obtained by random up-sampling from the n-th layer image with the same resolution as the n-th layer image.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: the training process of the discriminant network is introduced.
In one embodiment, the training of the generation network that meets the requirement of the objective function through the n-th layer image, the real images of all layers, and the discrimination network includes:
acquiring a p-th improvement map and the p-th layer real map, wherein p is an integer from 1 to n;
inputting the p-th layer real map and the p-th improvement map into the discrimination network to obtain a second judgment result;
when the second judgment result indicates that the p-th improvement map and the p-th layer real map are the same image, keeping the parameters of an improvement generation network, and inputting the p-th improvement map into the improvement generation network to obtain the (p-1)-th improvement map;
when the second judgment result indicates that the p-th improvement map and the p-th layer real map are not the same image, updating the parameters of the improvement generation network according to a stochastic gradient method and the objective function, and inputting the p-th improvement map into the updated improvement generation network to obtain the (p-1)-th improvement map;
wherein, when the parameters of the improvement generation network are no longer updated, the improvement generation network is taken as the generation network; and the n-th improvement map is an image obtained by random up-sampling from the n-th layer image with the same resolution as the n-th layer image.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: after the discriminative network is determined, a training process for generating the network is introduced.
In one embodiment, the objective function is:
wherein G is the generation network, D is the discrimination network, I_q is the q-th real image, E[·] denotes expectation, G_q is the q-th generated image, q is an integer from n to 0, and a is a predetermined value.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: an objective function is introduced that achieves the best training results.
According to a second aspect of the embodiments of the present disclosure, there is provided an image complementing apparatus comprising:
the image restoration method comprises a first generation module, a second generation module and a third generation module, wherein the first generation module is used for generating n layers of Gaussian pyramids according to an image to be restored, and n is a positive integer;
the first complementing module is used for complementing the images of all layers in the n layers of Gaussian pyramids through a block matching device to obtain complemented real images of all layers;
the second generation module is used for generating a generation network of the countermeasure network according to the nth layer image of the n layers of Gaussian pyramids and the real images of all the layers;
and the second completion module is used for completing the image to be repaired according to the nth layer image and the generated network to obtain a completed image.
In one embodiment, the second completion module comprises:
the obtaining submodule is used for obtaining the ith generation diagram, wherein i is an integer from 0 to n;
the input submodule is used for inputting the ith generation diagram and the ith layer real diagram into the generation network to obtain an ith-1 generation diagram;
wherein the 0 th generation map is the complement map; the nth generation image is an image which is randomly up-sampled from the nth layer image and has the same resolution as the nth layer image.
In one embodiment, the second generating module comprises:
the first training submodule is used for training a discrimination network of the countermeasure network meeting the requirement of an objective function through the nth layer image and each layer real image;
the second training submodule is used for training the generation network meeting the requirement of an objective function through the nth layer image, the real images of all layers and the discrimination network;
wherein the objective function is used for measuring the loss in the completion process.
In one embodiment, the first training submodule comprises:
a first obtaining unit, configured to obtain a jth update graph and a jth layer real graph, where j is an integer between 1 and the n;
the first input unit is used for inputting the jth layer real image and the jth updated image into an improved judgment network to obtain a first judgment result;
a first holding unit, configured to hold a parameter of the improved discriminant network when the first determination result indicates that the jth updated graph and the jth layer real graph are the same image;
the first updating unit is used for updating parameters of the improved judgment network through a random gradient method and the objective function when the first judgment result represents that the jth updated graph and the jth layer real graph are not the same image;
the second input unit is used for inputting the jth updating graph into an initial generation network to obtain a jth-1 updating graph;
when the parameters of the improved discrimination network are not updated any more, the improved discrimination network is used as the discrimination network, and the nth updated image is an image which is obtained by random up-sampling from the nth layer image and has the same resolution as the nth layer image.
In one embodiment, the second training submodule comprises:
a second obtaining unit, configured to obtain a p-th improvement map and the p-th layer real map, wherein p is an integer from 1 to n;
a third input unit, configured to input the p-th layer real map and the p-th improvement map into the discrimination network to obtain a second judgment result;
a second holding unit, configured to keep the parameters of an improvement generation network when the second judgment result indicates that the p-th improvement map and the p-th layer real map are the same image;
a fourth input unit, configured to input the p-th improvement map into the improvement generation network to obtain the (p-1)-th improvement map;
a second updating unit, configured to update the parameters of the improvement generation network according to a stochastic gradient method and the objective function when the second judgment result indicates that the p-th improvement map and the p-th layer real map are not the same image;
the fourth input unit being further configured to input the p-th improvement map into the updated improvement generation network to obtain the (p-1)-th improvement map;
wherein, when the parameters of the improvement generation network are no longer updated, the improvement generation network is taken as the generation network; and the n-th improvement map is an image obtained by random up-sampling from the n-th layer image with the same resolution as the n-th layer image.
In one embodiment, the objective function is:
wherein G is the generation network, D is the discrimination network, I_q is the q-th real image, E[·] denotes expectation, G_q is the q-th generated image, q is an integer from n to 0, and a is a predetermined value.
According to a third aspect of the embodiments of the present disclosure, there is provided an image completion apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
generating n layers of Gaussian pyramids according to an image to be restored, wherein n is a positive integer;
complementing the image of each layer in the n layers of Gaussian pyramids by a block matching method to obtain a complemented real image of each layer;
generating a generation network of a countermeasure network according to the nth layer image of the n layers of Gaussian pyramids and the real images of all the layers;
and completing the image to be repaired according to the n-th layer image and the generation network to obtain a completed image.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating an image completion method according to an exemplary embodiment.
FIG. 2 is a flow diagram illustrating an image completion method according to an example embodiment.
FIG. 3 is a flow diagram illustrating an image completion method according to an example embodiment.
Fig. 4 is a block diagram illustrating an image complementing apparatus according to an exemplary embodiment.
Fig. 5 is a block diagram illustrating an image complementing apparatus according to an exemplary embodiment.
Fig. 6 is a block diagram illustrating an image complementing apparatus according to an exemplary embodiment.
Fig. 7 is a block diagram illustrating an image complementing apparatus according to an exemplary embodiment.
Fig. 8 is a block diagram illustrating an image complementing apparatus according to an exemplary embodiment.
Fig. 9 is a block diagram illustrating an image complementing apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Among the related arts for picture completion, the most commonly used method is block matching (PatchMatch). Its basic idea is to divide each frame image into a series of sub-blocks, calculate an error function between each sub-block in the current frame and each sub-block in an adjacent frame, take the adjacent-frame sub-block with the smallest error as the prediction block of the current block, and define the relative displacement of the two blocks as the displacement vector.
However, PatchMatch needs a search space: when the hole to be completed is too large and no suitable source region can be found, the completion result is not ideal. Moreover, because the illumination, texture, and other characteristics of the region around a hole differ from those of the source block (patch), filtering is applied so that the patched image shows no overly sharp edges and displays better — which directly blurs the patched result.
Example one
Fig. 1 is a flowchart illustrating an image completion method according to an exemplary embodiment. As shown in fig. 1, the image completion method is used in an image completion device, which may be applied in a terminal or a server, and the method includes the following steps 101 to 104:
in step 101, an n-level gaussian pyramid is generated according to an image to be restored.
Here, n is a positive integer.
A mask image corresponding to the image to be repaired is acquired, with the part needing repair marked as 0 and the part not needing repair marked as 255. Gaussian pyramids of the image to be repaired and of the corresponding mask image are then constructed until no 0 remains in the mask.
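The pyramid construction just described can be sketched as follows. This is a minimal numpy-only sketch: a 2×2 block average stands in for a proper Gaussian blur-and-subsample, and a coarse mask pixel is kept as a hole (0) whenever any of its four source pixels was a hole — assumptions for illustration, not the patent's exact procedure.

```python
import numpy as np

def downsample(a):
    # Halve resolution by averaging 2x2 blocks (a stand-in for the usual
    # Gaussian blur + subsample).
    h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
    a = a[:h, :w].astype(np.float64)
    return (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]) / 4.0

def build_pyramids(image, mask):
    # Level 0 is the full-resolution image; keep halving until the mask no
    # longer contains a 0 (hole) pixel or the image cannot be halved again.
    images = [np.asarray(image, dtype=np.float64)]
    masks = [np.asarray(mask, dtype=np.float64)]
    while (masks[-1] == 0).any() and min(images[-1].shape[:2]) >= 2:
        images.append(downsample(images[-1]))
        # A coarse pixel stays a hole (0) if any of its four sources was one.
        masks.append(np.where(downsample(masks[-1]) < 255.0, 0.0, 255.0))
    return images, masks
```

For an 8×8 image whose mask has a 2×2 hole, this yields levels of size 8, 4, 2, and 1, with the hole shrinking alongside.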
In step 102, the images of each layer in the n-layer gaussian pyramid are complemented by a block matching method, so as to obtain a complemented real image of each layer.
Processing starts from the n-th layer of the Gaussian pyramid: the offset of each patch is randomly initialized to obtain an offset map for that layer. The pyramid is then descended layer by layer: the offset map is up-sampled, blank positions are randomly initialized, and the best match for each blank position is found by nearest-neighbor search. When the lowest layer of the pyramid is reached, the displacement-information matrices between all patches have been obtained; the information at the corresponding positions is retrieved through the displacement matrices, and the content of the positions to be repaired is generated by pasting, yielding the real map of each layer.
Here, the method for completing the real map of each layer is not limited to block matching; any current matching method may be used.
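As an illustration of the simplest possible such matching step, the toy fill below copies each hole pixel from its spatially nearest known pixel. This is a drastically simplified stand-in: the patented procedure instead matches whole patches, propagates an offset map down the pyramid, and pastes patch content.

```python
import numpy as np

def nn_fill(image, mask):
    # Every hole pixel (mask == 0) copies the value of the spatially
    # nearest known pixel (L1 distance). A stand-in for patch matching.
    out = np.asarray(image, dtype=np.float64).copy()
    holes = np.argwhere(mask == 0)
    known = np.argwhere(mask != 0)
    for y, x in holes:
        d = np.abs(known[:, 0] - y) + np.abs(known[:, 1] - x)
        ny, nx = known[np.argmin(d)]
        out[y, x] = out[ny, nx]
    return out
```

Run on a constant image with one corrupted pixel, the hole is restored from its neighbor.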
In step 103, a generation network of the countermeasure network is generated according to the nth layer image of the n-layer gaussian pyramid and each layer real image.
A countermeasure network (generative adversarial network, GAN) is a generative model. Its basic idea is to learn, from many training samples, the probability distribution that generated them. This is implemented by making two network models compete with each other: one, called the generation network, continually captures the probability distribution of the real pictures in the training library and converts input random noise into new samples; the other, called the discrimination network, observes both real images and new samples and judges whether a new sample is real.
In step 104, the image to be repaired is completed according to the nth layer image and the generated network, and a completed image is obtained.
In the embodiment, the generation network is obtained through the Gaussian pyramid of the image to be restored, the real image of each layer of the Gaussian pyramid and the countermeasure network, so that the clear, complete and high-resolution image can be quickly generated, the resolution of the completed image can be improved, the learning of the generation network can be accelerated, and a more real image can be obtained.
In one embodiment, step 104 may comprise:
acquiring an i-th generation map, wherein i is an integer running from n down to 1;
inputting the i-th generation map and the i-th layer real map into the generation network to obtain the (i-1)-th generation map;
wherein the 0-th generation map is the completed map, and the n-th generation map is an image obtained by random up-sampling from the n-th layer image with the same resolution as the n-th layer image.
It should be noted that the real maps of all layers may be completed in advance, or each layer may be completed as needed during the above loop.
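The coarse-to-fine loop above can be sketched as control flow. The generator here is an untrained stand-in (`stub_gen`, a hypothetical placeholder that blends with the real map and doubles resolution); only the loop structure — i running from n down to 1, each pass producing the (i-1)-th generation map — follows the text.

```python
import numpy as np

def coarse_to_fine(gen, layer_n_image, real_maps, seed=0):
    # Start from the n-th generation map: here, the n-th layer image with
    # noise added stands in for the random-resampling step.
    rng = np.random.default_rng(seed)
    g = layer_n_image + 0.01 * rng.standard_normal(layer_n_image.shape)
    # i runs from n down to 1; each pass yields the (i-1)-th generation map.
    for i in range(len(real_maps) - 1, 0, -1):
        g = gen(g, real_maps[i])
    return g  # the 0-th generation map, i.e. the completed image

def stub_gen(g, real):
    # Untrained stand-in for the generation network: blend with the real
    # map, then 2x nearest-neighbour upsample.
    return np.kron((g + real) / 2.0, np.ones((2, 2)))
```

With three pyramid levels (8×8, 4×4, 2×2), two passes double the 2×2 start back up to an 8×8 level-0 output.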
Here, the nth generated map is obtained in the following manner. The present embodiment may determine whether the ith layer real map and the ith generation map are the same image through an objective function.
A preset number (for example 100) of pixels is randomly selected from the n-th layer image and used in place of random noise as the input vector of a convolutional neural network. The vector is converted into a feature map through a fully connected layer and then enters the deconvolution layers; each deconvolution layer halves the number of feature-map channels while up-sampling the feature-map resolution, finally yielding an n-th generation map with the same resolution as the n-th layer image. The output size of the fully connected layer may default to 8192, and the feature map may be 4 × 4 × 512. The convolutional neural network is preset.
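A shape-level sketch of this construction follows. All weights are random, untrained stand-ins for the trained network, and a nearest-neighbour upsample plus 1×1 channel projection approximates each deconvolution layer in shape only; the 100-pixel sample, 8192-wide fully connected output, and 4×4×512 feature map follow the defaults stated above.

```python
import numpy as np

def make_nth_generation_map(layer_n_image, n_pixels=100, seed=0):
    # Sample pixels from the n-th layer image to replace random noise.
    rng = np.random.default_rng(seed)
    z = rng.choice(layer_n_image.ravel(), size=n_pixels, replace=True)
    # Fully connected layer: 100 -> 8192, reshaped to a 4x4x512 feature map.
    fc = rng.standard_normal((8192, n_pixels)) * 0.01
    feat = (fc @ z).reshape(4, 4, 512)
    h, w = layer_n_image.shape
    while feat.shape[0] < h:
        c = feat.shape[2]
        # "Deconv" stand-in: 2x nearest-neighbour upsample, then a channel
        # projection that halves the channel count, followed by ReLU.
        up = feat.repeat(2, axis=0).repeat(2, axis=1)
        proj = rng.standard_normal((c, max(c // 2, 1))) / np.sqrt(c)
        feat = np.maximum(up @ proj, 0)
    # Collapse channels to a single map at the n-th layer's resolution.
    return feat.mean(axis=2)[:h, :w]
```

For a 16×16 layer image, two upsampling stages (512 → 256 → 128 channels) reach the target resolution.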
In one embodiment, step 103 may comprise:
training a discrimination network of the countermeasure network that meets the requirement of the objective function through the n-th layer image and the real images of all layers; training the generation network that meets the requirement of the objective function through the n-th layer image, the real images of all layers, and the discrimination network; wherein the objective function is used to measure the loss during the completion process.
In one embodiment, the training of the discrimination network of the countermeasure network that meets the requirement of the objective function through the n-th layer image and the real images of all layers includes:
acquiring a j-th update map and the j-th layer real map, wherein j is an integer from 1 to n; inputting the j-th layer real map and the j-th update map into an improved discrimination network to obtain a first judgment result; when the first judgment result indicates that the j-th update map and the j-th layer real map are the same image, keeping the parameters of the improved discrimination network; when the first judgment result indicates that they are not the same image, updating the parameters of the improved discrimination network through a stochastic gradient method and the objective function; and inputting the j-th update map into an initial generation network to obtain the (j-1)-th update map;
when the parameters of the improved discrimination network are no longer updated, the improved discrimination network is taken as the discrimination network. The n-th update map is an image obtained by random up-sampling from the n-th layer image with the same resolution as the n-th layer image.
Here, the nth update map and the nth generation map are obtained in the same manner, and the obtained images are different because the pixels obtained at random are different.
Here, the above is a loop process, and if the parameters in the improved discrimination network are already stable and are not updated, the improved discrimination network is used as the discrimination network. The initialized improved discrimination network and the initial generation network are both randomly generated, and the parameters are also randomly generated, so that the initial generation network in the above process is unchanged.
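The keep-or-update loop described above can be sketched as pure control flow. `judge` and `update` are hypothetical stand-ins for the discriminator's forward pass and a stochastic-gradient step on the objective; the sketch only shows the stopping rule: iterate until a full pass leaves every parameter unchanged.

```python
def train_discriminator(d_params, judge, update, pairs, max_rounds=100):
    # For each (real map, update map) pair: keep the parameters if the
    # improved discrimination network already judges the pair correctly,
    # otherwise apply one stochastic-gradient update. Stop once a whole
    # round produces no parameter change.
    for _ in range(max_rounds):
        changed = False
        for real, upd in pairs:
            if not judge(d_params, real, upd):
                d_params = update(d_params, real, upd)
                changed = True
        if not changed:  # parameters stable: training has converged
            return d_params
    return d_params
```

With a toy `judge` that is satisfied once a scalar parameter reaches 3 and an `update` that increments it, the loop halts at exactly 3.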
In this embodiment, the input is a map composed of the i-th generation map and the i-th layer real map (since the generation map and the real map each have 3 channels, the composed map has 6 channels). The discrimination network comprises y convolutional layers, each followed by a rectified linear unit (ReLU) activation layer and a max pooling layer, and finally discriminates whether the i-th generation map and the i-th layer real map are the same image. y may be any positive integer; here y is 6.
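A shape-level sketch of that forward pass follows. Random 1×1 channel mixes stand in for the trained convolution kernels — the point is only the bookkeeping: 6-channel input, six conv + ReLU + 2×2 max-pool stages, and a final scalar score.

```python
import numpy as np

def discriminate(gen_map, real_map, n_layers=6, seed=0):
    rng = np.random.default_rng(seed)
    # Stack the 3-channel generation map and 3-channel real map: H x W x 6.
    x = np.concatenate([gen_map, real_map], axis=2)
    for _ in range(n_layers):
        c = x.shape[2]
        w = rng.standard_normal((c, c)) / np.sqrt(c)  # 1x1 "conv" stand-in
        x = np.maximum(x @ w, 0)                      # ReLU
        h2, w2 = (x.shape[0] // 2) * 2, (x.shape[1] // 2) * 2
        x = x[:h2, :w2]
        x = np.maximum.reduce([x[0::2, 0::2], x[1::2, 0::2],
                               x[0::2, 1::2], x[1::2, 1::2]])  # 2x2 max pool
        if min(x.shape[:2]) < 2:
            break
    return float(x.mean())  # scalar "realness" score
```

A 64×64 input is pooled down through 32, 16, 8, 4, 2 to 1×1 over the six stages.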
In this embodiment, inputting the j-th generation image into the updated improvement generation network to obtain the (j-1)-th generation image includes:
taking the j-th generation image as the input of the generation network and parsing it into a vector through w convolutional layers, each followed by a ReLU activation layer; then passing the parsed vector through w + 1 deconvolution layers, each also followed by a ReLU activation layer, up-sampling it into a 3-channel (j-1)-th generation image at twice the input resolution. w may be any positive integer; here w is 5.
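The encoder–decoder shape of that pass can be sketched as below. Again, random 1×1 channel mixes, strided subsampling, and nearest-neighbour upsampling are untrained stand-ins for the real conv/deconv kernels; what the sketch preserves is w = 5 downsampling stages followed by w + 1 upsampling stages, ending at 3 channels and twice the input resolution.

```python
import numpy as np

def generate_next(g_map, w=5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(g_map, dtype=np.float64)
    for _ in range(w):                         # encoder: conv + ReLU stages
        c = x.shape[2]
        k = rng.standard_normal((c, c)) / np.sqrt(c)
        x = np.maximum(x @ k, 0)
        x = x[::2, ::2]                        # stride-2 subsample
    for _ in range(w + 1):                     # decoder: deconv + ReLU stages
        c = x.shape[2]
        k = rng.standard_normal((c, c)) / np.sqrt(c)
        x = np.maximum(x.repeat(2, axis=0).repeat(2, axis=1) @ k, 0)
    # Project to 3 channels at twice the input resolution.
    target_h, target_w = g_map.shape[0] * 2, g_map.shape[1] * 2
    proj = rng.standard_normal((x.shape[2], 3)) / np.sqrt(x.shape[2])
    return (x @ proj)[:target_h, :target_w]
```

A 32×32×3 input is squeezed to 1×1 by the five encoder stages and expanded to 64×64×3 by the six decoder stages.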
In one embodiment, the training of the generation network that meets the requirement of the objective function through the n-th layer image, the real images of all layers, and the discrimination network includes:
acquiring a p-th improvement map and the p-th layer real map, wherein p is an integer from 1 to n; inputting the p-th layer real map and the p-th improvement map into the discrimination network to obtain a second judgment result; when the second judgment result indicates that the p-th improvement map and the p-th layer real map are the same image, keeping the parameters of the improvement generation network and inputting the p-th improvement map into the improvement generation network to obtain the (p-1)-th improvement map; when the second judgment result indicates that they are not the same image, updating the parameters of the improvement generation network according to a stochastic gradient method and the objective function, and inputting the p-th improvement map into the updated improvement generation network to obtain the (p-1)-th improvement map;
when the parameters of the improvement generation network are no longer updated, the improvement generation network is taken as the generation network. The n-th improvement map is an image obtained by random up-sampling from the n-th layer image with the same resolution as the n-th layer image. The n-th update map and the n-th improvement map are acquired in the same way; the resulting images differ because the randomly acquired pixels differ.
In one embodiment, the objective function is:
wherein G is the generation network, D is the discrimination network, I_q is the qth real image, E[ ] denotes expectation, and G_q is the qth generation image; q is an integer from n to 0, and a is a constant, which may be, for example, 10.
In this embodiment, the objective function is composed of the above three expectations: the first expectation is taken over the qth real image, the second represents the expectation that the qth generation image is judged to be a real image, and the third represents the expected distance between the qth generation image and the qth real image.
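The formula itself is not reproduced legibly in this text. Based on the three expectations just described, a standard adversarial objective consistent with them would take the following form (a reconstruction, not the verbatim formula of the disclosure):

```latex
\min_G \max_D \;
\mathbb{E}\big[\log D(I_q)\big]
+ \mathbb{E}\big[\log\big(1 - D(G_q)\big)\big]
+ a\,\mathbb{E}\big[\lVert I_q - G_q \rVert\big]
```

where the last term weights the distance between the qth generation image and the qth real image by the constant a.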
Example two
Fig. 2 is a flowchart illustrating an image complementing method according to an exemplary embodiment. As shown in fig. 2, the method is used in an image complementing device, which may be applied in a terminal or a server, and includes the following steps 201 to 205:
in step 201, an n-level gaussian pyramid is generated according to an image to be restored.
Here, n is a positive integer.
In step 202, the images of each layer in the n-layer gaussian pyramid are complemented by a block matching method, so as to obtain a complemented real image of each layer.
In step 203, a discrimination network of the countermeasure network meeting the requirement of the objective function is trained through the nth layer image and each layer real image.
In step 204, a generation network meeting the requirement of the objective function is trained through the nth layer image, each layer real image and the discrimination network.
In step 205, the image to be repaired is completed according to the nth layer image and the generated network, so as to obtain a completed image.
The countermeasure network provided by this embodiment can add pixels consistent with the content of the real image, so that the image can be complemented and the resolution of the complemented image improved.
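Steps 201 and 202 above can be sketched as follows. A 2×2 box filter stands in for the Gaussian smoothing kernel, whose exact form this embodiment does not specify; each level is smoothed and subsampled to half the resolution of the level below it:

```python
import numpy as np

def gaussian_pyramid(image, n):
    """Return [layer 0, ..., layer n]; layer 0 is the image to be repaired."""
    layers = [image]
    for _ in range(n):
        img = layers[-1]
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        img = img[:h, :w]                      # trim odd rows/columns
        # smooth-and-subsample: average each 2x2 block, halving the resolution
        down = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        layers.append(down)
    return layers

pyr = gaussian_pyramid(np.ones((32, 32)), 3)
print([p.shape for p in pyr])   # [(32, 32), (16, 16), (8, 8), (4, 4)]
```

Each resulting layer would then be complemented by block matching (step 202) to produce the per-layer real maps used for training.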
EXAMPLE III
Fig. 3 is a flowchart illustrating an image complementing method according to an exemplary embodiment. As shown in fig. 3, the method is used in an image complementing device, which may be applied in a terminal or a server, and includes the following steps 301 to 320:
in step 301, an n-level gaussian pyramid is generated according to an image to be restored.
In step 302, from the nth layer image of the n layers of gaussian pyramids, an nth generation image with the same resolution as that of the nth layer image is obtained through random upsampling.
In step 303, the jth generation map is obtained.
In step 304, a block matching method is adopted to complement the jth layer image of the n layers of Gaussian pyramids to obtain a jth layer real image.
In step 305, the jth layer real graph and the jth generation graph are input into the improved discriminant network to obtain a first determination result.
In step 306, it is determined whether the first determination result indicates that the jth generated image and the jth layer real image are the same image. If yes, go to step 307; if not, go to step 308.
In step 307, the parameters of the improved discrimination network are maintained. Step 310 is performed.
In step 308, parameters of the improved discrimination network are updated by a stochastic gradient method and an objective function.
In step 309, the jth generation map is input into the initial generation network to obtain the (j-1)th generation map, and step 303 is executed.
In step 310, a determination is made as to whether the improvement discrimination network is no longer updated. If yes, go to step 311; if not, go to step 309.
In step 311, the improvement discrimination network is taken as the discrimination network.
In step 312, the pth improvement map is obtained.
In step 313, the p-th layer real image and the p-th improved image are input into the discrimination network to obtain a second discrimination result.
In step 314, it is determined whether the second determination result indicates that the pth modified map and the pth real image are the same image. If yes, go to step 315; if not, go to step 317.
In step 315, the parameters of the improvement generation network are maintained.
In step 316, the pth improvement map is input into the improvement generation network to obtain the (p-1)th improvement map. Step 312 is performed.
The parameters of the initial improved generation network are randomly generated.
In step 317, parameters of the refinement generation network are updated according to the stochastic gradient method and the objective function.
In step 318, the pth improvement map is input into the updated improvement generation network to obtain the (p-1)th improvement map. Step 312 is performed.
In step 319, it is determined whether the improvement generation network is no longer updated. If yes, go to step 320; if not, go to step 312.
In step 320, the improvement generation network is taken as the generation network, and the pth improvement map is input into the improvement generation network to obtain the (p-1)th improvement map.
In this embodiment, the generation network is obtained from the Gaussian pyramid of the image to be repaired and the countermeasure network, so that a clear, complete, high-resolution image can be generated quickly; the resolution of the complemented image is improved, learning of the generation network is accelerated, and a more realistic image is obtained.
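The block matching of steps 202 and 304 can be illustrated with a toy example: a missing block is replaced by the known block whose surrounding context best matches the context of the hole. Here, as a simplification, only the row directly above the hole is compared and the hole is assumed not to touch the top row; a practical block matcher scores the full surrounding ring and aggregates overlapping patches. All names are illustrative:

```python
import numpy as np

def complete_block(img, mask, block=4):
    """img: 2-D array; mask: True where pixels are missing (one block here)."""
    out = img.copy()
    ys, xs = np.where(mask)
    y0, x0 = ys.min(), xs.min()                 # top-left corner of the hole
    best, best_patch = np.inf, None
    h, w = img.shape
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            if mask[y:y+block, x:x+block].any():
                continue                        # candidate must be fully known
            patch = img[y:y+block, x:x+block]
            # score the candidate against the known row just above the hole
            score = np.sum((img[y0-1, x0:x0+block] - patch[0]) ** 2)
            if score < best:
                best, best_patch = score, patch
    out[y0:y0+block, x0:x0+block] = best_patch  # copy the best-matching block in
    return out

img = np.tile(np.arange(12.0), (12, 1))   # column gradient: pixel value = column
mask = np.zeros((12, 12), dtype=bool)
mask[4:8, 4:8] = True                     # a 4x4 missing block
img[mask] = 0.0                           # destroy the masked pixels
filled = complete_block(img, mask)
print(filled[4:8, 4:8])                   # each row of the hole becomes [4. 5. 6. 7.]
```

Because the image is a pure column gradient, the best-matching known block reproduces the destroyed columns exactly, which is the behaviour block matching relies on for textured regions.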
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
Example four
Fig. 4 is a block diagram illustrating an image complementing apparatus, which may be implemented as part or all of an electronic device through software, hardware, or a combination of both, according to an exemplary embodiment. As shown in fig. 4, the image complementing apparatus includes:
a first generating module 401, configured to generate n layers of gaussian pyramids according to an image to be restored, where n is a positive integer;
a first complementing module 402, configured to complement, by using a block matching device, the image of each layer in the n-layer gaussian pyramid, so as to obtain a complemented real image of each layer;
a second generating module 403, configured to generate a generation network of a countermeasure network according to the nth layer image of the n layers of gaussian pyramids and the real images of the layers;
and a second completion module 404, configured to complete the image to be repaired according to the nth layer image and the generated network, so as to obtain a completed completion image.
In one embodiment, as shown in FIG. 5, the second completion module 404 includes:
an obtaining submodule 4041, configured to obtain an ith generation diagram, where i is an integer between 0 and n;
the input sub-module 4042 is configured to input the ith generation map and the ith layer real map into the generation network, so as to obtain an (i-1)th generation map;
wherein the 0 th generation map is the complement map; the nth generation image is an image which is randomly up-sampled from the nth layer image and has the same resolution as the nth layer image.
In one embodiment, as shown in fig. 6, the second generating module 403 includes:
a first training submodule 4031, configured to train, through the nth layer image and the layer-by-layer real image, a discrimination network of the countermeasure network that meets a requirement of an objective function;
a second training submodule 4032, configured to train the generation network that meets the requirement of an objective function through the nth layer image, the layer real images, and the discrimination network;
wherein the objective function is used for measuring the loss in the completion process.
In one embodiment, as shown in fig. 7, the first training submodule 4031 includes:
a first obtaining unit 40311, configured to obtain a j-th update map and a j-th layer real map, where j is an integer from 1 to n;
a first input unit 40312, configured to input the jth layer real diagram and the jth updated diagram into an improved discriminant network, so as to obtain a first determination result;
a first holding unit 40313, configured to hold a parameter of the improved discriminant network when the first determination result indicates that the jth updated graph and the jth layer real graph are the same image;
a first updating unit 40314, configured to update parameters of the improved discriminant network by a random gradient method and the objective function when the first determination result indicates that the jth updated graph and the jth layer real graph are not the same image;
a second input unit 40315, configured to input the jth update map into the initial generation network, so as to obtain a (j-1)th update map;
when the parameters of the improved discrimination network are not updated any more, the improved discrimination network is used as the discrimination network, and the nth updated image is an image which is obtained by random up-sampling from the nth layer image and has the same resolution as the nth layer image.
In one embodiment, as shown in fig. 8, the second training submodule 4032 includes:
a second obtaining unit 40321, configured to obtain a pth refinement map and the pth layer real map, where p is an integer from 1 to n;
a third input unit 40322, configured to input the p-th layer real diagram and the p-th improved diagram into the decision network, so as to obtain a second decision result;
a second holding unit 40323, configured to hold parameters of an improvement generation network when the second determination result indicates that the pth improvement map and the pth layer real map are the same image;
a fourth input unit 40324, configured to input the pth improvement map into the improvement generation network, so as to obtain a (p-1)th improvement map;
a second updating unit 40325, configured to update a parameter of the improvement generation network according to a random gradient method and the objective function when the second determination result indicates that the pth improvement map and the pth layer real map are not the same image;
the fourth input unit 40324 is further configured to input the pth improvement map into the updated improvement generation network, so as to obtain a (p-1)th improvement map;
wherein the improved generation network is taken as the generation network when the parameters of the improved generation network are not updated any more; and the nth improvement graph is an image with the same resolution as the nth layer image obtained by random up-sampling from the nth layer image.
In one embodiment, the objective function is:
wherein G is the generation network, D is the discrimination network, I_q is the qth real image, E[ ] denotes expectation, and G_q is the qth generation image; q is an integer from n to 0, and a is a predetermined value.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an image complementing device, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
generating a corresponding n-layer Gaussian pyramid according to an image to be restored, wherein n is a positive integer;
generating a generation network of the countermeasure network according to the n layers of Gaussian pyramids;
and completing the image to be repaired according to the generated network to obtain a high-resolution completed image.
According to a third aspect of the embodiments of the present disclosure, there is provided an image complementing apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
generating n layers of Gaussian pyramids according to an image to be restored, wherein n is a positive integer;
complementing the image of each layer in the n layers of Gaussian pyramids by a block matching method to obtain a complemented real image of each layer;
generating a generation network of a countermeasure network according to the nth layer image of the n layers of Gaussian pyramids and the real images of all the layers;
and completing the image to be repaired according to the nth layer image and the generated network to obtain a completed image. The processor may be further configured to:
the completing the image to be repaired according to the nth layer image and the generated network to obtain a completed graph comprises:
acquiring an ith generation diagram, wherein i is an integer from 0 to n;
inputting the ith generation map and the ith layer real map into the generation network to obtain an (i-1)th generation map;
wherein the 0 th generation map is the complement map; the nth generation image is an image which is randomly up-sampled from the nth layer image and has the same resolution as the nth layer image.
The generating network for generating the countermeasure network according to the nth layer image of the n layers of Gaussian pyramids and the real images of all the layers comprises:
training a discrimination network of the countermeasure network meeting the requirement of an objective function through the nth layer image and the real images of all layers;
training the generation network meeting the requirement of an objective function through the nth layer image, the real images of all layers and the discrimination network;
wherein the objective function is used for measuring the loss in the completion process.
The training of the discriminant network of the countermeasure network meeting the requirement of the objective function through the nth layer image and the each layer real image comprises the following steps:
acquiring a jth updating graph and a jth layer real graph, wherein j is an integer from 1 to n;
inputting the jth layer real image and the jth updated image into an improved discrimination network to obtain a first judgment result;
when the first judgment result represents that the jth updated graph and the jth layer real graph are the same image, keeping parameters of the improved judgment network;
when the first judgment result represents that the jth updated graph and the jth layer real graph are not the same image, updating parameters of the improved judgment network through a random gradient method and the target function;
inputting the jth update map into an initial generation network to obtain a (j-1)th update map;
when the parameters of the improved discrimination network are not updated any more, the improved discrimination network is used as the discrimination network, and the nth updated image is an image which is obtained by random up-sampling from the nth layer image and has the same resolution as the nth layer image.
The training of the generated network meeting the requirement of the objective function through the nth layer image, the real images of the layers and the discriminant network comprises:
acquiring a pth improvement graph and the pth layer real graph, wherein p is an integer from 1 to n;
inputting the p-th layer real image and the p-th improved image into the judgment network to obtain a second judgment result;
when the second judgment result represents that the pth improvement map and the pth layer real map are the same image, maintaining parameters of an improvement generation network; inputting the pth improvement map into the improvement generation network to obtain a (p-1)th improvement map;
when the second judgment result represents that the pth improvement map and the pth layer real map are not the same image, updating parameters of the improvement generation network according to a stochastic gradient method and the objective function; inputting the pth improvement map into an updated improvement generation network to obtain a (p-1)th improvement map;
wherein the improved generation network is taken as the generation network when the parameters of the improved generation network are not updated any more; and the nth improvement graph is an image with the same resolution as the nth layer image obtained by random up-sampling from the nth layer image.
The objective function is:
wherein G is the generation network, D is the discrimination network, I_q is the qth real image, E[ ] denotes expectation, and G_q is the qth generation image; q is an integer from n to 0, and a is a predetermined value.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 9 is a block diagram illustrating an apparatus for image completion according to an exemplary embodiment. For example, the apparatus 1900 may be provided as a server. The device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by the processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, or the like.
A non-transitory computer readable storage medium, instructions in which, when executed by a processor of an apparatus 1900, enable the apparatus 1900 to perform the above-described image complementing method, the method comprising:
generating n layers of Gaussian pyramids according to an image to be restored, wherein n is a positive integer;
complementing the image of each layer in the n layers of Gaussian pyramids by a block matching method to obtain a complemented real image of each layer;
generating a generation network of a countermeasure network according to the nth layer image of the n layers of Gaussian pyramids and the real images of all the layers;
and completing the image to be repaired according to the nth layer image and the generated network to obtain a completed image.
The completing the image to be repaired according to the nth layer image and the generated network to obtain a completed graph comprises:
acquiring an ith generation diagram, wherein i is an integer from 0 to n;
inputting the ith generation map and the ith layer real map into the generation network to obtain an (i-1)th generation map;
wherein the 0 th generation map is the complement map; the nth generation image is an image which is randomly up-sampled from the nth layer image and has the same resolution as the nth layer image.
The generating network for generating the countermeasure network according to the nth layer image of the n layers of Gaussian pyramids and the real images of all the layers comprises:
training a discrimination network of the countermeasure network meeting the requirement of an objective function through the nth layer image and the real images of all layers;
training the generation network meeting the requirement of an objective function through the nth layer image, the real images of all layers and the discrimination network;
wherein the objective function is used for measuring the loss in the completion process.
The training of the discriminant network of the countermeasure network meeting the requirement of the objective function through the nth layer image and the each layer real image comprises the following steps:
acquiring a jth updating graph and a jth layer real graph, wherein j is an integer from 1 to n;
inputting the jth layer real image and the jth updated image into an improved discrimination network to obtain a first judgment result;
when the first judgment result represents that the jth updated graph and the jth layer real graph are the same image, keeping parameters of the improved judgment network;
when the first judgment result represents that the jth updated graph and the jth layer real graph are not the same image, updating parameters of the improved judgment network through a random gradient method and the target function;
inputting the jth update map into an initial generation network to obtain a (j-1)th update map;
when the parameters of the improved discrimination network are not updated any more, the improved discrimination network is used as the discrimination network, and the nth updated image is an image which is obtained by random up-sampling from the nth layer image and has the same resolution as the nth layer image.
The training of the generated network meeting the requirement of the objective function through the nth layer image, the real images of the layers and the discriminant network comprises:
acquiring a pth improvement graph and the pth layer real graph, wherein p is an integer from 1 to n;
inputting the p-th layer real image and the p-th improved image into the judgment network to obtain a second judgment result;
when the second judgment result represents that the pth improvement map and the pth layer real map are the same image, maintaining parameters of an improvement generation network; inputting the pth improvement map into the improvement generation network to obtain a (p-1)th improvement map;
when the second judgment result represents that the pth improvement map and the pth layer real map are not the same image, updating parameters of the improvement generation network according to a stochastic gradient method and the objective function; inputting the pth improvement map into an updated improvement generation network to obtain a (p-1)th improvement map;
wherein the improved generation network is taken as the generation network when the parameters of the improved generation network are not updated any more; and the nth improvement graph is an image with the same resolution as the nth layer image obtained by random up-sampling from the nth layer image.
The objective function is:
wherein G is the generation network, D is the discrimination network, I_q is the qth real image, E[ ] denotes expectation, and G_q is the qth generation image; q is an integer from n to 0, and a is a predetermined value.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. An image completion method, comprising:
generating n layers of Gaussian pyramids according to an image to be restored, wherein n is a positive integer;
complementing the image of each layer in the n layers of Gaussian pyramids by a block matching method to obtain a complemented real image of each layer;
generating a generation network of a countermeasure network according to the nth layer image of the n layers of Gaussian pyramids and the real images of all the layers;
according to the nth layer image and the generated network, completing the image to be repaired to obtain a completed image;
the generating network for generating the countermeasure network according to the nth layer image of the n layers of Gaussian pyramids and the real images of all the layers comprises:
training a discrimination network of the countermeasure network meeting the requirement of an objective function through the nth layer image and the real images of all layers;
training the generation network meeting the requirement of an objective function through the nth layer image, the real images of all layers and the discrimination network;
wherein the objective function is used for measuring loss in the completion process;
the training of the discriminant network of the countermeasure network meeting the requirement of the objective function through the nth layer image and the each layer real image comprises the following steps:
acquiring a jth updating graph and a jth layer real graph, wherein j is an integer from 1 to n;
inputting the jth layer real image and the jth updated image into an improved discrimination network to obtain a first judgment result;
when the first judgment result represents that the jth updated graph and the jth layer real graph are the same image, keeping parameters of the improved judgment network;
when the first judgment result represents that the jth updated graph and the jth layer real graph are not the same image, updating parameters of the improved judgment network through a random gradient method and the target function;
inputting the jth update map into an initial generation network to obtain a (j-1)th update map;
when the parameters of the improved discrimination network are not updated any more, the improved discrimination network is used as the discrimination network, and the nth updated image is an image which is obtained by random up-sampling from the nth layer image and has the same resolution as the nth layer image.
2. The method according to claim 1, wherein the completing the image to be repaired according to the nth layer image and the generated network to obtain a completed image comprises:
acquiring an ith generation diagram, wherein i is an integer from 0 to n;
inputting the ith generation map and the ith layer real map into the generation network to obtain an (i-1)th generation map;
wherein the 0th generation map is the complement map; the nth generation map is an image obtained by randomly up-sampling the nth layer image with the same resolution as the nth layer image.
3. The method according to claim 1, wherein training the generation network satisfying the requirement of the objective function through the nth layer image, the each layer real image and the discriminant network comprises:
acquiring a pth improvement graph and the pth layer real graph, wherein p is an integer from 1 to n;
inputting the p-th layer real image and the p-th improved image into the judgment network to obtain a second judgment result;
when the second judgment result represents that the pth improvement map and the pth layer real map are the same image, maintaining parameters of an improvement generation network; inputting the pth improvement map into the improvement generation network to obtain a (p-1)th improvement map;
when the second judgment result represents that the pth improvement map and the pth layer real map are not the same image, updating parameters of the improvement generation network according to a stochastic gradient method and the objective function; inputting the pth improvement map into an updated improvement generation network to obtain a (p-1)th improvement map;
wherein the improved generation network is taken as the generation network when the parameters of the improved generation network are not updated any more; and the nth improvement graph is an image with the same resolution as the nth layer image obtained by random up-sampling from the nth layer image.
5. An image complementing apparatus, comprising:
the image restoration method comprises a first generation module, a second generation module and a third generation module, wherein the first generation module is used for generating n layers of Gaussian pyramids according to an image to be restored, and n is a positive integer;
the first complementing module is used for complementing the images of all layers in the n layers of Gaussian pyramids through a block matching device to obtain complemented real images of all layers;
the second generation module is used for generating a generation network of the countermeasure network according to the nth layer image of the n layers of Gaussian pyramids and the real images of all the layers;
a second completion module, configured to complete the image to be repaired according to the nth layer image and the generated network, so as to obtain a completed image;
the second generation module comprises:
the first training submodule is used for training a discrimination network of the countermeasure network meeting the requirement of an objective function through the nth layer image and each layer real image;
the second training submodule is used for training the generation network meeting the requirement of an objective function through the nth layer image, the real images of all layers and the discrimination network;
wherein the objective function is used for measuring loss in the completion process;
the first training submodule includes:
a first obtaining unit, configured to obtain a jth update map and a jth layer real map, where j is an integer from 1 to n;
the first input unit is used for inputting the jth layer real image and the jth updated image into an improved judgment network to obtain a first judgment result;
a first holding unit, configured to hold a parameter of the improved discriminant network when the first determination result indicates that the jth updated graph and the jth layer real graph are the same image;
the first updating unit is used for updating parameters of the improved judgment network through a random gradient method and the objective function when the first judgment result represents that the jth updated graph and the jth layer real graph are not the same image;
the second input unit is used for inputting the jth update map into an initial generation network to obtain a (j-1)th update map;
when the parameters of the improved discrimination network are not updated any more, the improved discrimination network is used as the discrimination network, and the nth updated image is an image which is obtained by random up-sampling from the nth layer image and has the same resolution as the nth layer image.
6. The apparatus of claim 5, wherein the second completion module comprises:
the obtaining submodule is used for obtaining the ith generation diagram, wherein i is an integer from 0 to n;
the input submodule is used for inputting the ith generation map and the ith layer real map into the generation network to obtain an (i-1)th generation map;
wherein the 0th generation map is the complement map; the nth generation map is an image obtained by randomly up-sampling the nth layer image with the same resolution as the nth layer image.
7. The apparatus of claim 5, wherein the second training submodule comprises:
a second obtaining unit configured to obtain a pth improvement map and the pth layer real map, where p is an integer from 1 to n;
a third input unit, configured to input the p-th layer real graph and the p-th improved graph into the decision network, so as to obtain a second decision result;
a second holding unit, configured to hold a parameter of an improvement generation network when the second determination result indicates that the pth improvement map and the pth layer real map are the same image;
a fourth input unit, configured to input the pth improvement map into the improvement generation network, so as to obtain a pth-1 improvement map;
a second updating unit, configured to update parameters of the improved generation network according to a random gradient method and the objective function when the second determination result indicates that the pth improved graph and the pth layer real graph are not the same image;
the fourth input unit is further configured to input the updated improvement generation network with the pth improvement map to obtain a pth-1 improvement map;
wherein the improved generation network is taken as the generation network when the parameters of the improved generation network are not updated any more; and the nth improvement graph is an image with the same resolution as the nth layer image obtained by random up-sampling from the nth layer image.
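The second holding and updating units amount to: keep the generator's parameters while its output fools the discrimination network, otherwise take a stochastic-gradient step on the objective. A minimal sketch with a hypothetical flat parameter list and a caller-supplied gradient function (none of these names come from the patent):

```python
def generator_step(params, improve_map, real_map, discriminator, grad_fn, lr=0.1):
    """One level of improvement-generation-network training (illustrative sketch).

    discriminator(real, fake) -> True means the second determination result
    judged the two inputs to be the same image, so parameters are kept.
    """
    if discriminator(real_map, improve_map):
        return params                                       # second holding unit
    grads = grad_fn(params)                                 # gradient of objective
    return [w - lr * g for w, g in zip(params, grads)]      # second updating unit
```

Iterating this over p = n, ..., 1 until the parameters stop changing mirrors the "no longer updated" stopping condition above.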
9. An image complementing apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
generating an n-layer Gaussian pyramid from an image to be repaired, where n is a positive integer;
completing the image of each layer of the n-layer Gaussian pyramid by a block matching method to obtain a completed real map of each layer;
generating a generation network of a generative adversarial network according to the n-th layer image of the n-layer Gaussian pyramid and the real maps of the layers; and
completing the image to be repaired according to the n-th layer image and the generation network to obtain a completed image;
wherein generating the generation network of the generative adversarial network according to the n-th layer image of the n-layer Gaussian pyramid and the real maps of the layers comprises:
training a discrimination network of the adversarial network that meets the requirement of an objective function through the n-th layer image and the real maps of the layers; and
training the generation network that meets the requirement of the objective function through the n-th layer image, the real maps of the layers, and the discrimination network;
wherein the objective function is used to measure the loss in the completion process; and
wherein training the discrimination network of the adversarial network that meets the requirement of the objective function through the n-th layer image and the real maps of the layers comprises:
obtaining a j-th update map and the j-th layer real map, where j is an integer from 1 to n;
inputting the j-th layer real map and the j-th update map into an improved discrimination network to obtain a first determination result;
keeping the parameters of the improved discrimination network when the first determination result indicates that the j-th update map and the j-th layer real map are the same image;
updating the parameters of the improved discrimination network by a stochastic gradient method and the objective function when the first determination result indicates that the j-th update map and the j-th layer real map are not the same image;
inputting the j-th update map into an initial generation network to obtain a (j-1)-th update map;
wherein, when the parameters of the improved discrimination network are no longer updated, the improved discrimination network is taken as the discrimination network, and the n-th update map is an image obtained by random up-sampling from the n-th layer image and having the same resolution as the n-th layer image.
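The first processor step of claim 9, building an n-layer Gaussian pyramid from the image to be repaired, can be sketched as repeated smooth-and-halve. In this illustrative version a 2x2 block average stands in for Gaussian filtering followed by down-sampling:

```python
def gaussian_pyramid(image, n):
    """Build an n-level pyramid; levels[0] is the input, levels[n] the coarsest.

    2x2 block averaging is an illustrative stand-in for a Gaussian kernel
    followed by factor-2 down-sampling.
    """
    levels = [image]
    for _ in range(n):
        img = levels[-1]
        h, w = len(img) // 2, len(img[0]) // 2
        levels.append([[(img[2*y][2*x] + img[2*y][2*x+1] +
                         img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
                        for x in range(w)] for y in range(h)])
    return levels
```

Each level is then completed by block matching before the adversarial training described above.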
10. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of the method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710354183.0A CN107133934B (en) | 2017-05-18 | 2017-05-18 | Image completion method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107133934A CN107133934A (en) | 2017-09-05 |
CN107133934B true CN107133934B (en) | 2020-03-17 |
Family
ID=59732014
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710354183.0A Active CN107133934B (en) | 2017-05-18 | 2017-05-18 | Image completion method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107133934B (en) |
Families Citing this family (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107609560A (en) * | 2017-09-27 | 2018-01-19 | 北京小米移动软件有限公司 | Character recognition method and device |
CN107679533A (en) * | 2017-09-27 | 2018-02-09 | 北京小米移动软件有限公司 | Character recognition method and device |
CN107958471B (en) * | 2017-10-30 | 2020-12-18 | 深圳先进技术研究院 | CT imaging method and device based on undersampled data, CT equipment and storage medium |
CN107958472B (en) * | 2017-10-30 | 2020-12-25 | 深圳先进技术研究院 | PET imaging method, device and equipment based on sparse projection data and storage medium |
CN107767384B (en) * | 2017-11-03 | 2021-12-03 | 电子科技大学 | Image semantic segmentation method based on countermeasure training |
CN107945133B (en) * | 2017-11-30 | 2022-08-05 | 北京小米移动软件有限公司 | Image processing method and device |
CN107977511A (en) * | 2017-11-30 | 2018-05-01 | 浙江传媒学院 | Deep-learning-based high-fidelity real-time simulation algorithm for industrial design materials |
CN107798669B (en) * | 2017-12-08 | 2021-12-21 | 北京小米移动软件有限公司 | Image defogging method and device and computer readable storage medium |
CN107945140A (en) * | 2017-12-20 | 2018-04-20 | 中国科学院深圳先进技术研究院 | Image restoration method, device and equipment |
CN108122212A (en) * | 2017-12-21 | 2018-06-05 | 北京小米移动软件有限公司 | Image repair method and device |
CN108171663B (en) * | 2017-12-22 | 2021-05-25 | 哈尔滨工业大学 | Image filling system of convolutional neural network based on feature map nearest neighbor replacement |
CN108269245A (en) * | 2018-01-26 | 2018-07-10 | 深圳市唯特视科技有限公司 | Eye image restoration method based on a novel generative adversarial network |
CN108226892B (en) * | 2018-03-27 | 2021-09-28 | 天津大学 | Deep learning-based radar signal recovery method in complex noise environment |
CN108470326B (en) * | 2018-03-27 | 2022-01-11 | 北京小米移动软件有限公司 | Image completion method and device |
CN108564527B (en) * | 2018-04-04 | 2022-09-20 | 百度在线网络技术(北京)有限公司 | Panoramic image content completion and restoration method and device based on neural network |
CN108520503B (en) * | 2018-04-13 | 2020-12-22 | 湘潭大学 | Face defect image restoration method based on self-encoder and generation countermeasure network |
CN108573479A (en) * | 2018-04-16 | 2018-09-25 | 西安电子科技大学 | Face image deblurring and restoration method based on a dual generative adversarial network |
CN108564550B (en) * | 2018-04-25 | 2020-10-02 | Oppo广东移动通信有限公司 | Image processing method and device and terminal equipment |
CN108460830A (en) * | 2018-05-09 | 2018-08-28 | 厦门美图之家科技有限公司 | Image repair method, device and image processing equipment |
CN108805418B (en) * | 2018-05-22 | 2021-08-31 | 福州大学 | Traffic data filling method based on a generative adversarial network |
CN108898527B (en) * | 2018-06-21 | 2021-10-29 | 福州大学 | Traffic data filling method of generative model based on destructive measurement |
CN109118438A (en) * | 2018-06-29 | 2019-01-01 | 上海航天控制技术研究所 | Gaussian-blur image restoration method based on a generative adversarial network |
CN109165664B (en) * | 2018-07-04 | 2020-09-22 | 华南理工大学 | Attribute-missing data set completion and prediction method based on generation of countermeasure network |
CN109003297B (en) * | 2018-07-18 | 2020-11-24 | 亮风台(上海)信息科技有限公司 | Monocular depth estimation method, device, terminal and storage medium |
CN109360159A (en) * | 2018-09-07 | 2019-02-19 | 华南理工大学 | Image completion method based on a generative adversarial network model |
CN109685724B (en) * | 2018-11-13 | 2020-04-03 | 天津大学 | Symmetric perception face image completion method based on deep learning |
CN109712092B (en) * | 2018-12-18 | 2021-01-05 | 上海信联信息发展股份有限公司 | File scanning image restoration method and device and electronic equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102509271A (en) * | 2011-11-21 | 2012-06-20 | 洪涛 | Image restoration method based on multi-dimensional decomposition, iteration enhancement and correction |
CN104282000A (en) * | 2014-09-15 | 2015-01-14 | 天津大学 | Image repairing method based on rotation and scale change |
CN106683048A (en) * | 2016-11-30 | 2017-05-17 | 浙江宇视科技有限公司 | Image super-resolution method and image super-resolution equipment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3281176A1 (en) * | 2015-04-08 | 2018-02-14 | Google LLC | Image editing and repair |
- 2017-05-18 CN CN201710354183.0A patent/CN107133934B/en active Active
Non-Patent Citations (2)
Title |
---|
Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks; Emily Denton et al.; 29th Annual Conference on Neural Information Processing Systems; 2015-12-31; pp. 1-10 *
Generative Adversarial Nets; Ian J. Goodfellow et al.; Advances in Neural Information Processing Systems; 2014-06-10; pp. 1-9 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107133934B (en) | Image completion method and device | |
US10740897B2 (en) | Method and device for three-dimensional feature-embedded image object component-level semantic segmentation | |
Cordonnier et al. | Differentiable patch selection for image recognition | |
CN109583483B (en) | Target detection method and system based on convolutional neural network | |
US11314989B2 (en) | Training a generative model and a discriminative model | |
CN111598182B (en) | Method, device, equipment and medium for training neural network and image recognition | |
US9697583B2 (en) | Image processing apparatus, image processing method, and computer-readable recording medium | |
US10614347B2 (en) | Identifying parameter image adjustments using image variation and sequential processing | |
WO2015192316A1 (en) | Face hallucination using convolutional neural networks | |
CN110728295B (en) | Semi-supervised landform classification model training and landform graph construction method | |
CN110766050B (en) | Model generation method, text recognition method, device, equipment and storage medium | |
JP2023533907A (en) | Image processing using self-attention-based neural networks | |
Trouvé et al. | Single image local blur identification | |
CN111767962A (en) | One-stage target detection method, system and device based on generation countermeasure network | |
CN113421276A (en) | Image processing method, device and storage medium | |
CN111046755A (en) | Character recognition method, character recognition device, computer equipment and computer-readable storage medium | |
CN114581789A (en) | Hyperspectral image classification method and system | |
Young et al. | Feature-align network with knowledge distillation for efficient denoising | |
CN114240770A (en) | Image processing method, device, server and storage medium | |
CN111242176B (en) | Method and device for processing computer vision task and electronic system | |
CN117576724A (en) | Unmanned plane bird detection method, system, equipment and medium | |
CN116740487A (en) | Target object recognition model construction method and device and computer equipment | |
CN116309056A (en) | Image reconstruction method, device and computer storage medium | |
CN108154169A (en) | Image processing method and device | |
Schoonhoven et al. | LEAN: graph-based pruning for convolutional neural networks by extracting longest chains |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||