CN108921220A - Image restoration model training method, device and image recovery method and device - Google Patents

Image restoration model training method, device and image recovery method and device

Info

Publication number
CN108921220A
Authority
CN
China
Prior art keywords
image
processed
neural network
feature
forgery
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810712546.8A
Other languages
Chinese (zh)
Inventor
刘永康
杜家鸣
张福刚
段立新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guoxin Youe Data Co Ltd
Original Assignee
Guoxin Youe Data Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guoxin Youe Data Co Ltd filed Critical Guoxin Youe Data Co Ltd
Priority to CN201810712546.8A priority Critical patent/CN108921220A/en
Publication of CN108921220A publication Critical patent/CN108921220A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration

Abstract

This application provides an image restoration model training method and device, and an image restoration method and device. The image restoration model training method includes: inputting a to-be-processed image containing a pixel-loss region into an image generation neural network, restoring the to-be-processed image, and obtaining a forged image of the to-be-processed image; inputting the original image of the to-be-processed image and the forged image into an image discrimination neural network, which classifies the original image and the forged image; performing the current round of training on the image generation neural network based on a comparison between the forged image and the original image, and performing the current round of training on the image discrimination neural network based on the classification results; and obtaining the image restoration model through multiple rounds of training of the image generation neural network and the image discrimination neural network. The embodiments of the present application can reduce the difference between the generated forged image and the original image.

Description

Image restoration model training method, device and image recovery method and device
Technical field
This application relates to the technical field of image processing, and in particular to an image restoration model training method and device, and an image restoration method and device.
Background technique
With the continuous development of artificial intelligence, computer vision is being applied ever more widely. For example, machine-vision inspection is increasingly used in product testing: images of a product are analyzed with image-processing methods and test results are output automatically. In practice, however, many manufacturers add watermarks to product photographs for anti-counterfeiting purposes, so the images contain pixel-loss regions such as watermarked areas. Similarly, transmission or transcoding may introduce defects into an image, leaving pixel-loss regions; to improve image quality, such images must be processed so that the pixel-loss regions are filled in and resolution is improved. As another example, stored photographs may suffer tone shifts, defects, or stains caused by oxidation or dirt, again producing pixel-loss regions on the photographs, so the photographs can be re-photographed and image restoration performed on the captured images.
Deep-learning methods are currently the usual way to restore images. However, existing image restoration methods suffer from an excessive difference between the restored image and the original image.
Summary of the invention
In view of this, the embodiments of the present application aim to provide an image restoration model training method and device, and an image restoration method and device, capable of reducing the difference between a forged image and the original image.
In a first aspect, an embodiment of the present application provides an image restoration model training method, including:
inputting a to-be-processed image containing a pixel-loss region into an image generation neural network, restoring the to-be-processed image, and obtaining a forged image of the to-be-processed image;
inputting the original image of the to-be-processed image and the forged image into an image discrimination neural network, and using the image discrimination neural network to classify the original image and the forged image;
performing the current round of training on the image generation neural network based on a comparison between the forged image and the original image, and performing the current round of training on the image discrimination neural network based on the classification results;
obtaining an image restoration model through multiple rounds of training of the image generation neural network and the image discrimination neural network.
With reference to the first aspect, an embodiment of the present application provides a first possible implementation of the first aspect, in which, before the to-be-processed image is input into the image generation neural network, the method further includes the following preprocessing of the to-be-processed image:
performing pixel-loss region detection on the to-be-processed image, performing mask extraction on the detected pixel-loss region, and obtaining a mask image of the pixel-loss region;
assigning the pixel values of the mask image according to preset pixel values;
computing the element-wise (dot) product of the pixel matrix of the assigned mask image and the pixel matrix of the to-be-processed image, to obtain the preprocessed to-be-processed image.
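As an illustrative sketch only (not the claimed implementation), the preprocessing above — binarizing the mask and taking the element-wise product of the two pixel matrices — might look as follows in NumPy. The threshold value and the convention that 0 marks the pixel-loss region are assumptions:

```python
import numpy as np

def preprocess(image, mask, threshold=128):
    # Binarize the mask's pixel matrix: 1 where pixels are intact,
    # 0 inside the detected pixel-loss region (threshold is assumed).
    binary = (mask >= threshold).astype(image.dtype)
    # Element-wise (dot) product of the two pixel matrices zeroes out
    # the pixel-loss region in the to-be-processed image.
    return image * binary

image = np.full((4, 4), 200, dtype=np.uint8)
mask = np.full((4, 4), 255, dtype=np.uint8)
mask[1:3, 1:3] = 0  # simulated watermark region
preprocessed = preprocess(image, mask)
```

With this convention, the network later only needs to fill in the zeroed region, while intact pixels pass through unchanged.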
With reference to the first possible implementation of the first aspect, an embodiment of the present application provides a second possible implementation of the first aspect, in which assigning the pixel values of the mask image according to preset pixel values includes:
performing a binarization operation on the pixel matrix of the mask image.
With reference to the first or second possible implementation of the first aspect, an embodiment of the present application provides a third possible implementation of the first aspect, in which, before assigning the pixel values of the mask image, the method further includes:
performing morphological dilation and erosion operations on the mask image.
With reference to the third possible implementation of the first aspect, an embodiment of the present application provides a fourth possible implementation of the first aspect, in which the number of dilation operations equals a first operation count, the number of erosion operations equals a second operation count, the first operation count is greater than the second operation count, and the number of dilation operations between two adjacent erosion operations does not exceed a preset threshold.
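A minimal sketch of this morphological step, with hand-rolled 3×3 binary dilation and erosion. The structuring element and the concrete counts are assumptions; the patent only constrains the dilation count to exceed the erosion count, with a bounded number of dilations between adjacent erosions:

```python
import numpy as np

def dilate(m):
    # 3x3 binary dilation: a pixel becomes 1 if any neighbour is 1
    p = np.pad(m, 1)
    out = np.zeros_like(m)
    h, w = m.shape
    for dy in range(3):
        for dx in range(3):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def erode(m):
    # erosion of the mask is dilation of its complement
    return 1 - dilate(1 - m)

def clean_mask(m, n_dilate=4, n_erode=2):
    # first operation count (dilations) > second operation count (erosions),
    # interleaved so that only n_dilate // n_erode dilations occur between
    # two adjacent erosions (all concrete counts here are assumptions)
    per_gap = n_dilate // n_erode
    for _ in range(n_erode):
        for _ in range(per_gap):
            m = dilate(m)
        m = erode(m)
    return m

mask = np.zeros((9, 9), dtype=np.int64)
mask[4, 4] = 1  # isolated detected pixel-loss pixel
cleaned = clean_mask(mask)
```

Dilating more than eroding grows the mask so the pixel-loss region is fully covered, while the interleaved erosions keep it from spreading unboundedly.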
With reference to the first aspect, an embodiment of the present application provides a fifth possible implementation of the first aspect, in which the image generation neural network includes feature extraction layers and feature recovery layers;
inputting the to-be-processed image into the image generation neural network, restoring the to-be-processed image, and obtaining the forged image of the to-be-processed image includes:
inputting the to-be-processed image into the feature extraction layers;
performing feature learning on the to-be-processed image using the feature extraction layers, and saving the intermediate feature vectors extracted by designated feature extraction layers;
performing feature completion on the to-be-processed image using the feature recovery layers based on the saved intermediate feature vectors, to obtain the forged image of the to-be-processed image.
With reference to the fifth possible implementation of the first aspect, an embodiment of the present application provides a sixth possible implementation of the first aspect, in which performing feature learning on the to-be-processed image using the feature extraction layers and saving the intermediate feature vectors extracted by designated feature extraction layers includes:
performing convolution on the to-be-processed image using the feature extraction layers, and extracting and saving the intermediate feature vectors of the designated feature extraction layers; and
obtaining the first feature vector output by the last feature extraction layer;
and performing feature completion on the to-be-processed image using the feature recovery layers based on the saved intermediate feature vectors, to obtain the forged image of the to-be-processed image, includes:
performing deconvolution on the first feature vector layer by layer using the feature recovery layers;
superposing the saved intermediate feature vectors onto the deconvolution results of designated feature recovery layers; and
generating the forged image of the to-be-processed image based on the deconvolution result of the last feature recovery layer.
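The extraction-then-recovery structure with saved intermediate features resembles an encoder-decoder with skip connections. A dependency-free sketch, in which the learned convolutions and deconvolutions are replaced by average pooling and nearest-neighbour upsampling purely for illustration:

```python
import numpy as np

def generate_forgery(x, depth=2):
    # feature extraction layers: each halves the spatial resolution;
    # the intermediate feature vector of each (designated) layer is saved
    saved = []
    for _ in range(depth):
        saved.append(x)
        h, w = x.shape
        x = x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    # x now stands in for the first feature vector of the last layer.
    # feature recovery layers: upsample ("deconvolve") layer by layer and
    # superpose the saved intermediate feature of matching dimension
    for skip in reversed(saved):
        x = x.repeat(2, axis=0).repeat(2, axis=1)
        x = (x + skip) / 2
    return x

img = np.arange(64, dtype=np.float64).reshape(8, 8)
forged = generate_forgery(img)
```

The superposition step is what lets detail lost in the bottleneck flow back into the recovered image, which is the role the patent assigns to the saved intermediate feature vectors.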
With reference to the fifth or sixth possible implementation of the first aspect, an embodiment of the present application provides a seventh possible implementation of the first aspect, in which, after performing feature completion on the to-be-processed image using the feature recovery layers based on the saved intermediate feature vectors, the method further includes:
generating a first feature matrix of the forged image;
and performing the current round of training on the image generation neural network based on the comparison between the forged image and the original image includes:
executing the following matrix comparison operation until the first loss value determined from the comparison result falls within a first loss range;
the matrix comparison operation comprising:
comparing the first feature matrix with a second feature matrix generated for the original image;
when the first loss value determined from the comparison result is not within the first loss range, generating first feedback information, and adjusting the parameters of the image generation neural network based on the first feedback information;
using the image generation neural network with the adjusted parameters to generate a new first feature matrix for the forged image, and executing the matrix comparison operation again.
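A sketch of the matrix comparison operation above. The comparison metric (mean absolute difference), the loss range, and the parameter-adjustment step are all assumptions — the claim only requires a first loss value derived from comparing the two feature matrices, with feedback and adjustment whenever it falls outside the first loss range:

```python
import numpy as np

def first_loss(first_feature_matrix, second_feature_matrix):
    # compare the forged image's feature matrix with the one generated
    # for the original image (mean absolute difference is an assumption)
    return float(np.abs(first_feature_matrix - second_feature_matrix).mean())

second = np.ones((3, 3))      # feature matrix of the original image
first = np.full((3, 3), 1.5)  # feature matrix of the forged image

loss_range = (0.0, 0.1)       # assumed first loss range
rounds = 0
while not (loss_range[0] <= first_loss(first, second) <= loss_range[1]):
    # stand-in for generating first feedback information, adjusting the
    # generator's parameters, and producing a new first feature matrix
    first = first - 0.5 * (first - second)
    rounds += 1
```

Each pass through the loop corresponds to one "generate feedback, adjust parameters, regenerate the first feature matrix" cycle from the claim.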
With reference to the first aspect, an embodiment of the present application provides an eighth possible implementation of the first aspect, in which using the image discrimination neural network to classify the original image and the forged image, and performing the current round of training on the image discrimination neural network based on the classification results, includes:
executing the following binary classification operation until a second loss value, determined from the binary classification results and the corresponding label values, falls within a second loss range; wherein the label value of the original image and the label value of the forged image are the distinct values corresponding to the two classes;
the binary classification operation comprising:
performing supervised learning with the image discrimination neural network on the original image and the forged image respectively, and carrying out binary classification;
when the second loss value determined from the binary classification results and the corresponding label values is not within the second loss range, generating second feedback information, and adjusting the parameters of the image discrimination neural network based on the second feedback information;
executing the binary classification operation again based on the adjusted parameters.
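A sketch of the second loss value under the usual convention for adversarial training — originals labelled 1, forgeries labelled 0, with binary cross-entropy as the supervised loss. Both the concrete labels and the loss function are assumptions; the claim only fixes distinct label values per class:

```python
import numpy as np

def second_loss(score_real, score_fake):
    # binary cross-entropy of the two classification results against
    # their label values (1 for originals, 0 for forgeries, assumed)
    eps = 1e-7
    real_term = -np.log(score_real + eps)        # label 1
    fake_term = -np.log(1.0 - score_fake + eps)  # label 0
    return float(np.mean(real_term + fake_term))

# a confident, correct discriminator yields a small second loss
good = second_loss(np.array([0.99]), np.array([0.01]))
# a discriminator fooled by the forgery yields a large one
bad = second_loss(np.array([0.99]), np.array([0.99]))
```

The loop in the claim would keep adjusting the discriminator's parameters until this value enters the second loss range.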
With reference to the eighth possible implementation of the first aspect, an embodiment of the present application provides a ninth possible implementation of the first aspect, in which, after executing the binary classification operation, the method further includes:
executing the following comparison operation until a third loss value, determined from the binary classification result of the forged image and the label value of the original image, falls within a third loss range;
the comparison operation comprising:
comparing the binary classification result of the forged image with the label value of the original image;
when the comparison result indicates that the third loss value is not within the third loss range, generating third feedback information, and adjusting the parameters of the image discrimination neural network based on the third feedback information;
executing the binary classification operation again based on the adjusted parameters.
With reference to the fifth or sixth possible implementation of the first aspect, an embodiment of the present application provides a tenth possible implementation of the first aspect, in which, after performing feature completion on the to-be-processed image using the feature recovery layers based on the saved intermediate feature vectors, the method further includes:
generating a first feature matrix of the forged image;
and performing the current round of training on the image generation neural network based on the comparison between the forged image and the original image includes:
executing the following matrix comparison operation until the total loss value falls within a total loss range;
the matrix comparison operation comprising:
comparing the first feature matrix with a second feature matrix generated for the original image;
determining a first loss value from the comparison result, and updating the total loss value;
when the updated total loss value is not within the total loss range, generating first feedback information, and adjusting the parameters of the image generation neural network based on the first feedback information;
using the image generation neural network with the adjusted parameters to generate a new first feature matrix for the forged image, and executing the matrix comparison operation again;
and using the image discrimination neural network to classify the original image and the forged image, and performing the current round of training on the image discrimination neural network based on the classification results, includes:
executing the following binary classification operation until the total loss value falls within the total loss range; wherein the label value of the original image and the label value of the forged image are the distinct values corresponding to the two classes;
the binary classification operation comprising:
performing supervised learning with the image discrimination neural network on the original image and the forged image respectively, and carrying out binary classification;
determining a second loss value from the binary classification results and the corresponding label values, and updating the total loss value; when the updated total loss value is not within the total loss range, generating second feedback information, and adjusting the parameters of the image discrimination neural network based on the second feedback information; and/or
determining a third loss value from the binary classification result of the forged image and the label value of the original image, and updating the total loss value; when the updated total loss value is not within the total loss range, generating third feedback information, and adjusting the parameters of the image discrimination neural network based on the third feedback information;
executing the binary classification operation again based on the adjusted parameters;
wherein the total loss value is a weighted combination of the first loss value, the second loss value, and the third loss value.
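The total loss value of this tenth implementation can be sketched as a weighted sum of the three loss values; the weights below are arbitrary assumptions, since the patent states only that the total loss is a weighted combination:

```python
def total_loss(first, second, third, weights=(1.0, 0.5, 0.5)):
    # weighted combination of the first, second and third loss values
    # (the weights are assumptions, not taken from the patent)
    w1, w2, w3 = weights
    return w1 * first + w2 * second + w3 * third

value = total_loss(0.2, 0.4, 0.6)
```

Sharing one total loss across both networks is what couples the generator's and discriminator's stopping conditions in this implementation, in contrast to the separate loss ranges of the seventh through ninth implementations.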
In a second aspect, an embodiment of the present application further provides an image restoration model training device, including:
a generation module, configured to input a to-be-processed image containing a pixel-loss region into an image generation neural network, restore the to-be-processed image, and obtain a forged image of the to-be-processed image;
a training module, configured to input the original image of the to-be-processed image and the forged image into an image discrimination neural network, use the image discrimination neural network to classify the original image and the forged image, perform the current round of training on the image generation neural network based on the comparison between the forged image and the original image, perform the current round of training on the image discrimination neural network based on the classification results, and obtain an image restoration model through multiple rounds of training of the image generation neural network and the image discrimination neural network.
In a third aspect, an embodiment of the present application further provides an image restoration method, including:
obtaining a to-be-restored image;
inputting the to-be-restored image into an image restoration model obtained by the image restoration model training method of any implementation of the first aspect, and obtaining a target forged image of the to-be-restored image;
wherein the image restoration model includes the image generation neural network.
In a fourth aspect, an embodiment of the present application further provides an image restoration device, including:
an obtaining module, configured to obtain a to-be-restored image;
a restoration module, configured to input the to-be-restored image into an image restoration model obtained by the image restoration model training method of any implementation of the first aspect, and obtain a target forged image of the to-be-restored image;
wherein the image restoration model includes the image generation neural network.
In the embodiments of the present application, the image generation neural network is required to make the forged image it generates for the to-be-processed image as close to the original image as possible, while the image discrimination neural network is required to classify the original image and the forged image as correctly as possible. Training the image generation neural network in each round based on the comparison between the forged image and the original image, and training the image discrimination neural network in each round based on the classification results, therefore amounts in substance to adversarial training of the two networks. During this adversarial training the capabilities of both the image generation neural network and the image discrimination neural network keep improving, and the trained image generation neural network is finally taken as the image restoration model. When this image restoration model is actually used to restore an image, the difference between the resulting forged image and the original image is small.
To make the above objects, features, and advantages of the present application clearer and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Detailed description of the invention
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the present application and should not be regarded as limiting its scope; those of ordinary skill in the art can derive other related drawings from these drawings without creative effort.
Fig. 1 shows a flowchart of an image restoration model training method provided by an embodiment of the present application;
Fig. 2 shows a flowchart of a specific method, in the image restoration model training method provided by an embodiment of the present application, for obtaining the forged image of the to-be-processed image;
Fig. 3 shows a flowchart of a specific method for the matrix comparison operation in the image restoration model training method provided by an embodiment of the present application;
Fig. 4 shows a flowchart of a specific method for the binary classification operation in the image restoration model training method provided by an embodiment of the present application;
Fig. 5 shows a flowchart of a specific method for training the image discrimination neural network in the image restoration model training method provided by an embodiment of the present application;
Fig. 6 shows a structural schematic diagram of an image restoration model training device provided by an embodiment of the present application;
Fig. 7 shows a structural schematic diagram of a computer device 100 provided by an embodiment of the present application;
Fig. 8 shows a flowchart of an image restoration method provided by an embodiment of the present application;
Fig. 9 shows a structural schematic diagram of an image restoration device provided by an embodiment of the present application;
Fig. 10 shows a structural schematic diagram of a computer device 200 provided by an embodiment of the present application.
Specific embodiment
To make the purposes, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. The components of the embodiments of the present application, as generally described and illustrated in the drawings here, can be arranged and designed in a variety of different configurations. The following detailed description of the embodiments provided in the drawings is therefore not intended to limit the claimed scope of the present application, but merely represents selected embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
At present, when an image containing a pixel-loss region is restored, the forged image differs too much from the original image. On this basis, the image restoration model training method and device and the image restoration method and device provided by the present application can restore images containing pixel-loss regions so that the restored image is closer to the original image.
In the embodiments of the present application, a pixel-loss region is a region whose pixel values differ from the pixel values at the corresponding positions in the original image. For example, regions of an image containing watermarks, defects, tone differences, or deformations may be identified as pixel-loss regions.
To facilitate understanding of this embodiment, the image restoration model training method disclosed in the embodiments of the present application is first described in detail. The method can be used to restore images with multiple types of pixel-loss regions. It should be noted, however, that a single image restoration model training method generally restores only images whose pixel-loss regions are of the same type or of several similar types.
As shown in Fig. 1, the image restoration model training method provided by an embodiment of the present application includes:
S101: inputting a to-be-processed image containing a pixel-loss region into an image generation neural network, restoring the to-be-processed image, and obtaining a forged image of the to-be-processed image.
In a specific implementation, the to-be-processed image comes from a training dataset constructed for training the image restoration model. The training dataset contains multiple groups of training images; each group includes one to-be-processed image containing a pixel-loss region and a corresponding original image without a pixel-loss region. Moreover, within the training dataset, the types of pixel-loss regions on the to-be-processed images are the same or similar. One group of training images completes one round of training of the image restoration model.
When restoring a to-be-processed image, the image generation neural network generally involves two processes: feature extraction and feature recovery.
Natural images have inherent properties: the statistical features of one part of an image are consistent with those of other parts, which means features learned on one part can be applied to another. When restoring a to-be-processed image containing a pixel-loss region, the statistical features of the regions without pixel loss can therefore be used to reconstruct the statistical features of the pixel-loss region; feature recovery is then performed based on the statistical features of the intact regions and the reconstructed statistical features of the pixel-loss region, yielding the forged image of the to-be-processed image.
Specifically, to realize the two processes of feature extraction and feature recovery, the image generation neural network includes feature extraction layers and feature recovery layers. The feature extraction layers perform feature extraction and the feature recovery layers perform feature recovery; in general there are multiple layers of each.
Based on this structure of the image generation neural network, as shown in Fig. 2, the image generation neural network can restore the to-be-processed image and obtain its forged image using the following method:
S201: inputting the to-be-processed image into the feature extraction layers;
S202: performing feature learning on the to-be-processed image using the feature extraction layers, and saving the intermediate feature vectors extracted by designated feature extraction layers.
Here, feature learning is performed on the to-be-processed image using the feature extraction layers; for a given to-be-processed image, each feature extraction layer produces one intermediate feature vector, so the number of intermediate feature vectors equals the number of feature extraction layers. The intermediate feature vectors of the designated feature extraction layers are then saved.
Specifically, feature learning is performed on the to-be-processed image through multiple feature extraction layers, each of which learns some features of the image. The earlier a feature extraction layer is, the closer the intermediate feature vector it extracts is to the original feature vector of the to-be-processed image, and the more each feature element it learns characterizes the differences between regions of the image. The later a feature extraction layer is, the farther the intermediate feature vector it extracts is from the original feature vector, and the more each feature element characterizes the commonality between regions of the image.
A designated extraction layer can be any one of the multiple feature extraction layers. In a specific implementation, the designated extraction layers can be chosen according to actual needs, which is not limited here.
Specifically, a feature extraction layer can obtain its corresponding intermediate feature vector by performing convolution on the to-be-processed image.
In this case, performing feature learning on the to-be-processed image using the feature extraction layers and saving the intermediate feature vectors extracted by the designated feature extraction layers specifically includes:
performing convolution on the to-be-processed image using the feature extraction layers, and extracting and saving the intermediate feature vectors of the designated feature extraction layers.
In addition, the first feature vector output by the last feature extraction layer is obtained; this first feature vector serves as the input of the first feature recovery layer.
S203: performing feature completion on the to-be-processed image using the feature recovery layers based on the saved intermediate feature vectors, to obtain the forged image of the to-be-processed image.
Here, performing feature completion on the to-be-processed image based on the saved intermediate feature vectors means using feature vectors extracted by the feature extraction layers, which to some extent characterize the commonality between different regions of the to-be-processed image, to complete the features of the pixel-loss region, thereby obtaining the forged image of the to-be-processed image.
Specifically, a feature recovery layer can obtain the deconvolution result corresponding to that layer by performing deconvolution on the first feature vector; during the deconvolution of the first feature vector, the intermediate feature vectors extracted by the designated feature extraction layers are also used to influence the result of feature completion.
At this point, performing feature completion on the image to be processed based on the saved intermediate feature vectors, using the feature recovery layers, to obtain the forged image of the image to be processed, includes:

Performing deconvolution processing on the first feature vector layer by layer using the feature recovery layers; and

Superposing the saved intermediate feature vectors onto the deconvolution results of the specified recovery layers;

Generating the forged image of the image to be processed based on the deconvolution result of the last feature recovery layer.
Specifically, feature completion of the image to be processed based on the saved intermediate feature vectors is performed through multiple feature recovery layers, with each feature recovery layer restoring some of the features of the image to be processed.

Performing deconvolution processing on the first feature vector using the feature recovery layers means enlarging the dimension of a feature vector that is originally small. To increase the dimension of the feature vector, elements must be filled into parts of the low-dimensional feature vector to produce a feature vector of larger dimension. In this process, because the first feature vector has itself lost some of the features originally possessed by the image to be processed, the element filling achieved by deconvolving the first feature vector yields features that differ to a certain degree from the features the image to be processed should have. Therefore, the intermediate feature vectors extracted by the specified extraction layers are used at this point to influence the result of element completion, so that the completed elements better characterize the features originally possessed by the image to be processed.

To achieve this, the saved intermediate feature vectors can be superposed onto the deconvolution results of the specified recovery layers. Note that the dimensions of an intermediate feature vector and of the deconvolution result of the corresponding specified recovery layer must be consistent.

When superposing a saved intermediate feature vector onto the deconvolution result of a specified recovery layer, the elements at corresponding positions of the intermediate feature vector and of the recovery-layer result can be added directly, or they can be combined by weighted summation.

Then, the forged image of the image to be processed can be generated based on the deconvolution result of the last feature recovery layer.
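The flow of S201 to S203 (convolutional extraction with saved intermediate feature vectors, then layer-by-layer deconvolution with direct-addition superposition) can be sketched as follows. This is an illustrative toy, not the patented network: `halve` and `double` stand in for a real convolution and deconvolution on one-dimensional feature lists, and the layer count is arbitrary.

```python
def halve(v):
    # stand-in for a stride-2 convolution: average adjacent pairs
    return [(v[i] + v[i + 1]) / 2 for i in range(0, len(v) - 1, 2)]

def double(v):
    # stand-in for a deconvolution: repeat each element (upsampling)
    out = []
    for x in v:
        out += [x, x]
    return out

def generate_forgery(image, n_layers=3):
    saved = []                       # intermediate feature vectors to save
    feat = image
    for _ in range(n_layers):        # feature extraction layers (S202)
        saved.append(feat)           # save before downsampling
        feat = halve(feat)
    # feat is now the first feature vector of the last extraction layer
    for skip in reversed(saved):     # feature recovery layers (S203)
        feat = double(feat)
        # superpose the saved intermediate vector (dimensions match)
        feat = [a + b for a, b in zip(feat, skip)]
    return feat
```

With the direct-addition variant of the superposition, each recovery layer's output has exactly the dimension of the saved intermediate feature vector it is combined with, which is the consistency requirement noted above.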
S102: Inputting the original image of the image to be processed and the forged image into the image discrimination neural network, and classifying the original image and the forged image using the image discrimination neural network.

In a specific implementation, to classify the forged image and the original image, the image discrimination neural network first performs feature extraction on the original image and the forged image, extracting for each of the two a feature vector that characterizes it. The larger the difference between the feature vectors extracted for the two, the more clearly the two can be distinguished; the smaller the difference between the feature vectors extracted for the two, the less clearly the two can be distinguished.

Afterwards, based on the feature vectors extracted for the original image and the forged image respectively, the original image and the forged image are classified according to the distance or the magnitude of the difference between the two feature vectors.

Specifically, the image discrimination neural network may include multiple convolutional layers and a fully connected layer, where the convolutional layers perform feature extraction on the original image and the forged image, and the fully connected layer outputs the discrimination between the original image and the forged image based on the features extracted for them. There are two discrimination results: original image, and forged image.
S103: Performing the current round of training on the image generation neural network based on the comparison result between the forged image and the original image, and performing the current round of training on the image discrimination neural network based on the classification results.

In a specific implementation, performing the current round of training on the image generation neural network based on the comparison result between the forged image and the original image means adjusting the image generation neural network based on the similarity between the forged image and the original image. In the present application, the similarity between the forged image and the original image is characterized by the loss between them.

Performing the current round of training on the image discrimination neural network based on the classification results means classifying the original image and the forged image with the image discrimination network and adjusting the parameters of the image discrimination neural network according to the correctness of the classification results. In the present application, the correctness of the classification results is characterized by the classification loss.

Specifically, to determine the loss between the forged image and the original image, the embodiment of the present application also generates the first feature matrix of the forged image after the feature recovery layers perform feature completion on the image to be processed based on the saved intermediate feature vectors. The first feature matrix can be generated directly based on the deconvolution result of the last feature recovery layer, for example by performing a feature extraction operation on the deconvolution result of the last feature recovery layer to generate the first feature matrix of the forged image, or by taking the deconvolution result of the last feature recovery layer directly as the first feature matrix.
The embodiment of the present application can perform the current round of training on the image generation neural network, based on the comparison result between the forged image and the original image, in the following manner:

As shown in Figure 3, the following matrix comparison operation is executed:

S301: Comparing the first feature matrix with the second feature matrix generated for the original image;

Preferably, the second feature matrix is generated by the same method as the first feature matrix, and the first feature matrix and the second feature matrix have the same dimensions.

S302: Determining the first loss value based on the comparison result.
For example, when the first loss value is obtained from the comparison result between the first feature matrix and the second feature matrix, the first loss value g_rec may satisfy the following formula:

g_rec = Σ_x Σ_y (Ig(x, y) − G(I)(x, y))²

where the dimensions of the first feature matrix and the second feature matrix are both H × W; Ig(x, y) denotes the element in row x, column y of the second feature matrix of the original image; and G(I)(x, y) denotes the element in row x, column y of the first feature matrix of the forged image.
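Assuming the squared-error form of the formula above, the first loss value can be computed as in the following sketch (the matrices and their H × W size are illustrative):

```python
def rec_loss(second_matrix, first_matrix):
    # g_rec: sum of squared element-wise differences between the second
    # feature matrix (original image) and the first feature matrix (forgery)
    H, W = len(second_matrix), len(second_matrix[0])
    return sum((second_matrix[x][y] - first_matrix[x][y]) ** 2
               for x in range(H) for y in range(W))
```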
S303: Detecting whether the first loss value determined based on the obtained comparison result is within the first loss range; if not, jumping to S304; if so, ending the current round of training of the image generation network.

S304: Generating first feedback information, and adjusting the parameters of the image generation neural network based on the first feedback information;

S305: Based on the adjusted parameters, generating a new first feature matrix for the image to be processed using the image generation neural network, and jumping to S301.

When it is detected that the first loss value determined based on the obtained comparison result is within the first loss range, the current round of training of the image generation neural network ends.

Through the above training process, the forged image generated by the image generation neural network is brought closer to the original image.
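The control flow of S301 to S305 can be sketched as a generic feedback loop. The single scalar parameter and the quadratic loss here are placeholders for the network parameters and the real first loss value; only the compare, range-check, adjust, regenerate cycle follows the steps above.

```python
def generator_round(param, target, loss_range=1e-4, lr=0.5, max_iters=100):
    loss = (param - target) ** 2              # S301/S302: compare, compute loss
    for _ in range(max_iters):
        if loss <= loss_range:                # S303: within the first loss range?
            break
        param -= lr * 2 * (param - target)    # S304: feedback parameter adjustment
        loss = (param - target) ** 2          # S305: regenerate and re-compare
    return param, loss
```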
At the same time, the current round of training of the image discrimination neural network is also carried out in synchronization.

Specifically, the embodiment of the present application can classify the original image and the forged image using the image discrimination neural network, and perform the current round of training on the image discrimination neural network based on the classification results, in the following manner:

As shown in Figure 4, the following binary classification operation is executed:

S401: Using the image discrimination neural network, performing supervised learning on the original image and the forged image respectively and carrying out binary classification.

S402: Determining the second loss value based on the binary classification results and the corresponding label values, where the label value of the original image and the label value of the forged image are the different values corresponding to the two classes;
For example, the second loss value L_adv_d can be calculated by the following formula:

L_adv_d = α||D(Ig) − 1||² + β||D(G(I)) − 0||²

where D(Ig) denotes the classification result of the original image and D(G(I)) denotes the classification result of the forged image. Here, 1 is the label value of the original image and 0 is the label value of the forged image. α and β are weighting coefficients introduced for convenience of calculation; they can be set as needed, for example to 1, and other values may also be set, without limitation here.
If the label value of the original image is 0 and the label value of the forged image is 1, the above formula can also be written as:

L_adv_d = α||D(Ig) − 0||² + β||D(G(I)) − 1||²
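With scalar discriminator outputs, the second loss value reduces to the following sketch; the default α, β and label values mirror the first formula, and either label convention can be passed in:

```python
def adv_d_loss(d_real, d_fake, alpha=1.0, beta=1.0,
               real_label=1.0, fake_label=0.0):
    # L_adv_d = alpha * ||D(Ig) - real_label||^2
    #         + beta  * ||D(G(I)) - fake_label||^2
    return alpha * (d_real - real_label) ** 2 + beta * (d_fake - fake_label) ** 2
```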
S403: Detecting whether the second loss value determined based on the binary classification results and the corresponding label values is within the second loss range; if not, jumping to S404; if so, ending the current round of training of the image discrimination network based on the second loss value.

S404: Generating second feedback information, and adjusting the parameters of the image discrimination neural network based on the second feedback information.

S405: Based on the adjusted parameters, executing the binary classification operation again, returning to step S401.

Here, executing the binary classification operation again based on the adjusted parameters means using the image discrimination neural network with the adjusted parameters to once again perform supervised learning and binary classification on the original image and the forged image respectively. Then, for the case where the second loss value characterized by the newly generated binary classification results is still not within the second loss range, second feedback information is generated again, and the parameters of the image discrimination neural network are adjusted based on the regenerated second feedback information, until the third loss value determined based on the binary classification result of the forged image and the label value of the original image is within the third loss range.

Through the above training process, the image discrimination neural network is able to pull the forged image and the original image as far apart as possible, so as to classify the two.
Optionally, in another embodiment, another method of training the image discrimination neural network is also provided. This method can be carried out in synchronization with the method corresponding to Figure 4 above, or carried out independently of Figure 4.

Taking synchronous execution with the method corresponding to Figure 4 as an example, the method of training the image discrimination neural network provided in this embodiment is described below. The method of training the image discrimination neural network includes:

D501: Using the image discrimination neural network, performing supervised learning on the original image and the forged image respectively and carrying out binary classification. D502 and D506 are then executed; the execution of D502 and D506 has no fixed order.
D502: Determining the second loss value based on the binary classification results and the corresponding label values, where the label value of the original image and the label value of the forged image are the different values corresponding to the two classes. Jumping to D503;

D503: Detecting whether the second loss value determined based on the binary classification results and the corresponding label values is within the second loss range; if not, jumping to D504; if so, jumping to D505.

D504: Generating second feedback information, and adjusting the parameters of the image discrimination neural network based on the second feedback information. Jumping to D501.

D505: Ending the current round of training of the image discrimination neural network based on the corresponding loss values.

The above D501 to D505 are similar to the above S401 to S405 and are not described in detail here.
D506: Comparing the binary classification result of the forged image with the label value of the original image; jumping to D507.

Here, the process of comparing the binary classification result of the forged image with the label value of the original image can be regarded as the process of determining the third loss value based on the binary classification result of the forged image and the label value of the original image.

For example, the third loss value L_adv_g can be calculated by the following formula:

L_adv_g = ρ||D(G(I)) − 1||²

where D(G(I)) denotes the binary classification result of the forged image. Here, 1 is the label value of the original image and 0 is the label value of the forged image. ρ is a weighting coefficient introduced for convenience of calculation; it can be set according to actual needs, for example to 1, and is not limited here.
If the label value of the original image is 0 and the label value of the forged image is 1, the above formula can also be written as:

L_adv_g = ρ||D(G(I)) − 0||²
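With a scalar discriminator output, the third loss value can be sketched as follows; it pushes the classification result of the forged image toward the label value of the original image, under either label convention:

```python
def adv_g_loss(d_fake, rho=1.0, original_label=1.0):
    # L_adv_g = rho * ||D(G(I)) - original_label||^2: the forged image's
    # classification result is pulled toward the original image's label value
    return rho * (d_fake - original_label) ** 2
```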
D507: Detecting whether the comparison result indicates that the third loss value is within the third loss range; if so, jumping to D505; if not, jumping to D508.

D508: Generating third feedback information, and adjusting the parameters of the image discrimination neural network based on the third feedback information. Jumping to D501.

Through the above training process, while pulling the forged image and the original image apart as far as possible in the image discrimination neural network so as to classify the two, the forged image is also brought closer to the label of the original image.
In addition, the embodiment of the present application can constrain the parameters of the image generation neural network and the image discrimination neural network according to a total loss value composed of the first loss value, the second loss value, and the third loss value. Constraining the parameters based on the total loss value serves to balance the parameters of the image generation neural network and the image discrimination neural network.

Specifically, performing the current round of training on the image generation neural network based on the comparison result between the forged image and the original image includes:

Before training the image generation neural network, first generating the first feature matrix of the forged image after the feature recovery layers perform feature completion on the image to be processed based on the saved intermediate feature vectors.

Afterwards, the current round of training on the image generation neural network, based on the comparison result between the forged image and the original image, is performed in the following manner:
Executing the following matrix comparison operation until the total loss value is within the total loss range;

The matrix comparison operation includes:

D601: Comparing the first feature matrix with the second feature matrix generated for the original image;

D602: Determining the first loss value based on the obtained comparison result, and updating the total loss value;

D603: For the case where the updated total loss value is not within the total loss range, generating first feedback information, and adjusting the parameters of the image generation neural network based on the first feedback information.

D604: Based on the adjusted parameters, generating a new first feature matrix for the forged image using the image generation neural network, and executing the matrix comparison operation again.
Classifying the original image and the forged image using the image discrimination neural network, and performing the current round of training on the image discrimination neural network based on the classification results, includes:

Executing the following binary classification operation until the total loss value is within the total loss range, where the label value of the original image and the label value of the forged image are the different values corresponding to the two classes;

The binary classification operation includes:

D701: Performing supervised learning on the original image and the forged image respectively using the image discrimination neural network and carrying out binary classification;

D702: Determining the second loss value based on the binary classification results and the corresponding label values, and updating the total loss value; for the case where the updated total loss value is not within the total loss range, generating second feedback information, and adjusting the parameters of the image discrimination neural network based on the second feedback information; and/or

D703: Determining the third loss value based on the binary classification result of the forged image and the label value of the original image, and updating the total loss value; for the case where the updated total loss value is not within the total loss range, generating third feedback information, and adjusting the parameters of the image discrimination neural network based on the third feedback information;

D704: Based on the adjusted parameters, executing the binary classification operation again;

The total loss value is a weighted combination of the first loss value, the second loss value, and the third loss value.
For example, the total loss value G_f satisfies:

G_f = λ1 × g_rec + λ2 × L_adv_d + λ3 × L_adv_g

where g_rec denotes the first loss value; L_adv_d denotes the second loss value; L_adv_g denotes the third loss value; λ1 denotes the weight coefficient of the first loss value; λ2 denotes the weight coefficient of the second loss value; and λ3 denotes the weight coefficient of the third loss value.
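The weighted combination of the three loss values can be sketched directly; the λ weights below are placeholders, since their values are not fixed by the text:

```python
def total_loss(g_rec, l_adv_d, l_adv_g, lam1=1.0, lam2=0.5, lam3=0.5):
    # G_f = lambda1 * g_rec + lambda2 * L_adv_d + lambda3 * L_adv_g
    return lam1 * g_rec + lam2 * l_adv_d + lam3 * l_adv_g
```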
In this way, the current round of training of the image generation neural network and the image discrimination neural network is completed based on one or more of the above embodiments.

Using the image to be processed and the original image contained in the next group of training images, the next round of training of the image generation neural network and the image discrimination neural network is completed based on the above process.

……

In this way, through multiple rounds of training of the image generation neural network and the image discrimination neural network, the image restoration model is finally obtained. Here, the image restoration model obtained is the image generation neural network that has undergone the multiple rounds of training.

In the embodiment of the present application, the forged image that the image generation neural network generates for the image to be processed is required to be as close as possible to the original image, while the image discrimination neural network is required to classify the original image and the forged image as correctly as possible. Therefore, performing the current round of training on the image generation neural network based on the comparison result between the forged image and the original image, and performing the current round of training on the image discrimination neural network based on the classification results, is in substance adversarial training of the image generation neural network and the image discrimination neural network. During this adversarial training, the capabilities of the image generation neural network and the image discrimination neural network are continuously improved, and the trained image generation neural network finally obtained is taken as the image restoration model. In practice, when this image restoration model restores an image, the difference between the resulting forged image and the original image is small.
As shown in Figure 6, in another embodiment of the present application, before the image to be processed is input into the image generation neural network, a preprocessing operation on the image to be processed is further included:

S501: Performing pixel loss region detection on the image to be processed, and performing mask extraction on the detected pixel loss region, to obtain the mask image of the pixel loss region.

In a specific implementation, pixel loss region detection can be performed on the image to be processed in a number of ways. For example, by comparing the pixel values of the image to be processed with those at the corresponding positions of the original image, the pixels whose values differ between the image to be processed and the corresponding positions of the original image are taken as the pixels of the pixel loss region. As another example, in some cases, such as when the pixel loss region is a watermark, the hue of the watermark added to the original image is in fact fairly consistent, so the pixel loss region in the image to be processed can be identified by a hue-detection method.

After the pixel loss region of the image to be processed has been determined, mask extraction is performed on the detected pixel loss region, that is, the value of each pixel in the pixel loss region is recalculated so that the pixel loss region can be delineated more clearly within the image to be processed, and the mask image is generated. The resolution of the mask image is consistent with the resolution of the image to be processed.
S502: Assigning values to the pixels of the mask image according to preset pixel values.

To improve the generalization ability of the model, the pixel values of the mask image corresponding to the pixel loss region of the image to be processed can be adjusted to a first pixel value, and the pixel values of the mask image corresponding to the non-loss region of the image to be processed can be adjusted to a second pixel value, from which the pixel loss region is determined.

S503: Taking the dot product of the pixel matrix of the mask image that has completed assignment with the pixel matrix of the image to be processed, to obtain the preprocessed image to be processed.

Here, taking the dot product of the pixel matrix of the assigned mask image with the pixel matrix of the image to be processed adjusts the color of the entire pixel loss region of the image to be processed to the color corresponding to the first pixel value.
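S501 to S503 can be sketched on plain pixel matrices. Assuming the common convention that the first pixel value is 0 (loss region) and the second pixel value is 1, the element-wise product blacks out the loss region:

```python
def preprocess(image, loss_flags):
    # loss_flags[x][y] is True where a pixel was detected as lost (S501);
    # assign 0 to the loss region and 1 elsewhere in the mask (S502), then
    # take the element-wise (dot) product with the image matrix (S503)
    mask = [[0.0 if lost else 1.0 for lost in row] for row in loss_flags]
    return [[image[x][y] * mask[x][y] for y in range(len(image[0]))]
            for x in range(len(image))]
```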
Training the image restoration model with the images to be processed obtained after the preprocessing operation can achieve a better training effect.

Optionally, since completely black or completely white pixels are rare in an image, the pixel assignment of the mask image can be realized by performing a binarization operation on the pixel matrix of the mask image, so that the pixel loss region in the finally obtained preprocessed image to be processed is completely black or completely white. The pixel loss region in the image to be processed can thus be determined explicitly, which reduces the interference with the training result that arises when the pixel values of some pixels in non-loss regions of the image to be processed happen to coincide with the preset pixel values.
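A minimal binarization of the mask pixel matrix, with an assumed mid-range threshold, might look like this:

```python
def binarize(mask_matrix, threshold=0.5):
    # snap every mask pixel to 0 or 1 so the loss region comes out
    # completely black (or, with the values inverted, completely white)
    return [[1.0 if v >= threshold else 0.0 for v in row]
            for row in mask_matrix]
```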
In addition, in another embodiment, before the assignment of values to the pixels of the mask image, the method further includes: performing morphological dilation and erosion operations on the mask image.

Performing morphological dilation and erosion operations on the mask image makes it possible to cover the pixel loss region as completely as possible, thereby avoiding the model error caused by an incompletely determined pixel loss region.

The number of dilation operations satisfies a first operation count; the number of erosion operations satisfies a second operation count, and the first operation count is greater than the second operation count; moreover, the number of dilation operations between two adjacent erosion operations does not exceed a preset quantity threshold.

When the dilation operations outnumber the erosion operations, the originally determined pixel loss region is expanded outward to a certain degree, so that the originally determined pixel loss region is enlarged and the pixel loss region can be determined more reliably, thereby improving the accuracy of the model.
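The net-dilation effect described above can be illustrated with a toy 4-neighbourhood dilation/erosion pair on a binary mask; with more dilations than erosions, the covered region can only grow:

```python
def _shift_combine(mask, keep):
    # apply `keep` (max for dilation, min for erosion) over each pixel and
    # its in-bounds 4-neighbours; out-of-bounds neighbours are ignored
    H, W = len(mask), len(mask[0])
    out = [[0] * W for _ in range(H)]
    for x in range(H):
        for y in range(W):
            neighbours = [mask[nx][ny]
                          for nx, ny in ((x - 1, y), (x + 1, y),
                                         (x, y - 1), (x, y + 1))
                          if 0 <= nx < H and 0 <= ny < W]
            out[x][y] = keep([mask[x][y]] + neighbours)
    return out

def dilate(mask):
    # a pixel is set if it or any 4-neighbour is set
    return _shift_combine(mask, max)

def erode(mask):
    # a pixel survives only if it and all in-bounds 4-neighbours are set
    return _shift_combine(mask, min)
```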
Based on the same inventive concept, the embodiment of the present application further provides an image restoration model training device corresponding to the image restoration model training method. Since the principle by which the device in the embodiment of the present application solves the problem is similar to the above image restoration model training method of the embodiment of the present application, the implementation of the device can refer to the implementation of the method, and repeated content is not described again.

As shown in Figure 6, the image restoration model training device provided by the embodiment of the present application includes:

A generation module 61, configured to input the image to be processed having a pixel loss region into the image generation neural network, restore the image to be processed, and obtain the forged image of the image to be processed;

A categorization module 62, configured to input the original image of the image to be processed and the forged image into the image discrimination neural network, and classify the original image and the forged image using the image discrimination neural network; perform the current round of training on the image generation neural network based on the comparison result between the forged image and the original image, and perform the current round of training on the image discrimination neural network based on the classification results; and obtain the image restoration model through multiple rounds of training of the image generation neural network and the image discrimination neural network.
In the embodiment of the present application, the forged image that the image generation neural network generates for the image to be processed is required to be as close as possible to the original image, while the image discrimination neural network is required to classify the original image and the forged image as correctly as possible. Therefore, performing the current round of training on the image generation neural network based on the comparison result between the forged image and the original image, and performing the current round of training on the image discrimination neural network based on the classification results, is in substance adversarial training of the image generation neural network and the image discrimination neural network. During this adversarial training, the capabilities of the image generation neural network and the image discrimination neural network are continuously improved, and the trained image generation neural network finally obtained is taken as the image restoration model. In practice, when this image restoration model restores an image, the difference between the resulting forged image and the original image is small.
Optionally, the device further includes a preprocessing module 63, configured to preprocess the image to be processed before the image to be processed is input into the image generation neural network: performing pixel loss region detection on the image to be processed, and performing mask extraction on the detected pixel loss region, to obtain the mask image of the pixel loss region;

Assigning values to the pixels of the mask image according to preset pixel values;

Taking the dot product of the pixel matrix of the mask image that has completed assignment with the pixel matrix of the image to be processed, to obtain the preprocessed image to be processed.

Optionally, the preprocessing module 63 is specifically configured to assign values to the pixels of the mask image according to the preset pixel values through the following step: performing a binarization operation on the pixel matrix of the mask image.

Optionally, the preprocessing module 63 is further configured to perform morphological dilation and erosion operations on the mask image before the assignment of values to the pixels of the mask image.

Optionally, the number of dilation operations satisfies a first operation count; the number of erosion operations satisfies a second operation count, and the first operation count is greater than the second operation count; and the number of dilation operations between two adjacent erosion operations does not exceed a preset quantity threshold.
Optionally, the image generation neural network includes feature extraction layers and feature recovery layers;

The generation module 61 is specifically configured to obtain the forged image of the image to be processed through the following steps:

Inputting the image to be processed into the feature extraction layers;

Performing feature learning on the image to be processed using the feature extraction layers, and saving the intermediate feature vectors extracted by the specified extraction layers;

Performing feature completion on the image to be processed based on the saved intermediate feature vectors using the feature recovery layers, to obtain the forged image of the image to be processed.

Optionally, the generation module 61 is specifically configured to perform convolution processing on the image to be processed using the feature extraction layers, and to extract and save the intermediate feature vectors at the specified extraction layers; and

To obtain the first feature vector of the last feature extraction layer;

Performing feature completion on the image to be processed based on the saved intermediate feature vectors using the feature recovery layers, to obtain the forged image of the image to be processed, includes:

Performing deconvolution processing on the first feature vector layer by layer using the feature recovery layers; and

Superposing the saved intermediate feature vectors onto the deconvolution results of the specified recovery layers;

Generating the forged image of the image to be processed based on the deconvolution result of the last feature recovery layer.
Optionally, the generation module 61 is further configured to generate a first feature matrix of the forged image after the feature recovery layer performs feature completion on the to-be-processed image based on the saved intermediate feature vectors;
the training module 62 is specifically configured to perform the current round of training on the image generation neural network, based on the comparison result of the forged image and the original image, through the following steps: performing the following matrix comparison operation until the first loss value determined based on the comparison result falls within a first loss range;
the matrix comparison operation includes:
comparing the first feature matrix with a second feature matrix generated for the original image;
when the first loss value determined based on the obtained comparison result is not within the first loss range, generating first feedback information, and adjusting the parameters of the image generation neural network based on the first feedback information;
based on the adjusted parameters, using the image generation neural network to generate a new first feature matrix for the forged image, and performing the matrix comparison operation again.
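The loop just described — compare feature matrices, feed back a parameter adjustment while the first loss is out of range, regenerate, compare again — can be illustrated with a toy update in place of real backpropagation. The L1-style loss and the step rule are assumptions made for the sketch only.

```python
import numpy as np

def generator_round(fake_feat, real_feat, loss_max, lr=0.5):
    # Matrix comparison operation: compare the forged image's first
    # feature matrix with the original image's second feature matrix.
    loss = np.abs(fake_feat - real_feat).mean()
    while loss > loss_max:  # first loss value outside the first loss range
        # "First feedback information": a plain gradient-style step
        # stands in for the real parameter adjustment; then the first
        # feature matrix is regenerated and compared again.
        fake_feat = fake_feat + lr * (real_feat - fake_feat)
        loss = np.abs(fake_feat - real_feat).mean()
    return fake_feat, loss
```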
The training module 62 is specifically configured to perform the current round of training on the image discrimination neural network through the following steps: performing the following two-classification operation until the second loss value, determined based on the two-classification results and the corresponding label values, falls within a second loss range; wherein the label value of the original image and the label value of the forged image are the different values corresponding to the two classes, respectively;
the two-classification operation includes:
performing supervised learning and two-classification on the original image and the forged image, respectively, using the image discrimination neural network;
when the second loss value determined based on the two-classification results and the corresponding label values is not within the second loss range, generating second feedback information, and adjusting the parameters of the image discrimination neural network based on the second feedback information;
based on the adjusted parameters, performing the two-classification operation again.
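The supervised two-classification with label 1 for the original image and label 0 for the forged image can be written as a standard binary cross-entropy. The specific loss form is an assumption: the text only requires some second loss value derived from the two-class results and the corresponding label values.

```python
import numpy as np

def bce(p, label):
    # cross-entropy of one predicted "is original" probability p
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

def second_loss(p_original, p_forged):
    # original image labelled 1, forged image labelled 0; the second
    # loss averages the two supervised terms
    return 0.5 * (bce(p_original, 1.0) + bce(p_forged, 0.0))
```

A discriminator that cannot tell the two apart (both probabilities 0.5) yields loss ln 2; confident, correct predictions drive the loss toward 0.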
The training module 62 is further configured to, after performing the two-classification operation, perform the following comparison operation until the third loss value, determined based on the two-classification result of the forged image and the label value of the original image, falls within a third loss range;
the comparison operation includes:
comparing the two-classification result of the forged image with the label value of the original image;
when the comparison result indicates that the third loss value is not within the third loss range, generating third feedback information, and adjusting the parameters of the image discrimination neural network based on the third feedback information;
based on the adjusted parameters, performing the two-classification operation again.
Optionally, the generation module 61 is further configured to generate a first feature matrix of the forged image after the feature recovery layer performs feature completion on the to-be-processed image based on the saved intermediate feature vectors;
the training module 62 is specifically configured to perform the current round of training on the image generation neural network, based on the comparison result of the forged image and the original image, through the following steps: performing the following matrix comparison operation until a total loss value falls within a total loss range;
the matrix comparison operation includes:
comparing the first feature matrix with a second feature matrix generated for the original image;
determining a first loss value based on the obtained comparison result, and updating the total loss value;
when the updated total loss value is not within the total loss range, generating first feedback information, and adjusting the parameters of the image generation neural network based on the first feedback information;
based on the adjusted parameters, using the image generation neural network to generate a new first feature matrix for the forged image, and performing the matrix comparison operation again;
The training module 62 is specifically configured to classify the original image and the forged image using the image discrimination neural network, and to perform the current round of training on the image discrimination neural network, based on the classification results, through the following steps: performing the following two-classification operation until the total loss value falls within the total loss range; wherein the label value of the original image and the label value of the forged image are the different values corresponding to the two classes, respectively;
the two-classification operation includes:
performing supervised learning and two-classification on the original image and the forged image, respectively, using the image discrimination neural network;
determining a second loss value based on the two-classification results and the corresponding label values, and updating the total loss value; when the updated total loss value is not within the total loss range, generating second feedback information, and adjusting the parameters of the image discrimination neural network based on the second feedback information; and/or
determining a third loss value based on the two-classification result of the forged image and the label value of the original image, and updating the total loss value; when the updated total loss value is not within the total loss range, generating third feedback information, and adjusting the parameters of the image discrimination neural network based on the third feedback information;
based on the adjusted parameters, performing the two-classification operation again;
wherein the total loss value is a weighted sum of the first loss value, the second loss value, and the third loss value.
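The weighted combination of the three loss values might look like the following; the equal default weights are placeholders, since the text does not fix them.

```python
def total_loss(first, second, third, weights=(1.0, 1.0, 1.0)):
    # Weighted sum of the first, second, and third loss values; the
    # weights are illustrative assumptions, not specified by the text.
    w1, w2, w3 = weights
    return w1 * first + w2 * second + w3 * third
```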
Corresponding to the image restoration model training method in Fig. 1, an embodiment of the present application further provides a computer device 100. As shown in Fig. 7, the device includes a memory 1000, a processor 2000, and a computer program stored on the memory 1000 and executable on the processor 2000, wherein the processor 2000, when executing the computer program, implements the steps of the image restoration model training method described above.
Specifically, the memory 1000 and the processor 2000 may be a general-purpose memory and processor, which are not specifically limited here. When the processor 2000 runs the computer program stored in the memory 1000, it can perform the image restoration model training method described above, thereby solving the problem in image restoration that the restored image differs excessively from the original image, and in turn reducing the difference between the forged image and the original image.
Corresponding to the image restoration model training method in Fig. 1, an embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored, and the computer program, when run by a processor, performs the steps of the image restoration model training method described above.
Specifically, the storage medium may be a general-purpose storage medium, such as a removable disk or a hard disk. When the computer program on the storage medium is run, the image restoration model training method described above can be performed, thereby solving the problem in image restoration that the restored image differs excessively from the original image, and in turn reducing the difference between the forged image and the original image.
Referring to Fig. 8, an embodiment of the present application further provides an image restoration method, including:
S801: obtaining a to-be-restored image;
S802: inputting the to-be-restored image into an image restoration model obtained by the image restoration model training method described in any of the embodiments of the present application, to obtain a target forged image of the to-be-restored image;
wherein the image restoration model includes an image generation neural network.
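At inference time only the trained image generation neural network is used; the discrimination network is discarded after training. A sketch of the restoration call, where the generator and preprocessing stand-ins are assumptions:

```python
import numpy as np

def restore(image, generator, preprocess=lambda x: x):
    # The to-be-restored image is (optionally) preprocessed and fed
    # through the trained image generation neural network; its output
    # is the target forged (restored) image.
    return generator(preprocess(image))
```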
Referring to Fig. 9, an embodiment of the present application further provides an image restoration device, including:
an obtaining module 91, configured to obtain a to-be-restored image;
a restoration module 92, configured to input the to-be-restored image into an image restoration model obtained by the image restoration model training method described in any of the embodiments of the present application, to obtain a target forged image of the to-be-restored image;
wherein the image restoration model includes an image generation neural network.
Corresponding to the image restoration method in Fig. 8, an embodiment of the present application further provides a computer device 200. As shown in Fig. 10, the device includes a memory 3000, a processor 4000, and a computer program stored on the memory 3000 and executable on the processor 4000, wherein the processor 4000, when executing the computer program, implements the steps of the image restoration method described above.
Specifically, the memory 3000 and the processor 4000 may be a general-purpose memory and processor, which are not specifically limited here. When the processor 4000 runs the computer program stored in the memory 3000, it can perform the image restoration method described above, thereby solving the problem in image restoration that the restored image differs excessively from the original image, and in turn reducing the difference between the forged image and the original image.
Corresponding to the image restoration method in Fig. 8, an embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored, and the computer program, when run by a processor, performs the steps of the image restoration method described above.
Specifically, the storage medium may be a general-purpose storage medium, such as a removable disk or a hard disk. When the computer program on the storage medium is run, the image restoration method described above can be performed, thereby solving the problem in image restoration that the restored image differs excessively from the original image, and in turn reducing the difference between the forged image and the original image.
The computer program product of the image restoration model training method and device and the image restoration method and device provided by the embodiments of the present application includes a computer-readable storage medium storing program code, and the instructions included in the program code may be used to perform the methods described in the foregoing method embodiments; for specific implementations, refer to the method embodiments, which will not be repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the system and device described above may refer to the corresponding processes in the foregoing method embodiments, and will not be repeated here.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that can be easily conceived by those familiar with the technical field, within the technical scope disclosed in the present application, shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image restoration model training method, characterized by comprising:
inputting a to-be-processed image having a pixel-loss region into an image generation neural network, and restoring the to-be-processed image to obtain a forged image of the to-be-processed image;
inputting the original image of the to-be-processed image and the forged image into an image discrimination neural network, and classifying the original image and the forged image using the image discrimination neural network;
performing the current round of training on the image generation neural network based on the comparison result of the forged image and the original image, and performing the current round of training on the image discrimination neural network based on the classification results;
obtaining an image restoration model through multiple rounds of training of the image generation neural network and the image discrimination neural network.
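As an informal illustration (not part of the claims), claim 1's alternation of generation-network and discrimination-network rounds can be caricatured with scalar stand-ins; both update rules below are assumptions made for the sketch.

```python
import numpy as np

def train_rounds(fake, real, rounds=5, lr=0.5):
    # Each round: (1) a generation-network round pulls the forged
    # features toward the original's; (2) a discrimination-network
    # round refits a toy decision threshold between the two.
    thresh = 0.0
    for _ in range(rounds):
        fake = fake + lr * (real - fake)            # generator round
        thresh = 0.5 * (fake.mean() + real.mean())  # discriminator round
    return fake, thresh
```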
2. The method according to claim 1, characterized by further comprising, before inputting the to-be-processed image into the image generation neural network, a preprocessing operation on the to-be-processed image:
performing pixel-loss region detection on the to-be-processed image, and performing mask extraction on the detected pixel-loss region to obtain a mask image of the pixel-loss region;
assigning values to the pixels of the mask image according to preset pixel values;
taking the dot product of the pixel matrix of the assigned mask image and the pixel matrix of the to-be-processed image, to obtain the preprocessed to-be-processed image.
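As an informal illustration (not part of the claims), the pretreatment of claim 2, with claim 3's binarization folded in, might look as follows; the threshold choice and dtype handling are assumptions.

```python
import numpy as np

def preprocess(image, mask):
    # Binarize the mask's pixel matrix (1 = intact, 0 = pixel loss),
    # then take the element-wise (dot) product with the image so the
    # loss region is zeroed before entering the generation network.
    binary = (mask > 0).astype(image.dtype)
    return image * binary
```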
3. The method according to claim 2, characterized in that assigning values to the pixels of the mask image according to preset pixel values comprises:
performing a binarization operation on the pixel matrix of the mask image.
4. The method according to claim 2 or 3, further comprising, before assigning values to the pixels of the mask image:
performing morphological dilation and erosion operations on the mask image.
5. The method according to claim 4, characterized in that the number of dilation operations equals a first operation count, the number of erosion operations equals a second operation count, and the first operation count is greater than the second operation count; and the number of dilation operations between any two adjacent erosion operations does not exceed a preset quantity threshold.
6. The method according to claim 1, characterized in that the image generation neural network comprises a feature extraction layer and a feature recovery layer;
inputting the to-be-processed image into the image generation neural network and restoring the to-be-processed image to obtain the forged image of the to-be-processed image comprises:
inputting the to-be-processed image into the feature extraction layer;
performing feature learning on the to-be-processed image using the feature extraction layer, and saving the intermediate feature vectors extracted by specified feature extraction layers;
performing feature completion on the to-be-processed image using the feature recovery layer based on the saved intermediate feature vectors, to obtain the forged image of the to-be-processed image.
7. The method according to claim 6, characterized in that performing feature learning on the to-be-processed image using the feature extraction layer and saving the intermediate feature vectors extracted by specified feature extraction layers comprises:
performing convolution processing on the to-be-processed image using the feature extraction layer, and extracting intermediate feature vectors at specified feature extraction layers and saving them; and
obtaining the first feature vector of the last feature extraction layer;
performing feature completion on the to-be-processed image using the feature recovery layer based on the saved intermediate feature vectors, to obtain the forged image of the to-be-processed image, comprises:
performing deconvolution processing on the first feature vector, layer by layer, using the feature recovery layer; and
superimposing the saved intermediate feature vectors on the deconvolution results of the specified feature recovery layers;
generating the forged image of the to-be-processed image based on the deconvolution result of the last feature recovery layer.
8. An image restoration model training device, characterized by comprising:
a generation module, configured to input a to-be-processed image having a pixel-loss region into an image generation neural network and restore the to-be-processed image, to obtain a forged image of the to-be-processed image;
a training module, configured to input the original image of the to-be-processed image and the forged image into an image discrimination neural network, and classify the original image and the forged image using the image discrimination neural network; to perform the current round of training on the image generation neural network based on the comparison result of the forged image and the original image, and to perform the current round of training on the image discrimination neural network based on the classification results; and to obtain an image restoration model through multiple rounds of training of the image generation neural network and the image discrimination neural network.
9. An image restoration method, characterized by comprising:
obtaining a to-be-restored image;
inputting the to-be-restored image into an image restoration model obtained by the image restoration model training method according to any one of claims 1 to 8, to obtain a target forged image of the to-be-restored image;
wherein the image restoration model comprises an image generation neural network.
10. An image restoration device, characterized by comprising:
an obtaining module, configured to obtain a to-be-restored image;
a restoration module, configured to input the to-be-restored image into an image restoration model obtained by the image restoration model training method according to any one of claims 1 to 8, to obtain a target forged image of the to-be-restored image;
wherein the image restoration model comprises an image generation neural network.
CN201810712546.8A 2018-06-29 2018-06-29 Image restoration model training method, device and image recovery method and device Pending CN108921220A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810712546.8A CN108921220A (en) 2018-06-29 2018-06-29 Image restoration model training method, device and image recovery method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810712546.8A CN108921220A (en) 2018-06-29 2018-06-29 Image restoration model training method, device and image recovery method and device

Publications (1)

Publication Number Publication Date
CN108921220A true CN108921220A (en) 2018-11-30

Family

ID=64424543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810712546.8A Pending CN108921220A (en) 2018-06-29 2018-06-29 Image restoration model training method, device and image recovery method and device

Country Status (1)

Country Link
CN (1) CN108921220A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859113A * 2018-12-25 2019-06-07 北京奇艺世纪科技有限公司 Model generation method, image enhancement method, device, and computer-readable storage medium
CN109886891A * 2019-02-15 2019-06-14 北京市商汤科技开发有限公司 Image restoration method and device, electronic equipment, and storage medium
CN109902823A * 2018-12-29 2019-06-18 华为技术有限公司 Model training method and device based on a generative adversarial network
CN110349103A * 2019-07-01 2019-10-18 昆明理工大学 Clean-label-free image denoising method based on a deep neural network and skip connections
CN110458185A * 2019-06-26 2019-11-15 平安科技(深圳)有限公司 Image recognition method and device, storage medium, and computer equipment
CN111325232A * 2018-12-13 2020-06-23 财团法人工业技术研究院 Training method of phase image generator and training method of phase image classifier
CN111353514A * 2018-12-20 2020-06-30 马上消费金融股份有限公司 Model training method, image recognition method, device, and terminal equipment
CN111488895A * 2019-01-28 2020-08-04 北京达佳互联信息技术有限公司 Adversarial data generation method, device, equipment, and storage medium
CN111582221A * 2020-05-19 2020-08-25 北京汽车股份有限公司 Lane line identification method, device, and equipment
CN111986103A * 2020-07-20 2020-11-24 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment, and computer storage medium
CN112614066A * 2020-12-23 2021-04-06 文思海辉智科科技有限公司 Image restoration method and device and electronic equipment
CN113139911A * 2020-01-20 2021-07-20 北京迈格威科技有限公司 Image processing method and device, and training method and device of image processing model
CN116503294A (en) * 2023-06-30 2023-07-28 广东工业大学 Cultural relic image restoration method, device and equipment based on artificial intelligence

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127702A * 2016-06-17 2016-11-16 兰州理工大学 Image defogging algorithm based on deep learning
CN107274358A * 2017-05-23 2017-10-20 广东工业大学 Image super-resolution restoration technique based on the cGAN algorithm
CN107730458A * 2017-09-05 2018-02-23 北京飞搜科技有限公司 Blurred face reconstruction method and system based on a generative adversarial network
CN107945118A * 2017-10-30 2018-04-20 南京邮电大学 Facial image restoration method based on a generative adversarial network
CN108197670A * 2018-01-31 2018-06-22 国信优易数据有限公司 Pseudo-label generation model training method and device, and pseudo-label generation method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GUILIN LIU等: ""Image Inpainting for Irregular Holes Using Partial Convolutions"", 《ARXIV》 *
IAN J. GOODFELLOW等: ""Generative Adversarial Nets"", 《ARXIV》 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325232A (en) * 2018-12-13 2020-06-23 财团法人工业技术研究院 Training method of phase image generator and training method of phase image classifier
CN111325232B (en) * 2018-12-13 2024-01-02 财团法人工业技术研究院 Training method of phase image generator and training method of phase image classifier
CN111353514A (en) * 2018-12-20 2020-06-30 马上消费金融股份有限公司 Model training method, image recognition method, device and terminal equipment
CN109859113A * 2018-12-25 2019-06-07 北京奇艺世纪科技有限公司 Model generation method, image enhancement method, device, and computer-readable storage medium
CN109902823A * 2018-12-29 2019-06-18 华为技术有限公司 Model training method and device based on a generative adversarial network
CN111488895A * 2019-01-28 2020-08-04 北京达佳互联信息技术有限公司 Adversarial data generation method, device, equipment, and storage medium
CN111488895B * 2019-01-28 2024-01-30 北京达佳互联信息技术有限公司 Adversarial data generation method, device, equipment, and storage medium
CN109886891A * 2019-02-15 2019-06-14 北京市商汤科技开发有限公司 Image restoration method and device, electronic equipment, and storage medium
CN110458185A * 2019-06-26 2019-11-15 平安科技(深圳)有限公司 Image recognition method and device, storage medium, and computer equipment
CN110349103A * 2019-07-01 2019-10-18 昆明理工大学 Clean-label-free image denoising method based on a deep neural network and skip connections
CN113139911A (en) * 2020-01-20 2021-07-20 北京迈格威科技有限公司 Image processing method and device, and training method and device of image processing model
CN111582221A (en) * 2020-05-19 2020-08-25 北京汽车股份有限公司 Lane line identification method, device and equipment
CN111986103A (en) * 2020-07-20 2020-11-24 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and computer storage medium
CN112614066A (en) * 2020-12-23 2021-04-06 文思海辉智科科技有限公司 Image restoration method and device and electronic equipment
CN116503294A (en) * 2023-06-30 2023-07-28 广东工业大学 Cultural relic image restoration method, device and equipment based on artificial intelligence
CN116503294B (en) * 2023-06-30 2024-03-29 广东工业大学 Cultural relic image restoration method, device and equipment based on artificial intelligence

Similar Documents

Publication Publication Date Title
CN108921220A (en) Image restoration model training method, device and image recovery method and device
US10846566B2 (en) Method and system for multi-scale cell image segmentation using multiple parallel convolutional neural networks
US11120303B2 (en) Enhanced deep reinforcement learning deep q-network models
JP6557783B2 (en) Cascade neural network with scale-dependent pooling for object detection
Chen et al. Semantic image segmentation with task-specific edge detection using cnns and a discriminatively trained domain transform
US8379994B2 (en) Digital image analysis utilizing multiple human labels
US10776698B2 (en) Method for training an artificial neural network
CN109145766A (en) Model training method, device, recognition methods, electronic equipment and storage medium
CN104463161A (en) Color document image segmentation and binarization using automatic inpainting
CN111695421B (en) Image recognition method and device and electronic equipment
EP3861482A1 (en) Verification of classification decisions in convolutional neural networks
CN110210482B (en) Target detection method for improving class imbalance
CN107247952B (en) Deep supervision-based visual saliency detection method for cyclic convolution neural network
CN112861718A (en) Lightweight feature fusion crowd counting method and system
CN114722892A (en) Continuous learning method and device based on machine learning
Rueda-Plata et al. Supervised greedy layer-wise training for deep convolutional networks with small datasets
CN115994900A (en) Unsupervised defect detection method and system based on transfer learning and storage medium
CN111242870A (en) Low-light image enhancement method based on deep learning knowledge distillation technology
Riedel Bag of tricks for training brain-like deep neural networks
CN113221757B (en) Method, terminal and medium for improving accuracy rate of pedestrian attribute identification
KR20220072693A (en) Brain Neural Network Structure Image Processing System, Brain Neural Network Structure Image Processing Method, and a computer-readable storage medium
CN113095328A (en) Self-training-based semantic segmentation method guided by Gini index
CN112613341A (en) Training method and device, fingerprint identification method and device, and electronic device
Prabowo et al. Indonesian Traditional Shadow Puppet Classification using Convolutional Neural Network
CN112418168B (en) Vehicle identification method, device, system, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 101-8, 1st floor, building 31, area 1, 188 South Fourth Ring Road West, Fengtai District, Beijing

Applicant after: Guoxin Youyi Data Co., Ltd

Address before: 100070, No. 188, building 31, headquarters square, South Fourth Ring Road West, Fengtai District, Beijing

Applicant before: SIC YOUE DATA Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20181130