CN110322403A - A multi-supervised image super-resolution reconstruction method based on a generative adversarial network - Google Patents
- Publication number
- CN110322403A (application CN201910533885.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- resolution
- generator
- low
- subregion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T3/4076—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution using the original low-resolution images to iteratively correct the high-resolution images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10056—Microscopic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention discloses a multi-supervised image super-resolution reconstruction method based on a generative adversarial network. Slice images of the same specimen at different resolutions are registered; a training dataset is made from the registered slice images; on this dataset, a generative adversarial model is trained with multi-supervised, multi-stage training; and the trained adversarial model reconstructs low-resolution images into high-resolution images. The invention reconstructs images captured under a low-magnification objective into high-resolution images, saving both imaging time and the hardware space needed to store images, and it overcomes the blurring and artifacts that common methods suffer on pathological data.
Description
Technical field
The invention belongs to the field of image processing and, more specifically, relates to a multi-supervised image super-resolution reconstruction method based on a generative adversarial network, particularly suitable for pathological slice images.
Background technique
At present, microscopic imaging of cells requires an objective magnification of 20x or more to produce a sharp image, but imaging takes a long time and the required storage space is large. Imaging under a 4x objective is fast and places low demands on equipment precision, but its resolution is low and its depth of field is large, so the images have little practical value. If high-resolution images could be reconstructed from images captured under a low-magnification objective, both imaging time and the hardware space for storing images could be saved.
Several researchers at home and abroad have proposed learning-based or example-based super-resolution. The basic idea of such methods is to acquire prior knowledge through learning and use it to reconstruct images, but their effect is limited. In recent years, deep-learning-based super-resolution methods have been notably effective, but most models are trained on artificially degraded data, so their applicability to images of different resolutions acquired from the real world is limited.
In conclusion low power objective imaging super-resolution has very big practicability, but various super-resolution reconstruction methods are in pathology
Effect is limited in data.
Summary of the invention
In view of the deficiencies of the prior art, the invention proposes an image super-resolution reconstruction method based on a generative adversarial network, which aims to overcome the blurring and artifacts of common methods on pathological data by reconstructing images captured under a low-magnification objective into high-resolution images, saving both imaging time and the hardware space for storing images.
In the multi-supervised image super-resolution reconstruction method based on a generative adversarial network, a low-resolution image is input into a generative adversarial network model, which outputs a high-resolution image. The generative adversarial network model is built as follows:
(1) Making the training dataset
Extract a low-resolution image and a high-resolution image of the same imaged target region, choose foreground regions L and H in the low- and high-resolution images respectively, register regions L and H, and obtain the low/high-resolution pair Patch(L, H) of the same area.
(2) Training the model
Build a generative adversarial network model comprising a generator and a discriminator. The low-resolution image L in Patch(L, H) is the generator's input; the generator outputs an image H' resembling the high-resolution image H in Patch(L, H); the discriminator judges whether H and H' are real or fake. Through repeated adversarial learning, the trained generator is obtained.
Further, the loss function used by the generative adversarial model is:
L_G = L_mse + α·L_p + β·L_adver
where L_mse is the mean squared error, L_p is the perceptual loss, L_adver is the generator's adversarial loss, L_D is the discriminator's adversarial loss, and L_G is the generator's total loss; N is the number of samples and C, H, W are the image dimensions; x^{n} is the n-th sample and I^{n} is the high-resolution image corresponding to the low-resolution image x^{n}; y is the label of the true high-resolution image; G_θ and D_Θ denote the generator and discriminator respectively; Φ is a pre-trained VGG model whose output is a feature map; α and β balance the loss terms.
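The text names the loss terms but does not write them out. A standard SRGAN-style decomposition consistent with the symbols defined above (a sketch under that assumption, not necessarily the patent's exact formulas) is:

```latex
\begin{aligned}
L_G &= L_{mse} + \alpha\, L_p + \beta\, L_{adver},\\
L_{mse} &= \frac{1}{NCHW}\sum_{n=1}^{N}\bigl\lVert G_\theta\bigl(x^{\{n\}}\bigr) - I^{\{n\}}\bigr\rVert_2^2,\\
L_p &= \frac{1}{N}\sum_{n=1}^{N}\bigl\lVert \Phi\bigl(G_\theta(x^{\{n\}})\bigr) - \Phi\bigl(I^{\{n\}}\bigr)\bigr\rVert_2^2,\\
L_{adver} &= -\frac{1}{N}\sum_{n=1}^{N}\log D_\Theta\bigl(G_\theta(x^{\{n\}})\bigr),\\
L_D &= -\frac{1}{N}\sum_{n=1}^{N}\Bigl[\log D_\Theta\bigl(y^{\{n\}}\bigr) + \log\bigl(1 - D_\Theta\bigl(G_\theta(x^{\{n\}})\bigr)\bigr)\Bigr].
\end{aligned}
```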
Further, step (1), making the training dataset, is embodied as follows:
(11) Coarse registration
Divide the low-resolution image into multiple subregions, and map every subregion to the high-resolution image by multiplying its coordinates by a factor, the factor being the magnification of the high-resolution image relative to the low-resolution image.
Apply redundancy extension to each mapped subregion in the high-resolution image, expanding its boundary.
Perform correlation matching, one by one, between each subregion of the low-resolution image and its mapped subregion in the high-resolution image, and record the subregion coordinate offset at maximum correlation.
Average all subregion coordinate offsets to obtain the coarse-registration coordinate offset from the low-resolution image to the high-resolution image.
Use the coarse-registration coordinate offset to correct the error between the low- and high-resolution images.
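The coordinate mapping with redundancy extension described above can be sketched in a few lines. The function name, argument order, and the clipping to the image bounds are illustrative, not taken from the patent:

```python
def map_subregion(x, y, w, h, scale, pad, hi_w, hi_h):
    """Map a low-resolution subregion (x, y, w, h) into the high-resolution
    image by multiplying its coordinates by the magnification factor `scale`,
    then extend the mapped box by `pad` pixels of redundancy on every side,
    clipping to the high-resolution image bounds (hi_w, hi_h)."""
    x0 = max(0, int(x * scale) - pad)
    y0 = max(0, int(y * scale) - pad)
    x1 = min(hi_w, int((x + w) * scale) + pad)
    y1 = min(hi_h, int((y + h) * scale) + pad)
    return x0, y0, x1, y1
```

For example, a 200x200 subregion at (100, 100) mapped at 2.5x with 50 pixels of redundancy yields the search box (200, 200, 800, 800) in the high-resolution image.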
(12) Fine registration
Choose a foreground region L in the low-resolution image and map it into the corrected high-resolution image; denote the mapped region R.
Apply redundancy extension to region R in the high-resolution image, expanding its boundary.
Using correlation-based template matching, find the region H corresponding to L within R; take (L, H) as a sample pair, yielding one data pair Patch(L, H).
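The correlation-based template matching used for fine registration can be sketched as an exhaustive sliding search with normalized cross-correlation. This brute-force version is a minimal illustration of the matching criterion, not the patent's implementation:

```python
import numpy as np

def best_match(template, search):
    """Slide `template` over `search` and return (corr, dy, dx): the
    normalized cross-correlation of the best match and its offset.
    This is the fine-registration step: the low-resolution patch
    (upsampled to scale) is the template, the redundancy-extended
    high-resolution region is the search area."""
    th, tw = template.shape
    sh, sw = search.shape
    t = template - template.mean()
    best = (-1.0, 0, 0)
    for dy in range(sh - th + 1):
        for dx in range(sw - tw + 1):
            win = search[dy:dy + th, dx:dx + tw]
            w = win - win.mean()
            denom = np.sqrt((t * t).sum() * (w * w).sum())
            corr = (t * w).sum() / denom if denom > 0 else 0.0
            if corr > best[0]:
                best = (corr, dy, dx)
    return best
```

In practice the same criterion is computed far faster with FFT-based or library routines (e.g. OpenCV's normalized template matching); the loop form is only for clarity.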
Further, the generator uses a convolutional neural network with a residual structure, and the discriminator uses a VGG model.
In another multi-supervised image super-resolution reconstruction method based on a generative adversarial network, a low-resolution image is input into a generative adversarial network model, which outputs a high-resolution image. The generative adversarial network model is built as follows:
(1) Making the training sample set
Extract a low-resolution image, a mid-resolution image, and a high-resolution image of the same imaged target region.
Choose a foreground region L in the low-resolution image and map it to the corresponding regions B and C in the mid- and high-resolution images respectively. Register L with region B to obtain the sample pair (L, M); register L with region C to obtain the sample pair (L, H).
From (L, M) and (L, H), build the sample (L, M, H).
(2) Training the model
Build an adversarial network model comprising a first generator and a first discriminator. The low-resolution image L in the sample (L, M, H) is the first generator's input; the first generator outputs an image M' resembling the mid-resolution image M; the first discriminator judges whether M and M' are real or fake; through repeated adversarial learning, the trained first generator is obtained.
Build an adversarial network model comprising a second generator and a second discriminator. The output M' of the first generator is the second generator's input; the second generator outputs an image H' resembling the high-resolution image H; the second discriminator judges whether H and H' are real or fake; through repeated adversarial learning, the trained second generator is obtained.
Finally, jointly train the first and second generators to determine them both.
Further, step (1), making the training sample pairs, is embodied as follows:
(11) Low-to-mid coarse registration
Divide the low-resolution image into multiple subregions and map every subregion to the mid-resolution image by multiplying its coordinates by a first factor, the first factor being the magnification of the mid-resolution image relative to the low-resolution image.
Apply redundancy extension to each mapped subregion in the mid-resolution image, expanding its boundary.
Perform correlation matching, one by one, between each subregion of the low-resolution image and its mapped subregion in the mid-resolution image, and record the first subregion coordinate offset at maximum correlation.
Average all first subregion coordinate offsets to obtain the first coarse-registration coordinate offset from the low-resolution image to the mid-resolution image.
Use the first coarse-registration coordinate offset to correct the error between the low- and mid-resolution images.
(12) Mid-to-high coarse registration
Divide the mid-resolution image into multiple subregions and map every subregion to the high-resolution image by multiplying its coordinates by a second factor, the second factor being the magnification of the high-resolution image relative to the mid-resolution image.
Apply redundancy extension to each mapped subregion in the high-resolution image, expanding its boundary.
Perform correlation matching, one by one, between each subregion of the mid-resolution image and its mapped subregion in the high-resolution image, and record the second subregion coordinate offset at maximum correlation.
Average all second subregion coordinate offsets to obtain the second coarse-registration coordinate offset from the mid-resolution image to the high-resolution image.
Use the second coarse-registration coordinate offset to correct the error between the mid- and high-resolution images.
(13) Fine registration
Choose a foreground region L in the low-resolution image and map it into the corrected mid- and high-resolution images; denote the mapped regions B and C.
Using correlation-based template matching, find the region M corresponding to L within B, obtaining the sample pair (L, M); likewise, find the region H corresponding to L within C, obtaining the sample pair (L, H).
From (L, M) and (L, H), build the sample (L, M, H).
Further, the loss function used in the adversarial learning is:
L_G = L_mse + α·L_p + β·L_adver
where L_mse is the mean squared error, L_p is the perceptual loss, L_adver is the generator's adversarial loss, L_D is the discriminator's adversarial loss, and L_G is the generator's total loss; N is the number of samples and C, H, W are the image dimensions; x^{n} is the n-th sample and I^{n} is the high-resolution image corresponding to the low-resolution image x^{n}; y is the label of the true high-resolution image; G_θ and D_Θ denote the generator and discriminator respectively; Φ is a pre-trained VGG model whose output is a feature map; α and β balance the loss terms.
The beneficial technical effects of the invention are as follows:
(1) The invention trains the generator with an adversarial network so that low-resolution images are reconstructed at high resolution. A high-resolution image close to what a high-magnification objective would capture is obtained from the low-resolution image alone, saving a great deal of slide-scanning time and hardware cost and effectively solving the blurring and artifact problems of pathological-image super-resolution reconstruction.
(2) Preferably, the proposed super-resolution method based on a generative adversarial model reaches its best performance on a well-registered dataset. The proposed image registration scheme achieves sub-pixel registration error while maintaining reasonable speed. Different-resolution images of the same slide are usually scanned by different lenses of the same device, so in general only horizontal and vertical errors exist. However, because a pathological slide image can reach as much as 60k x 60k pixels, slides are often stitched from tiles, which introduces locally inconsistent errors. That is, no single global offset can correct the error between two slides, because the misalignment within a slide may itself be inconsistent. The invention's two-step registration scheme solves this problem smoothly: coarse registration obtains the rough relative displacement of the two slides and narrows the search range, and fine registration, on that basis, accurately registers the small regions that are needed. Coarse registration improves registration speed while fine registration guarantees registration accuracy.
(3) Preferably, the proposed super-resolution method based on a generative adversarial model, combined with a high-quality dataset, can generate high-quality high-resolution images. Dataset construction greatly affects the invention's performance. The proposed data-making method includes region selection; suppressing background regions lets the model learn the differences between the low and high resolutions and thus improves its ability to reconstruct high-resolution images. Without these operations, the relatively sparse cells of pathological slides would leave too many white patches, and because the loss function takes an average, these white patches would impair the model's learning ability.
(4) By using multi-supervised, staged optimization, the proposed super-resolution method based on a generative adversarial model converges faster and produces better results. Compared with single supervision, the added intermediate supervision reduces the difficulty of model convergence and, in effect, explicitly shows the model how image resolution changes during super-resolution reconstruction. In the multi-stage training process, for example, 4x-to-10x is optimized first and then 10x-to-20x; this gradual upsampling scheme reduces the artifacts and blurring that a single large upsampling step easily produces. Using the intermediate supervision as a transition lowers the convergence difficulty caused by the large depth-of-field difference between low and high magnification.
In general, the proposed generative-adversarial super-resolution method is a universal method for real-world acquired data: it is not restricted to pathological slides but is also applicable to other slice data, as long as a suitable dataset is established.
Description of the drawings
Fig. 1 is a flow diagram of a multi-stage image super-resolution reconstruction method based on a generative adversarial network according to the invention;
Fig. 2 is a flow diagram of the image registration provided by an embodiment of the invention;
Fig. 3 is a network structure diagram of a target network model provided by an embodiment of the invention;
Fig. 4 compares a super-resolution image generated by an embodiment of the invention with the true high-resolution image produced by the microscope, where Fig. 4(a) is a true 4x image, Fig. 4(b) is a 20x image generated from the 4x image, and Fig. 4(c) is a true 20x image.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be appreciated that the specific embodiments described herein are only used to explain the present invention, not to limit it. In addition, the technical features involved in the various embodiments described below can be combined with each other as long as they do not conflict.
The terms "first", "second", "third", etc. in the description and claims of this specification are used to distinguish different objects, not to describe a particular order.
Low-resolution, mid-resolution, and high-resolution images are relative concepts in the present invention. For example, a 4x image is low-resolution relative to a 10x image, and the 10x image is the high-resolution image; a 10x image is low-resolution relative to a 20x image, and the 20x image is the high-resolution image. Where low, mid, and high resolution appear together, they refer to the 4x, 10x, and 20x images respectively.
The technical idea of the invention is: register the slice images of the same slide at different resolutions; make a training dataset from the registered slice images; on the training dataset, train a generative adversarial model with multi-supervised, multi-stage training; and use the trained adversarial model to reconstruct low-resolution images into high-resolution images.
In a preferred mode with intermediate supervision added, the invention first optimizes 4x-to-10x and then 10x-to-20x; this gradual upsampling scheme reduces the artifacts and blurring that a single large upsampling step easily produces. Using the intermediate 10x supervision as a transition lowers the convergence difficulty caused by the large depth-of-field difference between 4x and 20x.
Fig. 1 gives the flow diagram of a preferred embodiment of the method, which comprises the following steps:
(1) Using optical microscopy imaging hardware, obtain 4x, 10x, and 20x cellular pathology images of the same slide.
(2) Perform coarse registration with the 4x image as the reference to obtain the image offset of each 10x slide; then perform coarse registration with the 10x image as the reference to obtain the image offset of each 20x slide. The specific steps are as follows:
Training a deep convolutional neural network demands a lot of GPU memory, and full-size slide images are large, which would exhaust computer resources, so the original images must be cut into smaller pictures for training. But because imaging a slide at different resolutions requires frequently moving the stage and switching objectives, the images under different objectives have pixel-scale errors, so the images must be registered before cutting.
Randomly cut 15 subregions of 2000x2000 pixels from the 4x image, multiply their coordinates by 2.5, and map them onto the 10x image. Around each mapped subregion on the 10x image, add 2000 pixels of redundancy in every direction; upsample the 4x subregion by 2.5x and slide it within the redundancy box to obtain the coordinate offset at maximum image correlation. Average the coordinate offsets of the 15 subregions; the resulting offset is the 4x-to-10x image offset of that individual pathological image.
For the registration of the 10x and 20x images, likewise randomly cut 15 subregions of 2000x2000 pixels from the 10x image, multiply their coordinates by 2, and map them onto the 20x image. Around each mapped subregion on the 20x image, add 2000 pixels of redundancy in every direction; upsample the 10x subregion by 2x and slide it within the redundancy box to obtain the coordinate offset at maximum image correlation. Average the 15 subregion coordinate offsets; the resulting offset is the 10x-to-20x image offset of that individual pathological image.
(3) With the coarse-registration coordinate offsets as reference, obtain the coordinate offset of each training-data image within the reference region, and make a dataset of 4x training images with their corresponding 10x and 20x training images, where the 10x and 20x images serve as the supervision of the 4x image. The specific steps are as follows:
Because of stage movement and stitching errors, no single global offset can register two slides completely. Therefore, further registration is necessary when making the training dataset, so that the acquired image pairs match exactly.
With the 4x-to-10x coarse-registration coordinate offset of the individual slide obtained in step (2) as the reference, randomly select a region L of 128x128 pixels on the 4x image and obtain its position on the 10x image after correcting with the coarse-registration offset. Take 600 pixels of redundancy in every direction around that position and denote the redundancy-extended region B. Upsample L by 2.5x and slide it over B; take the region M on B corresponding to L at maximum correlation. If the correlation of L and M is below the threshold 0.8, the registration fails; choose L again and repeat the above steps.
With the 10x-to-20x coarse-registration coordinate offset of the individual slide obtained in step (2) as the reference, take the mid-resolution region M obtained on the 10x image in the previous step and obtain its position on the 20x image after correcting with the coarse-registration offset. Take 600 pixels of redundancy in every direction around that position and denote the redundancy-extended region C. Upsample M by 2x and slide it within C; take the region H on C corresponding to M at maximum correlation. If the correlation of M and H is below the threshold 0.8, the registration fails; choose L again and repeat the above steps. If the correlation meets the threshold, the registration succeeds; save the sample triple (L, M, H).
Repeat the above steps, choosing 7000 samples (L, M, H) per slide.
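The accept-or-redraw logic of this fine-registration sampling loop can be sketched generically. `propose_fn` and `match_fn` are placeholders for the patch selection and template matching described above; the `max_tries` cap is an illustrative safeguard not mentioned in the text:

```python
def pick_sample_pair(propose_fn, match_fn, threshold=0.8, max_tries=100):
    """Fine-registration sampling loop from the embodiment: propose a
    foreground patch L together with its redundancy-extended search
    region, template-match L in that region, and keep the matched pair
    only if the best correlation clears the threshold (0.8 in the
    patent); otherwise redraw L and try again."""
    for _ in range(max_tries):
        L, region = propose_fn()
        corr, H = match_fn(L, region)
        if corr >= threshold:
            return L, H
    return None  # no acceptable pair found within max_tries draws
```

Rejecting low-correlation matches is what guarantees that every saved (L, M, H) triple is an exact spatial correspondence rather than a mis-registered pair.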
(4) Build the generative adversarial network model with a residual network as the generator backbone and VGG as the discriminator backbone, read the training data, and train the neural network that generates high-resolution images. The specific steps are as follows:
As shown in Fig. 3, the network is built with a residual network as the generator backbone, using 15 residual modules, each consisting of two convolutional layers and a PReLU activation function. A 9x9 convolution is attached at the input, at the intermediate supervision, and at the final supervision to enlarge the receptive field. To realize multi-stage supervision, the generator is split into two parts, G1 and G2: G1 extracts features with 10 residual modules and G2 with 5, and G1 and G2 each connect to an upsampling module to realize super-resolution stage by stage. The first stage is supervised with the 10x image and the second stage with the 20x image, and preferably a pixel-shuffle module serves as the upsampling module.
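The pixel-shuffle upsampling module preferred here rearranges channels into spatial positions. A minimal NumPy sketch of that rearrangement (the channel ordering follows the common sub-pixel-convolution convention; it is an illustration, not the patent's code):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) feature map into (C, H*r, W*r): each
    group of r*r channels supplies the r x r sub-pixel grid of one
    output channel, upscaling spatially by factor r."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)     # split channels into (c, r, r)
    x = x.transpose(0, 3, 1, 4, 2)   # interleave: (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)
```

A convolution thus only has to predict r*r feature channels per output channel, and the shuffle turns them into the upsampled image, which avoids the checkerboard artifacts of transposed convolutions.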
The loss function consists of three parts. The first part is the mean squared error between the generated image and the supervision image; comparing pixel-wise errors improves the peak signal-to-noise ratio. For the second part, the generated image and the supervision image are both passed through a VGG network pre-trained on the ImageNet dataset to extract the 31st-layer feature maps, and the squared error between the two sets of feature maps gives the perceptual loss. The third part is the adversarial loss: the generator and discriminator confront and mutually improve each other. The loss function is expressed as:
L_G = L_mse + α·L_p + β·L_adver
where L_mse is the mean squared error, L_p is the perceptual loss, and L_adver is the generator's adversarial loss; L_G is the generator's total loss and L_D is the discriminator's adversarial loss; N is the number of samples and C, H, W are the image dimensions; x^{n} is the n-th sample and I^{n} is the high-resolution image corresponding to x^{n}; y is the label of the true high-resolution image; G_θ and D_Θ denote the generator and discriminator; Φ is a pre-trained VGG model whose output is a feature map; α and β balance the loss terms.
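The three-part generator loss can be sketched numerically. Here `feat_sr` and `feat_hr` stand for the 31st-layer VGG feature maps of the generated and supervision images, `d_fake` is the discriminator's score D(G(x)) in (0, 1), and the default weights `alpha`/`beta` are illustrative, since the patent does not state their values:

```python
import numpy as np

def generator_loss(sr, hr, feat_sr, feat_hr, d_fake, alpha=0.01, beta=0.001):
    """L_G = L_mse + alpha*L_p + beta*L_adver, as in the formula above.
    A non-saturating -log D(G(x)) term is assumed for the adversarial
    part; alpha and beta are illustrative balancing weights."""
    l_mse = np.mean((sr - hr) ** 2)            # pixel-wise mean squared error
    l_p = np.mean((feat_sr - feat_hr) ** 2)    # perceptual (VGG feature) MSE
    l_adver = -np.mean(np.log(d_fake))         # adversarial term for the generator
    return l_mse + alpha * l_p + beta * l_adver
```

In a real training loop the same expression would be built from framework tensors so that gradients flow back into the generator; the NumPy form only makes the arithmetic explicit.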
(5) Read the registered training set and input it to the network for training. Specific steps:
Read the registered 4x, 10x, and 20x data for training. The training strategy is to first optimize the parameters of generator G1 and discriminator D1, supervised by the 10x images, with the learning rates of both set to 1e-4, using the Adam optimizer for 2K rounds. Then fix the parameters of G1 and D1 (set their learning rates to 0 so they cannot be optimized) and optimize the parameters of generator G2 and discriminator D2 in turn, supervised by the 20x images, again with learning rates of 1e-4, the Adam optimizer, and 2K rounds. Finally, release the parameters of G1 and D1 and jointly train G1, G2, D1, and D2, supervised by both the 10x and 20x data, with the learning rate now set to 1e-5, for K rounds in total.
(6) Load the preferred generator network parameters and input an unseen low-magnification low-resolution image into the generator; a high-resolution image of excellent quality is produced, as shown in Fig. 4. Specific steps:
Load each of the saved series of network weights into the network in turn, let the generator produce 5x-magnified high-resolution images, compute the PSNR and SSIM between each generated image and the true image under the high-magnification lens, and average the scores over the test set. The network weights with the highest score are taken as the preferred generator parameters.
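The PSNR used to rank the saved weights has a standard definition. A small sketch (assuming 8-bit images with peak value 255; SSIM is omitted here because its windowed computation is considerably longer):

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between a reconstruction and the true
    high-magnification image: 10*log10(peak^2 / MSE). Identical images
    give infinite PSNR; larger values mean a closer reconstruction."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak * peak / mse)
```

Averaging this score (together with SSIM) over the test set, as the text describes, gives one number per checkpoint from which the best generator weights are selected.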
Input a low-resolution image captured under the low-magnification lens into the generator to produce the high-resolution image. As Fig. 4 shows, the input low-resolution image is highly blurred and nucleus details can hardly be seen, but the image after super-resolution reconstruction is very close to the true 20x image. After reconstruction with the proposed super-resolution method, a low-resolution image can recover details of the 20x image that are absent from the 4x image, so 4x-resolution images can realize enormous value. The greatest advantage of 4x images is fast imaging; their drawback is that they are blurred and impractical. With this method, 4x images can combine image quality with imaging speed.
The low-power image generator provided by the present invention shortens the scan-and-reconstruct time of images while also saving hard-disk storage space.
Those skilled in the art will readily appreciate that the foregoing are merely preferred embodiments of the present invention and are not intended to limit it; any modifications, equivalent substitutions, and improvements made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (7)
1. A multi-supervision image super-resolution reconstruction method based on a generative adversarial network, in which a low-resolution image is input to a generative adversarial network model and the model outputs a high-resolution image, characterized in that the generative adversarial network model is constructed as follows:
(1) Making the training dataset
Extract a low-resolution image and a high-resolution image of the same imaged target region, choose foreground regions L and H in the low-resolution and high-resolution images respectively, register L and H, and obtain a low/high-resolution pair Patch(L, H) of the same region;
(2) Training the model
Build a generative adversarial network model comprising a generator and a discriminator. The low-resolution image L in Patch(L, H) is the generator's input; the generator outputs an image H' resembling the high-resolution image H in Patch(L, H); the discriminator judges whether H and H' are real or fake. Through repeated learning and adversarial competition, training yields the generator.
2. The multi-supervision image super-resolution reconstruction method based on a generative adversarial network according to claim 1, characterized in that the loss function used by the generative adversarial model is:
L_G = L_mse + α·L_p + β·L_adver
where L_mse is the mean squared error, L_p is the perceptual loss, L_adver is the generator's adversarial loss, L_D is the discriminator's adversarial loss, and L_G is the total generator loss; N is the number of samples and C, H, W are the image dimensions; x^(n) is the n-th sample and I^(n) is the high-resolution image corresponding to the low-resolution image x^(n); y is the label of the real high-resolution image; G_θ and D_Θ are the generator and the discriminator respectively; Φ is a pre-trained VGG model whose output is a feature map; α and β balance the loss terms.
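Purely as an illustration of how the terms of this loss combine (not part of the claim), the total generator loss can be sketched in numpy. Here `phi` stands in for the pre-trained VGG feature extractor Φ, `d_fake` for the discriminator's score on the generated image, and the default α, β values are arbitrary placeholders:

```python
# Minimal numpy sketch of L_G = L_mse + alpha * L_p + beta * L_adver.
# phi, d_fake, alpha, and beta are illustrative stand-ins, not the
# patent's actual networks or hyperparameters.
import numpy as np

def generator_loss(sr, hr, d_fake, phi, alpha=1e-2, beta=1e-3, eps=1e-12):
    l_mse = np.mean((sr - hr) ** 2)          # pixel-wise mean squared error
    l_p = np.mean((phi(sr) - phi(hr)) ** 2)  # perceptual loss on feature maps
    l_adver = -np.log(d_fake + eps)          # generator adversarial loss
    return l_mse + alpha * l_p + beta * l_adver
```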
3. The multi-supervision image super-resolution reconstruction method based on a generative adversarial network according to claim 1, characterized in that step (1), making the training dataset, is specifically implemented as:
(11) Coarse registration
Divide the low-resolution image into multiple subregions and map each subregion onto the high-resolution image by multiplying its coordinates by a factor, the factor being the magnification of the high-resolution image relative to the low-resolution image;
Redundantly extend each subregion mapped into the high-resolution image, enlarging its boundary;
Correlation-match, one by one, each subregion mapped into the high-resolution image against its counterpart subregion in the low-resolution image, and record each subregion's coordinate offset at maximum correlation;
Average all subregion coordinate offsets to obtain the coarse-registration coordinate offset from the low-resolution image to the high-resolution image;
Correct the error between the low- and high-resolution images using the coordinate offset obtained by coarse registration;
(12) Fine registration
Choose a foreground region L in the low-resolution image, map the region into the corrected high-resolution image, and denote the mapped region R;
Redundantly extend the region R in the high-resolution image, enlarging its boundary;
Find the region H corresponding to L within R using correlation-based template matching, and take (L, H) as a sample pair, obtaining one data pair Patch(L, H).
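As an illustration of the subregion matching and offset averaging in steps (11)-(12) (not part of the claim), the following brute-force sketch uses sum-of-squared-differences in place of the correlation measure; the subregion and window sizes are whatever the caller supplies:

```python
# Sketch of coarse registration: locate each mapped subregion inside its
# redundantly extended search window, then average the per-subregion offsets.
# SSD matching stands in for the claim's correlation matching.
import numpy as np

def match_offset(template, window):
    """Return (dy, dx) of the best placement of `template` inside `window`."""
    th, tw = template.shape
    wh, ww = window.shape
    best, best_off = None, (0, 0)
    for dy in range(wh - th + 1):
        for dx in range(ww - tw + 1):
            ssd = np.sum((window[dy:dy + th, dx:dx + tw] - template) ** 2)
            if best is None or ssd < best:
                best, best_off = ssd, (dy, dx)
    return best_off

def coarse_offset(templates, windows):
    """Average the offsets of all subregions (rounded to whole pixels)."""
    offs = np.array([match_offset(t, w) for t, w in zip(templates, windows)])
    return tuple(np.round(offs.mean(axis=0)).astype(int))
```

The fine-registration step (12) is the same `match_offset` idea applied once more, to the foreground region L inside the extended region R.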
4. The multi-supervision image super-resolution method based on a generative adversarial network according to claim 1, 2, or 3, characterized in that the generator uses a convolutional neural network with a residual structure and the discriminator uses a VGG model.
5. A multi-supervision image super-resolution reconstruction method based on a generative adversarial network, in which a low-resolution image is input to a generative adversarial network model and the model outputs a high-resolution image, characterized in that the generative adversarial network model is constructed as follows:
(1) Making the training sample set
Extract a low-resolution image, a medium-resolution image, and a high-resolution image of the same imaged target region;
Choose a foreground region L in the low-resolution image and map it to the corresponding regions B and C in the medium- and high-resolution images respectively; register L with B to obtain a sample pair (L, M); register L with C to obtain a sample pair (L, H);
Build a sample (L, M, H) from (L, M) and (L, H);
(2) Training the model
Build an adversarial network model comprising a first generator and a first discriminator. The low-resolution image L in the sample (L, M, H) is the first generator's input; the first generator outputs an image M' resembling the medium-resolution image M; the first discriminator judges whether M and M' are real or fake. Through repeated learning and adversarial competition, training yields the first generator;
Build an adversarial network model comprising a second generator and a second discriminator. The output M' of the first generator is the second generator's input; the second generator outputs an image H' resembling the high-resolution image H; the second discriminator judges whether H and H' are real or fake. Through repeated learning and adversarial competition, training yields the second generator;
Jointly train the first generator and the second generator to finally determine both.
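As a toy illustration of the cascaded inference in this claim (not part of the claim itself): the second generator consumes the first generator's output, L → M' → H'. Nearest-neighbour upsampling via `np.kron` stands in for the real generators, and the 2x stage factors are illustrative (the patent's stages correspond to 4x → 10x → 20x lenses):

```python
# Toy cascade: two stand-in "generators" composed back to back.
import numpy as np

def make_upsampler(factor):
    """Nearest-neighbour upsampler standing in for a trained generator."""
    def g(img):
        return np.kron(img, np.ones((factor, factor)))
    return g

g1 = make_upsampler(2)   # stands in for the first generator (L -> M')
g2 = make_upsampler(2)   # stands in for the second generator (M' -> H')

def cascade(low):
    mid = g1(low)        # M': intermediate-resolution estimate
    return g2(mid)       # H': final high-resolution estimate
```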
6. The multi-supervision image super-resolution reconstruction method based on a generative adversarial network according to claim 5, characterized in that step (1), making the training sample pairs, is specifically implemented as:
(11) Low-to-medium coarse registration
Divide the low-resolution image into multiple subregions and map each subregion onto the medium-resolution image by multiplying its coordinates by a first factor, the first factor being the magnification of the medium-resolution image relative to the low-resolution image;
Redundantly extend each subregion mapped into the medium-resolution image, enlarging its boundary;
Correlation-match, one by one, each subregion mapped into the medium-resolution image against its counterpart subregion in the low-resolution image, and record each first subregion coordinate offset at maximum correlation;
Average all first subregion coordinate offsets to obtain the first coarse-registration coordinate offset from the low-resolution image to the medium-resolution image;
Correct the error between the low- and medium-resolution images using the first coarse-registration coordinate offset;
(12) Medium-to-high coarse registration
Divide the medium-resolution image into multiple subregions and map each subregion onto the high-resolution image by multiplying its coordinates by a second factor, the second factor being the magnification of the high-resolution image relative to the medium-resolution image;
Redundantly extend each subregion mapped into the high-resolution image, enlarging its boundary;
Correlation-match, one by one, each subregion mapped into the high-resolution image against its counterpart subregion in the medium-resolution image, and record each second subregion coordinate offset at maximum correlation;
Average all second subregion coordinate offsets to obtain the second coarse-registration coordinate offset from the medium-resolution image to the high-resolution image;
Correct the error between the medium- and high-resolution images using the second coarse-registration coordinate offset;
(13) Fine registration
Choose a foreground region L in the low-resolution image and map it into the corrected medium- and high-resolution images, denoting the mapped regions B and C;
Find the region M corresponding to L on B using correlation-based template matching, obtaining a sample pair (L, M); find the region H corresponding to L on C using correlation-based template matching, registering L with C, obtaining a sample pair (L, H);
Build a sample (L, M, H) from (L, M) and (L, H).
7. The multi-supervision image super-resolution reconstruction method based on a generative adversarial network according to claim 5 or 6, characterized in that the loss function used in the adversarial learning and training is:
L_G = L_mse + α·L_p + β·L_adver
where L_mse is the mean squared error, L_p is the perceptual loss, L_adver is the generator's adversarial loss, L_D is the discriminator's adversarial loss, and L_G is the total generator loss; N is the number of samples and C, H, W are the image dimensions; x^(n) is the n-th sample and I^(n) is the high-resolution image corresponding to the low-resolution image x^(n); y is the label of the real high-resolution image; G_θ and D_Θ are the generator and the discriminator respectively; Φ is a pre-trained VGG model whose output is a feature map; α and β balance the loss terms.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910533885.4A CN110322403A (en) | 2019-06-19 | 2019-06-19 | A kind of more supervision Image Super-resolution Reconstruction methods based on generation confrontation network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110322403A true CN110322403A (en) | 2019-10-11 |
Family
ID=68119854
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910533885.4A Pending CN110322403A (en) | 2019-06-19 | 2019-06-19 | A kind of more supervision Image Super-resolution Reconstruction methods based on generation confrontation network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110322403A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102428479A (en) * | 2009-04-17 | 2012-04-25 | 里弗兰医疗集团公司 | Chest X-ray registration, subtraction and display |
US20180075581A1 (en) * | 2016-09-15 | 2018-03-15 | Twitter, Inc. | Super resolution using a generative adversarial network |
CN108965658A (en) * | 2018-06-25 | 2018-12-07 | 中国科学院自动化研究所 | Essence registration image collecting device towards depth Digital Zoom |
CN109146784A (en) * | 2018-07-27 | 2019-01-04 | 徐州工程学院 | A kind of image super-resolution rebuilding method based on multiple dimensioned generation confrontation network |
CN109509152A (en) * | 2018-12-29 | 2019-03-22 | 大连海事大学 | A kind of image super-resolution rebuilding method of the generation confrontation network based on Fusion Features |
CN109685716A (en) * | 2018-12-14 | 2019-04-26 | 大连海事大学 | A kind of image super-resolution rebuilding method of the generation confrontation network based on Gauss encoder feedback |
Non-Patent Citations (3)
Title |
---|
Lin Yilun et al., "The New Frontier of AI Research: Generative Adversarial Networks", Acta Automatica Sinica *
Qin Fengqing et al., "A Video Super-Resolution Reconstruction Method Based on Sub-pixel Registration", Journal of Optoelectronics · Laser *
Ma Haoyu et al., "An Image Super-Resolution Algorithm Based on a Small Recursive Convolutional Neural Network", Acta Photonica Sinica *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111489404A (en) * | 2020-03-20 | 2020-08-04 | 深圳先进技术研究院 | Image reconstruction method, image processing device and device with storage function |
CN111489404B (en) * | 2020-03-20 | 2023-09-05 | 深圳先进技术研究院 | Image reconstruction method, image processing device and device with storage function |
CN111553840A (en) * | 2020-04-10 | 2020-08-18 | 北京百度网讯科技有限公司 | Image super-resolution model training and processing method, device, equipment and medium |
CN111652850A (en) * | 2020-05-08 | 2020-09-11 | 怀光智能科技(武汉)有限公司 | Screening system based on mobile device |
CN111652851A (en) * | 2020-05-08 | 2020-09-11 | 怀光智能科技(武汉)有限公司 | Super-resolution microscopic system based on mobile device |
CN113191949A (en) * | 2021-04-28 | 2021-07-30 | 中南大学 | Multi-scale super-resolution pathological image digitization method and system and storage medium |
CN113191949B (en) * | 2021-04-28 | 2023-06-20 | 中南大学 | Multi-scale super-resolution pathology image digitizing method, system and storage medium |
CN114819163A (en) * | 2022-04-11 | 2022-07-29 | 合肥本源量子计算科技有限责任公司 | Quantum generation countermeasure network training method, device, medium, and electronic device |
CN114819163B (en) * | 2022-04-11 | 2023-08-08 | 本源量子计算科技(合肥)股份有限公司 | Training method and device for quantum generation countermeasure network, medium and electronic device |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
AD01 | Patent right deemed abandoned | Effective date of abandoning: 20231201 |