CN115063293A - Rock microscopic image super-resolution reconstruction method using a generative adversarial network - Google Patents

Rock microscopic image super-resolution reconstruction method using a generative adversarial network

Info

Publication number
CN115063293A
Authority
CN
China
Prior art keywords
loss function
resolution
network
image
focus
Prior art date
Legal status
Pending
Application number
CN202210606345.6A
Other languages
Chinese (zh)
Inventor
白相志
许欣然
孙衡
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University
Priority to CN202210606345.6A
Publication of CN115063293A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4046 Scaling the whole image or part thereof using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/28 Indexing scheme for image data processing or generation, in general involving image processing hardware
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10056 Microscopic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The invention provides a rock microscopic image super-resolution reconstruction method using a generative adversarial network, comprising the following specific steps. Step one: generate the input data; the input of the resolution-enhancement branch is the three-channel original image, and the input of the focus-balance branch is a focus attention map, computed from the original image in two steps: blur-kernel filtering followed by guided filtering. Step two: build the dual-branch forward-propagation network; the two branches are connected by a bidirectional biased fusion module (BBFM), which builds on the MS-CAM module, fully accounts for global and local information, and realizes bidirectional biased regulation through a flexible combination of multiplication, addition, and concatenation aided by trainable weights. Step three: train with the loss function by back-propagation; after forward propagation, the generative adversarial loss, mean-square-error loss, perceptual loss, and focus-constraint loss are computed, and the network is trained by back-propagation with stochastic gradient descent.

Description

Rock microscopic image super-resolution reconstruction method using a generative adversarial network
Technical Field
The invention relates to a rock microscopic image super-resolution reconstruction method based on a generative adversarial network. Relying only on an image captured under a low-magnification objective, it generates the in-focus field of view of the super-resolved image in a balanced manner through a focus-area attention branch and a bidirectional fusion module, yielding a super-resolved image with sharp texture and a large depth-of-field effect. The method lies at the intersection of deep learning and mineralogy, and mainly concerns microscopic imaging of mineral thin-section samples and deep-learning-based super-resolution restoration. It has broad application prospects in dynamic and global observation tasks on mineral microscopic images.
Background
Observing and studying rock thin sections with a polarizing microscope is one of the most basic, effective, rapid, and inexpensive methods in mineralogy. A sample is ground into a slice about 0.03 mm thick, placed on the stage of a polarizing microscope, and photographed, yielding a microscopic image of the rock thin section. Such images comprehensively and faithfully reflect the rock's microstructure, grain size, mineral composition, and other characteristics, and can help researchers accurately determine the rock's texture, type, pore-evolution law, fracture development and control, and related properties.
The resolution of a microscope is the minimum distance between two object points that the microscope can clearly distinguish, and it directly reflects the ability of the imaging system to capture the fine detail of the object. Compared with a low-resolution image, a high-resolution image generally has higher pixel density, richer texture detail, and higher reliability, and can better support scientific research. However, the price of a microscope rises steeply with magnification, and a conventional optical microscope is bounded by the optical diffraction limit, with lateral and axial resolution stalling at roughly 200 nm and 500 nm respectively. Observation techniques such as scanning electron microscopy and scanning tunneling microscopy can severely damage the sample. Super-resolution reconstruction algorithms instead improve the resolution of the imaging result computationally; they are fast, efficient, and inexpensive, are not limited by the hardware system, do not damage the sample, and have attracted growing attention from researchers in recent years (ref: Wang et al., Deep learning enables cross-modality super-resolution in fluorescence microscopy, Nature Methods, 2019, 16(1): 103-110).
In addition, rock samples are usually mixtures of different minerals whose hardness varies widely. Rock hardness is the ability of a rock to resist scratching of, or indentation into, its surface by other objects; indentation hardness is measured in Pa or MPa, while scratch hardness is given on the dimensionless Mohs scale used below. Taking granite, a magmatic rock, as an example, its main components include quartz, mica, and feldspar: quartz has Mohs hardness 7, feldspar about 6-6.5, and mica only 2-3. Minerals with such large hardness differences are difficult to grind into a uniformly smooth surface when preparing a thin-section microscopic sample. Because of the small depth of field, a sharpness difference across regions of the field of view is commonly observed at magnifications above about ten times, and since magnification and depth of field are inversely related, the difference becomes increasingly pronounced as magnification grows. In dynamic observation tasks such as capturing the freezing point of rock inclusions, the depth-of-field limitation means that over a complete experiment the experimenter can keep only one or two targets in the field in focus at the same time, causing a large amount of unnecessary time and labor.
Aiming at the super-resolution reconstruction of microscopic imaging results, and taking large-depth-of-field imaging as its entry point, the invention provides a dual-branch generative-adversarial-network super-resolution reconstruction method for rock polarized-light microscopic images.
Disclosure of Invention
In order to obtain a high-resolution microscopic image while alleviating the small depth of field that accompanies increased magnification, the invention provides a dual-branch generative-adversarial-network super-resolution reconstruction method for rock polarized-light microscopic images.
The invention relates to a dual-branch generative-adversarial-network super-resolution reconstruction method, in which the network comprises one generator and two discriminators. The generator is formed by a resolution-enhancement branch and a focus-balance branch connected in parallel and linked through a bidirectional Biased Fusion Module (BBFM); its final output is the restored high-magnification under-microscope image. Both discriminators are VGG networks, used respectively to discriminate the outputs of the resolution-enhancement branch and the focus-balance branch (see Fig. 1). The specific steps are as follows:
Step one: generate the input data. The input of the resolution-enhancement branch is the three-channel original image; the input of the focus-balance branch is a focus attention map. Preferably, the focus attention map is computed from the original image in two steps: blur-kernel filtering followed by guided filtering.
Step two: build the dual-branch forward-propagation network. The two branches are connected by the bidirectional biased fusion module BBFM. The module builds on the MS-CAM module, fully accounts for global and local information, and realizes bidirectional biased regulation through a flexible combination of multiplication, addition, and concatenation aided by trainable weights (see Fig. 2).
Step three: train with the loss function by back-propagation. After forward propagation, the generative adversarial loss, mean-square-error loss, perceptual loss, and focus-constraint loss are computed, and the network is trained by back-propagation with an adaptive-learning-rate gradient-descent method.
Advantages and effects:
the method generates the high-power under-mirror imaging result based on the low-power under-mirror image, has the characteristics of high efficiency, convenience and low cost, and can obtain more delicate mineral textures compared with a non-generated network by generating the over-resolution result through the antagonistic network. Furthermore, the method balances the relation between the depth of field and the resolution ratio through focusing attention branch and loss function constraint, and can obtain a high-power image with a large depth of field effect. In the process, the method avoids the problems of data loss and insufficient fusion caused by simple addition or splicing through the bidirectional biased fusion module BBFM, bidirectionally exchanges the intermediate characteristics of two branches, plays a mutual promotion role, and further improves the method effect. The method has the advantages that the method has wide application value in a rock sample microscopic dynamic observation task.
Drawings
Fig. 1 shows the basic structure of the dual-branch generative adversarial network of the invention.
Fig. 2 shows the basic structure of the bidirectional biased fusion module.
Fig. 3a shows the focus-attention computation on a super-resolved image, Fig. 3b on a high-magnification under-microscope image, and Fig. 3c on a low-magnification under-microscope image. The four panels in each figure show, respectively, the original input image, the blurred image, the guide map used for guided filtering, and the resulting focus attention map.
Fig. 4 shows the super-resolution result of the invention on a cross-polarized olivine sample.
Fig. 5 shows the training process of the invention on cross-polarized marble with fused-label super-resolution.
Detailed Description
For a better understanding of the technical solutions of the invention, embodiments are further described below with reference to the accompanying drawings.
The numerical aperture of a microscope measures the angular range of light gathered by the objective and characterizes its light-gathering power. It is usually denoted NA and computed as:
NA=n·sin(α),
where n is the refractive index of the medium and α is half the aperture angle. Light passing through the aperture is diffracted and forms Airy disks, each consisting of a central bright region and a series of concentric rings, on the imaging plane; when Airy disks overlap, the corresponding image points become indistinguishable. The resolution r of the microscope is the shortest distance between two points on the specimen that can still be distinguished as separate entities, computed as follows:
r=λ/(2NA),
where λ denotes the wavelength. Hence, the larger the numerical aperture, the smaller the resolvable distance r, and the sharper the image.
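As an illustrative numerical check (values assumed for illustration, not taken from the invention): for a dry objective in air with n = 1 and half aperture angle α = 30°, NA = 1 × sin(30°) = 0.5; for green light with λ = 550 nm, the resolution is r = 550 nm / (2 × 0.5) = 550 nm, while doubling the numerical aperture to 1.0, as an immersion objective can, halves r to 275 nm.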
The relationship between microscope magnification and resolution is complex. The effective magnification is the ratio of the resolution of the naked eye to the resolution of the microscope; for a polarizing microscope it is about 0.2 mm / 0.2 μm = 1000 times. Within this range, the microscope magnification W is calculated as W = W_1·W_2·W_3, where W_1, W_2, W_3 are the magnifications of the objective, the eyepiece, and the built-in add-on lens system, respectively, each depending only on its own component. W_1 equals the ratio of the optical tube length of the microscope to the focal length of the objective. Therefore, with the tube length and objective diameter unchanged, the magnification, the numerical aperture, and the image sharpness are positively correlated.
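For example (illustrative values, not from the invention): a 20× objective combined with a 10× eyepiece and a 1× built-in add-on system gives W = W_1·W_2·W_3 = 20 × 10 × 1 = 200, well within the effective-magnification limit of about 1000 estimated above.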
Depth of field is an important characteristic of a microscope. It is defined as the ability of the lens to maintain the desired image quality (spatial frequency at a given contrast) without refocusing as the object plane moves slightly nearer or farther than best focus, i.e., the distance between the nearest and the farthest object planes that are simultaneously in focus. Like the resolution, the depth of field d_tot of the microscope is directly related to the numerical aperture and the wavelength:
d_tot = λ·n/NA² + n·e/(M·NA),
where n is the refractive index of the medium between the cover glass and the objective front lens, e is the limiting resolving distance of the detector, and M is the lateral magnification. Thus the numerical aperture is inversely related to the depth of field: the larger the numerical aperture, the smaller the depth of field. It follows that a large depth of field and a high magnification are fundamentally in conflict at the hardware level.
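As an illustrative numerical check (values assumed, not from the invention): with λ = 550 nm, n = 1, NA = 0.5, lateral magnification M = 20, and detector limiting distance e = 5 μm, d_tot = 550/0.5² + 5000/(20 × 0.5) = 2200 + 500 = 2700 nm ≈ 2.7 μm; raising NA to 0.75 shrinks the first term to about 978 nm, showing how quickly depth of field collapses as numerical aperture, and with it usable magnification, increases.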
in the invention, a focus area of an input low-magnification image is used as auxiliary information to balance the depth of field and the image definition. The method comprises the following specific steps:
the method comprises the following steps: and generating input data. The in-focus area has more detail than the out-of-focus area, and more information is lost after blur filtering. Therefore, in the calculation of the focus area attention map, the present invention selects a guiding filtering algorithm based on the blur difference (ref: Qiaxiaohua et al, guiding filtering, Signal Processing: Image Communication, 2012,35(6): 1397-. Guided filtering is an image filtering technique in which an algorithm considers a point on a function to be a linear combination of points in adjacent parts, thereby expressing a complex function by an average value of a series of local linear functions (ref: hooming et al, guided filtering, pattern recognition and machine intelligence 2012,35(6): 1397-. The edge and the texture of the input original image I are kept through a guide graph RFM, the advantages of effective edge keeping and non-iterative computation of bilateral filtering are achieved, and the speed is greatly increased to O (1) magnitude. In the process of constructing the focus attention map, firstly, calculating a difference value between the original image I and the degradation map M after the fuzzy kernel convolution as a guide map RFM, and representing the position of a pixel by (x, y), wherein the calculation formula is as follows:
M(x,y) = I(x,y) * f_m,
RFM(x,y) = |I(x,y) − M(x,y)|,
where f_m denotes the blur kernel; taking edge loss into account, the kernel size is set to 3.
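The guide-map computation above can be sketched as follows (a minimal sketch, assuming a 3×3 mean filter for the blur kernel f_m and grayscale NumPy input; the text fixes only the kernel size, not its shape):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def blur_difference_guide(I: np.ndarray, ksize: int = 3) -> np.ndarray:
    """Guide map RFM = |I - I * f_m| from a grayscale image I.

    The blur kernel f_m is assumed to be a ksize x ksize mean filter;
    the invention specifies only that the kernel size is 3.
    """
    I = I.astype(np.float32)
    M = uniform_filter(I, size=ksize)  # degraded map M = I * f_m
    return np.abs(I - M)               # blur difference: large where in focus
```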
The guided filter is a local linear model between the guide map RFM and the filtered output map Q; with i denoting the pixel position and ω_k a local window, the relationship is:
Q_i = a_k·RFM_i + b_k,  ∀ i ∈ ω_k,
where a_k, b_k are constant coefficients within the window. The core of guided filtering is to compute the constant coefficients that minimize the difference between the input P and the output map Q, i.e., that bring the noise N closest to 0, which yields the following constraint and cost function:
Q_i = P_i − N_i,
E(a_k, b_k) = Σ_{i∈ω_k} [ (a_k·RFM_i + b_k − P_i)² + ε·a_k² ],
where ε is a regularization parameter. Differentiating and setting the derivatives to zero gives the constant-coefficient formulas:
a_k = ( (1/|ω|)·Σ_{i∈ω_k} RFM_i·P_i − μ_k·P̄_k ) / (σ_k² + ε),
b_k = P̄_k − a_k·μ_k,
where μ_k and σ_k² denote the mean and variance of the RFM map within the window, P̄_k denotes the mean of the input image within the window, and |ω| denotes the total number of pixels in the window; in the invention the window size is chosen as 3 × 3. It should be noted that the blur-difference-based guided-filtering algorithm involves many convolution computations, so if the input training images are small, the edge-reconstruction effect suffers; the cropped label images should therefore be 256 × 256 or larger. If the available computation is insufficient, the guided-filtering step may be omitted and the blur difference alone used as the focus-attention input to the network.
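Under the same assumptions, the guided-filtering step can be sketched as follows (box-filter means over 3×3 windows; the choice of the original image I as the filter input P, and ε = 1e-4, are assumptions, since the text does not pin them down):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(RFM: np.ndarray, P: np.ndarray,
                  radius: int = 1, eps: float = 1e-4) -> np.ndarray:
    """Guided filter: Q_i = mean_k(a_k) * RFM_i + mean_k(b_k)."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x.astype(np.float32), size=size)
    mu, Pbar = mean(RFM), mean(P)     # window means of guide and input
    var = mean(RFM * RFM) - mu * mu   # window variance of the guide
    cov = mean(RFM * P) - mu * Pbar   # guide/input covariance
    a = cov / (var + eps)             # a_k = cov / (sigma_k^2 + eps)
    b = Pbar - a * mu                 # b_k = Pbar_k - a_k * mu_k
    return mean(a) * RFM + mean(b)    # averaged local linear model

# Hypothetical usage: focus attention map from a grayscale image I
# RFM = blur_difference_guide(I)
# attention = guided_filter(RFM, I)
```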
Step two: build the dual-branch forward-propagation network. The network comprises two discriminators and one generator. The generator consists of a resolution-enhancement branch and a focus-balance branch, connected in parallel through the bidirectional biased fusion module BBFM (see Fig. 1). BBFM builds on the MS-CAM module; its core idea is to make the attention weights trainable and to regulate the intermediate features bidirectionally for full fusion. The MS-CAM module improves the Squeeze-and-Excitation module of SENet by considering local and global views simultaneously and by replacing the fully-connected layer with pointwise convolution (see Fig. 2). With x as input, MC(x) is computed as:
MC(x)=Sigmoid(GAP(SE(x))+SE(x))
where GAP denotes global average pooling, the SE module comprises five computation stages (a pointwise convolution layer, a BatchNorm layer, a ReLU activation layer, a pointwise convolution layer, and a BatchNorm layer), and Sigmoid is the chosen nonlinear activation function. On the basis of these attention weights, the invention designs the BBFM module and its simplified version BBFM_O. For the BBFM module, X, Y denote the inputs and X_out, Y_out the outputs; the specific calculation formulas are as follows:
[The expressions for X_out and Y_out appear as equation images in the original publication.]
MC_1, MC_2 denote two independent MC computations. The simplified BBFM_O module computes MC only once, with formulas likewise given as equation images in the original publication.
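The MC(x) weight of the formula above can be sketched in PyTorch as follows (a minimal sketch; the channel-reduction ratio r is an assumption, since the text lists only the layer types):

```python
import torch
import torch.nn as nn

class MCWeight(nn.Module):
    """MS-CAM-style attention weight MC(x) = Sigmoid(GAP(SE(x)) + SE(x))."""

    def __init__(self, channels: int, r: int = 4):
        super().__init__()
        mid = max(channels // r, 1)
        # SE block as listed in the text: pointwise conv, BN, ReLU,
        # pointwise conv, BN (fully-connected layers replaced by 1x1 convs).
        self.se = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)  # global average pooling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local = self.se(x)                  # local channel context
        glob = self.gap(local)              # global context of the SE output
        return torch.sigmoid(glob + local)  # broadcast add, then Sigmoid
```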
Step three: train the network with the loss function by back-propagation. The discriminators of the network are both VGG networks, and both use the feature values before the VGG activation layer to compute the reconstruction loss.
The network supports two different training modes, which differ in how the loss function is computed according to whether the input data labels are fusion labels. The network loss function consists of three parts: the generative adversarial loss, the super-resolution constraint loss, and the focus constraint loss. The super-resolution constraint loss comprises a pixel mean-square-error loss and a perceptual loss, used respectively to optimize the pixel-level fidelity and the perceptual quality of the network; the focus constraint loss comprises both an accuracy evaluation of the reconstructed high-resolution focus map and a comparison between the two branch outputs; the generative adversarial loss comprises the training interaction with the two discriminators. The generator loss loss_G is thus defined as:
loss_G = α_1·loss_pixel + α_2·loss_lpips + β_1·loss_foc + β_2·loss_cons + γ_1·loss_adv + γ_2·loss_adv_foc,
where α_1, α_2, β_1, β_2, γ_1, γ_2 are adjustable weight parameters, chosen in practice with magnitudes of approximately 0.1, 1, 0.01, 0.5, 0.05, and 0.005, respectively. loss_pixel, loss_lpips, loss_foc, loss_cons, loss_adv, and loss_adv_foc denote, respectively, the pixel mean-square-error loss, the perceptual loss, the reconstructed-focus-map loss, the branch-consistency loss, and the generative adversarial losses of the two discriminators. G denotes the network generator; D and D_foc denote, respectively, the super-resolution reconstruction discriminator and the focus reconstruction discriminator; F is the focus-map computation function; and I_L, I_H, I_S, I_F denote, respectively, the input low-resolution image, the high-resolution image serving as the label, the output super-resolved reconstruction result, and the output focus-area reconstruction result. The pixel mean-square-error loss loss_pixel evaluates the pixel-level difference between the generated super-resolved image and the high-resolution label image, ensuring that the reconstruction result is faithful and accurate; with E[·] denoting expectation, its formula is:
loss_pixel = E[ ‖I_S − I_H‖² ].
The perceptual loss loss_lpips constrains the texture similarity between the reconstruction result and the label image; it is measured as the mean-square-error loss between the features extracted by a trained VGG network φ:
loss_lpips = E[ ‖φ(I_S) − φ(I_H)‖² ].
In the focus constraint loss, the term loss_foc evaluates the accuracy of the reconstructed high-resolution focus map; its formula takes two different forms depending on the training data set. When the training data set consists of original high- and low-magnification data pairs collected by the microscope, the depth-of-field effect of the focus area of the reconstruction result should be closer to that of the input low-magnification image, so the formula is:
loss_foc = E[ ‖F(I_S) − P(F(I_L))‖² ],
where the function P up-samples the input data. When the input data set consists of the original low-magnification microscope data and fused high-magnification microscope data, the constraint can be established directly between the result and the label:
loss_foc = E[ ‖F(I_S) − F(I_H)‖² ].
The focus constraint loss also includes a consistency constraint between the two branch outputs:
loss_cons = E[ ‖F(I_S) − I_F‖² ].
The generative adversarial loss functions are:
loss_adv = E[ −log D(I_S) ],  loss_adv_foc = E[ −log D_foc(I_F) ].
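Assembling these terms, the generator objective can be sketched as follows (a minimal PyTorch sketch based on the reconstructions above; the focus-map function F_map, the VGG feature extractor phi, and the discriminator interfaces are assumptions, and the fused-label form of loss_foc is used):

```python
import torch
import torch.nn.functional as Fn

def generator_loss(I_S, I_H, I_F, F_map, phi, D, D_foc,
                   a1=0.1, a2=1.0, b1=0.01, b2=0.5, g1=0.05, g2=0.005):
    """Weighted sum of the six generator loss terms (fused-label mode).

    D and D_foc are assumed to return realness scores in (0, 1).
    """
    loss_pixel = Fn.mse_loss(I_S, I_H)                   # pixel MSE
    loss_lpips = Fn.mse_loss(phi(I_S), phi(I_H))         # perceptual (VGG features)
    loss_foc = Fn.mse_loss(F_map(I_S), F_map(I_H))       # focus-map accuracy
    loss_cons = Fn.mse_loss(F_map(I_S), I_F)             # branch consistency
    loss_adv = -torch.log(D(I_S) + 1e-8).mean()          # adversarial (SR branch)
    loss_adv_foc = -torch.log(D_foc(I_F) + 1e-8).mean()  # adversarial (focus branch)
    return (a1 * loss_pixel + a2 * loss_lpips + b1 * loss_foc
            + b2 * loss_cons + g1 * loss_adv + g2 * loss_adv_foc)
```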
The loss function thus attends to perceptual quality while guaranteeing the fidelity and reliability of the network reconstruction result, and balances the focus area of the reconstruction result.
By adjusting the loss function, the invention supports training on input image pairs in two different correspondence modes. When the original high- and low-magnification under-microscope images of olivine are used as training data, loss_foc is set to the mean square error between the reconstructed focus region and the up-sampled input image; the results are shown in Fig. 4, where L, S, and H denote, respectively, the input microscope image at 5× magnification, the network's super-resolved reconstruction, and the 20× microscope image serving as the label.
When the fusion of the original low- and high-magnification marble images is used as the training image, loss_foc is defined as the mean square error between the reconstructed focus area and the focus area of the label image; Fig. 5 shows how the reconstruction evolves over training rounds. The left panel shows the input low-resolution image; the right panels show, from left to right and top to bottom, the change of the reconstructed high-resolution image over 64 training rounds, in which the gradual enrichment and refinement of the image texture can be perceived directly.

Claims (8)

1. A rock microscopic image super-resolution reconstruction method using a generative adversarial network, characterized in that the in-focus field of view of the super-resolution result is generated in a balanced manner through a focus-area attention branch and a bidirectional fusion module, obtaining a high-magnification image with sharp texture, strong fidelity, and large depth of field, the method comprising the following steps:
step one: generating the input data; the input of the resolution-enhancement branch is the three-channel original image; the input of the focus-balance branch is a focus attention map; the focus attention map is computed from the original image in two steps, blur-kernel filtering and guided filtering;
step two: building a dual-branch forward-propagation network; the two branches are connected by a bidirectional biased fusion module BBFM; the module builds on the MS-CAM module, fully considers global and local information, and realizes bidirectional biased regulation through a flexible combination of multiplication, addition, and concatenation aided by trainable weights;
step three: reverse training with the loss function; after forward propagation, the generative adversarial loss, mean-square-error loss, perceptual loss, and focus-constraint loss are computed, and the network is trained by back-propagation with an adaptive-learning-rate gradient-descent method.
2. The rock microscopic image super-resolution reconstruction method using a generative adversarial network according to claim 1, characterized in that: in step one, during construction of the focus attention map, first the difference between the original image I and the degraded map M obtained by blur-kernel convolution is computed as the guide map RFM; with (x, y) denoting the pixel position, the formulas are:
M(x,y) = I(x,y) * f_m,
RFM(x,y) = |I(x,y) − M(x,y)|,
where f_m denotes the blur kernel; taking edge loss into account, the kernel size is set to 3.
3. The rock microscopic image super-resolution reconstruction method using a generative adversarial network according to claim 1, characterized in that: in step one, the guided filter is a local linear model between the guide map RFM and the filtered output map Q; with i denoting the pixel position and ω_k a local window, the relationship is:
Q_i = a_k·RFM_i + b_k,  ∀ i ∈ ω_k,
where a_k, b_k are constant coefficients; the core of guided filtering is to compute the constant coefficients that minimize the difference between the input P and the output map Q, i.e., that bring the noise N closest to 0, giving the constraint and cost function:
Q_i = P_i − N_i,
E(a_k, b_k) = Σ_{i∈ω_k} [ (a_k·RFM_i + b_k − P_i)² + ε·a_k² ],
where ε is a regularization parameter; differentiating and solving yields the constant-coefficient formulas:
a_k = ( (1/|ω|)·Σ_{i∈ω_k} RFM_i·P_i − μ_k·P̄_k ) / (σ_k² + ε),
b_k = P̄_k − a_k·μ_k,
where μ_k and σ_k² denote the mean and variance of the RFM map within the window, P̄_k denotes the mean of the input image within the window, ω denotes the total number of pixels in the window, and the window size is 3 × 3.
4. The rock microscopic image super-resolution reconstruction method using a generative adversarial network according to claim 2 or 3, characterized in that: the cropped label image size is 256 × 256 or larger; if the available computation is insufficient, the guided-filtering step is omitted and only the blur difference is used as the focus-attention input to the network.
5. The rock microscopic image super-resolution reconstruction method using a generative adversarial network according to claim 1, characterized in that: in step two, the network comprises two discriminators and one generator; the generator consists of a resolution-enhancement branch and a focus-balance branch, connected in parallel through the bidirectional biased fusion module BBFM; the fully-connected layer is replaced with pointwise convolution, and with x as input, MC(x) is computed as:
MC(x)=Sigmoid(GAP(SE(x))+SE(x))
where GAP denotes global average pooling, the SE module comprises five computation stages (a pointwise convolution layer, a BatchNorm layer, a ReLU activation layer, a pointwise convolution layer, and a BatchNorm layer), and Sigmoid is the chosen nonlinear activation function.
6. The rock microscopic image super-resolution reconstruction method using a generative adversarial network according to claim 5, characterized in that: a BBFM module and its simplified version BBFM_O are designed; for the BBFM module, X, Y denote the inputs and X_out, Y_out the outputs, with the specific calculation formulas as follows:
[The expressions for X_out and Y_out appear as equation images in the original publication.]
MC_1, MC_2 denote two independent MC computations; the simplified BBFM_O module computes MC only once, with formulas likewise given as equation images in the original publication.
7. The rock microscopic image super-resolution reconstruction method using a generative adversarial network according to claim 1, characterized in that: in step three, the network discriminators are both VGG networks, and both use the feature values before the VGG activation layer to compute the reconstruction loss;
the network supports two different training modes, which differ in how the loss function is computed according to whether the input data labels are fusion labels; the network loss function consists of three parts: the generative adversarial loss, the super-resolution constraint loss, and the focus constraint loss; the super-resolution constraint loss comprises a pixel mean-square-error loss and a perceptual loss, used respectively to optimize the pixel-level fidelity and the perceptual quality of the network; the focus constraint loss comprises an accuracy evaluation of the reconstructed high-resolution focus map and a comparison between the two branch outputs; the generative adversarial loss comprises the training interaction with the two discriminators; the generator loss loss_G is thus defined as:
loss_G = α_1·loss_pixel + α_2·loss_lpips + β_1·loss_foc + β_2·loss_cons + γ_1·loss_adv + γ_2·loss_adv_foc,
where α_1, α_2, β_1, β_2, γ_1, γ_2 are adjustable weight parameters, selected in practice with magnitudes of 0.1, 1, 0.01, 0.5, 0.05, and 0.005, respectively; loss_pixel, loss_lpips, loss_foc, loss_cons, loss_adv, and loss_adv_foc denote, respectively, the pixel mean-square-error loss, the perceptual loss, the reconstructed-focus-map loss, the branch-consistency loss, and the generative adversarial losses of the two discriminators.
8. The rock microscopic image super-resolution reconstruction method using a generative adversarial network according to claim 7, characterized in that: in step three, G denotes the network generator; D and D_foc denote, respectively, the super-resolution reconstruction discriminator and the focus reconstruction discriminator; F is the focus-map computation function; I_L, I_H, I_S, I_F denote, respectively, the input low-resolution image, the high-resolution image serving as the label, the output super-resolved reconstruction result, and the output focus-area reconstruction result; the pixel mean-square-error loss loss_pixel evaluates the pixel-level difference between the generated super-resolved image and the high-resolution label image, ensuring that the reconstruction result is faithful and accurate; with E[·] denoting expectation, the formula is:
loss_pixel = E[ ‖I_S − I_H‖² ];
the perceptual loss loss_lpips measures the texture similarity by the mean-square-error loss between the features extracted by the trained VGG network φ:
loss_lpips = E[ ‖φ(I_S) − φ(I_H)‖² ];
in the focus constraint loss, loss_foc evaluates the accuracy of the reconstructed high-resolution focus map, and its formula takes two different forms depending on the training data set; when the training data set consists of original high- and low-magnification data pairs collected by the microscope, the depth-of-field effect of the focus area of the reconstruction result should be closer to the input low-magnification image, so the formula is:
loss_foc = E[ ‖F(I_S) − P(F(I_L))‖² ],
where the function P up-samples the input data; when the input data set consists of the original low-magnification microscope data and fused high-magnification microscope data, the constraint is established directly between the result and the label:
loss_foc = E[ ‖F(I_S) − F(I_H)‖² ];
the focus constraint loss also includes a consistency constraint between the two branch outputs:
loss_cons = E[ ‖F(I_S) − I_F‖² ];
the generative adversarial losses are:
loss_adv = E[ −log D(I_S) ],  loss_adv_foc = E[ −log D_foc(I_F) ];
the loss function attends to perceptual quality while guaranteeing the fidelity and reliability of the network reconstruction result, and balances the focus area of the reconstruction result.
CN202210606345.6A 2022-05-31 2022-05-31 Rock microscopic image super-resolution reconstruction method adopting generation of countermeasure network Pending CN115063293A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210606345.6A CN115063293A (en) 2022-05-31 2022-05-31 Rock microscopic image super-resolution reconstruction method adopting generation of countermeasure network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210606345.6A CN115063293A (en) 2022-05-31 2022-05-31 Rock microscopic image super-resolution reconstruction method adopting generation of countermeasure network

Publications (1)

Publication Number Publication Date
CN115063293A true CN115063293A (en) 2022-09-16

Family

ID=83199345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210606345.6A Pending CN115063293A (en) 2022-05-31 2022-05-31 Rock microscopic image super-resolution reconstruction method adopting generation of countermeasure network

Country Status (1)

Country Link
CN (1) CN115063293A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180075581A1 (en) * 2016-09-15 2018-03-15 Twitter, Inc. Super resolution using a generative adversarial network
CN110136063A (en) * 2019-05-13 2019-08-16 南京信息工程大学 A kind of single image super resolution ratio reconstruction method generating confrontation network based on condition
CN111583109A (en) * 2020-04-23 2020-08-25 华南理工大学 Image super-resolution method based on generation countermeasure network
CN112001847A (en) * 2020-08-28 2020-11-27 徐州工程学院 Method for generating high-quality image by relatively generating antagonistic super-resolution reconstruction model
WO2021022929A1 (en) * 2019-08-08 2021-02-11 齐鲁工业大学 Single-frame image super-resolution reconstruction method
CN113781311A (en) * 2021-10-10 2021-12-10 北京工业大学 Image super-resolution reconstruction method based on generation countermeasure network


Similar Documents

Publication Publication Date Title
Pan et al. High-resolution and large field-of-view Fourier ptychographic microscopy and its applications in biomedicine
Aguet et al. Model-based 2.5-D deconvolution for extended depth of field in brightfield microscopy
Wesemann et al. Selective near-perfect absorbing mirror as a spatial frequency filter for optical image processing
Aguet et al. Super-resolution orientation estimation and localization of fluorescent dipoles using 3-D steerable filters
Fay et al. Three‐dimensional molecular distribution in single cells analysed using the digital imaging microscope
JP2002531840A (en) Adaptive image forming apparatus and method using computer
Zhou et al. Underwater image restoration based on secondary guided transmission map
CN104111242A (en) Three dimensional pixel super-resolution microscopic imaging method
Liu et al. Continuous optical zoom microscope with extended depth of field and 3D reconstruction
CN112446828B (en) Thermal imaging super-resolution reconstruction method fusing visible image gradient information
Thiéry et al. The multifocus imaging technique in petrology
Tang et al. Autofocusing and image fusion for multi-focus plankton imaging by digital holographic microscopy
Shi et al. Rapid all-in-focus imaging via physical neural network optical encoding
Haeberlé Focusing of light through a stratified medium: a practical approach for computing microscope point spread functions: Part II: confocal and multiphoton microscopy
Wang et al. High‐accuracy, direct aberration determination using self‐attention‐armed deep convolutional neural networks
CN115063293A (en) Rock microscopic image super-resolution reconstruction method adopting generation of countermeasure network
CN105022995B (en) Method for extracting and analyzing diffusion and permeation information of painting and calligraphy elements based on light intensity information
Ghosh et al. Characterization of a three-dimensional double-helix point-spread function for fluorescence microscopy in the presence of spherical aberration
Mahmood et al. 3D shape recovery from image focus using kernel regression in eigenspace
Qiao et al. Underwater image enhancement combining low-dimensional and global features
US20110317000A1 (en) Method and system for enhancing microscopy image
Garud et al. Volume visualization approach for depth-of-field extension in digital pathology
Cogswell et al. Confocal brightfield imaging techniques using an on-axis scanning optical microscope
Preza et al. Image reconstruction for three-dimensional transmitted-light DIC microscopy
Li et al. Rapid whole slide imaging via learning-based two-shot virtual autofocusing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination