CN112132959A - Digital rock core image processing method and device, computer equipment and storage medium - Google Patents
- Publication number: CN112132959A (application CN202011027637.1A)
- Authority: CN (China)
- Prior art keywords: image, loss, dimensional, digital core, model
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T3/4007—Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
- G06T3/4053—Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T3/60—Rotation of whole images or parts thereof
- G06T5/73—Deblurring; Sharpening
- G06T7/11—Region-based segmentation
- G06T2207/10081—Computed x-ray tomography [CT]
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- Y02T10/40—Engine management systems
Abstract
Embodiments of the invention disclose a digital core image processing method and apparatus, a computer device, and a storage medium. The digital core image processing method comprises the following steps: acquiring digital core training images, where the training images comprise three-dimensional digital core CT images at different resolutions; training the generator model and the discriminator model of a generative adversarial network on the digital core training images, respectively, to obtain a three-dimensional digital core image generative adversarial network model; and reconstructing an input digital core image to be optimized with the generator model of that network to obtain a target three-dimensional digital core image. Embodiments of the invention can improve the image quality of reconstructed three-dimensional digital core images.
Description
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to a digital core image processing method and device, computer equipment and a storage medium.
Background
Computed Tomography (CT) can generate 3D images detailing the internal microstructure of porous rock samples, which helps determine the petrophysical properties and flow characteristics of the rock. CT can resolve features as small as a few microns across a field of view (FOV) of a few millimeters, which is sufficient to characterize the microstructure of conventional rock; it assists geological researchers in analyzing the physical properties of rock and plays an important role in geological and oil exploration. In practice, a complete 3D digital core CT image is assembled from two-dimensional (2D) slice images, and owing to the inherent limitations of CT equipment, acquiring at high resolution is not only costly but also shrinks the FOV, sacrificing the long-range properties of reservoir rock. In many cases only low-resolution CT images can be obtained. Super-resolution reconstruction is therefore an effective way to improve the resolution of digital core CT images, and it can provide clearer sample data for subsequent geological research.
Image super-resolution reconstruction is a classical task in computer vision whose goal is to reconstruct a high-resolution image from a low-resolution input. Three families of methods exist: interpolation-based, reconstruction-based, and learning-based algorithms. Interpolation algorithms, which compute each target pixel as a weighted combination of its neighborhood pixels, are widely used because they are simple and fast. However, they discard the high-frequency information of the image, losing detail and causing blur. Edge-based interpolation algorithms retain high-frequency information to a certain extent, but they struggle with textured regions of the image, which greatly limits their applicability. Reconstruction-based methods model the degradation process of the image to constrain the consistency between the high- and low-resolution image variables and then estimate the high-resolution image. These methods obtain a stable solution through regularization constraints, but imposing forced prior information destroys the original structural features of the image and distorts the reconstruction; their computational complexity is also too high to meet real-time processing requirements.
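As a concrete illustration of the interpolation-based family discussed above, the following is a minimal bilinear upscaler: each output pixel is a weighted combination of its four nearest input neighbors. It assumes a single-channel 2D image and a uniform integer factor; it is a sketch of the idea, not the patent's pipeline.

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Upscale a 2D image by `factor` using bilinear interpolation."""
    h, w = img.shape
    out_h, out_w = h * factor, w * factor
    # Map each output coordinate back to a (fractional) input coordinate.
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]   # vertical interpolation weights
    wx = (xs - x0)[None, :]   # horizontal interpolation weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

lr = np.array([[0.0, 1.0],
               [2.0, 3.0]])
sr = bilinear_upscale(lr, 2)
print(sr.shape)  # (4, 4)
```

Note how the corner pixels of the output reproduce the input values exactly, while the interior is smoothly interpolated — precisely the low-pass behavior the text faults these methods for.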
Learning-based methods use supervised machine learning to establish a nonlinear mapping between low-resolution and high-resolution images for image reconstruction. Such methods extract prior information about images from a sample set and can therefore achieve higher reconstruction accuracy. However, traditional learning-based reconstruction can only extract fairly simple image features, which is not enough to fully represent the image, so reconstruction quality remains limited. In recent years, deep learning has been applied to image super-resolution: the Super-Resolution Convolutional Neural Network (SRCNN) directly learns an end-to-end mapping function from low-resolution to high-resolution images and can produce sharp reconstructions. However, because of vanishing gradients in convolutional neural networks, SRCNN suffers from network degradation at large depths, i.e., the reconstruction quality drops, limiting the algorithm's performance. Subsequently, generative adversarial networks were applied to image super-resolution, and the Super-Resolution Generative Adversarial Network (SRGAN) was proposed, which recovers more high-frequency detail. More recently, the ESRGAN network built on SRGAN by further improving the adversarial and perceptual losses and introducing residual dense blocks, improving the reconstruction accuracy of the image.
Although deep-network super-resolution has made a great breakthrough in image reconstruction quality, shortcomings remain. A deep neural network trained without an adversarial loss (e.g., SRCNN) produces reconstructions that are too smooth to match human perception of natural images. Adversarially trained networks such as SRGAN (Super-Resolution Generative Adversarial Network) and ESRGAN (Enhanced Super-Resolution Generative Adversarial Network) mitigate this over-smoothing and are better suited to human perception of natural images. In addition, these algorithms have so far mainly been applied to two-dimensional image reconstruction and not to three-dimensional core CT image reconstruction.
Disclosure of Invention
The embodiments of the invention provide a digital core image processing method and apparatus, a computer device, and a storage medium, which improve the image quality of reconstructed three-dimensional digital core images.
In a first aspect, an embodiment of the present invention provides a digital core image processing method, including:
acquiring digital core training images, the digital core training images comprising three-dimensional digital core CT images at different resolutions;
training the generator model and the discriminator model of a generative adversarial network on the digital core training images, respectively, to obtain a three-dimensional digital core image generative adversarial network model; and
reconstructing an input digital core image to be optimized according to the generator model of the three-dimensional digital core image generative adversarial network model to obtain a target three-dimensional digital core image.
In a second aspect, an embodiment of the present invention further provides a digital core image processing apparatus, including:
a training image acquisition module, configured to acquire digital core training images, the digital core training images comprising three-dimensional digital core CT images at different resolutions;
a network model acquisition module, configured to train the generator model and the discriminator model of a generative adversarial network on the digital core training images, respectively, to obtain a three-dimensional digital core image generative adversarial network model; and
an image reconstruction module, configured to reconstruct an input digital core image to be optimized according to the generator model of the three-dimensional digital core image generative adversarial network model to obtain a target three-dimensional digital core image.
In a third aspect, an embodiment of the present invention further provides a computer device, where the computer device includes:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the digital core image processing method provided by any embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the digital core image processing method provided in any embodiment of the present invention.
In the embodiments of the invention, three-dimensional digital core CT images at different resolutions are used as digital core training images; the generator model and the discriminator model of a generative adversarial network are trained on those images to obtain a three-dimensional digital core image generative adversarial network model; and the input digital core image to be optimized is reconstructed with that model's generator to obtain a target three-dimensional digital core image. This addresses the low image quality of existing three-dimensional digital core reconstructions and improves the image quality of the reconstructed three-dimensional digital core image.
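The three steps above can be sketched end to end as follows. The generator and discriminator here are trivial stand-ins (nearest-neighbour upsampling and a fixed statistic), named only for illustration; they are not the patent's trained deep networks.

```python
import numpy as np

def generator(lr_volume):
    # Stand-in for the trained generator: upsample each axis by 2.
    return lr_volume.repeat(2, axis=0).repeat(2, axis=1).repeat(2, axis=2)

def discriminator(volume):
    # Stand-in for the trained discriminator: a pseudo-probability
    # that the volume is "real".
    return float(1.0 / (1.0 + np.exp(-volume.mean())))

# Step 1: acquire paired training images (a high-resolution cube and a
# downscaled copy of it).
hr = np.random.rand(8, 8, 8)
lr = hr[::2, ::2, ::2]

# Step 2: training would alternate generator / discriminator updates here.

# Step 3: reconstruct the digital core image to be optimized with the
# trained generator.
sr = generator(lr)
print(sr.shape)  # (8, 8, 8) — same scale as the high-resolution cube
```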
Drawings
Fig. 1 is a flowchart of a digital core image processing method according to an embodiment of the present invention;
fig. 2 is a flowchart of a digital core image processing method according to a second embodiment of the present invention;
fig. 3 is a schematic flowchart of model training and digital core image processing according to a second embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a generator model according to a second embodiment of the present invention;
fig. 5 is a schematic structural diagram of a discriminator model according to a second embodiment of the present invention;
fig. 6 is a schematic diagram illustrating the effect of generating a target three-dimensional digital core image with the generator model of the three-dimensional digital core image generative adversarial network model according to the second embodiment of the present invention;
FIG. 7 is a schematic diagram of a comparison effect of evaluation indexes provided in the second embodiment of the present invention;
fig. 8 is a schematic diagram of a digital core image processing apparatus according to a third embodiment of the present invention;
fig. 9 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention.
It should further be noted that, for convenience of description, only the portions relevant to the present invention, rather than the entire structure, are shown in the drawings. Before discussing the exemplary embodiments in more detail, note that some of them are described as processes or methods depicted as flowcharts. Although a flowchart may describe operations (or steps) as a sequential process, many of the operations can be performed in parallel or concurrently, and the order of operations may be rearranged. A process may terminate when its operations are completed, but it may also have additional steps not included in the figure. Processes may correspond to methods, functions, procedures, subroutines, and the like.
Example one
Fig. 1 is a flowchart of a digital core image processing method according to an embodiment of the present invention, where the present embodiment is applicable to a case where a three-dimensional digital core image to be optimized is reconstructed to improve image quality, and the method may be executed by a digital core image processing apparatus, and the apparatus may be implemented by software and/or hardware, and may be generally integrated in a computer device. Accordingly, as shown in fig. 1, the method comprises the following operations:
s110, acquiring a digital core training image; the digital core training image comprises digital core three-dimensional CT images with different resolutions.
The digital core training image may be a training image constructed from existing digital core images and is used to train the three-dimensional digital core image generative adversarial network model. The digital core three-dimensional CT image is simply a three-dimensional CT image of the rock core.
In an embodiment of the present invention, the digital core training images may include three-dimensional digital core CT images at different resolutions. Optionally, the training images may include three-dimensional digital core CT images at two different resolutions, for example one low and one high. The resolution of the low-resolution three-dimensional digital core CT image is below a first set resolution threshold, and the resolution of the high-resolution image is above a second set resolution threshold. Both thresholds may be set according to actual requirements; embodiments of the invention do not limit their specific values.
And S120, training the generator model and the discriminator model of a generative adversarial network on the digital core training images, respectively, to obtain a three-dimensional digital core image generative adversarial network model.
A generative adversarial network consists of two neural networks: a generator model and a discriminator model. The generator model generates new data from acquired content (such as images), while the discriminator model judges whether the data produced by the generator is real. The three-dimensional digital core image generative adversarial network model is the generative adversarial network model obtained by successfully training the generator and discriminator models on the digital core training images.
In the embodiment of the invention, the adversarial network is trained with three-dimensional digital core CT images at different resolutions as the digital core training images, so that the trained generative adversarial network model can effectively reconstruct three-dimensional digital core images.
S130, reconstructing the input digital core image to be optimized according to the generator model of the three-dimensional digital core image generative adversarial network model to obtain a target three-dimensional digital core image.
The digital core image to be optimized may be a low-resolution three-dimensional digital core image, and the target three-dimensional digital core image may be a three-dimensional digital core image at a specified, higher resolution.
Correspondingly, once the three-dimensional digital core image generative adversarial network model has been trained successfully, the digital core image to be optimized is fed into its generator model, which reconstructs it into the target three-dimensional digital core image. Because the resolution of the target image is higher than that of the image to be optimized, the reconstruction recovers detail information from the input and yields a sharper image, improving the image quality of the reconstructed three-dimensional digital core image.
In the embodiments of the invention, three-dimensional digital core CT images at different resolutions are used as digital core training images; the generator model and the discriminator model of a generative adversarial network are trained on those images to obtain a three-dimensional digital core image generative adversarial network model; and the input digital core image to be optimized is reconstructed with that model's generator to obtain a target three-dimensional digital core image. This addresses the low image quality of existing three-dimensional digital core reconstructions and improves the image quality of the reconstructed three-dimensional digital core image.
Example two
Fig. 2 is a flowchart of a digital core image processing method according to a second embodiment of the present invention, and fig. 3 is a flowchart of a model training and digital core image processing according to a second embodiment of the present invention. Accordingly, as shown in fig. 2 and 3, the method of the present embodiment may include:
s210, acquiring a two-dimensional CT image of at least one known rock.
The known rock may be a rock type such as sandstone, carbonate rock, or coal geological rock (i.e., coal rock), and the specific rock type of the known rock is not limited in the embodiments of the present invention.
In an embodiment of the present invention, when acquiring the digital core training image, a two-dimensional CT image of at least one known rock may be acquired first. For example, two-dimensional CT images from multiple sandstone, carbonate, and coal geological cores may be gathered.
S220, performing image preprocessing and image enhancement processing on the two-dimensional CT image to obtain a two-dimensional CT processed image.
Image preprocessing refers to preprocessing operations applied to the image, and image enhancement here refers to data augmentation. The two-dimensional CT processed image is the two-dimensional CT image obtained after preprocessing and augmentation.
Correspondingly, after the two-dimensional CT image of at least one known rock is obtained, image preprocessing operations such as denoising can be applied, and the preprocessed image can then be augmented with rotation, flipping, and blurring operations, expanding the number of samples in the digital core training image set.
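The augmentation operations named above (rotation, flipping, blurring) can be sketched as follows; the 3x3 box blur is an illustrative stand-in for whatever blur kernel the authors actually used.

```python
import numpy as np

def augment(slice2d):
    """Return the input 2D CT slice plus six augmented copies."""
    out = [slice2d]
    for k in (1, 2, 3):                 # 90 / 180 / 270 degree rotations
        out.append(np.rot90(slice2d, k))
    out.append(np.flipud(slice2d))      # vertical flip
    out.append(np.fliplr(slice2d))      # horizontal flip
    # 3x3 box blur with edge padding (stand-in for the blur operation).
    p = np.pad(slice2d, 1, mode="edge")
    h, w = slice2d.shape
    blurred = sum(p[i:i + h, j:j + w]
                  for i in range(3) for j in range(3)) / 9.0
    out.append(blurred)
    return out

slices = augment(np.arange(16, dtype=float).reshape(4, 4))
print(len(slices))  # 7 samples produced from 1 input slice
```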
And S230, constructing a three-dimensional CT image from the two-dimensional CT processed images.
Further, after the two-dimensional CT processed images are obtained, a three-dimensional CT image can be constructed from them.
Illustratively, the collected two-dimensional CT images may be processed and the resulting images stacked to form a three-dimensional CT image.
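The slice-stacking step can be sketched as follows; the slice count and size are deliberately small stand-ins for a real data set's 500 slices of 500 x 500 pixels.

```python
import numpy as np

# Stack processed 2D CT slices along a new depth axis to form a 3D volume.
n_slices, h, w = 16, 16, 16
slices = [np.random.rand(h, w) for _ in range(n_slices)]
volume = np.stack(slices, axis=0)
print(volume.shape)  # (16, 16, 16)
```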
S240, constructing the digital core training image according to the three-dimensional CT image.
Correspondingly, after the three-dimensional CT image is constructed from the two-dimensional CT processed images, the digital core training image can be constructed from the three-dimensional CT image.
In an optional embodiment of the present invention, constructing the digital core training image from the three-dimensional CT image may include: segmenting the three-dimensional CT image to obtain three-dimensional CT segmented images at a first resolution; scaling the three-dimensional CT segmented images by a set scaling value to obtain three-dimensional CT scaled images at a second resolution; and constructing the digital core training image from the three-dimensional CT segmented images and the three-dimensional CT scaled images.
The three-dimensional CT segmented image may be a small block cut from the three-dimensional CT image, and the first resolution is the original resolution of the three-dimensional CT image constructed from the two-dimensional CT processed images. The scaling value may be set according to actual requirements, such as 1/2, 1/4, or 1/8; embodiments of the invention do not limit its specific value. The second resolution is the resolution of the scaled image obtained by scaling the three-dimensional CT segmented image, and it is necessarily lower than the first resolution.
Specifically, when constructing the digital core training image from the three-dimensional CT image, the acquired three-dimensional CT image may first be segmented into three-dimensional CT segmented images at the first resolution, for example into cubes of a fixed size, which saves training time and computation. The segmented images may then be scaled by the set scaling value, for example downscaled by a fixed factor using bicubic interpolation. The resulting three-dimensional CT scaled image is thus a low-resolution image, while the three-dimensional CT segmented image is the high-resolution image at the original scale. Correspondingly, the segmented images and the scaled images together form the digital core training image.
In a specific example, assume a digital core training image is constructed from a public digital core data set. The data set contains twelve thousand 500 x 500 high-resolution raw images of various digital rocks (sandstone, carbonate, coal, etc.) with image resolutions ranging from 2.7 to 25 μm. After the two-dimensional CT images of known rock are obtained, image denoising preprocessing is performed first, and data augmentation with rotation, flipping, and blurring operations then expands the number of samples in the training data set. The 500 x 500 two-dimensional CT images in the data set can be stacked as two-dimensional slices to form 500 x 500 x 500 three-dimensional CT images. Each three-dimensional CT image is then divided into 80 x 80 x 80 cubes to obtain high-resolution three-dimensional CT segmented images, and each segmented image is reduced to 1/2 (40 x 40 x 40 voxels) and 1/4 (20 x 20 x 20 voxels) using bicubic interpolation to generate low-resolution three-dimensional CT scaled images. Correspondingly, the original-scale three-dimensional CT segmented images serve as the high-resolution images and the three-dimensional CT scaled images as the low-resolution images; together they form the digital core training image.
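The cube segmentation and downscaling in the example above can be sketched as follows; 2 x 2 x 2 mean pooling stands in for the bicubic interpolation the text describes, and the 160^3 volume is a reduced stand-in for a 500^3 volume, so the code is illustrative rather than the patent's exact pipeline.

```python
import numpy as np

def split_into_cubes(volume, size):
    """Cut a cubic volume into non-overlapping size^3 sub-cubes."""
    n = volume.shape[0] // size
    return [volume[i*size:(i+1)*size, j*size:(j+1)*size, k*size:(k+1)*size]
            for i in range(n) for j in range(n) for k in range(n)]

def downscale_half(cube):
    """Halve each axis by 2x2x2 mean pooling (bicubic stand-in)."""
    s = cube.shape[0] // 2
    return cube.reshape(s, 2, s, 2, s, 2).mean(axis=(1, 3, 5))

volume = np.random.rand(160, 160, 160)
cubes = split_into_cubes(volume, 80)           # 2*2*2 = 8 cubes of 80^3
lr_cubes = [downscale_half(c) for c in cubes]  # each reduced to 40^3
print(len(cubes), lr_cubes[0].shape)  # 8 (40, 40, 40)
```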
And S250, inputting the three-dimensional CT scaling image with the second resolution in the digital core training image into the generator model, and taking the image output by the generator model as a third resolution image.
Wherein the third resolution image is an image output by the generator model according to the low-resolution three-dimensional CT scaling image, and the resolution of the third resolution image is higher than the second resolution of the three-dimensional CT scaling image.
In the embodiment of the invention, when the generator model and the discriminator model of the generation countermeasure network are trained with the digital core training image, the three-dimensional CT scaling image of the second resolution may be input into the generator model, and the image output by the generator model is taken as the third resolution image. The third resolution image may also be referred to as a super-resolution image.
S260, simultaneously inputting the third resolution image and the three-dimensional CT segmentation image with the first resolution in the digital core training image into the discriminator model, and determining whether the third resolution image is matched with the three-dimensional CT segmentation image with the first resolution according to an output value of the discriminator model so as to train the discriminator model.
Correspondingly, after the third resolution image is obtained, the third resolution image and the three-dimensional CT segmentation image can be simultaneously input into the discriminator model, and whether the third resolution image is matched with the three-dimensional CT segmentation image or not is determined according to an output value of the discriminator model, so that the discriminator model is trained. The output value of the discriminator model may be the probability of matching the third resolution image with the three-dimensional CT segmented image, or the same value or different value of the third resolution image and the three-dimensional CT segmented image, which is not limited in the embodiment of the present invention.
When the generation countermeasure network is trained, the discriminator model may be trained first, and after the discriminator model is trained successfully, the generator model is trained using the successfully trained discriminator model. The specific process of training the discriminator model may be: if it is determined according to the output value of the discriminator model that the third resolution image does not match the three-dimensional CT segmented image, for example the probability of matching is lower than a set probability threshold (such as 90%, which may be set according to actual requirements), or the discriminator directly outputs that the third resolution image differs from the three-dimensional CT segmented image, the generator model is continuously optimized until the third resolution image generated by the generator model can "cheat" the discriminator, that is, until it is determined according to the output value of the discriminator model that the third resolution image matches the three-dimensional CT segmented image, at which point the success of the training of the discriminator model can be determined.
S270, after the discriminator model is successfully trained, fixing the discriminator model, and returning to execute the operation of inputting the three-dimensional CT scaling image with the second resolution in the digital core training image into the generator model to continue training the generator model until the generator model is successfully trained.
Accordingly, after the training of the discriminator model is successful, the discriminator model may be fixed, and the operation of inputting the three-dimensional CT scaled image into the generator model is returned to continue training the generator model until it is determined that the training of the generator model is successful. That is, the training process of the generator model and the discriminator model belongs to a process in which the two optimize training alternately.
In an optional embodiment of the present invention, the generator model is a fusion attention mechanism residual U-net network, the fusion attention mechanism residual U-net network includes a set number of network structures, and an encoder and a decoder of the fusion attention mechanism residual U-net network perform multi-scale image fusion through an attention gate structure; wherein the attention gate structure may be implemented based on the following formula:
q_att^i = ψ^T( σ1( W_x^T x_i + W_g^T g_i + b_g ) ) + b_ψ

α_i = σ2( q_att^i( x_i, g_i; Θ_att ) )

x̂_i = x_i · α_i

wherein q_att^i represents the transfer (gating) function, ψ^T, W_x^T and W_g^T represent 1 × 1 × 1 convolution operations, b_g and b_ψ represent the offset coefficients, σ1 represents the ReLU function, x_i represents the input of the attention gate structure, g_i represents a gate signal providing high-level context information, α_i denotes the attention coefficient, σ2 represents the sigmoid activation function, Θ_att denotes the parameter set, which may be ψ^T, b_g and b_ψ, and x̂_i represents the output of the attention gate structure.
The set number may be, for example, 5, and the specific numerical value of the set number is not limited in the embodiment of the present invention.
Optionally, in the embodiment of the present invention, a fusion attention mechanism residual U-net network may be constructed as a generator model, a residual depth structure is adopted to ensure data fitting capability of the network, the network is easier to train to alleviate the problem of gradient disappearance, an attention mechanism is introduced to inhibit feature response in an irrelevant background region in an image, and a U-net network structure is adopted to ensure transmission of image features of different levels by adopting skip connection, so as to realize fusion of image features of different scales.
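The attention gate formula can be sketched directly in NumPy. Because the ψ^T, W_x^T and W_g^T operations are 1 × 1 × 1 convolutions, they act per voxel on the channel axis and reduce to matrix products; all shapes and weight values below are illustrative assumptions, not parameters given by the text.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, W_x, W_g, b_g, psi, b_psi):
    """Attention gate over 3-D feature maps.

    x:   encoder features, shape (C, D, H, W)
    g:   gate signal with high-level context, shape (Cg, D, H, W)
    W_x (Ci, C), W_g (Ci, Cg), psi (Ci,): the 1x1x1 convolutions act
    per voxel on the channel axis, so they reduce to matrix products.
    """
    # 1x1x1 convolutions as channel-wise linear maps
    q = np.einsum('ic,cdhw->idhw', W_x, x) + np.einsum('ic,cdhw->idhw', W_g, g)
    q = relu(q + b_g[:, None, None, None])
    q = np.einsum('i,idhw->dhw', psi, q) + b_psi
    alpha = sigmoid(q)              # attention coefficients in (0, 1)
    return x * alpha[None, ...]     # re-weight the encoder features

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8, 8, 8))
g = rng.normal(size=(6, 8, 8, 8))
out = attention_gate(x, g,
                     W_x=rng.normal(size=(5, 4)), W_g=rng.normal(size=(5, 6)),
                     b_g=rng.normal(size=5), psi=rng.normal(size=5), b_psi=0.1)
```

Because α_i lies in (0, 1), the gate can only attenuate features, which is how responses in irrelevant background regions are suppressed.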
Fig. 4 is a schematic structural diagram of a generator model according to a second embodiment of the present invention. In a specific example, as shown in fig. 4, the entire generator model may be a U-net structure with a five-layer network depth. On the encoder side: the first layer (E1) contains 64 1 × 1 × 1 convolutions, batch normalization, PReLU activation and max pooling; the second layer (E2) comprises 128 1 × 1 × 1 convolutions, PReLU activation, a residual module and max pooling, wherein two 128-channel 3 × 3 × 3 convolution layers, batch normalization and PReLU activation structures are connected in series in the residual module, and the bottom-layer input and the top-layer output of the whole feature learning layer are connected by a global skip connection to perform global residual learning; the third layer (E3), fourth layer (E4) and fifth layer (E5) are identical in structure to the second layer, but the numbers of convolution kernels are 256, 512 and 1024, respectively. E5 is followed by an upsampling layer to become the fifth layer of the decoder (D5). The decoder adopts a structure similar to that of the encoder, removes the max pooling used in the encoder, and restores the image scale by upsampling layer by layer from bottom to top (D4, D3, D2, D1), while the encoder and decoder perform multi-scale image fusion through the attention gate structure.
The discriminator model has two inputs, one being the original high resolution image (HR, i.e. the three-dimensional CT segmented image) and the other being the super resolution image (SR) generated by the generator. Fig. 5 is a schematic structural diagram of a discriminator model according to a second embodiment of the present invention. In a specific example, as shown in fig. 5, the SR image to be discriminated is input to the discriminator model and features are extracted by 5 convolutional layers; in order to increase the local receptive field, convolution kernels with a size of 4 × 4 × 4 are adopted, the number of convolution kernels starts from 64 in the first layer, and the number doubles with each layer until 1024 in the fifth layer. Then, the sixth layer performs dimensionality reduction using a convolution layer with 1 × 1 × 1 convolution kernels, followed by two layers of convolution operations with 3 × 3 × 3 convolution kernels, and the outputs of the fifth layer and the eighth layer are fed together to the next layer. At the end of the discriminator model, the feature maps are flattened, and the image data is then sent in turn through a fully connected layer, an LReLU activation layer and another fully connected layer, and the discrimination result Logits is obtained through Sigmoid. As shown in fig. 5, the discriminator model may also include a residual module.
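The spatial sizes produced by the five 4 × 4 × 4 feature-extraction layers can be checked with the standard convolution output-size formula. Stride 2 and padding 1 are assumptions (the text gives only the kernel size and the channel counts); they are chosen so that each layer halves the spatial size while the channel count doubles from 64 to 1024.

```python
def conv3d_out(size, kernel=4, stride=2, pad=1):
    """Output spatial size of a 3-D convolution (the same formula applies per axis)."""
    return (size + 2 * pad - kernel) // stride + 1

# Five 4x4x4 feature-extraction layers over an 80-voxel input cube.
size, channels = 80, 64
trace = []
for layer in range(5):
    size = conv3d_out(size)
    trace.append((channels, size))
    channels *= 2
```

Under these assumptions the trace runs (64, 40) → (128, 20) → (256, 10) → (512, 5) → (1024, 2), so an 80 × 80 × 80 input ends as a 1024-channel 2 × 2 × 2 map before the fully connected head.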
In an optional embodiment of the present invention, the determining that the discriminant model is successfully trained may include: calculating a discriminator loss of the third resolution image on the discriminator model; determining that the discriminant model is successfully trained under the condition that the discriminant loss is determined to meet the discriminant stability condition; the determining that training of the generator model is successful may include: calculating pixel loss and feature loss between the third-resolution image and the three-dimensional CT segmentation image of the first resolution, and the confrontation loss of the third-resolution image on the discriminator model; determining that the generator model training was successful if it is determined that the pixel loss, the feature loss, and the antagonistic loss satisfy a generator stability condition.
The discriminator loss is the loss generated when the discriminator discriminates the third resolution image. The discriminator-stable condition may be a condition for judging that the loss function of the discriminator model tends to be stable, and the generator-stable condition may be a condition for judging that the loss function of the generator model tends to be stable. The trend toward stability may be that the output value of the loss function tends toward a stable value, or the output value of the loss function is smaller than a set threshold, and the like, which is not limited by the embodiment of the present invention. The pixel loss is the pixel difference between the third resolution image and the three-dimensional CT segmentation image, and the characteristic loss is the characteristic difference between the third resolution image and the three-dimensional CT segmentation image. The countermeasure loss is the loss generated after the third resolution image is discriminated by the discriminator.
In an optional embodiment of the present invention, the calculating a discriminator loss of the third resolution image on the discriminator model may include: calculating the discriminator loss based on a network loss function:
D_loss = (1/N) Σ_{i=1}^{N} [ −log p(HR_i) − log(1 − p(SR_i)) ]

wherein D_loss represents the discriminator loss, N represents a natural number (the number of training samples), p represents the discriminator function, i denotes the training sample index, SR_i represents the third resolution image, and HR_i represents the three-dimensional CT segmented image of the first resolution.
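A minimal sketch of the discriminator loss, assuming the discriminator function p outputs matching probabilities in (0, 1) (consistent with the Sigmoid output described for the discriminator model) and that the loss takes the standard binary cross-entropy form:

```python
import numpy as np

def discriminator_loss(p_hr, p_sr, eps=1e-12):
    """D_loss = (1/N) * sum( -log p(HR_i) - log(1 - p(SR_i)) ).

    p_hr, p_sr: discriminator probabilities for the N real (HR) and
    generated (SR) cubes in a batch; eps guards against log(0).
    """
    p_hr = np.asarray(p_hr, dtype=float)
    p_sr = np.asarray(p_sr, dtype=float)
    return float(np.mean(-np.log(p_hr + eps) - np.log(1.0 - p_sr + eps)))

# A confident, correct discriminator has a low loss...
low = discriminator_loss([0.99, 0.98], [0.02, 0.01])
# ...a fooled or uncertain one has a higher loss.
high = discriminator_loss([0.6, 0.55], [0.5, 0.45])
```

The loss falls as the discriminator assigns high probability to HR cubes and low probability to SR cubes, matching the stabilization criterion described above.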
The determining that the discriminator loss satisfies a discriminator stabilizing condition may include: and when the loss of the discriminator reaches a first set target value, determining that the loss of the discriminator meets a stable condition of the discriminator.
The calculating of pixel loss between the image of the third resolution and the three-dimensional CT segmented image of the first resolution may include: calculating the pixel loss based on a pixel loss function:
L_1 = (1/N) Σ_{i=1}^{N} ‖ HR_i − SR_i ‖_1

wherein L_1 represents the pixel loss.
The calculating of the feature loss between the third resolution image and the three-dimensional CT segmentation image of the first resolution may include: calculating the characteristic loss based on a characteristic loss function:
VGG_loss = (1/N) Σ_{i=1}^{N} ‖ φ(HR_i) − φ(SR_i) ‖_2^2

wherein VGG_loss represents the feature loss, and φ represents a deep learning network function, such as the VGG-19 network function; the VGG-19 loss corresponds to the output of the fourth convolution before the fifth max pooling layer in the VGG-19 network. That is, after VGG (Visual Geometry Group) feature extraction is performed on the third resolution image and the three-dimensional CT segmented image of the first resolution, the feature loss between the two images may be calculated by using the feature loss function.
Alternatively, a 19-layer VGG network can be used to compensate for the perceptual information missing from the pixel-level L_1 loss: the content loss between the third resolution image and the three-dimensional CT segmented image is calculated on VGG-19, comparing the semantic information of the super-resolution image and the high-resolution image.
The calculating the countermeasure loss of the third resolution image on the discriminator model may include: calculating the challenge loss based on a challenge loss function as follows:
Adv_loss = (1/N) Σ_{i=1}^{N} [ −log p(SR_i) ]

wherein Adv_loss represents the countermeasure loss.
The determining that the pixel loss, the characteristic loss, and the antagonistic loss satisfy a generator stabilization condition may include: constructing a generator total loss from the pixel loss, the feature loss, and the antagonistic loss based on the following formula:
G_loss = L_1 + α·VGG_loss + β·Adv_loss

wherein G_loss represents the total loss of the generator, and α and β represent weighting coefficients.
And when the total loss of the generator reaches a second set target value, determining that the pixel loss, the characteristic loss and the antagonistic loss meet a generator stability condition.
The first set target value and the second set target value may be set according to an actual requirement, such as a minimum value, which is not limited in the embodiment of the present invention.
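The total generator loss G_loss = L_1 + α·VGG_loss + β·Adv_loss can be sketched with NumPy stand-ins. The feature arrays below stand in for VGG-19 outputs, and the values of α and β are illustrative only; the text does not fix them.

```python
import numpy as np

def generator_total_loss(sr, hr, feat_sr, feat_hr, p_sr,
                         alpha=0.006, beta=1e-3, eps=1e-12):
    """G_loss = L1 + alpha * VGG_loss + beta * Adv_loss.

    sr, hr:           generated and reference volumes (same shape)
    feat_sr, feat_hr: deep features of each (stand-ins for VGG-19 output)
    p_sr:             discriminator probabilities for the SR batch
    alpha, beta:      weighting coefficients (illustrative values only)
    """
    l1 = float(np.mean(np.abs(hr - sr)))                   # pixel loss
    vgg = float(np.mean((feat_hr - feat_sr) ** 2))         # feature loss
    adv = float(np.mean(-np.log(np.asarray(p_sr) + eps)))  # countermeasure loss
    return l1 + alpha * vgg + beta * adv

rng = np.random.default_rng(1)
hr = rng.random((8, 8, 8))
sr = hr + 0.01 * rng.standard_normal((8, 8, 8))  # a near-perfect reconstruction
loss = generator_total_loss(sr, hr, feat_sr=np.zeros(16), feat_hr=np.zeros(16),
                            p_sr=[0.5, 0.5])
```

A perfect reconstruction that fully fools the discriminator (p = 1) drives all three terms, and therefore G_loss, to zero.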
In particular, the discriminator model and the generator model may be trained jointly with the multiple loss types above. When D_loss and G_loss each reach a minimum value, it can be determined that the discriminator model and the generator model are successfully trained. During training, if D_loss and G_loss have not reached a minimum, a back-propagation algorithm may be employed to update the network parameters of the generation countermeasure network. Optionally, an Adam optimizer may be employed for the countermeasure training of the generation countermeasure network until it converges.
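The alternating optimization can be shown as a control-flow skeleton. The numeric updates below are stand-ins, not real networks; only the procedure mirrors the text: train the discriminator until its loss tends to be stable, fix it, then train the generator against the fixed discriminator until its loss stabilizes as well.

```python
def stabilized(hist, tol=1e-3, window=3):
    """A loss 'tends to be stable' when its recent changes fall below tol."""
    return len(hist) >= window and max(hist[-window:]) - min(hist[-window:]) < tol

def train_gan(max_rounds=200):
    """Skeleton of the alternating training described above.

    Each numeric update stands in for one optimization step
    (e.g. an Adam step with back-propagation on a real network).
    """
    d_hist, g_hist = [2.0], [2.0]
    training_d = True
    for _ in range(max_rounds):
        if training_d:
            d_hist.append(0.8 * d_hist[-1] + 0.05)  # stand-in D update (G fixed)
            if stabilized(d_hist):                  # D trained successfully: fix it
                training_d = False
        else:
            g_hist.append(0.8 * g_hist[-1] + 0.02)  # stand-in G update (D frozen)
            if stabilized(g_hist):                  # G trained successfully
                break
    return d_hist, g_hist

d_hist, g_hist = train_gan()
```

Both loss histories decrease monotonically toward a fixed point, so the stabilization checks fire well before the round limit.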
S280, reconstructing the input digital core image to be optimized according to the generator model of the three-dimensional digital core image generation countermeasure network model to obtain a target three-dimensional digital core image.
Wherein the image resolution of the target three-dimensional digital core image is related to the resolution of the digital core training image. For example, when the three-dimensional CT scaled image is 40 × 40 × 40 pixels or 20 × 20 × 20 pixels, the corresponding target three-dimensional digital core image may be 80 × 80 × 80 pixels. That is, the image reconstruction can increase the image resolution of the target three-dimensional digital core image by a factor of 2 or 4.
Fig. 6 is a schematic diagram of the effect of generating a target three-dimensional digital core image by the generator model of the three-dimensional digital core image generation countermeasure network model according to a second embodiment of the present invention. As shown in fig. 6, a low-resolution image (i.e., a digital core image to be optimized) is input to the generator model of the three-dimensional digital core image generation countermeasure network model, and the super-resolution image (i.e., the target three-dimensional digital core image) generated by reconstructing the low-resolution image has higher definition, eliminating the over-smoothing drawback of existing image reconstruction methods, so that the super-resolution image has richer texture detail.
According to the embodiment of the invention, the residual U-net network can deepen the network depth, so that the image reconstruction effect of the generation countermeasure network is improved. Meanwhile, redundant information is suppressed by the attention mechanism, and the visual perception effect of the reconstructed image is enhanced by the mixed loss function, so that the target three-dimensional digital core image is less blurred and more realistic.
Optionally, the Peak Signal-to-Noise Ratio (PSNR) may be used to evaluate the effect of the target three-dimensional digital core image obtained by the digital core image processing method provided by the embodiment of the present invention. Fig. 7 is a schematic diagram of the comparison of this evaluation index provided in the second embodiment of the present invention. As shown in fig. 7, whether the scale of the digital core image to be optimized is increased by 2 times (that is, the resolution is increased by 2 times) or by 4 times (that is, the resolution is increased by 4 times), compared with bicubic interpolation (Bicubic), A+ reconstruction (3D-A+), super-resolution reconstruction based on a convolutional neural network (3D-SRCNN) and the super-resolution generation countermeasure network (3D-SRGAN), the image quality of the target three-dimensional digital core image obtained by the digital core image processing method provided by the invention is the best.
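PSNR, the evaluation index used for the comparison in Fig. 7, can be computed as 10·log10(MAX² / MSE); the example values below are illustrative, not those of Fig. 7.

```python
import numpy as np

def psnr(img, ref, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE).

    Higher PSNR means the reconstruction is closer to the reference.
    """
    mse = np.mean((np.asarray(img, float) - np.asarray(ref, float)) ** 2)
    if mse == 0:
        return float('inf')
    return float(10.0 * np.log10(max_val ** 2 / mse))

ref = np.full((8, 8, 8), 100.0)
good = psnr(ref + 1.0, ref)    # small error  -> high PSNR (~48.1 dB)
bad = psnr(ref + 16.0, ref)    # larger error -> lower PSNR (~24.0 dB)
```

Each 2× reduction in pixel error raises PSNR by about 6 dB, which is why even modest dB gains over Bicubic, 3D-A+, 3D-SRCNN and 3D-SRGAN indicate a visibly better reconstruction.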
According to the technical scheme, the network structure of the generator model is designed into the residual U-net structure, so that the depth of the neural network is greatly increased, multi-scale image information can be fused, and the multiple local jump connections can help important feature information to cross different modules and layers to be transmitted. Global jump connection is introduced into the whole feature learning layer, residual errors between high-resolution images and low-resolution images are learned, and the problems of gradient loss and network degradation are effectively solved. An attention gate structure is introduced between the encoder and the decoder, so that the generated network pays more attention to the region with more high-frequency information, the characteristic weight containing rich high-frequency information is amplified, the weight containing redundant low-frequency information is reduced, the network convergence is accelerated, and the network performance is improved. Meanwhile, measurement between the distribution of the reconstructed image and the original high-resolution image is carried out by utilizing the mixed loss, the visual perception effect of the reconstructed image is enhanced by constructing the characteristic matching loss, and the details and the definition of the three-dimensional digital core reconstructed image are greatly improved, so that the image quality is improved.
It should be noted that any permutation and combination between the technical features in the above embodiments also belong to the scope of the present invention.
EXAMPLE III
Fig. 8 is a schematic diagram of a digital core image processing apparatus according to a third embodiment of the present invention, and as shown in fig. 8, the apparatus includes: a training image acquisition module 310, a network model acquisition module 320, and an image reconstruction module 330, wherein:
a training image obtaining module 310, configured to obtain a digital core training image; the digital core training images comprise digital core three-dimensional CT images with different resolutions;
a network model obtaining module 320, configured to respectively train a generator model and a discriminator model for generating a confrontation network according to the digital core training image, so as to obtain a three-dimensional digital core image generation confrontation network model;
and the image reconstruction module 330 is configured to reconstruct the input digital core image to be optimized according to a generator model of the three-dimensional digital core image generation countermeasure network model, so as to obtain a target three-dimensional digital core image.
According to the embodiment of the invention, digital core three-dimensional CT images with different resolutions are used as digital core training images. A generator model and a discriminator model of a generation countermeasure network are respectively trained according to the digital core training images to obtain a three-dimensional digital core image generation countermeasure network model, and the input digital core image to be optimized is reconstructed according to the generator model of the three-dimensional digital core image generation countermeasure network model to obtain a target three-dimensional digital core image. This solves the problem of the low image quality of images reconstructed by existing three-dimensional digital core image reconstruction methods, and improves the image quality of the three-dimensional digital core reconstructed image.
Optionally, the training image obtaining module 310 is configured to: acquiring a two-dimensional CT image of at least one known rock; performing image preprocessing and image enhancement processing on the two-dimensional CT image to obtain a two-dimensional CT processed image; constructing a three-dimensional CT image according to the two-dimensional CT processing image; and constructing the digital core training image according to the three-dimensional CT image.
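The image enhancement processing handled by this module can be sketched for a single 2-D CT slice; rotation and flipping (as in the earlier training example) are shown, blurring is omitted to keep the sketch dependency-free, and the eight dihedral variants are one common choice, not prescribed by the text.

```python
import numpy as np

def augment(slice2d):
    """Data enhancement by rotation and flipping.

    Returns the 8 dihedral variants (4 rotations x optional horizontal
    flip) of a 2-D CT slice, expanding one sample into eight.
    """
    variants = []
    for k in range(4):
        r = np.rot90(slice2d, k)
        variants.append(r)
        variants.append(np.fliplr(r))
    return variants

img = np.arange(16.0).reshape(4, 4)   # toy stand-in for a CT slice
aug = augment(img)
```

Applied to every slice in the data set, this multiplies the number of training samples by eight before the slices are stacked into three-dimensional volumes.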
Optionally, the training image obtaining module 310 is configured to: segmenting the three-dimensional CT image to obtain a three-dimensional CT segmented image with a first resolution; carrying out zooming processing on the three-dimensional CT segmentation image according to a set zooming value to obtain a three-dimensional CT zoomed image with a second resolution; and constructing the digital core training image according to the three-dimensional CT segmentation image and the three-dimensional CT scaling image.
Optionally, the network model obtaining module 320 is configured to input the three-dimensional CT scaling image with the second resolution in the digital core training image into the generator model, and use the image output by the generator model as a third resolution image; input the third resolution image and the three-dimensional CT segmentation image of the first resolution in the digital core training image into the discriminator model at the same time, and determine whether the third resolution image matches the three-dimensional CT segmentation image of the first resolution according to the output value of the discriminator model, so as to train the discriminator model; and after the discriminator model is successfully trained, fix the discriminator model, and return to perform the operation of inputting the three-dimensional CT scaling image with the second resolution in the digital core training image into the generator model to continue training the generator model until the generator model is successfully trained.
Optionally, the generator model is a fusion attention mechanism residual U-net network, the fusion attention mechanism residual U-net network includes a set number of network structures, and an encoder and a decoder of the fusion attention mechanism residual U-net network perform multi-scale image fusion through an attention gate structure; wherein the attention gate structure is implemented based on the following formula:
q_att^i = ψ^T( σ1( W_x^T x_i + W_g^T g_i + b_g ) ) + b_ψ

α_i = σ2( q_att^i( x_i, g_i; Θ_att ) )

x̂_i = x_i · α_i

wherein q_att^i represents the transfer (gating) function, ψ^T, W_x^T and W_g^T represent 1 × 1 × 1 convolution operations, b_g and b_ψ represent the offset coefficients, σ1 represents the ReLU function, x_i represents the input of the attention gate structure, g_i represents a gate signal providing high-level context information, α_i denotes the attention coefficient, σ2 represents the sigmoid activation function, Θ_att denotes the parameter set, and x̂_i represents the output of the attention gate structure.
Optionally, the network model obtaining module 320 is configured to calculate a discriminator loss of the third resolution image on the discriminator model; determining that the discriminant model is successfully trained under the condition that the discriminant loss is determined to meet the discriminant stability condition; calculating pixel loss and feature loss between the third-resolution image and the three-dimensional CT segmentation image of the first resolution, and the confrontation loss of the third-resolution image on the discriminator model; determining that the generator model training was successful if it is determined that the pixel loss, the feature loss, and the antagonistic loss satisfy a generator stability condition.
Optionally, the network model obtaining module 320 is configured to calculate the arbiter loss based on the following network loss function:
D_loss = (1/N) Σ_{i=1}^{N} [ −log p(HR_i) − log(1 − p(SR_i)) ]

wherein D_loss represents the discriminator loss, N represents a natural number, p represents the discriminator function, i denotes the training sample index, SR_i represents the third resolution image, and HR_i represents the three-dimensional CT segmented image of the first resolution;
when the loss of the discriminator reaches a first set target value, determining that the loss of the discriminator meets a stable condition of the discriminator;
calculating the pixel loss based on a pixel loss function:
L_1 = (1/N) Σ_{i=1}^{N} ‖ HR_i − SR_i ‖_1, wherein L_1 represents the pixel loss;
calculating the characteristic loss based on a characteristic loss function:
VGG_loss = (1/N) Σ_{i=1}^{N} ‖ φ(HR_i) − φ(SR_i) ‖_2^2, wherein VGG_loss represents the feature loss and φ represents a deep learning network function;
calculating the challenge loss based on a challenge loss function as follows:
Adv_loss = (1/N) Σ_{i=1}^{N} [ −log p(SR_i) ], wherein Adv_loss represents the countermeasure loss;
constructing a generator total loss from the pixel loss, the feature loss, and the antagonistic loss based on the following formula:
G_loss = L_1 + α·VGG_loss + β·Adv_loss

wherein G_loss represents the total loss of the generator, and α and β represent weight coefficients;
and when the total loss of the generator reaches a second set target value, determining that the pixel loss, the characteristic loss and the antagonistic loss meet a generator stability condition.
The digital core image processing device can execute the digital core image processing method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method. For technical details that are not described in detail in this embodiment, reference may be made to the digital core image processing method provided in any embodiment of the present invention.
Since the digital core image processing apparatus described above is an apparatus capable of executing the digital core image processing method in the embodiment of the present invention, based on the digital core image processing method described in the embodiment of the present invention, a person skilled in the art can understand a specific implementation manner of the digital core image processing apparatus of the embodiment and various variations thereof, and therefore, a detailed description of how the digital core image processing method in the embodiment of the present invention is implemented by the digital core image processing apparatus is not described here. The apparatus used by those skilled in the art to implement the digital core image processing method in the embodiments of the present invention is within the scope of the present application.
Example four
Fig. 9 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention. FIG. 9 illustrates a block diagram of a computer device 412 suitable for use in implementing embodiments of the present invention. The computer device 412 shown in FIG. 9 is only one example and should not impose any limitations on the functionality or scope of use of embodiments of the present invention.
As shown in fig. 9, the computer device 412 is in the form of a general purpose computing device. Components of computer device 412 may include, but are not limited to: one or more processors 416, a storage device 428, and a bus 418 that couples the various system components including the storage device 428 and the processors 416.
The computer device 412 may also communicate with one or more external devices 414 (e.g., keyboard, pointing device, camera, display 424, etc.), with one or more devices that enable a user to interact with the computer device 412, and/or with any devices (e.g., network card, modem, etc.) that enable the computer device 412 to communicate with one or more other computing devices. Such communication may be through an Input/Output (I/O) interface 422. Also, computer device 412 may communicate with one or more networks (e.g., a Local Area Network (LAN), Wide Area Network (WAN), and/or a public Network, such as the internet) through Network adapter 420. As shown, network adapter 420 communicates with the other modules of computer device 412 over bus 418. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the computer device 412, including but not limited to: microcode, device drivers, Redundant processing units, external disk drive Arrays, disk array (RAID) systems, tape drives, and data backup storage systems, to name a few.
The processor 416 executes programs stored in the storage 428 to perform various functional applications and data processing, such as implementing the digital core image processing methods provided by the above-described embodiments of the present invention.
That is, the processing unit implements, when executing the program: acquiring a digital core training image; the digital core training images comprise digital core three-dimensional CT images with different resolutions; respectively training a generator model and a discriminator model for generating a confrontation network according to the digital core training images to obtain a three-dimensional digital core image generation confrontation network model; and generating a generator model of a confrontation network model according to the three-dimensional digital core image, and reconstructing the input digital core image to be optimized to obtain a target three-dimensional digital core image.
EXAMPLE five
An embodiment of the present invention further provides a computer storage medium storing a computer program, where the computer program is executed by a computer processor to perform the digital core image processing method according to any one of the above embodiments of the present invention: acquiring a digital core training image; the digital core training images comprise digital core three-dimensional CT images with different resolutions; respectively training a generator model and a discriminator model for generating a confrontation network according to the digital core training images to obtain a three-dimensional digital core image generation confrontation network model; and generating a generator model of a confrontation network model according to the three-dimensional digital core image, and reconstructing the input digital core image to be optimized to obtain a target three-dimensional digital core image.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable compact disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (10)
1. A digital core image processing method is characterized by comprising the following steps:
acquiring a digital core training image; the digital core training images comprise digital core three-dimensional CT images with different resolutions;
training a generator model and a discriminator model of a generative adversarial network, respectively, according to the digital core training images, to obtain a three-dimensional digital core image generative adversarial network model;
and reconstructing an input digital core image to be optimized according to the generator model of the three-dimensional digital core image generative adversarial network model, to obtain a target three-dimensional digital core image.
2. The method of claim 1, wherein the acquiring digital core training images comprises:
acquiring a two-dimensional CT image of at least one known rock;
performing image preprocessing and image enhancement processing on the two-dimensional CT image to obtain a two-dimensional CT processed image;
constructing a three-dimensional CT image according to the two-dimensional CT processed image;
and constructing the digital core training image according to the three-dimensional CT image.
3. The method of claim 2, wherein constructing the digital core training image from the three-dimensional CT image comprises:
segmenting the three-dimensional CT image to obtain a three-dimensional CT segmented image with a first resolution;
scaling the three-dimensional CT segmented image according to a set scaling value to obtain a three-dimensional CT scaled image with a second resolution;
and constructing the digital core training image according to the three-dimensional CT segmented image and the three-dimensional CT scaled image.
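The scaling step of claim 3 can be sketched as simple block averaging over a segmented volume. This is an illustrative sketch only: the scale factor of 2 and the averaging kernel are assumptions, and the function name is not from the patent.

```python
def downscale_2x(volume):
    """Average non-overlapping 2x2x2 blocks of a cubic volume (nested lists).

    Sketch of the 'set scaling value' step in claim 3, with a scale
    factor of 2 assumed for illustration."""
    n = len(volume)
    out = []
    for z in range(0, n, 2):
        plane = []
        for y in range(0, n, 2):
            row = []
            for x in range(0, n, 2):
                block = [volume[z + dz][y + dy][x + dx]
                         for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)]
                row.append(sum(block) / 8.0)  # mean of the 8 voxels
            plane.append(row)
        out.append(plane)
    return out

# A 4x4x4 binary (pore/grain) segmented volume downscales to 2x2x2.
seg = [[[(x + y + z) % 2 for x in range(4)] for y in range(4)] for z in range(4)]
low = downscale_2x(seg)
print(len(low), len(low[0]), len(low[0][0]))  # 2 2 2
```

Pairing each high-resolution segmented volume with its downscaled counterpart yields the two-resolution training set recited in the claim.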
4. The method of claim 1, wherein the training a generator model and a discriminator model of a generative adversarial network, respectively, according to the digital core training images comprises:
inputting the three-dimensional CT scaled image with the second resolution in the digital core training image into the generator model, and taking the image output by the generator model as a third-resolution image;
inputting the third-resolution image and the first-resolution three-dimensional CT segmented image in the digital core training image into the discriminator model simultaneously, and determining, according to the output value of the discriminator model, whether the third-resolution image matches the first-resolution three-dimensional CT segmented image, so as to train the discriminator model;
and after the discriminator model is successfully trained, fixing the discriminator model, and returning to perform the operation of inputting the three-dimensional CT scaled image with the second resolution in the digital core training image into the generator model, so as to continue training the generator model until the generator model is successfully trained.
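The alternating schedule of claim 4 (train the discriminator, freeze it, then train the generator against it, and repeat) can be sketched as follows. The toy loss sequences and function names are assumptions for illustration; real steps would perform gradient updates.

```python
def train_alternating(d_step, g_step, rounds=3):
    """Sketch of the claim-4 schedule: train the discriminator until stable,
    freeze it, then train the generator against the frozen discriminator;
    repeat. d_step/g_step are stand-ins that return the current losses."""
    history = []
    for _ in range(rounds):
        d_loss = d_step()  # update discriminator, generator held fixed
        g_loss = g_step()  # update generator, discriminator held fixed
        history.append((d_loss, g_loss))
    return history

# Toy stand-ins: losses that shrink each call, mimicking convergence.
d_losses = iter([0.9, 0.5, 0.2])
g_losses = iter([1.2, 0.7, 0.3])
hist = train_alternating(lambda: next(d_losses), lambda: next(g_losses))
print(hist[-1])  # (0.2, 0.3)
```

In practice each round would run until the respective stability condition of claim 6 is met rather than for a fixed number of calls.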
5. The method of claim 4, wherein the generator model is a residual U-net network fused with an attention mechanism and comprising a set number of network structures, and wherein the encoder and decoder of the attention-fused residual U-net network perform multi-scale image fusion through an attention gate structure;
wherein the attention gate structure is implemented based on the following formulas:

q_att = ψ^T · σ1(W_x^T · x_i + W_g^T · g_i + b_g) + b_ψ

α_i = σ2(q_att(x_i, g_i; Θ_att))

x̂_i = α_i · x_i

wherein q_att represents a transfer function; ψ^T, W_x^T and W_g^T each represent a 1 × 1 × 1 convolution operation; b_g and b_ψ represent offset coefficients; σ1 represents the ReLU function; x_i represents the input of the attention gate structure; g_i represents the gate signal providing high-level context information; α_i represents the attention coefficient; σ2 represents the sigmoid activation function; Θ_att represents the parameter set; and x̂_i represents the output of the attention gate structure.
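The attention gate of claim 5 can be sketched in scalar form. In this illustrative sketch the 1 × 1 × 1 convolutions collapse to scalar multiplications and all weight values are assumed toy numbers, not parameters from the patent.

```python
import math

def attention_gate(x, g, w_x, w_g, b_g, psi, b_psi):
    """Scalar sketch of a claim-5-style attention gate: ReLU on the combined
    input and gate signals, a sigmoid to form the attention coefficient,
    then re-weighting of the input."""
    relu = lambda v: max(0.0, v)                     # sigma_1 (ReLU)
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))   # sigma_2 (sigmoid)
    q = psi * relu(w_x * x + w_g * g + b_g) + b_psi  # transfer function q_att
    alpha = sigmoid(q)                               # attention coefficient in (0, 1)
    return alpha * x                                 # gated (re-weighted) input

out = attention_gate(x=2.0, g=1.0, w_x=0.5, w_g=0.5, b_g=0.0, psi=1.0, b_psi=0.0)
print(round(out, 3))
```

In the full network the same gating is applied voxel-wise to encoder feature maps, with the decoder providing the gate signal g_i for multi-scale fusion.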
6. The method of claim 4, wherein the determining that the discriminator model is successfully trained comprises:
calculating a discriminator loss of the third-resolution image on the discriminator model;
determining that the discriminator model is successfully trained under the condition that the discriminator loss is determined to satisfy a discriminator stability condition;
the determining that the training of the generator model is successful comprises:
calculating a pixel loss and a feature loss between the third-resolution image and the first-resolution three-dimensional CT segmented image, and an adversarial loss of the third-resolution image on the discriminator model;
determining that the generator model is successfully trained if it is determined that the pixel loss, the feature loss, and the adversarial loss satisfy a generator stability condition.
7. The method of claim 6, wherein the calculating a discriminator loss of the third-resolution image on the discriminator model comprises:
calculating the discriminator loss based on the following network loss function:

D_loss = (1/N) · Σ_{i=1}^{N} [p(SR_i) − p(HR_i)]

wherein D_loss represents the discriminator loss; N represents a natural number; p represents the discriminator function; i denotes the number of training times; SR_i represents the third-resolution image; and HR_i represents the first-resolution three-dimensional CT segmented image;
the determining that the discriminator loss satisfies a discriminator stability condition comprises:
determining that the discriminator loss satisfies the discriminator stability condition when the discriminator loss reaches a first set target value and stabilizes;
the calculating a pixel loss between the third-resolution image and the first-resolution three-dimensional CT segmented image comprises:
calculating the pixel loss based on the following pixel loss function:

L1 = (1/N) · Σ_{i=1}^{N} ‖HR_i − SR_i‖_1

wherein L1 represents the pixel loss;
the calculating a feature loss between the third-resolution image and the first-resolution three-dimensional CT segmented image comprises:
calculating the feature loss based on the following feature loss function:

VGG_loss = (1/N) · Σ_{i=1}^{N} ‖φ(HR_i) − φ(SR_i)‖_2^2

wherein VGG_loss represents the feature loss, and φ represents a deep learning network function;
the calculating an adversarial loss of the third-resolution image on the discriminator model comprises:
calculating the adversarial loss based on the following adversarial loss function:

Adv_loss = −(1/N) · Σ_{i=1}^{N} p(SR_i)

wherein Adv_loss represents the adversarial loss;
the determining that the pixel loss, the feature loss, and the adversarial loss satisfy a generator stability condition comprises:
constructing a generator total loss from the pixel loss, the feature loss, and the adversarial loss based on the following formula:

G_loss = L1 + α · VGG_loss + β · Adv_loss

wherein G_loss represents the generator total loss, and α and β represent weight coefficients; and
determining that the pixel loss, the feature loss, and the adversarial loss satisfy the generator stability condition when the generator total loss reaches a second set target value and stabilizes.
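The generator total loss of claim 7, G_loss = L1 + α·VGG_loss + β·Adv_loss, can be sketched on toy data. The specific forms of the three component losses, the stand-in feature extractor, the stand-in discriminator score, and the weight values α and β are all assumptions for illustration, not the patent's exact definitions.

```python
def generator_total_loss(sr, hr, feat, p, alpha=0.1, beta=0.01):
    """Sketch of G_loss = L1 + alpha*VGG_loss + beta*Adv_loss for one batch.

    sr/hr: lists of pixel values; feat: stand-in feature extractor (phi);
    p: stand-in discriminator score; alpha/beta weights are assumed values."""
    n = len(sr)
    l1 = sum(abs(h - s) for h, s in zip(hr, sr)) / n                  # pixel loss
    vgg = sum((feat(h) - feat(s)) ** 2 for h, s in zip(hr, sr)) / n   # feature loss
    adv = -sum(p(s) for s in sr) / n                                  # adversarial loss
    return l1 + alpha * vgg + beta * adv

feat = lambda v: 2.0 * v   # toy "deep feature" of a pixel
p = lambda v: 0.5          # toy discriminator score
loss = generator_total_loss(sr=[0.2, 0.4], hr=[0.3, 0.5], feat=feat, p=p)
print(round(loss, 4))
```

Training would monitor this combined value and declare the generator stable once it reaches and holds the second set target value.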
8. A digital core image processing device is characterized by comprising:
the training image acquisition module is used for acquiring a digital core training image; the digital core training images comprise digital core three-dimensional CT images with different resolutions;
the network model acquisition module is used for training a generator model and a discriminator model of a generative adversarial network, respectively, according to the digital core training images, to obtain a three-dimensional digital core image generative adversarial network model;
and the image reconstruction module is used for reconstructing an input digital core image to be optimized according to the generator model of the three-dimensional digital core image generative adversarial network model, to obtain a target three-dimensional digital core image.
9. A computer device, characterized in that the computer device comprises:
one or more processors;
storage means for storing one or more programs;
and when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the digital core image processing method of any one of claims 1-7.
10. A computer storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements a digital core image processing method as recited in any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011027637.1A CN112132959B (en) | 2020-09-25 | 2020-09-25 | Digital rock core image processing method and device, computer equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011027637.1A CN112132959B (en) | 2020-09-25 | 2020-09-25 | Digital rock core image processing method and device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112132959A true CN112132959A (en) | 2020-12-25 |
CN112132959B CN112132959B (en) | 2023-03-24 |
Family
ID=73840593
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011027637.1A Active CN112132959B (en) | 2020-09-25 | 2020-09-25 | Digital rock core image processing method and device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112132959B (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112801912A (en) * | 2021-02-09 | 2021-05-14 | 华南理工大学 | Face image restoration method, system, device and storage medium |
CN113570505A (en) * | 2021-09-24 | 2021-10-29 | 中国石油大学(华东) | Shale three-dimensional super-resolution digital core grading reconstruction method and system |
CN114169457A (en) * | 2021-12-13 | 2022-03-11 | 成都理工大学 | Storm rock logging identification method based on core reconstruction |
CN115049781A (en) * | 2022-05-11 | 2022-09-13 | 西南石油大学 | Shale digital core three-dimensional reconstruction method based on deep learning |
CN115115783A (en) * | 2022-07-08 | 2022-09-27 | 西南石油大学 | Digital core construction method and system for simulating shale matrix nano-micron pores |
CN115272156A (en) * | 2022-09-01 | 2022-11-01 | 中国海洋大学 | Oil and gas reservoir high-resolution wellbore imaging characterization method based on cyclic generation countermeasure network |
US20230184087A1 (en) * | 2021-12-13 | 2023-06-15 | Saudi Arabian Oil Company | Multi-modal and Multi-dimensional Geological Core Property Prediction using Unified Machine Learning Modeling |
CN117152373A (en) * | 2023-11-01 | 2023-12-01 | 中国石油大学(华东) | Core-level pore network model construction method considering cracks |
CN117635451A (en) * | 2023-10-12 | 2024-03-01 | 中国石油大学(华东) | Multi-source multi-scale digital core image fusion method based on attention guidance |
CN117745725A (en) * | 2024-02-20 | 2024-03-22 | 阿里巴巴达摩院(杭州)科技有限公司 | Image processing method, image processing model training method, three-dimensional medical image processing method, computing device, and storage medium |
CN117975174A (en) * | 2024-04-02 | 2024-05-03 | 西南石油大学 | Three-dimensional digital core reconstruction method based on improvement VQGAN |
WO2024092962A1 (en) * | 2022-10-31 | 2024-05-10 | 中国科学院深圳先进技术研究院 | Three-dimensional reconstruction method, device and computer storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109978762A (en) * | 2019-02-27 | 2019-07-05 | 南京信息工程大学 | A kind of super resolution ratio reconstruction method generating confrontation network based on condition |
US20190302290A1 (en) * | 2018-03-27 | 2019-10-03 | Westerngeco Llc | Generative adversarial network seismic data processor |
CN111402266A (en) * | 2020-03-13 | 2020-07-10 | 中国石油大学(华东) | Method and system for constructing digital core |
CN111461303A (en) * | 2020-03-31 | 2020-07-28 | 中国石油大学(北京) | Digital core reconstruction method and system based on generation of antagonistic neural network |
CN111583148A (en) * | 2020-05-07 | 2020-08-25 | 苏州闪掣智能科技有限公司 | Rock core image reconstruction method based on generation countermeasure network |
- 2020-09-25: CN202011027637.1A filed; granted as patent CN112132959B (status: active)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190302290A1 (en) * | 2018-03-27 | 2019-10-03 | Westerngeco Llc | Generative adversarial network seismic data processor |
CN109978762A (en) * | 2019-02-27 | 2019-07-05 | 南京信息工程大学 | A kind of super resolution ratio reconstruction method generating confrontation network based on condition |
CN111402266A (en) * | 2020-03-13 | 2020-07-10 | 中国石油大学(华东) | Method and system for constructing digital core |
CN111461303A (en) * | 2020-03-31 | 2020-07-28 | 中国石油大学(北京) | Digital core reconstruction method and system based on generation of antagonistic neural network |
CN111583148A (en) * | 2020-05-07 | 2020-08-25 | 苏州闪掣智能科技有限公司 | Rock core image reconstruction method based on generation countermeasure network |
Non-Patent Citations (2)
Title |
---|
HONGGANG CHEN et al.: "Super-resolution of real-world rock microcomputed tomography images using cycle-consistent generative adversarial networks", Physical Review E *
袁飘逸 et al.: "Super-Resolution Reconstruction Method of Images Based on Dual-Discriminator Generative Adversarial Network" (双判别器生成对抗网络图像的超分辨率重建方法), Laser & Optoelectronics Progress (激光与光电子学进展) *
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112801912B (en) * | 2021-02-09 | 2023-10-31 | 华南理工大学 | Face image restoration method, system, device and storage medium |
CN112801912A (en) * | 2021-02-09 | 2021-05-14 | 华南理工大学 | Face image restoration method, system, device and storage medium |
CN113570505A (en) * | 2021-09-24 | 2021-10-29 | 中国石油大学(华东) | Shale three-dimensional super-resolution digital core grading reconstruction method and system |
CN114169457A (en) * | 2021-12-13 | 2022-03-11 | 成都理工大学 | Storm rock logging identification method based on core reconstruction |
CN114169457B (en) * | 2021-12-13 | 2023-05-23 | 成都理工大学 | Storm rock logging identification method based on core reconstruction |
US20230184087A1 (en) * | 2021-12-13 | 2023-06-15 | Saudi Arabian Oil Company | Multi-modal and Multi-dimensional Geological Core Property Prediction using Unified Machine Learning Modeling |
CN115049781A (en) * | 2022-05-11 | 2022-09-13 | 西南石油大学 | Shale digital core three-dimensional reconstruction method based on deep learning |
CN115115783B (en) * | 2022-07-08 | 2023-08-15 | 西南石油大学 | Digital rock core construction method and system for simulating shale matrix nano-micro pores |
CN115115783A (en) * | 2022-07-08 | 2022-09-27 | 西南石油大学 | Digital core construction method and system for simulating shale matrix nano-micron pores |
CN115272156B (en) * | 2022-09-01 | 2023-06-30 | 中国海洋大学 | High-resolution wellbore imaging characterization method for oil and gas reservoirs based on cyclic generation countermeasure network |
CN115272156A (en) * | 2022-09-01 | 2022-11-01 | 中国海洋大学 | Oil and gas reservoir high-resolution wellbore imaging characterization method based on cyclic generation countermeasure network |
WO2024092962A1 (en) * | 2022-10-31 | 2024-05-10 | 中国科学院深圳先进技术研究院 | Three-dimensional reconstruction method, device and computer storage medium |
CN117635451A (en) * | 2023-10-12 | 2024-03-01 | 中国石油大学(华东) | Multi-source multi-scale digital core image fusion method based on attention guidance |
CN117152373A (en) * | 2023-11-01 | 2023-12-01 | 中国石油大学(华东) | Core-level pore network model construction method considering cracks |
CN117152373B (en) * | 2023-11-01 | 2024-02-02 | 中国石油大学(华东) | Core-level pore network model construction method considering cracks |
CN117745725A (en) * | 2024-02-20 | 2024-03-22 | 阿里巴巴达摩院(杭州)科技有限公司 | Image processing method, image processing model training method, three-dimensional medical image processing method, computing device, and storage medium |
CN117745725B (en) * | 2024-02-20 | 2024-05-14 | 阿里巴巴达摩院(杭州)科技有限公司 | Image processing method, image processing model training method, three-dimensional medical image processing method, computing device, and storage medium |
CN117975174A (en) * | 2024-04-02 | 2024-05-03 | 西南石油大学 | Three-dimensional digital core reconstruction method based on improvement VQGAN |
CN117975174B (en) * | 2024-04-02 | 2024-06-04 | 西南石油大学 | Three-dimensional digital core reconstruction method based on improvement VQGAN |
Also Published As
Publication number | Publication date |
---|---|
CN112132959B (en) | 2023-03-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112132959B (en) | Digital rock core image processing method and device, computer equipment and storage medium | |
CN110189255B (en) | Face detection method based on two-stage detection | |
CN111311704B (en) | Image reconstruction method, image reconstruction device, computer equipment and storage medium | |
CN110009013A (en) | Encoder training and characterization information extracting method and device | |
CN112541864A (en) | Image restoration method based on multi-scale generation type confrontation network model | |
CN115661144B (en) | Adaptive medical image segmentation method based on deformable U-Net | |
CN111275686B (en) | Method and device for generating medical image data for artificial neural network training | |
CN113298718A (en) | Single image super-resolution reconstruction method and system | |
CN113658040A (en) | Face super-resolution method based on prior information and attention fusion mechanism | |
Zhang et al. | Dense haze removal based on dynamic collaborative inference learning for remote sensing images | |
CN116739899A (en) | Image super-resolution reconstruction method based on SAUGAN network | |
CN115761358A (en) | Method for classifying myocardial fibrosis based on residual capsule network | |
CN117934824A (en) | Target region segmentation method and system for ultrasonic image and electronic equipment | |
CN117726540A (en) | Image denoising method for enhanced gate control converter | |
CN117437423A (en) | Weak supervision medical image segmentation method and device based on SAM collaborative learning and cross-layer feature aggregation enhancement | |
CN117974693A (en) | Image segmentation method, device, computer equipment and storage medium | |
CN111507950B (en) | Image segmentation method and device, electronic equipment and computer-readable storage medium | |
CN117593187A (en) | Remote sensing image super-resolution reconstruction method based on meta-learning and transducer | |
CN117710295A (en) | Image processing method, device, apparatus, medium, and program product | |
CN116310851B (en) | Remote sensing image change detection method | |
Chao et al. | Instance-aware image dehazing | |
CN117314751A (en) | Remote sensing image super-resolution reconstruction method based on generation type countermeasure network | |
CN117253034A (en) | Image semantic segmentation method and system based on differentiated context | |
CN112017113A (en) | Image processing method and device, model training method and device, equipment and medium | |
Chen et al. | Image denoising via generative adversarial networks with detail loss |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||