CN116612206B - Method and system for reducing CT scanning time by using convolutional neural network - Google Patents

Method and system for reducing CT scanning time by using convolutional neural network

Info

Publication number: CN116612206B (application CN202310883773.8A)
Authority: CN (China)
Prior art keywords: image, model, neural network, convolutional neural, quality
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202310883773.8A
Other languages: Chinese (zh)
Other versions: CN116612206A
Inventors: 叶旺全, 陈亮, 李承峰, 郑荣儿
Current Assignee: Ocean University of China (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Ocean University of China
Application filed by Ocean University of China
Priority: CN202310883773.8A (the priority date is an assumption and is not a legal conclusion)
Publication of application: CN116612206A
Application granted; publication of granted patent: CN116612206B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Abstract

The application belongs to the technical field of image processing, and discloses a method and a system for reducing CT scanning time by using a convolutional neural network. The method comprises the following steps: preparing a dataset of paired high- and low-quality images; building an image enhancement model based on a convolutional neural network; inputting the low-quality image into the image enhancement model, fitting nonlinear features in the image, and extracting high-frequency information representing image details; determining an optimizer and learning parameters, updating the network model parameters with a back-propagation algorithm, and improving the model's ability to learn the mapping between low-quality and high-quality images by minimizing a loss function; saving the image enhancement model parameters with the best PSNR result; and inputting the low-quality image into the validated image enhancement model to obtain the corresponding enhanced image. The application can raise the image quality of 100-projection reconstructed images to that of 1000-projection reconstructed images, reducing scanning time by a factor of 10 while maintaining good image quality.

Description

Method and system for reducing CT scanning time by using convolutional neural network
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a method and a system for reducing CT scanning time by using a convolutional neural network.
Background
Natural gas hydrate (commonly called "combustible ice", hereinafter "hydrate") is a solid substance formed from natural gas and water under high pressure and low temperature (about -10 to 25 °C and about 3 to 30 MPa). It generally resembles opaque water ice, is widely distributed in deep-sea sediments and land permafrost zones, and is a clean energy source with abundant reserves and huge development potential. In recent years, hydrate research has shifted from resource investigation to integrated exploration and development, and achieving safe, controllable, and efficient exploitation of hydrates is a hotspot of current global ocean-technology competition. Hydrate exploitation is essentially a process of solid-liquid-gas phase transformation and mass and heat transfer in the hydrate reservoir: gas and water generated by decomposition permeate through reservoir pores and cracks and are produced at the production well under the drive of the pressure difference. The reservoir skeleton deformation law, fluid seepage law, and control mechanism in this process provide important guidance for hydrate exploration and development, and can be obtained through laboratory analysis.
Simulation experiments on the growth and decomposition of hydrate in sediment have been carried out in the prior art, and the mechanisms of hydrate formation and recovery have been studied in depth. However, because of phase change and gas-water migration during hydrate growth and decomposition, the sediment skeleton once filled by the now-lost solid hydrate is highly prone to damage and deformation, making the internal pore structure of the reservoir extremely complex; the seepage characteristics of the reservoir are subject to the dual effects of hydrate phase change and skeleton deformation and exhibit complex dynamic behavior. Observing the growth and decomposition of hydrate in sediment pores from a microscopic point of view therefore helps reveal these laws. X-ray computed tomography (X-CT), a technique for visually and nondestructively analyzing the spatial structure of a substance, has been widely used to observe the three-dimensional internal structure of hydrate-bearing deposits. However, owing to the observation-time limitation of CT technology, the spatial phase-transformation law of hydrate growth and decomposition in sediment pores is still not clearly understood.
Common CT instruments fall into two major categories: medical and industrial. Medical CT instruments have low imaging precision and are often used to measure large hydrate samples, but they can only roughly reflect the generation and decomposition of large-size hydrate and can hardly represent the distribution of small-size hydrate dispersed in the pores of a porous medium, so the porosity of porous-medium hydrate cannot be measured accurately. Industrial CT instruments have high measurement precision, typically reaching the micron level; they can quantitatively characterize the spatial distribution of hydrate formation and even yield the hydrate porosity, the pore channels, and values for the permeability. However, obtaining a high-quality CT image with an industrial CT instrument requires a long scan, usually on the order of hours, and with such long single-scan times, accurate observation of the hydrate synthesis and decomposition process is limited.
Through the above analysis, the problems and defects of the prior art are as follows: medical CT instruments are often applied to monitoring the dynamic generation and decomposition of hydrate, but their low imaging resolution cannot capture the detailed information of the growth and decomposition process; industrial CT instruments, with their high imaging resolution, are often used to quantitatively study the spatial distribution characteristics of hydrate formation, but their long imaging times prevent accurate tracking of the state of hydrate growth and decomposition.
Disclosure of Invention
In order to overcome the problems in the related art, the disclosed embodiments of the present application provide a method and system for reducing CT scan time using convolutional neural networks. The application aims to shorten the scanning time of an industrial CT instrument and simultaneously maintain good imaging quality, and has important significance for analyzing the space phase change rule of the growth and decomposition of the hydrate.
The technical scheme is as follows: the method for reducing CT scanning time by using a convolutional neural network comprises constructing an image enhancement model based on a convolutional neural network, generating a dataset of paired high- and low-quality CT images from a real scene, learning the mapping between high- and low-quality CT images with a deep convolutional neural network, and then applying the trained image enhancement model to low-quality CT images to enhance their image quality; the method specifically comprises the following steps:
s1, data set preparation: preparing a high-low quality image dataset;
s2, model design: building a convolutional neural network model, wherein the convolutional neural network model is an end-to-end image enhancement model based on a convolutional neural network; its input is a low-quality CT image and its output is the enhanced CT image;
s3, extracting features: inputting the low-quality image into an image enhancement model, fitting nonlinear characteristics in the image through a convolutional neural network, and extracting high-frequency information representing image details;
s4, model training: determining an optimizer and learning parameters, updating network model parameters by using an Adam algorithm, and improving the capability of the model to learn the mapping relation between the low-projection-number reconstructed CT image and the high-projection-number reconstructed CT image by minimizing a root mean square error loss function;
s5, model verification: according to the performance of the trained image enhancement model on the verification set, saving the image enhancement model parameters with the highest PSNR value as the final model;
s6, image reconstruction: inputting the CT image reconstructed from a low number of projections into the validated image enhancement model to obtain the corresponding enhanced image.
Further, the high-low quality CT image dataset comprises:
high quality CT image: CT images reconstructed by 1000 projections;
low quality CT image: CT images reconstructed with 100 projections.
In step S1, preparing a high-low quality image dataset comprises:
the image data of 1000 projections of the sample are first acquired in full using the image data acquisition process, and then projection image data at different angles are extracted at equal intervals from the 1000 projection images using the CT image reconstruction process, for image reconstruction at different projection numbers.
Further, the hydrate CT experimental process that supplies the projection data includes:
firstly, fully mixing sediment and water in a reaction kettle;
then, methane gas is injected into the reaction kettle, and the pressure and temperature of the kettle are controlled so that hydrate grows and decomposes in the sediment pores; each time the pressure decreases or increases by a set increment, a CT scan of the sample in the kettle is performed, with the projection number set to 1000, i.e., projection data at 1000 angles are acquired over one full rotation of the sample in each scan;
and finally, a two-dimensional CT image is reconstructed from the acquired original projection data.
Further, the image data acquisition process includes: after the number of projections is set, projection data at each angle are collected using the automatic acquisition software of the CT instrument.
Further, the two-dimensional CT image reconstruction includes:
for X-ray images for CT reconstruction, photons reaching the detector are calculatedPhoton +.>And taking the negative logarithm, the expression is:
all the acquired original projection images are converted into density images; sinograms of each slice are then obtained from the density images, and a CT reconstruction toolbox is used to perform two-dimensional CT reconstruction on the sinograms, so that all tomographic reconstructed images within the CT imaging range are obtained.
Further, obtaining the photons \(I\) reaching the detector and the photons \(I_0\) of the X-ray source comprises: when projection data are acquired, blank areas are left on both sides of the detector; the photons received in the blank areas do not pass through any object, and the photon count \(I_0\) of the X-ray source is obtained by averaging over the blank areas.
In step S2, the convolutional neural network model uses a loss function designed from prior knowledge, namely the root mean square error loss function (MSE), whose expression is:

\[ L(\theta) = \frac{1}{N} \sum_{i=1}^{N} \left\| F(X_i; \theta) - Y_i \right\|^2 \]

where \(L\) represents the loss function, \(\theta\) the model parameters, \(X\) the model input, \(Y\) the model output target (the corresponding high-quality CT image), and \(F\) the mapping computed by the network;

the convolutional neural network model converges during training by reducing the MSE error between the low-quality CT image and the corresponding high-quality CT image.
Further, the training process is divided into two phases:
in the first phase, data propagate from the low level to the high level, i.e., the forward propagation phase;
in the second phase, when the result obtained by forward propagation does not match the expected result, the error is propagated from the high level back to the low level for training, i.e., the back-propagation phase. In the forward propagation phase, the convolutional neural network processes the input data layer by layer through operations such as convolution and pooling, finally producing the output result; in the back-propagation phase, the errors are computed and back-propagated, and the weights and biases in the network are updated.
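The two phases can be illustrated with a minimal single-layer model trained by plain gradient descent (an illustrative sketch under simplified assumptions; the actual convolutional architecture, Adam optimizer, and CT data are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))      # stand-in "low-quality" inputs
W_true = rng.normal(size=(8, 8))
Y = X @ W_true                    # stand-in "high-quality" targets

W = np.zeros((8, 8))              # weights to be learned
lr = 0.05
init_mse = np.mean((X @ W - Y) ** 2)
for _ in range(2000):
    pred = X @ W                  # forward propagation: input -> output
    err = pred - Y                # compare with the expected result
    grad = X.T @ err / len(X)     # back propagation: error -> weight gradients
    W -= lr * grad                # update the weights
final_mse = np.mean((X @ W - Y) ** 2)
```

Each iteration runs one forward pass and one backward pass; the loss shrinks as the weights converge toward the true mapping.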
Another object of the present application is to provide a system for reducing CT scan time using a convolutional neural network, which implements the method for reducing CT scan time using a convolutional neural network, the system comprising:
a dataset preparation module for preparing a high-low quality image dataset;
the model design module is used for building a convolutional neural network model, wherein the convolutional neural network model is an end-to-end image enhancement model based on a convolutional neural network; its input is a low-quality CT image and its output is the enhanced CT image;
the feature extraction module is used for inputting the low-quality image into the image enhancement model, fitting nonlinear features in the image through a convolutional neural network, and extracting high-frequency information representing image details;
the model training module is used for determining an optimizer and learning parameters, updating network model parameters by using an Adam algorithm, and improving the capability of the model to learn the mapping relation between the low-projection-number reconstructed CT image and the high-projection-number reconstructed CT image by minimizing a root mean square error loss function;
the model verification module is used for storing the image enhancement model parameter with the highest PSNR value as a final model according to the performance result of the trained image enhancement model on the verification set;
and the image reconstruction module is used for inputting the CT image reconstructed from a low number of projections into the validated image enhancement model and obtaining the corresponding enhanced image.
By combining all the technical schemes, the advantages and positive effects of the application are as follows: 100 projections are extracted at equal intervals from the 1000 projection data, so that a perfectly matched high-low quality image dataset is obtained by reconstruction, solving the image-registration problem of the deep neural network's training dataset. The application improves the imaging precision of CT technology by software means: the trained deep convolutional neural network can raise the image quality of 100-projection reconstructed images to that of 1000-projection reconstructed images, reducing scanning time by a factor of 10 while maintaining good image quality. The proposed method shortens the time needed to observe changes in a hydrate sample by CT, providing a new technical means for studying the dynamic change process of hydrate. The application can obtain high-quality CT images in a short time at extremely low cost, and can observe more details of a dynamically changing sample in a shorter time.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure;
FIG. 1 is a flowchart of a method for reducing CT scan time using a convolutional neural network, provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of the best CT image quality, obtained with 1000 projections (the maximum projection number achievable by the laboratory CT instrument), provided by an embodiment of the present application;
FIG. 3 shows the raw projection image data of an alumina sphere sample obtained in an embodiment of the present application;
FIG. 4 shows the calculated sample density image provided by an embodiment of the present application;
FIG. 5 is the sinogram of a middle slice of the alumina sphere sample provided by an embodiment of the present application;
FIG. 6 is the reconstructed image of a middle slice of the alumina sphere sample provided by an embodiment of the present application;
FIG. 7 shows the hydrate saturation curves of the 4 selected hydrate growth processes provided by the embodiments of the present application;
FIG. 8 shows the hydrate content at 5 °C and 3.2 MPa in decomposition stage 1, provided by an embodiment of the present application;
FIG. 9 shows the hydrate content at 3 °C and 3.3 MPa in decomposition stage 2, provided by an embodiment of the present application;
FIG. 10 shows the hydrate content at 3 °C and 3 MPa in decomposition stage 3, provided by an embodiment of the present application;
FIG. 11 is a calculated hydrate saturation plot provided by an embodiment of the present application;
FIG. 12 is a PSNR histogram of a trained model provided by an embodiment of the present application on a quartz sand hydrate validation set;
FIG. 13 compares the pixel intensity distribution of a quartz sand hydrate CT image before and after image-quality improvement, provided by an embodiment of the present application;
fig. 14 is a comparison chart of pixel intensity distribution before and after improving image quality of a Berea sandstone hydrate CT image provided by an embodiment of the present application;
fig. 15 is a pixel intensity distribution diagram of a CT image of a marine sediment hydrate decomposition stage provided by an embodiment of the present application.
Detailed Description
In order that the above objects, features and advantages of the application may be readily understood, a more particular description of the application is given below with reference to the accompanying drawings. In the following description, numerous specific details are set forth to provide a thorough understanding of the present application. The application may, however, be embodied in many forms other than those described herein, and those skilled in the art can make similar modifications without departing from its spirit; the application is therefore not limited to the specific embodiments disclosed below.
The method for reducing CT scanning time by using a convolutional neural network provided by the embodiment of the application improves, at the software-algorithm level, the quality of CT images reconstructed from short scans on an industrial CT instrument. The algorithm is based on an image enhancement model built from a convolutional neural network: using a dataset of paired high- and low-quality CT images constructed from a real scene (high quality: CT images reconstructed from 1000 projections; low quality: CT images reconstructed from 100 projections), the deep convolutional neural network learns the mapping between high- and low-quality CT images. The trained image enhancement model is then applied to low-quality CT images, enhancing their image quality.
As shown in fig. 1, the method specifically comprises the following steps:
s1, data set preparation: preparing a high-low quality image dataset;
s2, model design: building a convolutional neural network model, wherein the network model is an end-to-end image enhancement model based on a convolutional neural network; the model input is a low-quality CT image, the role of the model is image feature extraction and image reconstruction, and the model output is the enhanced CT image. A loss function is designed from prior knowledge, namely the root mean square error loss function (MSE), and the model is converged by reducing the MSE error between the low-quality CT image and the corresponding high-quality CT image during training, giving the constructed model the ability to bring the input low-quality CT image close to the high-quality CT image;
the specific training process is illustratively divided into two phases, the first phase being a phase in which data is propagated from a low level to a high level, i.e., a forward propagation phase. Another phase is a phase of propagation training from a high level to the bottom layer, i.e., a back propagation phase, when the result of the forward propagation does not match the expected result. In the forward propagation stage, the convolutional neural network processes input data layer by layer through operations such as convolution, pooling and the like, and finally an output result is obtained. In the back propagation phase, including calculating and back-transferring errors, and updating weights and offsets in the network;
s3, extracting features: inputting the low-quality image into an image enhancement model, fitting nonlinear characteristics in the image through a convolutional neural network, and extracting high-frequency information representing image details;
it will be appreciated that the convolutional neural network performs convolution operations on the input image with convolution kernels, thereby extracting image features. In convolutional neural networks, the fitting of nonlinear features is accomplished by activation functions, and the extraction of high-frequency information is controlled by the size and stride of the convolution kernel: when the convolution kernel is small, the detail information of the image, i.e., the high-frequency information, can be extracted;
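As a concrete illustration of small kernels responding to detail, a hand-written 3×3 high-pass (Laplacian) kernel, used here purely as an example rather than as one of the model's learned kernels, gives zero response on a flat region and a strong response at an edge:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D convolution (stride 1, no padding)."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# 3x3 Laplacian: a small high-pass kernel (illustrative, not a learned kernel).
laplacian = np.array([[0., 1., 0.],
                      [1., -4., 1.],
                      [0., 1., 0.]])

flat = np.ones((8, 8))                      # constant region: no detail
edge = np.zeros((8, 8)); edge[:, 4:] = 1.0  # vertical step edge: pure detail

flat_resp = conv2d(flat, laplacian)  # zero everywhere
edge_resp = conv2d(edge, laplacian)  # non-zero near the edge
```

A learned enhancement network applies many such small kernels, followed by nonlinear activations, to recover image detail.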
s4, model training: determining an optimizer and learning parameters, updating network model parameters by using an Adam algorithm, and improving the capability of the model to learn the mapping relation between the low-projection-number reconstructed CT image and the high-projection-number reconstructed CT image by minimizing a root mean square error loss function;
it will be appreciated that the gist of the present application is to enhance the image quality of a CT image reconstructed from 100 projections by means of an image enhancement method, thereby reducing the time of CT scanning;
s5, model verification: according to the performance results (PSNR, etc.) of the trained image enhancement model on the verification set, the image enhancement model parameters with the best PSNR result are saved; these parameters give the best reconstruction the trained image enhancement model can achieve;
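The PSNR criterion used for model selection can be computed with its standard definition (the `data_range` of 1.0 assumes images normalized to [0, 1]; the epoch values below are hypothetical):

```python
import numpy as np

def psnr(pred, target, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the target."""
    mse = np.mean((pred - target) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

# Keep the parameters whose validation PSNR is highest (hypothetical values).
history = [("epoch_1", 28.4), ("epoch_2", 31.2), ("epoch_3", 30.7)]
best = max(history, key=lambda item: item[1])
```

After each validation pass, only the checkpoint with the highest PSNR is kept as the final model.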
s6, image reconstruction: the low-quality image is input into the validated image enhancement model to obtain the corresponding enhanced image.
Example 1, how the high-low quality CT image dataset is constructed: the basic principle of CT imaging is that a detector receives radiation-attenuation signals that have passed through the object at many angles, converts them into projection data, and then reconstructs an image from the projection data to obtain the CT reconstructed image. CT reconstruction requires multi-angle projection data: the sample rotates uniformly through 360 degrees in a set number of steps, and the instrument acquires raw projection data of the sample once per step. The spatial resolution of the CT two-dimensional reconstructed images is the same (1000 × 1000) for all projection numbers, but the density resolution differs, so the image quality varies widely; in theory, the number of raw projections (Projection Number, PN) is proportional to both the total scanning time and the quality of the CT reconstructed image. In practice, as shown in fig. 2, in the hydrate CT experimental flow, 1000 projections is the maximum projection number achievable by the laboratory CT instrument and therefore yields the best CT image quality.
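Since total scan time scales roughly linearly with PN, the expected speedup from cutting the projection count follows directly (the per-projection time below is an assumed illustrative value, not an instrument specification):

```python
# Scan time scales linearly with projection number (PN).
full_pn, reduced_pn = 1000, 100
seconds_per_projection = 4            # assumed illustrative value
full_scan_s = full_pn * seconds_per_projection     # hour-level scan
fast_scan_s = reduced_pn * seconds_per_projection  # minutes-level scan
speedup = full_scan_s / fast_scan_s                # 10x reduction
```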
In order to obtain perfectly matched CT images reconstructed at different projection numbers, the image data of 1000 projections of the sample are first acquired in full using the image data acquisition process; then, using the CT image reconstruction process, projection image data at different angles are extracted at equal intervals from the 1000 projection images for reconstruction at different projection numbers. For example, to reconstruct a CT image from 100 projections, only 100 projection images need to be extracted at equal intervals from the corresponding 1000 projection images.
Exemplary hydrate CT experimental procedures, image data acquisition procedures, and CT image reconstruction procedures include:
hydrate CT experimental procedure: first, sediment and water are fully mixed in a reaction kettle; then methane gas is injected into the kettle, and its pressure and temperature are controlled so that hydrate grows and decomposes in the sediment pores; each time the pressure decreases or increases by a set number of MPa, a CT scan of the sample in the kettle is performed, with the projection number set to 1000, i.e., projection data at 1000 angles are acquired over one full rotation of the sample in each scan; finally, a two-dimensional CT image is reconstructed from the acquired raw projection data.
Image data acquisition flow: the image data are mainly the projection data at each angle; after the number of projections is set manually, the data are collected automatically by the commercial software of the CT instrument;
CT image reconstruction procedure (taking the raw projection data of the alumina sphere sample as an example; the steps for other samples are identical):
what is needed for CT reconstruction is that the projection image for each angle consists of a line integral of the object, i.e. the value of each pixel in the projection image for each angle should represent the sum of the object densities on the rays between the X-ray source and the detector. However, the detector board of the instrument receives the intensity of the X-ray photons remaining after passing through the object, and the original projection image is changed to a density image of the object by the equation (1).Is a projection representing the sum of the densities along an X-ray. Therefore, in order to prepare an X-ray image for CT reconstruction, it is necessary to calculate the photon +.>Photon +.>And taking a negative logarithm of the ratio.
(1)。
Fig. 3 is raw projection image data of an obtained alumina sphere sample, the image size being 1000×1000, each pixel being 16 bits. Raw projection data represents photons reaching the detectorTherefore, the number of photons passing through the middle region with the sample is smallThe gray value of the image is low, the number of photons passing through the outer region without the sample is large, and the gray value of the image is high. For more convenient acquisition of photons of the X-ray source +.>Leaving a certain blank area on both sides of the detector when acquiring projection data, wherein photons received in the area do not pass through any object, and averaging the area to obtain photons of the X-ray source->
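The conversion from raw photon counts to a density (line-integral) image, including the blank-area estimate of I0, can be sketched as follows (the blank-strip width of 50 pixels and the synthetic attenuation values are assumptions for illustration):

```python
import numpy as np

def to_density_image(raw, blank_width=50):
    """Convert raw photon counts I to line integrals p = -ln(I / I0),
    estimating I0 by averaging blank strips at both detector edges."""
    blank = np.concatenate([raw[:, :blank_width], raw[:, -blank_width:]], axis=1)
    i0 = blank.mean()           # source photon count I0
    return -np.log(raw / i0)

# Synthetic projection: full flux at the edges, attenuated in the centre.
raw = np.full((4, 1000), 10000.0)
raw[:, 400:600] = 10000.0 * np.exp(-2.0)  # sample attenuates the beam
density = to_density_image(raw)
```

In the result, the blank edges map to zero density and the attenuated centre recovers the line-integral value of 2.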
FIG. 4 is the sample density image calculated by equation (1); in regions where the sample is present, the density is higher and the image gray values are larger.
All the acquired original projection images are converted into density images; sinograms of each slice are then obtained from the density images, and the CT reconstruction toolbox ASTRA-Toolbox is used to perform two-dimensional CT reconstruction on the sinograms, yielding all tomographic reconstructed images within the CT imaging range. Fig. 5 is the sinogram of a middle slice of the alumina sphere sample, and fig. 6 is the reconstructed image of that slice; each reconstructed image is 1000 × 1000 pixels.
An exemplary procedure for CT image reconstruction at different projection numbers:
The CT image reconstruction flow above applies to CT image reconstruction with any number of projections. For the same sample, CT reconstructed images at different projection numbers are usually acquired in separate scans. For example, to reconstruct with 100 and with 1000 projections, the sample is first CT-scanned to collect projection data at 100 angles over one rotation and those data are reconstructed; the sample is then scanned again to collect projection data at 1000 angles over one rotation, and those data are reconstructed in turn. This has obvious disadvantages: the features of the CT images reconstructed from different scans do not match exactly, the operation is complex and time-consuming, and for a hydrate sample that changes dynamically the operation is even more difficult and the mismatch between scans is worse. Therefore, to simplify obtaining matched CT images reconstructed at different projection numbers, the application optimizes the image acquisition process: only 1000 projection images of the sample need to be acquired; 100 angles are then extracted uniformly from the 1000 projections to reconstruct the 100-projection CT image, 200 angles for the 200-projection CT image, and so on.
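The uniform extraction of a subset of angles from one full scan can be sketched as:

```python
import numpy as np

def subsample_projections(projections, n_keep):
    """Uniformly extract n_keep angles from a full set of projections.

    projections: array of shape (n_total, H, W) acquired over one rotation.
    With n_total = 1000 and n_keep = 100 this keeps every 10th projection,
    so the low- and high-projection reconstructions come from the very
    same scan of the same (possibly dynamically changing) sample.
    """
    n_total = projections.shape[0]
    idx = np.round(np.linspace(0, n_total, n_keep, endpoint=False)).astype(int)
    return projections[idx]
```

Because both data sets originate from a single acquisition, the reconstructed image pairs are matched pixel for pixel, which is what makes them usable as training pairs.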
Embodiment 2 the system for reducing CT scan time using convolutional neural network provided in the embodiment of the present application includes:
a dataset preparation module for preparing a high-low quality image dataset;
the model design module is used for building a convolutional neural network model, and the network model is an end-to-end image enhancement model based on the convolutional neural network;
the feature extraction module is used for inputting the low-quality image into the image enhancement model, fitting nonlinear features in the image and extracting high-frequency information representing image details;
the model training module is used for determining an optimizer and learning parameters, updating network model parameters by using a back propagation algorithm, and improving the capability of the model to learn the mapping relation between the low-quality image and the high-quality image by minimizing a loss function;
the model verification module is used for storing the image enhancement model parameters with the best PSNR result according to the performance result of the trained image enhancement model on the verification set;
and the image reconstruction module is used for inputting the low-quality image into the image enhancement model which is verified, and obtaining a corresponding enhanced image.
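The model verification module above selects checkpoints by PSNR; a minimal sketch of that metric follows (the 16-bit default data range is an assumption based on the projection format described earlier):

```python
import numpy as np

def psnr(reference, test, data_range=65535.0):
    """Peak signal-to-noise ratio between an enhanced image and its
    high-quality reference; higher is better. data_range is the maximum
    possible pixel value (65535 for 16-bit images)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
```

During validation, the checkpoint whose enhanced images score the highest average PSNR against the PN1000 references would be kept as the final model.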
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not described or illustrated in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
The content of the information interaction and the execution process between the devices/units and the like is based on the same conception as the method embodiment of the present application, and specific functions and technical effects brought by the content can be referred to in the method embodiment section, and will not be described herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. For specific working processes of the units and modules in the system, reference may be made to corresponding processes in the foregoing method embodiments.
Based on the technical solutions described in the embodiments of the present application, the following application examples may be further proposed.
According to an embodiment of the present application, there is also provided a computer apparatus including: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, which when executed by the processor performs the steps of any of the various method embodiments described above.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, performs the steps of the respective method embodiments described above.
The embodiment of the application also provides an information data processing terminal which, when run on an electronic device, provides a user input interface to implement the steps in the above method embodiments; the information data processing terminal includes, but is not limited to, a mobile phone, a computer, or a switch.
The embodiment of the application also provides a server which, when run on an electronic device, provides a user input interface and implements the steps in the above method embodiments.
Embodiments of the present application also provide a computer program product which, when run on an electronic device, causes the electronic device to perform the steps of the method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk.
To further demonstrate the positive effects of the above embodiments, the present application was based on the above technical solutions to perform the following experiments.
Experiment 1. Data were collected for four samples in total: one alumina sphere sample of regular shape; two laboratory-synthesized hydrate samples, namely quartz sand hydrate (QuartzSand_Hyd, sample number S1) and Berea sandstone hydrate (Berea Sandstone_Hyd, sample number S2); and one marine sediment sample (SeaSediment_Hyd, sample number S3). The quartz sand sample has a larger sediment grain size and a more regular sediment skeleton outline, which favors hydrate formation; the Berea sandstone mainly comprises fine-grained iron dolomite and authigenic kaolinite with a small amount of mud volcanic rock, and its pores are relatively well developed with good connectivity; the marine sediment is a muddy fine silt with a wide grain-size range, and contains not only hydrate but also microorganisms. The aim of the application is to enhance the 100-projection CT images of the three hydrate samples so that their image quality is comparable to that of the corresponding 1000-projection CT images.
The results show that, in the image enhancement results of the three hydrate samples, the hydrate saturation comparison errors are small, indicating that the results are reliable.
The hydrate saturation is defined as the ratio of the volume of hydrate to the total volume of hydrate, aqueous solution, and pore space occupied by methane gas. The number of voxels of each component is counted from the CT images, and the hydrate saturation is obtained from these counts.
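The voxel-counting step can be sketched as follows; the integer label convention for the segmented digital core is hypothetical, not taken from the text:

```python
import numpy as np

# hypothetical label values for the phase-segmented digital core
GAS, WATER, HYDRATE, SAND = 0, 1, 2, 3

def hydrate_saturation(labels):
    """Hydrate saturation = hydrate voxels divided by the voxels of the
    whole pore space (hydrate + aqueous solution + methane gas); the
    sand skeleton is excluded from the denominator."""
    n_hyd = int(np.count_nonzero(labels == HYDRATE))
    n_pore = n_hyd + int(np.count_nonzero((labels == GAS) | (labels == WATER)))
    return n_hyd / n_pore
```

Running this on cores segmented from the PN1000 and PN100_EIQ images gives the two saturation series that are compared in the experiments.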
A trained model was used to enhance the PN100 images of the quartz sand hydrate growth stage. Four groups of reconstructed images with large differences in hydrate content were selected to construct hydrate digital cores (cores built from the PN1000 images and from the PN100_EIQ images), and sub-volumes of size 300 × 300 were cropped from the original digital cores to calculate the hydrate saturation. The digital cores were phase-segmented and their pores extracted; in each group the analyzed components comprise methane gas, hydrate, aqueous solution, and quartz sand.
Fig. 7 compares the hydrate saturation of the four selected groups of the hydrate growth process: dark black is the hydrate saturation calculated from the digital core built from the PN1000 images, and light gray is that calculated from the digital core built from the PN100_EIQ images; the average error is 4.43%.
A trained model was used to enhance the PN100 images of the Berea sandstone hydrate decomposition stage, and three groups of reconstructed images with large differences in hydrate content were selected to construct hydrate digital cores (cores built from the PN100, PN100_EIQ, and PN1000 images). The XZ-direction slice images during decomposition include slices of the three-dimensional digital cores constructed from the PN100, PN100_EIQ, and PN1000 images. By extracting hydrate and water from the XZ slices of the PN100_EIQ and PN1000 images, the decomposition of the hydrate can be clearly observed, and the hydrate morphology in the PN100_EIQ images is substantially identical to that in the PN1000 images. The change of hydrate content during the Berea sandstone hydrate decomposition is shown in Fig. 8 (decomposition stage 1, 3.2 MPa and 5 °C), Fig. 9 (decomposition stage 2, 3.3 MPa and 3 °C), and Fig. 10 (decomposition stage 3, 3.5 MPa and 3 °C).
The calculated hydrate saturation is shown in Fig. 11, with an average error of 5.16%.
These results show that the CT acquisition time can be shortened by more than a factor of 10, making it convenient to observe faster reaction processes.
Experiment 2: rapid-scan CT image quality enhancement for hydrate-bearing sediments. The SRDenseNet network model was trained on complete image pairs of the three different hydrate samples to bring the PN100 image quality as close as possible to the PN1000 image quality. The PN100_EIQ images of the three hydrate samples were then compared with the PN1000 images, including visual comparison, locally magnified image comparison, and local pixel-value comparison. Finally, the hydrate saturation parameters were calculated and compared.
(1) Quartz sand hydrate:
Fig. 12 is a PSNR histogram of the trained model on the quartz sand hydrate validation set.
Visual comparison of the PN100 image, the PN100_EIQ image, and the corresponding PN1000 image shows that the deep convolutional neural network model restores the contours of the quartz sand particles and the distribution of the pores (occupied by methane gas) well while reducing noise interference; the image quality of PN100 is clearly improved, and the detail information in the image is restored.
Fig. 13 compares the pixel intensity distribution of the quartz sand hydrate CT image before and after the image quality improvement. As can be seen from Fig. 13, the PN100 image is strongly affected by noise and its pixel-value distribution cannot reveal the distribution of material along the path, whereas the pixel-value distribution trends of the PN100_EIQ image and the corresponding PN1000 image along the path are basically consistent; the pixel-value curve of the PN100_EIQ image is smoother than that of PN1000 owing to the smoothing effect of the MSE loss function.
Judging from the XZ slices of the three-dimensional reconstruction at the hydrate growth completion stage, the overall quality of the volume reconstructed from the PN100_EIQ XY-direction slice images is greatly improved over the PN100 volume, even for the hydrate formed on irregular objects.
(2) Berea sandstone hydrate:
In visual comparison of the images, together with the PSNR histogram of the trained model on the Berea sandstone hydrate validation set, the deep convolutional neural network model likewise restores most of the quartz sand particles, pores, and sediment skeleton in the PN100 images. However, because the internal structure of Berea sandstone is irregular and the sediment grain size is small, the pore-size distribution is wide; many pores in the small-diameter range cannot be distinguished in the PN100 image, and therefore the PN100_EIQ image cannot 'forge' image details that are indistinguishable in the PN100 image. Fig. 14 compares the pixel intensity distribution of the Berea sandstone hydrate CT image before and after the image quality improvement.
(3) Marine sediment:
Using the PSNR histogram of the trained model on the validation set: in the CT image comparison of the marine sediment at the non-decomposition stage, the components of the marine sediment are complex, containing methane gas, hydrate, and various sandstone sediments, and the sediment grain-size range is large.
This comparison does not go into the material distribution inside the marine sediment; only an image-level comparison is performed. Owing to the experimental parameters, obvious ring artifacts appear in the CT reconstructed images, but these ring artifacts are greatly reduced in the PN100_EIQ image thanks to the smoothing effect of the MSE loss function.
In the comparison before and after improving the image quality of the marine sediment hydrate CT images at the non-decomposition stage, only macropores and sandstone particles can be distinguished in the PN100 image, while other material components are difficult to distinguish; the PN100_EIQ image restores most of the detail information that is recoverable from the PN100 image. However, as with the Berea sandstone hydrate sample, the method cannot 'forge' information that does not exist in the PN100 image: components that cannot be distinguished in the PN100 image remain indistinguishable, whereas substances with clear contour features in the PN100 image are well restored in the PN100_EIQ image.
In the CT image comparison of the marine sediment hydrate decomposition stage, the decomposition of the hydrate produces more pronounced voids (occupied by methane gas) inside the sample. For such images with obvious features, a good image enhancement effect is obtained. Fig. 15 shows the pixel intensity distribution of a CT image at the marine sediment hydrate decomposition stage.
The image enhancement results for the alumina sphere, quartz sand hydrate, Berea sandstone hydrate, and marine sediment samples show the following. For material components with obvious features in the CT image, such as large sandstone grains, macropores, and the sediment skeleton, the PN100 image quality is poor but the contour features are still preserved, and the deep convolutional network model restores this feature information well while improving the image quality. For components whose features are not obvious, such as very small pore structures and fine sandstone particles that are hard to distinguish, the PN100 image cannot resolve the fine detail and may not contain that information at all, so the PN100_EIQ image loses this feature information. To obtain a better image enhancement effect, one can start from the deep convolutional network model or from the number of projections: either construct a more suitable deep convolutional network model so that the lost feature information can be learned from training on a large amount of data, or increase the number of projections during the CT scan so that the low-quality image contains more detail information and the image enhancement task becomes easier.
While the application has been described with respect to what is presently considered to be the most practical and preferred embodiments, it is to be understood that the application is not limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications, equivalents, and alternatives falling within the spirit and scope of the application.

Claims (7)

1. A method for reducing CT scanning time by using a convolutional neural network, characterized in that an image enhancement model is built based on a convolutional neural network, a high/low-quality CT image data set is generated in a real scene, the mapping relation between the high- and low-quality CT images is learned by the deep convolutional neural network, and the trained image enhancement model is then applied to low-quality CT images to enhance their image quality; the method specifically comprises the following steps:
s1, data set preparation: preparing a high-low quality image dataset;
s2, model design: building a convolutional neural network model, wherein the convolutional neural network model is an end-to-end image enhancement model based on a convolutional neural network, and the convolutional neural network model is input into a low-quality CT image and output into an enhanced low-quality CT image;
s3, extracting features: inputting the low-quality image into an image enhancement model, fitting nonlinear characteristics in the image through a convolutional neural network, and extracting high-frequency information representing image details;
s4, model training: determining an optimizer and learning parameters, updating network model parameters by using an Adam algorithm, and improving the capability of the model to learn the mapping relation between the low-projection-number reconstructed CT image and the high-projection-number reconstructed CT image by minimizing a root mean square error loss function;
s5, model verification: according to the representation result of the trained image enhancement model on the verification set, saving the image enhancement model parameter with the highest PSNR value as a final model;
s6, image reconstruction: inputting the reconstructed CT image with low projection times into an image enhancement model which completes verification, and obtaining a corresponding enhancement image;
in step S1, preparing a high-low quality image dataset comprises:
the image data of 1000 projections of the sample is completely acquired by utilizing an image data acquisition process, and then projection image data of different angles are extracted from the 1000 projection image data in equal proportion by utilizing a CT image reconstruction process for image reconstruction of different projection times;
the CT image reconstruction process comprises the following steps:
firstly, fully mixing sediment and water in a reaction kettle;
then, methane gas is filled into the reaction kettle, and the pressure and the temperature of the reaction kettle are controlled, so that the hydrate grows and decomposes in sediment pores; carrying out CT scanning on a sample in the reaction kettle once when the pressure is reduced or increased by a set pressure, wherein the projection times are set to 1000 times, namely, 1000-angle projection data are acquired in one circle of sample rotation in each scanning process;
finally, reconstructing a two-dimensional CT image of the acquired original projection data;
the two-dimensional CT image reconstruction includes:
for X-ray images used for CT reconstruction, the ratio of the photons I_s reaching the detector to the photons I_0 emitted by the X-ray source is calculated and its negative logarithm is taken; the expression is:

p = -ln(I_s / I_0);
all the acquired original projection images are converted into density images, sinograms of each slice layer are then obtained from the density images, and a CT reconstruction toolbox is used to perform two-dimensional CT reconstruction on the sinograms, thereby obtaining all tomographic reconstruction images within the CT imaging range.
2. The method of reducing CT scan time using a convolutional neural network of claim 1, wherein the high-low quality CT image dataset comprises:
high quality CT image: CT images reconstructed by 1000 projections;
low quality CT image: CT images reconstructed with 100 projections.
3. The method of reducing CT scan time using a convolutional neural network of claim 1, wherein the image data acquisition procedure comprises: and after the projection times are set, the projection data acquisition of each angle is carried out by utilizing automatic acquisition software of a CT instrument.
4. The method for reducing CT scan time using a convolutional neural network of claim 1, wherein calculating the ratio of the photons I_s reaching the detector to the photons I_0 of the X-ray source comprises: when projection data are acquired, blank areas are left on both sides of the detector; the photons received in the blank areas do not pass through any object, and the photons I_0 of the X-ray source are obtained by averaging over the blank areas.
5. The method for reducing CT scan time using a convolutional neural network according to claim 1, wherein in step S2, the convolutional neural network model designs a loss function based on a priori knowledge, the loss function being a mean square error loss function expressed as:

L(Θ) = (1/N) Σ_{i=1}^{N} ||F(X_i; Θ) − Y_i||²

wherein L(Θ) represents the loss function, Θ is the model parameter, X is the model input, Y is the model output, F(X; Θ) is the image enhanced by the model, and N is the number of training samples;
the convolutional neural network model is converged by reducing the MSE error between the low quality CT image and the corresponding high quality CT image during training.
6. The method for reducing CT scan time using a convolutional neural network of claim 5, wherein the training process is split into two phases;
a first stage, in which data is propagated from a low level to a high level;
in the second stage, when the result obtained by forward propagation does not accord with the expected result, the error is propagated and trained from the high level to the bottom layer, namely, the backward propagation stage; in the forward propagation stage, the convolutional neural network processes input data layer by layer through convolution and pooling operation, and finally an output result is obtained; in the back propagation phase, the errors are calculated and back-propagated, and the weights and offsets in the network are updated.
7. A system for reducing CT scan time using a convolutional neural network, wherein the method for reducing CT scan time using a convolutional neural network of any one of claims 1-6 is implemented, the system comprising:
a dataset preparation module for preparing a high-low quality image dataset;
the model design module is used for building a convolutional neural network model, wherein the convolutional neural network model is an end-to-end image enhancement model based on a convolutional neural network, is input into a low-quality CT image, and is output into an enhanced low-quality CT image;
the feature extraction module is used for inputting the low-quality image into the image enhancement model, fitting nonlinear features in the image through a convolutional neural network, and extracting high-frequency information representing image details;
the model training module is used for determining an optimizer and learning parameters, updating network model parameters by using an Adam algorithm, and improving the capability of the model to learn the mapping relation between the low-projection-number reconstructed CT image and the high-projection-number reconstructed CT image by minimizing a root mean square error loss function;
the model verification module is used for storing the image enhancement model parameter with the highest PSNR value as a final model according to the performance result of the trained image enhancement model on the verification set;
and the image reconstruction module is used for inputting the reconstructed CT image with low projection times into the image enhancement model which is verified, and obtaining a corresponding enhancement image.
CN202310883773.8A 2023-07-19 2023-07-19 Method and system for reducing CT scanning time by using convolutional neural network Active CN116612206B (en)

Publications (2)

Publication Number Publication Date
CN116612206A CN116612206A (en) 2023-08-18
CN116612206B true CN116612206B (en) 2023-09-29

Family

ID=87680392

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310883773.8A Active CN116612206B (en) 2023-07-19 2023-07-19 Method and system for reducing CT scanning time by using convolutional neural network

Country Status (1)

Country Link
CN (1) CN116612206B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019060843A1 (en) * 2017-09-22 2019-03-28 Nview Medical Inc. Image reconstruction using machine learning regularizers
EP3576049A2 (en) * 2018-05-31 2019-12-04 Canon Medical Systems Corporation An apparatus for denoising an image, a method for denosing an image and a computer readable storage medium
EP3637099A1 (en) * 2018-10-08 2020-04-15 Ecole Polytechnique Federale de Lausanne (EPFL) Image reconstruction method based on a trained non-linear mapping
EP3671646A1 (en) * 2018-12-20 2020-06-24 Canon Medical Systems Corporation X-ray computed tomography (ct) system and method
EP3683771A1 (en) * 2019-01-18 2020-07-22 Canon Medical Systems Corporation Medical processing apparatus
WO2020237873A1 (en) * 2019-05-27 2020-12-03 清华大学 Neural network-based spiral ct image reconstruction method and device, and storage medium
CN112470190A (en) * 2019-09-25 2021-03-09 深透医疗公司 System and method for improving low dose volume contrast enhanced MRI
WO2022041521A1 (en) * 2020-08-31 2022-03-03 浙江大学 Low-dose sinogram denoising and pet image reconstruction method based on teacher-student generators
CN114187181A (en) * 2021-12-17 2022-03-15 福州大学 Double-path lung CT image super-resolution method based on residual information refining
CN114494018A (en) * 2022-02-14 2022-05-13 北京联合大学 Magnetic resonance image super-resolution reconstruction method, system and device
WO2022121160A1 (en) * 2020-12-07 2022-06-16 苏州深透智能科技有限公司 Method for enhancing quality and resolution of ct images based on deep learning
WO2023114265A1 (en) * 2021-12-15 2023-06-22 The Johns Hopkins University Methods and related aspects for mitigating unknown biases in computed tomography data

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180018757A1 (en) * 2016-07-13 2018-01-18 Kenji Suzuki Transforming projection data in tomography by means of machine learning
US11763502B2 (en) * 2018-08-06 2023-09-19 Vanderbilt University Deep-learning-based method for metal reduction in CT images and applications of same
US11039806B2 (en) * 2018-12-20 2021-06-22 Canon Medical Systems Corporation Apparatus and method that uses deep learning to correct computed tomography (CT) with sinogram completion of projection data
US10945695B2 (en) * 2018-12-21 2021-03-16 Canon Medical Systems Corporation Apparatus and method for dual-energy computed tomography (CT) image reconstruction using sparse kVp-switching and deep learning
US20220035961A1 (en) * 2020-08-03 2022-02-03 Ut-Battelle, Llc System and method for artifact reduction of computed tomography reconstruction leveraging artificial intelligence and a priori known model for the object of interest

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019060843A1 (en) * 2017-09-22 2019-03-28 Nview Medical Inc. Image reconstruction using machine learning regularizers
CN111373448A (en) * 2017-09-22 2020-07-03 尼维医疗公司 Image reconstruction using machine learning regularizer
EP3576049A2 (en) * 2018-05-31 2019-12-04 Canon Medical Systems Corporation An apparatus for denoising an image, a method for denosing an image and a computer readable storage medium
EP3637099A1 (en) * 2018-10-08 2020-04-15 Ecole Polytechnique Federale de Lausanne (EPFL) Image reconstruction method based on a trained non-linear mapping
EP3671646A1 (en) * 2018-12-20 2020-06-24 Canon Medical Systems Corporation X-ray computed tomography (ct) system and method
EP3683771A1 (en) * 2019-01-18 2020-07-22 Canon Medical Systems Corporation Medical processing apparatus
WO2020237873A1 (en) * 2019-05-27 2020-12-03 清华大学 Neural network-based spiral ct image reconstruction method and device, and storage medium
CN112470190A (en) * 2019-09-25 2021-03-09 深透医疗公司 System and method for improving low dose volume contrast enhanced MRI
WO2022041521A1 (en) * 2020-08-31 2022-03-03 浙江大学 Low-dose sinogram denoising and pet image reconstruction method based on teacher-student generators
WO2022121160A1 (en) * 2020-12-07 2022-06-16 苏州深透智能科技有限公司 Method for enhancing quality and resolution of ct images based on deep learning
WO2023114265A1 (en) * 2021-12-15 2023-06-22 The Johns Hopkins University Methods and related aspects for mitigating unknown biases in computed tomography data
CN114187181A (en) * 2021-12-17 2022-03-15 Fuzhou University Dual-path lung CT image super-resolution method based on residual information refinement
CN114494018A (en) * 2022-02-14 2022-05-13 Beijing Union University Magnetic resonance image super-resolution reconstruction method, system and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Low-dose Micro-CT imaging method based on progressive network processing; Cai Ning; Wang Shijie; Chen Lujie; Zhang Yikun; Chen Yang; Luo Shouhua; Gu Ning; CT Theory and Applications (04); full text *

Also Published As

Publication number Publication date
CN116612206A (en) 2023-08-18

Similar Documents

Publication Publication Date Title
US9046509B2 (en) Method and system for estimating rock properties from rock samples using digital rock physics imaging
US8170799B2 (en) Method for determining in-situ relationships between physical properties of a porous medium from a sample thereof
Sun et al. Quantifying nano-pore heterogeneity and anisotropy in gas shale by synchrotron radiation nano-CT
Wang et al. Intra‐aggregate pore characteristics: X‐ray computed microtomography analysis
Niu et al. An innovative application of generative adversarial networks for physically accurate rock images with an unprecedented field of view
CN109712077B (en) Depth dictionary learning-based HARDI (hybrid automatic repeat-based) compressed sensing super-resolution reconstruction method
CN103698803A (en) Blowhole structural characterization method and device
CN112381916A (en) Digital rock core three-dimensional structure reconstruction method using two-dimensional slice image
Ebadi et al. Strengthening the digital rock physics, using downsampling for sub-resolved pores in tight sandstones
Krutko et al. A new approach to clastic rocks pore-scale topology reconstruction based on automatic thin-section images and CT scans analysis
Karimpouli et al. Multistep Super Resolution Double-U-net (SRDUN) for enhancing the resolution of Berea sandstone images
CN115146215A (en) Multi-scale splicing method and system for micro-aperture data based on digital core
CN116612206B (en) Method and system for reducing CT scanning time by using convolutional neural network
CN116012545B (en) Multi-scale digital core modeling method, system, storage medium and application
CN112634428A (en) Porous medium three-dimensional image reconstruction method based on bidirectional cycle generation network
CN111179296A (en) Novel method for researching heat conduction characteristic of rock based on digital rock core technology
Soltanmohammadi et al. A comparative analysis of super-resolution techniques for enhancing micro-CT images of carbonate rocks
CN114998137A (en) Ground penetrating radar image clutter suppression method based on generation countermeasure network
CN114705606A (en) Blocking method of key seepage nodes in rock based on networked analysis
CN114332282A (en) Target detection-combined sparse photoacoustic image reconstruction method and system
Wang et al. 3D reconstruction and characterization of reef limestone pores based on optical and acoustic microscopic images
Regaieg et al. Towards Large-Scale DRP Simulations: Generation of Large Super-Resolution images and Extraction of Large Pore Network Models
Zhang et al. A Super-Resolution Reconstruction Method for Shale Based on Generative Adversarial Network
Buono et al. Exploring microstructure and petrophysical properties of microporous volcanic rocks through 3D multiscale and super-resolution imaging
CN117368239B (en) Natural gas hydrate occurrence state dividing method based on CT technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant