CN112085829A - Spiral CT image reconstruction method and equipment based on neural network and storage medium - Google Patents

Spiral CT image reconstruction method and equipment based on neural network and storage medium

Info

Publication number
CN112085829A
CN112085829A (application CN201910448427.0A)
Authority
CN
China
Prior art keywords
image
section
neural network
network
reconstruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910448427.0A
Other languages
Chinese (zh)
Inventor
邢宇翔
张丽
郑奡
高河伟
梁凯超
陈志强
李亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201910448427.0A priority Critical patent/CN112085829A/en
Priority to PCT/CN2019/103038 priority patent/WO2020237873A1/en
Publication of CN112085829A publication Critical patent/CN112085829A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present disclosure provides a spiral CT image reconstruction apparatus and method based on a neural network. The apparatus includes: a memory for storing instructions and three-dimensional projection data, acquired by a helical CT apparatus, of an object under examination, the object under examination being divided in advance into multiple cross sections; and a processor configured to execute the instructions to: perform image reconstruction on each cross section separately, where the reconstruction of each section includes inputting the three-dimensional projection data related to the section to be reconstructed into a trained neural network model to obtain a reconstructed section image; and form a three-dimensional reconstructed image from the reconstructed images of the multiple sections. By combining the advantages of deep neural networks with the particular characteristics of the helical CT imaging problem, the disclosed apparatus can reconstruct the three-dimensional projection data into a three-dimensional image with more information and less noise.

Description

Spiral CT image reconstruction method and equipment based on neural network and storage medium
Technical Field
The present disclosure relates to radiation imaging, and more particularly, to a spiral CT image reconstruction method and apparatus based on a neural network, and a storage medium.
Background
X-ray CT (Computed Tomography) imaging systems are widely used in medical treatment, security inspection, industrial non-destructive testing, and other fields. The ray source and the detector acquire a series of projection data along a certain trajectory, and the three-dimensional spatial distribution of the linear attenuation coefficient of the object at the given ray energy can be recovered by an image reconstruction algorithm. CT image reconstruction, i.e., the recovery of the linear attenuation coefficient distribution from the projection data acquired by the detector, is the core step of CT imaging. Currently, practical applications mainly use analytic reconstruction algorithms such as Filtered Back-Projection (FBP) and the Feldkamp-Davis-Kress (FDK) type, and iterative reconstruction methods such as the Algebraic Reconstruction Technique (ART) and Maximum A Posteriori (MAP) estimation.
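As a hedged illustration of the analytic reconstruction family mentioned above (not part of the original disclosure), the following is a minimal parallel-beam filtered back-projection sketch in NumPy; the function names and the simple Ram-Lak ramp filter are illustrative assumptions, not the patent's method.

```python
import numpy as np

def ramp_filter(n_det):
    # Ram-Lak filter in the Fourier domain: |frequency|
    return np.abs(np.fft.fftfreq(n_det))

def fbp_parallel(sinogram, angles_deg):
    """Minimal parallel-beam filtered back-projection.
    sinogram: (n_angles, n_det) array; returns an (n_det, n_det) image."""
    n_angles, n_det = sinogram.shape
    # 1) filter each projection with the ramp filter
    H = ramp_filter(n_det)
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * H, axis=1))
    # 2) back-project the filtered projections over all view angles
    mid = (n_det - 1) / 2.0
    xs = np.arange(n_det) - mid
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for sino_row, theta in zip(filtered, np.deg2rad(angles_deg)):
        # detector coordinate of each pixel at this view angle
        t = X * np.cos(theta) + Y * np.sin(theta) + mid
        t0 = np.clip(t.astype(int), 0, n_det - 2)
        w = t - t0  # linear interpolation weight between adjacent bins
        recon += (1 - w) * sino_row[t0] + w * sino_row[t0 + 1]
    return recon * np.pi / n_angles
```

A real fan-beam or FDK implementation would additionally weight and rebin the data; this sketch only shows the filter-then-backproject structure shared by that family of algorithms.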
As the problem of radiation dose becomes more prominent, how to obtain images of ordinary or higher quality under low-dose, fast-scan conditions has become a popular area of research. In terms of reconstruction methods, analytic reconstruction is fast, but it is tied to the traditional system architecture and cannot handle problems such as data loss and large noise well. Compared with analytic algorithms, iterative reconstruction algorithms are applicable to a wider range of system architectures and can obtain better reconstruction results under various non-standard scanning trajectories as well as low-dose, large-noise, and missing-projection-data conditions. However, iterative reconstruction usually requires many iterations, so reconstruction takes a long time, and three-dimensional spiral CT, with its larger data scale, is even more difficult to apply in practice. For spiral CT, which is widely used in medicine and industry, increasing the pitch can reduce the scanning time, improve the scanning efficiency, and reduce the radiation dose. However, increasing the pitch means a reduction in the effective data: the image quality obtained by a conventional analytic reconstruction method is poor, while the iterative reconstruction method is time-consuming and difficult to apply in practice.
Deep learning has made great progress in computer vision, natural language processing, and other areas. In particular, the convolutional neural network has become the mainstream network structure for applications such as image classification and detection, owing to advantages including the simplicity of the network structure, the effectiveness of feature extraction, and the compression of the parameter space. However, there has been no research on spiral CT image reconstruction using neural networks.
Disclosure of Invention
According to embodiments of the present disclosure, a spiral CT image reconstruction method and apparatus, and a storage medium, are provided.
According to an aspect of the present disclosure, there is provided a spiral CT image reconstruction apparatus based on a neural network, including:
a memory for storing instructions and three-dimensional projection data, acquired by a helical CT apparatus, of an object under examination, the object under examination being divided in advance into multiple cross sections;
a processor configured to execute the instructions to:
perform image reconstruction on each section separately, where the reconstruction of each section includes: inputting the three-dimensional projection data related to the section to be reconstructed into the trained neural network model to obtain a reconstructed section image;
and form a three-dimensional reconstructed image from the reconstructed images of the multiple sections.
According to another aspect of the present disclosure, there is provided a helical CT image reconstruction method, including:
dividing the object to be inspected in advance into multiple cross sections;
performing image reconstruction on each section separately, where the reconstruction of each section includes: inputting the three-dimensional projection data related to the section to be reconstructed into the trained neural network model to obtain a reconstructed section image;
and forming a three-dimensional reconstructed image from the reconstructed images of the multiple sections.
According to yet another aspect of the present disclosure, there is provided a method for training a neural network, the neural network comprising:
a projection domain sub-network for processing the input spiral CT three-dimensional projection data related to the section to be reconstructed to obtain two-dimensional projection data;
a domain conversion sub-network for performing analytic reconstruction on the two-dimensional projection data to obtain an image of the section to be reconstructed;
an image domain sub-network for processing the image-domain section image to obtain an accurate reconstructed image of the section to be reconstructed;
wherein the method comprises the following steps:
and adjusting parameters in the neural network by using a consistency cost function of a data model based on the input three-dimensional projection data, the image truth value and the plane reconstruction image with the set section.
According to yet another aspect of the present disclosure, a computer-readable storage medium is provided, in which computer instructions are stored, which when executed by a processor, implement the helical CT image reconstruction method as described above.
According to the neural-network-based spiral CT image reconstruction apparatus of the present disclosure, the three-dimensional projection data can be reconstructed into a more accurate three-dimensional image by combining the advantages of deep networks with the particular characteristics of the spiral CT imaging problem;
with a purpose-built neural network model architecture, the method trains the network on a combination of simulated and real data, so that all system information and setup information of the imaged object can be covered reliably, effectively, and comprehensively, the object image is reconstructed accurately, and both the noise caused by low dose and the artifacts caused by data loss are suppressed;
although training the disclosed neural network model requires a large amount of data and computation, the actual reconstruction process requires no iteration, so the computation required for reconstruction is far less than that of an iterative reconstruction algorithm and reconstruction is correspondingly much faster.
Drawings
For a better understanding of the embodiments of the present disclosure, reference will be made to the following detailed description of the embodiments in accordance with the accompanying drawings:
FIG. 1 shows a schematic structural diagram of a helical CT system according to an embodiment of the present disclosure;
FIG. 2A is a schematic diagram of the trajectory of the detector in the helical CT system shown in FIG. 1, along which the detector moves helically with respect to the object under examination; FIG. 2B is a schematic diagram of the three-dimensional projection data corresponding to the signals detected by the detector in a helical CT system.
FIG. 3 is a schematic diagram of a control and data processing apparatus in the spiral CT system shown in FIG. 1;
FIG. 4 is a schematic diagram illustrating a neural network-based spiral CT image reconstruction apparatus according to an embodiment of the present disclosure;
FIG. 5 illustrates a schematic structural diagram of a neural network, according to an embodiment of the present disclosure;
FIG. 6 is a diagram of a visualization network structure of a neural network of an embodiment of the present disclosure;
FIG. 7 illustrates an exemplary network architecture of a projected domain subnetwork;
FIG. 8 is a schematic flow chart illustrating a helical CT image reconstruction method according to an embodiment of the present disclosure.
Detailed Description
Specific embodiments of the present disclosure will be described in detail below, with the understanding that the embodiments described herein are illustrative only and are not intended to be limiting of the embodiments of the present disclosure. In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. However, it will be apparent to one of ordinary skill in the art that: these specific details need not be employed to practice embodiments of the present disclosure. In other instances, well-known structures, materials, or methods have not been described in detail in order to avoid obscuring embodiments of the present disclosure.
Throughout the specification, reference to "one embodiment," "an embodiment," "one example," or "an example" means: a particular feature, structure, or characteristic described in connection with the embodiment or example is included in at least one embodiment of the disclosure. Thus, the appearances of the phrases "in one embodiment," "in an embodiment," "one example" or "an example" in various places throughout this specification are not necessarily all referring to the same embodiment or example. Furthermore, the particular features, structures, or characteristics may be combined in any suitable combination and/or sub-combination in one or more embodiments or examples. Further, as used herein, the term "and/or" will be understood by those of ordinary skill in the art to include any and all combinations of one or more of the associated listed items.
For spiral CT, which is widely used in medicine and industry, increasing the pitch can reduce the scanning time, improve the scanning efficiency, and reduce the radiation dose. However, increasing the pitch means a reduction in the effective data: the image quality obtained by a conventional analytic reconstruction method is poor, while the iterative reconstruction method is time-consuming and difficult to apply in practice.
The present disclosure proposes a reconstruction method based on a convolutional neural network. Aimed at helical CT equipment under large-pitch scanning, it deeply mines the data information and, in combination with the physical laws of the helical CT system, designs a dedicated network architecture and training method, so that a higher-quality image can be reconstructed in a shorter time.
The embodiments of the present disclosure provide a spiral CT image reconstruction method and apparatus based on a neural network, and a storage medium, in which three-dimensional projection data of an object under examination from a helical CT apparatus are processed using a neural network to obtain the distribution of linear attenuation coefficients of the object under examination. The neural network may include a projection domain sub-network, a domain conversion sub-network, and an image domain sub-network. The projection domain sub-network processes the input three-dimensional projection data to obtain two-dimensional projection data. The domain conversion sub-network performs analytic reconstruction on the two-dimensional projection data to obtain an image-domain image of the set section. The image domain sub-network takes this section image as input; through a convolutional neural network comprising multiple layers, it gathers the features of the data in the image domain, further extracts and couples the image features, and obtains an accurate reconstructed image of the set section. With the scheme of the embodiments of the present disclosure, the three-dimensional projection data of the examined object from the spiral CT equipment can be reconstructed into a higher-quality result.
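The cascade just described (projection domain sub-network, then domain conversion, then image domain sub-network) can be sketched as a composition of callables. Everything below is a hypothetical placeholder for illustration: the function names, the row-axis mean standing in for the trained projection-domain network, the pseudo-inverse standing in for the analytic conversion operator, and the identity standing in for the image-domain CNN.

```python
import numpy as np

def projection_domain_net(P_rearranged):
    # placeholder: collapse the detector-row axis R' of the rearranged
    # helical data (C x A' x R') into 2-D fan-beam-like data (C x A')
    return P_rearranged.mean(axis=2)

def domain_conversion(p_2d, H_pinv):
    # fixed (non-trainable) analytic step: apply a reconstruction operator,
    # here a pseudo-inverse of the fan-beam system matrix H
    return H_pinv @ p_2d.ravel()

def image_domain_net(mu_coarse):
    # placeholder post-processing: identity, standing in for a CNN
    return mu_coarse

def reconstruct_slice(P_rearranged, H_pinv):
    # the three cascaded stages of the pipeline
    p = projection_domain_net(P_rearranged)
    mu = domain_conversion(p, H_pinv)
    return image_domain_net(mu)
```

The design point the sketch preserves is that the middle stage is a fixed linear operator determined by the scanning geometry, while the first and last stages are learned.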
Fig. 1 shows a schematic structural diagram of a helical CT system according to an embodiment of the present disclosure. As shown in FIG. 1, the helical CT system according to the present embodiment includes an X-ray source 20, a mechanical motion device 30, and a detector and data acquisition system 10, and performs a helical CT scan of an object 60 under examination.
The X-ray source 20 may be, for example, an X-ray machine, and an appropriate focal spot size of the X-ray machine may be selected according to the imaging resolution. In other embodiments, instead of an X-ray machine, a linear accelerator or the like may be used to generate the X-ray beam.
The mechanical motion device includes a stage and a gantry 30. The stage is movable along the axis of the cross section (perpendicular to the plane of the paper), and the gantry 30 is rotatable, simultaneously rotating the detector and the X-ray source 20 mounted on it. In this embodiment, the detector and the X-ray source rotate synchronously while the stage translates, so that the detector makes a helical motion relative to the object under examination.
The detector and data acquisition system 10 includes an X-ray detector, data acquisition circuitry, and the like. The X-ray detector may be a solid detector, a gas detector, or another type of detector; embodiments of the present disclosure are not limited in this respect. The data acquisition circuitry includes a readout circuit, an acquisition trigger circuit, a data transmission circuit, and the like; the detector usually acquires analog signals, which are converted into digital signals by the data acquisition circuitry. In one example, the detector may comprise a single row or multiple rows of detector elements, and different row spacings may be provided for multiple rows.
The control and data processing device 60 includes, for example, a neural-network-based spiral CT image reconstruction device installed with a control program, and is responsible for controlling the operation of the spiral CT system, including mechanical rotation, electrical control, safety interlock control, and the like, for training the neural network (i.e., the machine learning process), and for reconstructing CT images from the projection data using the trained neural network.
Fig. 2A is a schematic diagram of the trajectory along which the detector moves helically relative to the object under examination in the helical CT system shown in FIG. 1. As shown in FIG. 2A, the stage can translate back and forth (i.e., in the direction perpendicular to the paper plane in FIG. 1), moving the object under examination in the process; meanwhile, the detector moves circularly around the central axis of the stage, so that, relative to the set cross section corresponding to the image μ to be reconstructed, the detector makes a helical motion around that section.
Fig. 3 shows a schematic structural diagram of the control and data processing device 60 shown in fig. 1. As shown in FIG. 3, data acquired by the detector and data acquisition system 10 is stored in the storage device 310 via the interface unit 370 and the bus 380. A Read Only Memory (ROM)320 stores configuration information of the computer data processor and programs. Random Access Memory (RAM)330 is used to temporarily store various data during operation of processor 350. In addition, the storage device 310 also stores therein computer programs for performing data processing, such as a program for training a neural network and a program for reconstructing a CT image, and the like. The internal bus 380 connects the above-described storage device 310, read only memory 320, random access memory 330, input device 340, processor 350, display device 360, and interface unit 370. The spiral CT image reconstruction device based on the neural network in the embodiment of the present disclosure and the control and data processing apparatus 60 share the storage device 310, the internal bus 380, the Read Only Memory (ROM)320, the display device 360, the processor 350, and the like, and are used for reconstructing the spiral CT image.
After the user inputs an operation command through an input device 340 such as a keyboard or mouse, the instruction codes of the computer program direct the processor 350 to execute the algorithm for training the neural network and/or the algorithm for reconstructing the CT image. After the reconstruction result is obtained, it is displayed on a display device 360 such as an LCD display, or output directly in hard-copy form, such as by printing.
According to the embodiment of the present disclosure, the system described above performs a helical CT scan of the object under examination to obtain raw attenuation signals. The attenuation signal data are three-dimensional, denoted P, a matrix of size C × R × A, where C denotes the number of detector columns (the column direction indicated in FIGS. 2A and 2B), R denotes the number of detector rows (the row direction indicated in FIGS. 2A and 2B, corresponding to the multiple detector rows), and A denotes the number of projection angles acquired by the detector (the dimension indicated in FIG. 2B); that is, the helical CT projection data are organized in matrix form. The raw attenuation signals are preprocessed into three-dimensional projection data (see FIG. 2B); for example, the helical CT system may apply a negative logarithm transform to obtain the projection data. Then, the processor 350 in the control device executes a reconstruction program, which processes the projection data with the trained neural network to obtain two-dimensional projection data of the set cross section, performs analytic reconstruction on the two-dimensional projection data to obtain an image-domain image of the set cross section, and may then further process that image to obtain a planar reconstructed image of the set cross section. For example, the images may be processed using a trained convolutional neural network (e.g., a U-net type neural network) to obtain feature maps at different scales, and the feature maps at different scales may be combined to obtain the result.
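The negative-logarithm preprocessing mentioned above can be sketched as follows. The function name, the open-beam intensity parameter `I0`, and the clipping threshold are illustrative assumptions; the patent does not specify this implementation.

```python
import numpy as np

def preprocess_to_projection(raw_intensity, I0):
    """Beer-Lambert preprocessing: convert measured detector intensities
    into line-integral projection data via a negative log transform.
    raw_intensity: C x R x A array of intensities; I0: open-beam intensity."""
    # clip the ratio to avoid log(0) on fully attenuated or dead pixels
    ratio = np.clip(raw_intensity / I0, 1e-9, None)
    return -np.log(ratio)
```

Applied to the C × R × A raw data, this yields projection data P of the same shape, ready for the rearrangement and network stages.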
In a particular example, the convolutional neural network may include convolutional layers, pooling layers, and fully connected layers. The convolutional layers extract characteristic representations of the input data set, and each convolutional layer carries a nonlinear activation function. The pooling layers refine the feature representation; typical operations include average pooling and maximum pooling. One or more fully connected layers perform higher-order nonlinear synthesis of the signal, and the fully connected layers also carry nonlinear activation functions. Common nonlinear activation functions include Sigmoid, Tanh, and ReLU.
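For reference, the three activation functions named above are standard and can be written directly in NumPy (a minimal sketch, independent of the patent's network):

```python
import numpy as np

def sigmoid(x):
    # squashes inputs to (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # squashes inputs to (-1, 1)
    return np.tanh(x)

def relu(x):
    # zeroes out negative inputs, passes positive ones through
    return np.maximum(0.0, x)
```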
During a helical CT scan, increasing the pitch reduces the scanning time, improves the scanning efficiency, and reduces the radiation dose, but correspondingly reduces the effective data. The helical CT projection data may therefore optionally be interpolated in the detector row direction to fill in the missing data, using methods including but not limited to linear interpolation and cubic spline interpolation.
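One possible way to fill in missing detector-row data by linear interpolation, as suggested above, is sketched below; the helper name and the per-column use of `np.interp` are assumptions for illustration, not the patent's procedure.

```python
import numpy as np

def interpolate_missing_rows(projection, valid_rows, all_rows):
    """Linear interpolation of missing detector-row data.
    projection: (len(valid_rows), n_cols) measured rows at positions valid_rows;
    returns (len(all_rows), n_cols) with missing rows filled column by column."""
    out = np.empty((len(all_rows), projection.shape[1]))
    for c in range(projection.shape[1]):
        out[:, c] = np.interp(all_rows, valid_rows, projection[:, c])
    return out
```

Cubic spline interpolation would follow the same shape, with the per-column interpolant swapped out.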
With further reference to FIG. 1, X-rays from the X-ray source 20 at different positions are transmitted through the object under examination 60, received by the detector, converted into electrical signals and then into digital signals representing attenuation values, which are preprocessed into projection data for reconstruction by the computer.
Fig. 4 shows a schematic diagram of the principle of a neural-network-based spiral CT image reconstruction device according to an embodiment of the present disclosure. As shown in FIG. 4, in this device a reconstructed image of a set cross section of the object under examination is obtained by inputting the three-dimensional projection data into a trained neural network model. The parameters in the neural network model are optimized during training: the network learns from the data of the training set, with the parameters trained and optimized on simulated and/or real data, and the optimized parameters are then generalized on a portion of real data, where generalization includes refining and fine-tuning the parameters.
Fig. 5 shows a schematic structural diagram of a neural network according to an embodiment of the present disclosure. As shown in FIG. 5, the neural network of the embodiment of the present disclosure may include three cascaded sub-networks, each an independent neural network: a projection domain sub-network, a domain conversion sub-network, and an image domain sub-network. FIG. 6 is a visualized network structure diagram of the neural network of an embodiment of the present disclosure, from which the data types before and after processing by each sub-network can be seen intuitively. Hereinafter, the three sub-networks are described in detail with reference to FIGS. 5 and 6.
The projection domain sub-network takes as input the three-dimensional projection data received by the detector in the spiral CT system. As the first part of the neural network structure, it converts the three-dimensional helical projection into a two-dimensional planar projection: the network takes the helical projection data related to a certain section of the object to be reconstructed (the set section, i.e., the section of the image to be reconstructed) as input. In one example, the projection domain sub-network may comprise multiple convolutional layers, and after the helical projection data passes through the convolutional neural network, equivalent two-dimensional fan-beam (or parallel-beam) projection data of the object are output. The aim of this part of the network is to extract features of the original helical CT projection data through a convolutional neural network so as to estimate mutually independent fan-beam (or parallel-beam) projections for each section. This reduces the highly complex helical CT projection problem to a two-dimensional in-plane projection problem, which not only eliminates the influence of the cone-angle effect but also simplifies the subsequent reconstruction; the resources and computation required for two-dimensional reconstruction are much smaller than for helical CT reconstruction.
Fig. 7 illustrates an exemplary network architecture of the projection domain sub-network. As shown in FIG. 7, for the set cross-sectional image to be reconstructed, the corresponding projection data are selected from the projection data P and rearranged; the rearranged data, denoted P', serve as the input of the projection domain sub-network. The specific operation is as follows: taking the axial coordinate of the reconstruction section as the center, cover the data of the helical scanning angles 180 degrees before and after it, find the detector row numbers corresponding to the reconstruction section at each scanning angle, and rearrange the data into a matrix of size C × A' × R', where A' denotes the number of helical projection angles chosen (i.e., 360 degrees in total) and R' denotes the maximum number of corresponding detector rows over all angles.
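The rearrangement step described above can be sketched roughly as follows; the `rows_per_angle` mapping (which detector rows correspond to the section at each angle) is assumed to be given by the scan geometry, and zero-padding the rows up to R' is an illustrative choice, not specified in the patent.

```python
import numpy as np

def rearrange_for_slice(P, center_angle, n_half, rows_per_angle, R_max):
    """Gather, for one reconstruction slice, the helical data covering the
    half-scan angles before and after `center_angle`, padding each angle's
    detector rows up to R_max; yields a C x A' x R' block with A' = 2*n_half.
    rows_per_angle[a] lists the detector rows relevant at angle index a."""
    C, R, A = P.shape
    angles = [(center_angle + d) % A for d in range(-n_half, n_half)]
    out = np.zeros((C, len(angles), R_max))
    for j, a in enumerate(angles):
        rows = rows_per_angle[a]
        out[:, j, :len(rows)] = P[:, rows, a]  # zero-pad beyond available rows
    return out
```

The resulting C × A' × R' block is exactly the shape of the input P' fed to the projection domain sub-network.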
In addition, the projection data of the linear attenuation coefficient distribution of the set cross section (the plane to be reconstructed) under fan-beam projection conditions are denoted as p, a matrix of size C × A'. Before training the network, the cross-sectional image corresponding to the input helical projection data can be reconstructed by an analytic reconstruction method, including but not limited to PI-original and the like; this preliminary image is denoted as μ̃. Denoting the system matrix of the fan-beam scan by H, the product Hμ̃ serves as the projection domain sub-network residual. As shown in FIG. 7, a network including but not limited to a U-net type neural network structure serves as the projection domain sub-network. This part of the network takes the rearranged projection data P' as input, and its function is to estimate the fan-beam projection p of the linear attenuation coefficient μ in the two-dimensional cross section. It consists of a number of convolutional layers, configured with 2-dimensional convolution kernels at K scales. A 2-dimensional convolution kernel has two dimensions: the first is defined as the detector direction and the second as the scanning angle direction. The kernel lengths in the two dimensions need not be the same; for example, kernels of size 3 × 1, 3 × 5 or 7 × 3 may be used, and multiple convolution kernels may be set at each scale. All convolution kernels are network parameters to be determined. In the pooling part of the network, pooling between convolutional layers reduces the image scale layer by layer; in the upsampling part, upsampling between convolutional layers restores the image scale layer by layer. To retain more image detail information, the equal-scale outputs before the pooling part and after the upsampling part are stitched together along the third dimension (see FIG. 7 for details). Let Φ_P-net(P') denote the operator corresponding to the projection domain sub-network; the output of the last convolutional layer then uses the residual method:

p̂ = Φ_P-net(P') + Hμ̃

If the projection domain sub-network is trained separately, the cost function can be set to an l-norm, taking l = 2 as an example:

L_P = Σ_k ‖ p̂^(k) − p_label^(k) ‖₂²

where k is the training sample index number and p_label^(k) is the projection label. Given that no such label is available in practical applications, the reconstruction result of complete data at a small pitch, or the result of an advanced iterative reconstruction method in the field, can be used as the projection label for the fan-beam projection.
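The residual combination and the l = 2 cost function for the projection domain sub-network, as described above, can be sketched numerically as follows; the shapes and function names are illustrative assumptions.

```python
import numpy as np

def projection_net_output(phi_out, H, mu_tilde):
    """Residual method: network output plus the analytic estimate H @ mu_tilde,
    reshaped to the fan-beam projection layout (C x A')."""
    return phi_out + (H @ mu_tilde).reshape(phi_out.shape)

def l2_cost(p_hat_batch, p_label_batch):
    """Sum over training samples k of ||p_hat^(k) - p_label^(k)||_2^2."""
    return sum(np.sum((ph - pl) ** 2)
               for ph, pl in zip(p_hat_batch, p_label_batch))
```

In a trained setting `phi_out` would be the convolutional sub-network's output Φ_P-net(P'); here it is just an array of matching shape.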
Although FIG. 7 illustrates the projection domain sub-network with a specific U-type network structure as an example, those skilled in the art will appreciate that the technical solution of the present disclosure can be implemented with networks of other structures. In addition, other networks, such as an Auto-Encoder or a fully convolutional neural network, may also be used for such a sub-network, and the technical solution of the present disclosure can likewise be implemented with them.
The domain conversion sub-network takes as input the two-dimensional projection data output by the projection domain sub-network, and analytically reconstructs it to obtain the set cross-section image in the image domain. This sub-network is the second part of the neural network structure and performs the domain conversion from the projection domain to the image domain, i.e., the operation from two-dimensional fan-beam (or parallel-beam) projection data to a cross-sectional image. The weight coefficients between network nodes (neurons) in this sub-network can be determined by the scanning geometry of the two-dimensional fan-beam (or parallel-beam) CT scan. The input of this part is the fan-beam (or parallel-beam) projection data output by the first part, and the output is a preliminary CT reconstructed image (i.e., the image-domain set cross-section image). Since the sub-network of the first part has already reduced the reconstruction problem to two dimensions, this domain conversion can be completed directly by a matrix operator for two-dimensional analytic reconstruction; the operators of this part can also be implemented via a fully connected network, trained with simulated or actual projection data and reconstructed images. The output of this part can be used as the final output result, or can be output after further processing by the image domain sub-network.
In an exemplary embodiment, the domain conversion sub-network obtains the image-domain output by performing the inverse computation from the projection domain to the image domain on the projection data p. The projection matrix can be computed with Siddon's method or other methods known in the field, and the connection weights of the connected layer are set according to the corresponding elements of the analytic reconstruction system matrix. Taking FBP fan-beam analytic reconstruction as an example,
$\hat{\mu} = H_{\mathrm{BP}} \, F \, W \, p$

where $W$ performs the weighting of the projection-domain data, $F$ corresponds to the ramp-filter convolution operation, and $H_{\mathrm{BP}}$ completes the weighted back-projection.
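The weight–filter–backproject pipeline just described can be sketched numerically. This is a simplified parallel-beam illustration (so the fan-beam weighting W reduces to the identity), not the patent's implementation; the phantom is a single point at the isocentre, whose projections are a constant spike at the central detector.

```python
import numpy as np

def ramp_filter(sinogram):
    # F: ramp-filter each projection (each row) in the Fourier domain.
    n_det = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_det))
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

def backproject(filtered, angles):
    # H_BP: smear each filtered projection back across the image grid,
    # with linear interpolation along the detector coordinate.
    n_det = filtered.shape[1]
    recon = np.zeros((n_det, n_det))
    c = (n_det - 1) / 2.0
    yy, xx = np.mgrid[0:n_det, 0:n_det]
    xx, yy = xx - c, yy - c
    for row, th in zip(filtered, angles):
        t = xx * np.cos(th) + yy * np.sin(th) + c   # detector coord per pixel
        t0 = np.clip(np.floor(t).astype(int), 0, n_det - 2)
        w = np.clip(t - t0, 0.0, 1.0)
        recon += (1 - w) * row[t0] + w * row[t0 + 1]
    return recon * np.pi / len(angles)

n_det, n_ang = 33, 64
angles = np.linspace(0.0, np.pi, n_ang, endpoint=False)
sino = np.zeros((n_ang, n_det))
sino[:, n_det // 2] = 1.0            # projections of a point at the isocentre
recon = backproject(ramp_filter(sino), angles)
```

Because every step here is a linear operation on p, the whole chain can equally be expressed as one fixed matrix operator, which is why the domain conversion layer's weights can be precomputed from the scan geometry rather than learned.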
The image domain sub-network takes the image-domain set cross-section image as input and, after further extraction and fusion of image features, produces the planar reconstructed image of the set cross-section. As the third part of the network, it takes the image-domain set cross-section image output by the domain conversion sub-network as input; a convolutional neural network comprising a plurality of layers extracts the features of the data in the image domain and, with the target image as the learning target, further extracts and mutually couples these features, so as to optimize image quality in the image domain. The output of this part is the final output result of the whole network.
In an exemplary embodiment, the image domain sub-network employs a U-net type neural network structure similar to the first part of the network, takes the preliminary reconstruction $\tilde{\mu}$ as input, and serves to optimize the image in the image domain. As in the first part, pooling between the convolutional layers in the first half reduces the image scale layer by layer, and upsampling between the convolutional layers in the second half restores it. This part of the network can adopt a residual training mode, i.e., the output of the last convolutional layer is added to the input image:

$\hat{\mu} = \tilde{\mu} + \Phi_{I\text{-}net}(\tilde{\mu})$

which equals the estimate $\hat{\mu}$ of the two-dimensional reconstructed image. As in the projection domain sub-network, the image domain sub-network may use, but is not limited to, 3 × 3 convolution kernels, with pooling and upsampling both of size 2 × 2, and ReLU as the activation function.
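The 2 × 2 pooling and upsampling steps that shrink and then restore the image scale can be sketched as follows; this is a generic illustration (mean pooling and nearest-neighbour upsampling), not the specific layers of the patent's network.

```python
import numpy as np

def pool2x2(img):
    # 2x2 mean pooling: halves each spatial dimension (dims must be even).
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2x2(img):
    # 2x2 nearest-neighbour upsampling: restores the spatial scale.
    return img.repeat(2, axis=0).repeat(2, axis=1)

x = np.arange(16.0).reshape(4, 4)
p = pool2x2(x)          # scale reduced: (4, 4) -> (2, 2)
y = upsample2x2(p)      # scale restored: (2, 2) -> (4, 4)
```

In the U-net, skip connections concatenate the pre-pooling feature maps with the upsampled ones of equal scale, which is what preserves the fine detail that pooling alone would discard.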
In the embodiment of the present disclosure, a cost function may be adopted as the objective function to be optimized. The cost function of the overall network may use, but is not limited to, the $\ell$-norm, RRMSE, SSIM, and other measures commonly used in the field, or a combination of multiple cost functions:
$C_{\ell} = \sum_{i=1}^{n} \left\| f_i - f_i^* \right\|_{\ell}$

$\mathrm{RRMSE}(f_i, f_i^*) = \dfrac{\left\| f_i - f_i^* \right\|_2}{\left\| f_i^* \right\|_2}$

$\mathrm{SSIM}(f_i, f_i^*) = \dfrac{\left(2 \bar{f}_i \bar{f}_i^* + c_1\right)\left(2 \sigma_{f_i f_i^*} + c_2\right)}{\left(\bar{f}_i^{\,2} + (\bar{f}_i^*)^2 + c_1\right)\left(\sigma_{f_i}^2 + \sigma_{f_i^*}^2 + c_2\right)}$

where $f = \{f_1, f_2, \ldots, f_n\}$ is the set of output images and $f^* = \{f_1^*, f_2^*, \ldots, f_n^*\}$ the set of target images; $f_i$ and $f_i^*$ are the $i$-th output image and the $i$-th target image, $\bar{f}_i$ and $\bar{f}_i^*$ their mean values, $\sigma_{f_i}^2$ and $\sigma_{f_i^*}^2$ their variances, $\sigma_{f_i f_i^*}$ their covariance, and $c_1$, $c_2$ are constants.
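The RRMSE and SSIM measures defined from these means, variances, and covariance can be sketched directly. This is a minimal single-window (global) SSIM for illustration; practical SSIM implementations usually compute the statistics over local sliding windows, and the constants c1, c2 here are example values, not ones specified by the patent.

```python
import numpy as np

def rrmse(f, f_star):
    # Relative root-mean-square error between output and target image.
    return np.sqrt(np.mean((f - f_star) ** 2)) / np.sqrt(np.mean(f_star ** 2))

def ssim_global(f, f_star, c1=1e-4, c2=9e-4):
    # Global (single-window) SSIM built from means, variances, covariance.
    mu_f, mu_t = f.mean(), f_star.mean()
    var_f, var_t = f.var(), f_star.var()
    cov = ((f - mu_f) * (f_star - mu_t)).mean()
    return ((2 * mu_f * mu_t + c1) * (2 * cov + c2)) / (
        (mu_f ** 2 + mu_t ** 2 + c1) * (var_f + var_t + c2))

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
r = rrmse(img, img)          # identical images -> 0.0
s = ssim_global(img, img)    # identical images -> 1.0
```

Combining an $\ell$-norm term (pixel fidelity) with an SSIM term (structural fidelity) is a common way to balance noise suppression against preservation of anatomical structure.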
In the embodiment of the disclosure, the neural network parameters may be trained; the training data include simulation data and actual data. For simulation data, a basic mathematical model of the scanned object is established, helical projection data are generated according to a model of the actual system, preprocessed, and used as the network input, while the ground-truth image of the scanned object serves as the label for training the network parameters. For example, the simulation data may be lung simulation data comprising 30 cases of 100 slices each, 3000 samples in total, with data augmentation; augmentation means include, but are not limited to, rotation and flipping. For actual data, an object can be scanned on an actual system to obtain helical projection data, which are preprocessed and input into the network to obtain a preliminary reconstruction. Targeted image processing is then applied to the preliminary reconstruction results, for example local smoothing over regions known to be smooth, to obtain a label image; the label image can also be reconstructed by an advanced iterative reconstruction method in the field. The network is then further trained with the label images to fine-tune the network parameters. In some embodiments, the sub-network that converts the helical projections into two-dimensional plane projections can be trained first, followed by training of the whole network; alternatively, the whole network can be trained directly. The projection domain sub-network and the image domain sub-network can each be trained separately, and the parameters of the domain conversion sub-network can either be computed in advance without further training or be trained as well.
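The rotation-and-flip augmentation mentioned for the 3000 lung-simulation samples can be sketched as follows; the 8-fold expansion factor here (4 right-angle rotations × an optional horizontal flip, i.e., the dihedral symmetries of a square) is one common choice, not a figure stated in the patent.

```python
import numpy as np

def augment(slices):
    # Expand a set of 2-D training slices with rotations and flips.
    out = []
    for s in slices:
        for k in range(4):                  # 0/90/180/270-degree rotations
            r = np.rot90(s, k)
            out.append(r)
            out.append(np.flip(r, axis=1))  # plus a horizontal flip of each
    return out

slices = [np.random.rand(16, 16) for _ in range(10)]
aug = augment(slices)                        # 10 slices -> 80 samples
```

Such label-preserving transforms are cheap to apply and particularly useful here, since CT cross-sections have no preferred in-plane orientation.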
If the projection domain sub-network is trained separately first, the cost function is

$C_P = \sum_k \left\| \hat{P}^{(k)} - P^{*(k)} \right\|_2^2$

where $k$ is the training sample index and $P^{*(k)}$ is the projection label. Since such labels cannot be obtained in practical applications, the fan-beam projection of the reconstruction result from complete data at a small pitch, or of the result of an advanced iterative reconstruction method in the field, is used as the projection label.
If the image domain sub-network is trained separately, the cost function is defined as a 2-norm:

$C_I = \sum_k \left\| \hat{\mu}^{(k)} - \mu^{*(k)} \right\|_2^2$

where $k$ is the training sample index and $\mu^{*(k)}$ is the image label. Since labels cannot be obtained in practical applications, the reconstruction result from complete data at a small pitch, or from an advanced iterative reconstruction method in the field, is used as the label; if high-quality images can be obtained in other ways, other labels may be used.
According to one embodiment of the present disclosure, a direct training approach may be employed. In direct training, the convolution kernel weights of the projection domain sub-network and the image domain sub-network are initialized randomly and trained on an actually acquired data set; after training, another set of actually acquired data serves as the test set to verify the network training effect.
For an actual CT scanning process, the acquired data are input into the trained network (whose parameters have by then been determined by machine learning) to obtain the reconstructed image.
Fig. 8 is a schematic flow chart illustrating a helical CT image reconstruction method according to an embodiment of the present disclosure. As shown in fig. 8, in step S10, three-dimensional projection data are input; in step S20, the three-dimensional projection data are input into a trained neural network model to obtain a planar reconstructed image of a set cross-section of the object under examination.
A neural network according to an embodiment of the present disclosure may include a projection domain sub-network, a domain conversion sub-network, and an image domain sub-network. The projection domain sub-network processes the input three-dimensional projection data to obtain two-dimensional projection data. The domain conversion sub-network analytically reconstructs the two-dimensional projection data to obtain the image-domain set cross-section image. The image-domain cross-section image is then input into the image domain sub-network, where a convolutional neural network comprising a plurality of layers extracts the features of the data in the image domain and further couples them to obtain the planar reconstructed image of the set cross-section. With the scheme of the embodiments of the disclosure, higher-quality results can be reconstructed from the three-dimensional projection data of an object examined by helical CT equipment.
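The three-stage structure just summarized is a composition of operators, one per domain. The sketch below shows only this composition; each stage is a trivially simple hypothetical stand-in (an averaging, a tiling, an identity residual), not the patent's actual sub-networks.

```python
import numpy as np

def projection_domain_net(p3d):
    # Stand-in for the projection domain sub-network:
    # helical 3-D projections -> 2-D sinogram (here: mean over the z axis).
    return p3d.mean(axis=0)

def domain_conversion(sino):
    # Stand-in for the fixed analytic-reconstruction operator:
    # sinogram -> preliminary cross-section image (here: a tiling stub).
    n = sino.shape[1]
    return np.tile(sino.mean(axis=0), (n, 1))

def image_domain_net(img):
    # Stand-in for the image domain sub-network: identity residual.
    return img + 0.0 * img

def reconstruct(p3d):
    # The full pipeline is simply the composition of the three stages.
    return image_domain_net(domain_conversion(projection_domain_net(p3d)))

p3d = np.ones((5, 60, 32))   # (z slices, view angles, detector elements)
img = reconstruct(p3d)       # -> a 32 x 32 cross-section image
```

Keeping the stages as separate operators mirrors the training options described above: the two learned stages can be trained separately or end-to-end, while the middle operator can stay fixed.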
Machine learning according to embodiments of the present disclosure may include: training and optimizing the parameters in the neural network model with simulation data and/or actual data; and generalizing the optimized parameters with a portion of actual data, where the generalization includes fine-tuning the parameters.
The method of the present disclosure can be flexibly adapted to different CT scanning modes and system architectures, and can be used in the fields of medical diagnosis, industrial nondestructive testing, and security inspection.
The foregoing detailed description has set forth numerous embodiments of the method and apparatus for training a neural network using schematics, flowcharts, and/or examples. Where such diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of structures, hardware, software, firmware, or virtually any combination thereof. In one embodiment, portions of the subject matter described in embodiments of the present disclosure may be implemented by Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), Digital Signal Processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one of skill in the art in light of this disclosure. Moreover, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal bearing media used to actually carry out the distribution. 
Examples of signal bearing media include, but are not limited to: recordable type media such as floppy disks, hard disk drives, Compact Disks (CDs), Digital Versatile Disks (DVDs), digital tapes, computer memories, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
While embodiments of the present disclosure have been described with reference to several exemplary embodiments, it is understood that the terminology used is intended to be in the nature of words of description and illustration, rather than of limitation. As the disclosed embodiments may be embodied in several forms without departing from the spirit or essential characteristics thereof, it should also be understood that the above-described embodiments are not limited by any of the details of the foregoing description, but rather should be construed broadly within its spirit and scope as defined in the appended claims, and therefore all changes and modifications that fall within the metes and bounds of the claims, or equivalence of such metes and bounds are therefore intended to be embraced by the appended claims.

Claims (18)

1. A spiral CT image reconstruction device based on a neural network comprises:
a memory for storing instructions and three-dimensional projection data from a helical CT apparatus on an object under examination, the object under examination being preset to a multi-slice cross-section;
a processor configured to execute the instructions to:
respectively carrying out image reconstruction on each layer of section, and for the reconstruction of each layer of section, the method comprises the following steps: inputting the three-dimensional projection data related to the section to be reconstructed to the trained neural network model to obtain a section reconstruction image;
and forming a three-dimensional reconstruction image according to the reconstruction image of the multilayer section.
2. The neural network-based helical CT image reconstruction device of claim 1, wherein the neural network model comprises:
and the projection domain subnetwork is used for processing the input three-dimensional projection data related to the cross section to obtain two-dimensional projection data.
3. The neural network-based helical CT image reconstruction device of claim 2, wherein the neural network model further comprises:
and the domain conversion sub-network is used for analyzing and reconstructing the two-dimensional projection data to obtain an image domain section image.
4. The neural network-based helical CT image reconstruction device of claim 3, wherein the neural network model further comprises:
and the image domain sub-network is used for inputting the image domain section image, extracting the characteristics of the data in the image domain through the convolution neural network effect comprising a plurality of layers, further coupling the image characteristics and finally obtaining the section reconstruction image.
5. The spiral CT image reconstruction device based on the neural network as claimed in claim 1, wherein the three-dimensional projection data related to the section to be reconstructed input by the neural network model is projection data which is selected from all projection data of the spiral CT device and is related to the section to be reconstructed and rearranged.
6. The spiral CT image reconstruction device based on the neural network as claimed in claim 1, wherein the three-dimensional projection data of the inspected object stored in the memory is data of all projection data of the spiral CT device after interpolation preprocessing.
7. The neural network-based helical CT image reconstruction device of claim 1, wherein the learning data employed by the training comprises: the method comprises the steps that simulation data and/or actual data are/is obtained, wherein the simulation data comprise data obtained by carrying out spiral projection on a scanned object through numerical simulation, and an image truth value of the scanned object is used as a label; the actual data comprises spiral projection data obtained by scanning an object by a spiral CT device and a label image obtained by performing iterative reconstruction according to the projection data; or the actual model body with known material and structure is scanned spirally to obtain projection data, and the label image is formed by using the information of the known material and structure.
8. The neural network-based helical CT image reconstruction apparatus according to claim 2, wherein the projection domain sub-network is a convolutional neural network structure, the partial network having three-dimensional projection data as an input, the partial network being used for estimating fan beam and/or parallel beam projections of linear attenuation coefficients within a set cross-section, the fan beam and/or parallel beam projections being output from the projection domain sub-network.
9. The spiral CT image reconstruction device based on neural network as claimed in claim 4, wherein the image domain sub-network is a convolutional neural network structure, and the partial network takes the output of the domain conversion sub-network as input and outputs the optimized reconstructed image.
10. A helical CT image reconstruction method, comprising:
the object to be inspected is preset to be a multilayer section;
respectively carrying out image reconstruction on each layer of section, and for the reconstruction of each layer of section, the method comprises the following steps: inputting the three-dimensional projection data related to the section to be reconstructed to the trained neural network model to obtain a section reconstruction image;
and forming a three-dimensional reconstruction image according to the reconstruction image of the multilayer section.
11. The method of claim 10, wherein the neural network model comprises:
the projection domain subnetwork is used for processing three-dimensional projection data related to a section to be reconstructed to obtain two-dimensional projection data;
the domain conversion sub-network is used for analyzing and reconstructing the two-dimensional projection data to obtain an image domain section image;
and the image domain sub-network is used for receiving the image-domain cross-section image as input, extracting the features of the data in the image domain through a convolutional neural network comprising a plurality of layers, and further optimizing the image features to obtain an accurate reconstructed image of the cross-section to be reconstructed.
12. The method of claim 10, further comprising:
acquiring attenuation signal data by CT scanning equipment, and processing the attenuation signal data to obtain three-dimensional projection data;
and selecting projection data which is related to the section to be reconstructed and rearranged from all the three-dimensional projection data as the input of the neural network model.
13. The method of claim 11, wherein the training comprises:
training and optimizing parameters in the neural network model through simulation data and/or actual data;
and carrying out generalization processing on the optimized parameters through part of actual data, wherein the generalization processing comprises fine adjustment on the parameters.
14. The method of claim 13, wherein the training comprises:
training a projection sub-network and an image domain sub-network respectively or training a neural network integrally; either the parameters of the domain switching sub-network are calculated in advance or the parameters of the domain switching sub-network are trained.
15. The method of claim 10, wherein the three-dimensional projection data is interpolated from helical CT device ensemble projection data.
16. The method of claim 11, wherein the projection domain sub-network is a convolutional neural network structure, the partial network having three-dimensional projection data as input, the partial network functioning to estimate fan beam and/or parallel beam projections of linear attenuation coefficients within a set cross-section as output of the projection domain sub-network; the image domain sub-network is a convolutional neural network structure, the partial network takes an image domain set section image as input, the convolutional layers are pooled in the first half of the network structure, the image scale is reduced layer by layer, and the image scale is restored layer by layer through upsampling in the second half of the network structure.
17. A method for training a neural network, the neural network comprising:
the projection domain subnetwork is used for processing the input spiral CT three-dimensional projection data related to the section to be reconstructed to obtain two-dimensional projection data;
the domain conversion sub-network is used for analyzing and reconstructing the two-dimensional projection data to obtain a section image to be reconstructed;
the image domain sub-network is used for processing the image domain section image to obtain an accurate reconstruction image of a section to be reconstructed;
wherein the method comprises the following steps:
parameters in the neural network are adjusted by using a consistency cost function of a data model based on three-dimensional projection data, an image truth value and a plane reconstruction image with a set section.
18. A computer-readable storage medium having stored therein computer instructions which, when executed by a processor, implement the method of one of claims 10-16.




