CN117197349A - CT image reconstruction method and device - Google Patents
- Publication number: CN117197349A (application CN202311152367.0A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Abstract
The embodiment of the application provides a CT image reconstruction method and device. The method comprises the steps of acquiring projection data of a first object and inputting the projection data into a CT iterative expansion reconstruction network for processing to obtain a CT image corresponding to the first object, wherein the CT iterative expansion reconstruction network is obtained by performing neural network expansion on the fidelity term and on the compressed sensing regularization term or the total variation (TV) regularization term in a CT iterative reconstruction algorithm. Because the neural network expansion is applied directly to the compressed sensing regularization term or the TV regularization term as well as the fidelity term, a complete CT iterative expansion reconstruction network is obtained, which improves the interpretability of the overall CT reconstruction network and thus the imaging quality of the reconstructed CT image.
Description
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a CT image reconstruction method and device.
Background
Computed tomography (CT) technology reconstructs the linear attenuation coefficients of an object by detecting X-rays that pass through the object from different angles, and thereby reveals the materials and structures inside the object, greatly enhancing people's ability to observe an object's internal structure. CT technology offers the advantages of high measurement speed, low detection cost, high imaging quality and few restrictions on use, and is widely applied in fields such as medical diagnosis and treatment, image-guided intervention, and industrial non-destructive testing.
In recent years, with the wide application of CT technology, CT reconstruction algorithms have also been developed. Particularly, with the development of artificial intelligence technology, the deep learning technology has gradually become a main reconstruction method of the CT technology, and many problems existing in the conventional CT reconstruction methods are expected to be better solved by means of the deep learning technology. The main idea of applying the deep learning technology to CT reconstruction is to integrate the neural network into the reconstruction process, for example, the steps of weighting, filtering, back projection and other calculation in the analytical reconstruction algorithm are realized by the neural network, or the iterative calculation step in the iterative reconstruction algorithm is replaced by the neural network.
Applying deep learning technology to CT reconstruction has remarkably improved the imaging quality of CT images. However, owing to the black-box nature of deep neural networks, the combination of deep learning and CT reconstruction still faces challenges, namely the robustness and generalization of the reconstruction network. Because deep neural networks lack interpretability, the designed network depends heavily on the quality of the data set, requires massive amounts of high-precision data for training, and may produce incorrect results when the distribution of the input data changes even slightly. To overcome this lack of interpretability of conventional deep learning methods, researchers have proposed constructing deep neural networks by unfolding model-based iterative algorithms. Several schemes along this line have been proposed in industry, but they either require additional approximate calculations, which reduce the calculation accuracy of the iteratively unfolded neural network, or directly replace the compressed sensing regularization term with a neural network without performing any network unfolding, which reduces the interpretability of the constructed CT reconstruction network.
Disclosure of Invention
The embodiment of the application provides a CT image reconstruction method and device, which are used for improving the interpretability of an integral CT reconstruction network, so that the imaging quality of a reconstructed CT image can be improved.
In a first aspect, an embodiment of the present application provides a CT image reconstruction method, including:
acquiring projection data of a first object;
inputting the projection data into a CT iterative unfolding reconstruction network to obtain a target CT image corresponding to the first object;
the CT iterative expansion reconstruction network is obtained by expanding a neural network of a fidelity term and a compressed sensing regularization term or a total variation TV regularization term in a CT iterative reconstruction algorithm.
In the technical scheme, the neural network expansion is directly carried out on the fidelity term and the compressed sensing regularization term or the total variation TV regularization term in the CT iterative reconstruction algorithm, so that a complete CT iterative expansion reconstruction network can be obtained, the interpretability of the whole CT reconstruction network can be improved, and the imaging quality of the reconstructed CT image can be improved.
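As a hedged illustration of the unfolding idea, an iterative reconstruction loop can be rewritten so that each pass of the loop becomes one stage of a feed-forward network. The sketch below is an assumption about the general technique only, not the patent's actual network: it unfolds a plain gradient step on the fidelity term ||Ax − g||², with hypothetical per-stage step sizes standing in for learned layer parameters.

```python
import numpy as np

def unrolled_reconstruction(g, A, step_sizes):
    """Toy unrolled iterative reconstruction (illustrative only).

    Each pass of the loop mirrors one iteration of a gradient step on the
    data-fidelity term ||A x - g||^2; in a learned unrolled network the
    per-stage parameters (here just `step_sizes`) would be trained.
    """
    x = np.zeros(A.shape[1])
    for tau in step_sizes:              # one loop pass = one unrolled sub-network
        residual = A @ x - g            # forward-project and compare with data
        x = x - tau * (A.T @ residual)  # back-project the residual into image space
    return x
```

With a well-conditioned system matrix and small enough step sizes, the unrolled stages drive the iterate toward the least-squares solution, which is exactly the behavior a trained unrolled network inherits by construction.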
In one possible implementation, the projection data is obtained by the CT system scanning the first object according to system parameters, or the projection data is obtained by the computing device simulating the first object according to the system parameters.
According to the technical scheme, the acquisition modes of the projection data are flexible and various, and the requirements of different users can be met, so that the application scene of the scheme is wider.
In one possible implementation manner, the CT iterative expansion reconstruction network includes m sub-networks sequentially processed in series, the network structures of the m sub-networks are the same, and m is an integer greater than 1.
In the technical scheme, by adopting a plurality of sub-networks connected in series, effective iterative processing of projection data can be realized, so that a reconstructed CT image is obtained, and the interpretability of the whole CT reconstruction network can be improved.
In one possible implementation manner, inputting the projection data into a CT iterative unfolding reconstruction network to obtain a target CT image corresponding to the first object includes:
and sequentially passing the projection data through the m sub-networks to obtain the target CT image.
According to the technical scheme, the projection data are subjected to iterative processing through the plurality of sub-networks which are connected in series, so that reconstruction of the projection data can be effectively completed, and a target CT image with higher visual quality can be obtained.
In a possible implementation manner, a first sub-network at a starting position included in the m sub-networks is used as an input end of the CT iterative expansion reconstruction network, an mth sub-network at an end position included in the m sub-networks is used as an output end of the CT iterative expansion reconstruction network, input data acquired by the first sub-network includes the projection data, and output data determined by the mth sub-network includes the target CT image;
The i-th sub-network is configured to determine a plurality of different types of second data according to the projection data and a plurality of different types of first data output by the (i-1)-th sub-network, where the plurality of different types of second data and the projection data serve as input data of the (i+1)-th sub-network, the plurality of different types of second data are in one-to-one correspondence with the plurality of different types of first data, and the i-th sub-network is any one of the m sub-networks other than the first sub-network and the m-th sub-network.
In the above technical solution, the projection data and a plurality of different types of initial data are input into the first sub-network at the starting position to obtain a plurality of different types of data, and the projection data and the plurality of different types of data are then input into the next sub-network, and so on until the sub-network at the end position is reached, so that a CT image with high visual quality can be obtained through effective iteration.
In one possible implementation manner, any one of the sub-networks includes a front projection network layer, a plurality of back projection network layers and a plurality of convolutional neural network layers, wherein the back projection network layers are identical to one another, the convolutional neural network layers differ from one another, and the convolutional neural network layers include a first convolutional neural network layer, a second convolutional neural network layer, a third convolutional neural network layer, a fourth convolutional neural network layer, a fifth convolutional neural network layer and a sixth convolutional neural network layer;
For the ith sub-network, the plurality of different types of first data acquired by the ith sub-network include first type first data, second type first data, third type first data and fourth type first data, wherein the first type first data is used for indicating CT images in an iterative state;
a first back projection network layer included in the i-th sub-network is used for determining first back projection data according to the projection data, the front projection network layer included in the i-th sub-network is used for determining front projection data according to the first data of the first type, a second back projection network layer included in the i-th sub-network is used for determining second back projection data according to the front projection data, the first convolutional neural network layer included in the i-th sub-network is used for determining first sparse transformation data according to the first data of the first type, and the second convolutional neural network layer included in the i-th sub-network is used for determining second sparse transformation data according to the first sparse transformation data, where the first back projection network layer is any one of the plurality of back projection network layers, and the second back projection network layer is any one of the back projection network layers other than the first back projection network layer;
The third convolutional neural network layer included in the i-th sub-network is used for determining second data of the second type according to the first back projection data, the second back projection data and the first data of the second type, and the fourth convolutional neural network layer included in the i-th sub-network is used for determining second data of the third type according to the second sparse transform data and the first data of the third type;
the fifth convolutional neural network layer included in the i-th sub-network is used for determining second data of a fourth type according to the second data of the second type, the second data of the third type and the first data of the fourth type;
the sixth convolutional neural network layer included in the i-th sub-network is configured to determine the second data of the first type according to the second data of the fourth type and the first data of the fourth type.
According to the technical scheme, for each sub-network, the projection data and the data of different types are respectively input into the front projection network layer, the back projection network layers and the convolution neural network layers which are included in the sub-network, so that the feature extraction capability of the whole CT reconstruction network can be further improved, CT image reconstruction can be effectively completed, and the imaging quality of the reconstructed CT image can be improved.
In one possible implementation, the system parameters include at least one of: detector position, detector size and sampling interval, radiation source position, rotation center position of CT system, projection data number and acquisition angle, and size and sampling interval of reconstructed phantom.
In a second aspect, an embodiment of the present application further provides a CT image reconstruction apparatus, including:
the acquisition module is used for acquiring projection data of the first object;
the processing module is used for inputting the projection data into a CT iterative expansion reconstruction network to obtain a target CT image corresponding to the first object; the CT iterative expansion reconstruction network is obtained by performing neural network expansion on the fidelity term and on the compressed sensing regularization term or the total variation (TV) regularization term in a CT iterative reconstruction algorithm.
In one possible implementation, the projection data is obtained by the CT system scanning the first object according to system parameters, or the projection data is obtained by the computing device simulating the first object according to the system parameters.
In one possible implementation manner, the CT iterative expansion reconstruction network includes m sub-networks sequentially processed in series, the network structures of the m sub-networks are the same, and m is an integer greater than 1.
In one possible implementation manner, the processing module is specifically configured to:
and sequentially passing the projection data through the m sub-networks to obtain the target CT image.
In a possible implementation manner, a first sub-network at a starting position included in the m sub-networks is used as an input end of the CT iterative expansion reconstruction network, an mth sub-network at an end position included in the m sub-networks is used as an output end of the CT iterative expansion reconstruction network, input data acquired by the first sub-network includes the projection data, and output data determined by the mth sub-network includes the target CT image;
the i-th sub-network is configured to determine a plurality of different types of second data according to the projection data and a plurality of different types of first data output by the (i-1)-th sub-network, where the plurality of different types of second data and the projection data serve as input data of the (i+1)-th sub-network, the plurality of different types of second data are in one-to-one correspondence with the plurality of different types of first data, and the i-th sub-network is any one of the m sub-networks other than the first sub-network and the m-th sub-network.
In one possible implementation manner, any one of the sub-networks includes a front projection network layer, a plurality of back projection network layers and a plurality of convolutional neural network layers, wherein the back projection network layers are identical to one another, the convolutional neural network layers differ from one another, and the convolutional neural network layers include a first convolutional neural network layer, a second convolutional neural network layer, a third convolutional neural network layer, a fourth convolutional neural network layer, a fifth convolutional neural network layer and a sixth convolutional neural network layer;
for the ith sub-network, the plurality of different types of first data acquired by the ith sub-network include first type first data, second type first data, third type first data and fourth type first data, wherein the first type first data is used for indicating CT images in an iterative state;
a first back projection network layer included in the i-th sub-network is used for determining first back projection data according to the projection data, the front projection network layer included in the i-th sub-network is used for determining front projection data according to the first data of the first type, a second back projection network layer included in the i-th sub-network is used for determining second back projection data according to the front projection data, the first convolutional neural network layer included in the i-th sub-network is used for determining first sparse transformation data according to the first data of the first type, and the second convolutional neural network layer included in the i-th sub-network is used for determining second sparse transformation data according to the first sparse transformation data, where the first back projection network layer is any one of the plurality of back projection network layers, and the second back projection network layer is any one of the back projection network layers other than the first back projection network layer;
The third convolutional neural network layer included in the i-th sub-network is used for determining second data of the second type according to the first back projection data, the second back projection data and the first data of the second type, and the fourth convolutional neural network layer included in the i-th sub-network is used for determining second data of the third type according to the second sparse transform data and the first data of the third type;
the fifth convolutional neural network layer included in the i-th sub-network is used for determining second data of a fourth type according to the second data of the second type, the second data of the third type and the first data of the fourth type;
the sixth convolutional neural network layer included in the i-th sub-network is configured to determine the second data of the first type according to the second data of the fourth type and the first data of the fourth type.
In one possible implementation, the system parameters include at least one of: detector position, detector size and sampling interval, radiation source position, rotation center position of CT system, projection data number and acquisition angle, and size and sampling interval of reconstructed phantom.
In a third aspect, embodiments of the present application provide a computing device comprising:
a memory for storing a computer program;
and the processor is used for calling the computer program stored in the memory and executing the steps of the CT image reconstruction method according to the obtained program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer-executable program for causing a computer to perform the steps of a CT image reconstruction method.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a computer program or instructions which, when run on a computer, cause the computer to perform the steps of a CT image reconstruction method.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a CT image reconstruction method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a CT iterative expansion reconstruction network according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a sub-network according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a plurality of network layers included in a sub-network according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a CT image reconstruction device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The exemplary embodiments of the present application and their description are for explaining the present application, but are not limiting of the application. In addition, the same or similar reference numerals are used for the same or similar parts in the drawings and the embodiments.
It is to be understood that the terms "first," "second," and the like, as used herein, are not specifically intended to be used in a sequential or ordinal sense, nor are they intended to limit the application, and are interchangeable as appropriate, merely to distinguish between components or operations described in the same technical term.
The terms "comprising," "including," "having," "containing," and the like as used herein are intended to be inclusive and mean that they are encompassed by, but not limited to. In addition, the term "and/or" as used in this disclosure includes any or all combinations of such things.
Fig. 1 schematically illustrates a flow chart of a CT image reconstruction method according to an embodiment of the present application. The method flow may be performed by a CT image reconstruction device or by a component (such as a chip or circuit) capable of supporting the functions required by the CT image reconstruction device to implement the method. The CT image reconstruction device may be a computing device having a CT image reconstruction function, or may be another device having a CT image reconstruction function, or may be a functional component (such as a chip) having a CT image reconstruction function in the computing device or in another device. In order to facilitate description of the CT image reconstruction scheme provided in the embodiments of the present application, a CT image reconstruction method performed by a CT image reconstruction device is described below as an example.
As shown in fig. 1, the method flow may include:
in step 101, the ct image reconstruction apparatus acquires projection data of a first object.
Step 102, the CT image reconstruction device inputs the projection data to a CT iterative unfolding reconstruction network, so as to obtain a target CT image corresponding to the first object.
In step 101, the first object may be a human body, or may be an object (such as an animal body or other objects, etc.), which is not limited in this embodiment of the present application. In one example, projection data of the first object may be acquired by a CT hardware system by scanning the first object in accordance with system parameters. The CT hardware system may then transmit the projection data of the first object to a CT image reconstruction device. In another example, the projection data of the first object may also be obtained by the computing device by simulating the first object according to system parameters. The computing device may then transmit projection data of the first object to the CT image reconstruction apparatus.
The system parameters are corresponding parameters required by the CT hardware system to acquire projection data or the computing device to simulate the projection data. Illustratively, the system parameters may include at least one of: detector position, detector size and sampling interval, radiation source position, rotation center position of CT system, projection data number and acquisition angle, and size and sampling interval of reconstructed phantom.
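For illustration only, the system parameters listed above could be bundled into a single configuration object. All field names, default values and units below are hypothetical assumptions; the embodiment only names which quantities are involved.

```python
from dataclasses import dataclass

@dataclass
class CTSystemParams:
    """System parameters for acquiring or simulating projection data.

    Every field name, unit and default here is an illustrative assumption,
    not taken from the patent text.
    """
    detector_position_mm: float = 500.0   # detector position along the beam axis
    detector_size_px: int = 768           # number of detector pixels
    detector_sampling_mm: float = 0.5     # detector pixel pitch
    source_position_mm: float = -500.0    # radiation source position
    rotation_center_mm: float = 0.0       # rotation-center position of the CT system
    num_projections: int = 360            # number of projections / acquisition angles
    phantom_size_px: int = 512            # size of the reconstructed phantom grid
    phantom_sampling_mm: float = 0.5      # sampling interval of the phantom
```

A scan driver or simulator would then take one `CTSystemParams` instance instead of a long argument list, which keeps hardware acquisition and software simulation interchangeable.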
Taking acquisition of projection data by a CT hardware system as an example, the CT hardware system irradiates the first object with cone-beam X-rays, images the first object with a flat-panel detector, and extracts the projection data at the isocenter position on the flat-panel detector under different irradiation angles to form the projection data g. In the embodiment of the present application, for example, only the tomographic slice containing the rotation center is reconstructed. As an example, the number of projections is 360 and the length of the projection data is 768 pixels.
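As a minimal sketch of simulating projection data on a computing device, the toy forward projector below integrates a test image along axis-aligned directions only. A real system would use the cone-beam geometry and flat-panel detector described above, so this is purely illustrative and not the patent's acquisition model.

```python
import numpy as np

def simulate_projections(image, angles_deg):
    """Very simplified parallel-beam forward projection (illustrative only).

    Only axis-aligned angles (multiples of 90 degrees) are supported here,
    where a projection is an exact row- or column-sum of the image.
    """
    views = []
    for a in angles_deg:
        if a % 180 == 0:
            views.append(image.sum(axis=0))   # integrate along image rows
        elif a % 180 == 90:
            views.append(image.sum(axis=1))   # integrate along image columns
        else:
            raise NotImplementedError("only axis-aligned angles in this sketch")
    return np.stack(views)                    # shape: (num_angles, detector_pixels)

g = simulate_projections(np.ones((4, 4)), [0, 90])
```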
In step 102, the CT iterative expansion reconstruction network is obtained by the CT image reconstruction device performing neural network expansion on the fidelity term and on the compressed sensing regularization term or the total variation (TV) regularization term in the CT iterative reconstruction algorithm. In one example, when the network is obtained by expanding the fidelity term together with the compressed sensing regularization term, the CT image reconstruction device may solve the compressed-sensing-based CT reconstruction problem with a primal-dual algorithm, thereby obtaining an iterative reconstruction algorithm that is amenable to neural network expansion. The CT image reconstruction device then performs neural network expansion on this CT iterative reconstruction algorithm, in particular on the compressed sensing regularization term and, at the same time, on the fidelity term, so that the complete CT iterative expansion reconstruction network is obtained.
In another example, when the network is obtained by expanding the fidelity term together with the TV regularization term, the CT image reconstruction device may solve the TV-regularized CT reconstruction problem with a primal-dual algorithm, again obtaining an iterative reconstruction algorithm amenable to neural network expansion, and then performs neural network expansion on the TV regularization term and on the fidelity term simultaneously, so that the complete CT iterative expansion reconstruction network is obtained.
The CT iterative expansion reconstruction network can comprise m sub-networks which are sequentially processed in series, wherein the network structures of the m sub-networks are the same, and m is an integer greater than 1. After the CT image reconstruction device acquires the projection data of the first object, the projection data of the first object may sequentially pass through m sub-networks to obtain a target CT image corresponding to the first object (i.e., three-dimensional information of the first object, or may be understood as obtaining information inside the first object without damaging the first object). For example, in the embodiment of the present application, when the CT iterative expansion reconstruction network is obtained by performing neural network expansion on the fidelity term and the compressed sensing regularization term in the CT iterative reconstruction algorithm at the same time by the CT image reconstruction device, the value of m may be taken as 8, and when the CT iterative expansion reconstruction network is obtained by performing neural network expansion on the fidelity term and the total variation TV regularization term in the CT iterative reconstruction algorithm at the same time by the CT image reconstruction device, the value of m may be taken as 10.
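The serial pass through the m sub-networks can be sketched as a simple loop in which the projection data re-enter every stage; the sub-network callables below are placeholders, not the patent's trained layers.

```python
import numpy as np

def ct_unrolled_network(g, sub_networks, init_state):
    """Pass projection data through m serially connected sub-networks.

    Each sub-network maps (g, state) -> state, where `state` bundles the
    several types of intermediate data; the final state yields the target
    CT image. The sub-network callables are stand-ins for learned stages.
    """
    state = init_state
    for sub_net in sub_networks:      # m sub-networks with identical structure
        state = sub_net(g, state)     # the projection data re-enter every stage
    return state
```

Because every stage receives the same projection data g, the measured data constrain all m iterations rather than only the first, mirroring the fidelity term of the unfolded algorithm.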
Fig. 2 schematically illustrates a structural diagram of a CT iterative unfolding reconstruction network according to an embodiment of the present application. As shown in fig. 2, the CT iterative expansion reconstruction network includes m sub-networks, that is, sub-network 1 (or may be referred to as a first sub-network), sub-network 2 (or may be referred to as a second sub-network), sub-network 3 (or may be referred to as a third sub-network) … … sub-network m (or may be referred to as an mth sub-network). The first sub-network (i.e. sub-network 1) at the initial position included in the m sub-networks is used as an input end of the CT iterative expansion reconstruction network, and is used for acquiring input data, and the m sub-network (i.e. sub-network m) at the final position included in the m sub-networks is used as an output end of the CT iterative expansion reconstruction network, and is used for outputting target data. Specifically, for the first sub-network, the CT image reconstruction apparatus may input the projection data and a plurality of different types of initial data to the first sub-network, obtain a plurality of different types of data, and may use the projection data and the plurality of different types of data as input data of a next sub-network adjacent to the first sub-network. For the mth sub-network, the CT image reconstruction device may input the projection data and a plurality of different types of data output by a previous sub-network adjacent to the mth sub-network, to obtain a plurality of different types of target data, where the plurality of different types of target data includes a target CT image corresponding to the first object.
In addition, for any one of the m sub-networks other than the first sub-network and the m-th sub-network (such as the i-th sub-network), the CT image reconstruction apparatus may input the projection data and the plurality of different types of first data output by the (i−1)-th sub-network into the i-th sub-network to obtain a plurality of different types of second data. The plurality of different types of second data are in one-to-one correspondence with the plurality of different types of first data. The CT image reconstruction apparatus may then input the projection data and the plurality of different types of second data into the (i+1)-th sub-network, and so on, until the m-th sub-network, so as to obtain the target CT image (i.e., the reconstructed CT image) corresponding to the first object.
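The serial chaining described above can be sketched as follows. This is a minimal, hypothetical illustration: `make_subnet` is a toy stand-in for one trained sub-network stage (the patent's stages are neural networks with learned weights), and the scalar arithmetic merely shows how the projection data is fed to every stage while each stage's output becomes the next stage's input.

```python
# Hypothetical sketch of chaining m structurally identical sub-networks.

def make_subnet(stage_weight):
    # Toy stand-in for one trained sub-network: nudges the running
    # image estimate u toward the (already back-projected) data g.
    def subnet(g, u):
        return u + stage_weight * (g - u)
    return subnet

def reconstruct(g, subnets):
    # The projection data g is supplied to every stage; the estimate
    # produced by stage i is the input of stage i+1.
    u = 0.0  # initial image estimate (all zeros, as in the embodiment)
    for subnet in subnets:
        u = subnet(g, u)
    return u

m = 8  # the embodiment suggests m = 8 for the compressed-sensing variant
net = [make_subnet(0.5) for _ in range(m)]
print(reconstruct(1.0, net))  # prints 0.99609375 (approaches the target 1.0)
```

In the actual network each stage would carry its own trained parameters even though all m stages share one architecture.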
In the embodiment of the application, the sub-network structure of each iteration may be obtained by iteratively unfolding the CT iterative reconstruction algorithm based on the compressed sensing term (or the CT iterative reconstruction algorithm based on the TV regularization term), replacing the calculation step of each iteration of the iterative algorithm with a neural network.
The CT reconstruction problem based on the compressed sensing term (or the CT reconstruction problem based on the TV regularization term) is solved with a primal-dual algorithm, yielding a CT iterative reconstruction algorithm that is easy to unfold into a neural network. In this iterative reconstruction algorithm, each iteration is computed as:

p^{n+1} = (p^n + σ·A^T(A·ū^n − g)) / (1 + σ)

q^{n+1} = clip_{[−λ, λ]}(q^n + σ·Ψ^T Ψ·ū^n)

u^{n+1} = u^n − τ·p^{n+1} − τ·q^{n+1}

ū^{n+1} = u^{n+1} + θ·(u^{n+1} − u^n)

where g denotes the projection data, Ψ denotes the sparse transform matrix, A denotes the system matrix, the superscript T denotes the transpose operation, σ, λ, τ and θ denote weight parameters, clip_{[−λ, λ]}(·) clips each element to the interval [−λ, λ], p^n, q^n and u^n denote intermediate variables whose initial values are 0, ū^n denotes the CT image input to any one of the sub-networks, with initial value 0, and ū^{n+1} denotes the CT image output by that sub-network.
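One such primal-dual iteration can be sketched numerically as below. This is an illustrative toy, not the patent's configuration: the matrices A and Ψ, the data g, the weight values, and the clipping form used for the sparsity dual step are all assumptions chosen so the wiring can be executed end to end.

```python
# Runnable sketch of one classical primal-dual iteration for the
# problem min_u 1/2*||A u - g||^2 + lam*||Psi u||_1, with the dual
# variables p and q kept in the image domain (i.e. after A^T and Psi^T),
# matching the inputs of the CNN layers described in the text.

def matvec(M, x):
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def pd_iteration(g, A, Psi, p, q, u, u_bar,
                 sigma=0.5, lam=0.1, tau=0.2, theta=1.0):
    # Dual update for the fidelity term, driven by A^T(A u_bar - g):
    At = transpose(A)
    r = matvec(At, [ai - gi for ai, gi in zip(matvec(A, u_bar), g)])
    p_new = [(pi + sigma * ri) / (1.0 + sigma) for pi, ri in zip(p, r)]
    # Dual update for the sparsity term, driven by Psi^T Psi u_bar,
    # with elementwise clipping to [-lam, lam] (an assumed prox form):
    s = matvec(transpose(Psi), matvec(Psi, u_bar))
    q_new = [max(-lam, min(lam, qi + sigma * si)) for qi, si in zip(q, s)]
    # Primal update and over-relaxation (extrapolation) step:
    u_new = [ui - tau * pi - tau * qi for ui, pi, qi in zip(u, p_new, q_new)]
    u_bar_new = [un + theta * (un - ui) for un, ui in zip(u_new, u)]
    return p_new, q_new, u_new, u_bar_new

A = [[1.0, 0.0], [0.0, 2.0]]   # toy system matrix
Psi = [[1.0, -1.0]]            # toy sparse transform (finite difference)
g = [1.0, 2.0]                 # toy projection data
zeros = [0.0, 0.0]
p, q, u, ub = pd_iteration(g, A, Psi, zeros, zeros, zeros, zeros)
```

In the unfolded network, each of these four update steps is replaced by a shallow CNN, while the applications of A, A^T, Ψ and Ψ^T become the fixed projection and sparse-transform layers.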
Fig. 3 is a schematic diagram of a sub-network according to an embodiment of the present application. As shown in fig. 3, a sub-network includes a forward projection network layer (e.g., CNN_A), a plurality of back projection network layers (e.g., two layers CNN_{A^T}), and a plurality of convolutional neural network layers. Illustratively, the forward projection network layer CNN_A may be constructed using a ray-driven method, thereby realizing the forward projection operation of the iterative algorithm, and the back projection network layers CNN_{A^T} may be constructed using a pixel-driven method, thereby realizing the back projection operation of the iterative algorithm. The forward/back projection network layers are neural networks without adjustable parameters.
The plurality of convolutional neural network layers are different from each other, and include a first convolutional neural network layer (e.g., CNN_Ψ), a second convolutional neural network layer (e.g., CNN_{Ψ^T}), a third convolutional neural network layer (e.g., CNN_p), a fourth convolutional neural network layer (e.g., CNN_q), a fifth convolutional neural network layer (e.g., CNN_u), and a sixth convolutional neural network layer (e.g., CNN_ū).
In the embodiment of the application, shallow convolutional neural networks CNN_p, CNN_q, CNN_u and CNN_ū are used in place of the calculation of the corresponding variables in each iteration described above, and shallow neural networks CNN_Ψ and CNN_{Ψ^T} are used to realize the sparse transformation of the compressed sensing term (or TV regularization term) and its transpose, so that the feature extraction capability of the whole CT reconstruction network can be improved.
When the sparse transform Ψ is the image gradient operator ∇, the original CT reconstruction problem degenerates into a CT reconstruction problem based on the TV regularization term. Thus, the CT reconstruction algorithm unfolded from the TV regularization term is a particular implementation provided by the present application. In this particular implementation, convolutional neural networks with fixed parameters, CNN_∇ and CNN_{∇^T}, are used to realize the operation ∇ū and the operation ∇^T q. The convolution templates of the neural network CNN_∇ are the forward-difference kernels:

[−1 1] and [−1 1]^T

and the convolution templates of the neural network CNN_{∇^T} are the corresponding flipped kernels:

[1 −1] and [1 −1]^T
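These fixed templates can be checked directly: the forward-difference kernels realize the gradient ∇, the flipped kernels realize its transpose, and the adjoint identity ⟨∇u, q⟩ = ⟨u, ∇^T q⟩ confirms the second pair really is the transpose of the first. The zero-boundary convention below is an assumption made for illustration.

```python
# Gradient operator via the templates [-1, 1] (horizontal) and
# [-1, 1]^T (vertical), and its transpose via the flipped templates.

def grad(u):
    # Forward differences with zero padding beyond the image boundary.
    h, w = len(u), len(u[0])
    gx = [[(u[i][j + 1] if j + 1 < w else 0.0) - u[i][j]
           for j in range(w)] for i in range(h)]
    gy = [[(u[i + 1][j] if i + 1 < h else 0.0) - u[i][j]
           for j in range(w)] for i in range(h)]
    return gx, gy

def grad_T(gx, gy):
    # Transpose of grad: convolution with the flipped templates
    # [1, -1] and [1, -1]^T (i.e. the negative divergence).
    h, w = len(gx), len(gx[0])
    return [[-gx[i][j] - gy[i][j]
             + (gx[i][j - 1] if j > 0 else 0.0)
             + (gy[i - 1][j] if i > 0 else 0.0)
             for j in range(w)] for i in range(h)]

def dot(a, b):
    return sum(x * y for ra, rb in zip(a, b) for x, y in zip(ra, rb))

u = [[1.0, 2.0], [3.0, 5.0]]
qx = [[0.5, -1.0], [2.0, 0.0]]
qy = [[1.0, 0.0], [-0.5, 3.0]]
gx, gy = grad(u)
lhs = dot(gx, qx) + dot(gy, qy)   # <grad(u), q>
rhs = dot(u, grad_T(qx, qy))      # <u, grad_T(q)>
assert abs(lhs - rhs) < 1e-12     # adjoint identity holds
```

In the network these two operators become convolution layers whose weights are fixed to the templates above and excluded from training.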
Taking a certain sub-network (such as the j-th sub-network) among the m sub-networks as an example, assume that the plurality of different types of first data acquired by the j-th sub-network include first data of a first type ū^n, first data of a second type p^n, first data of a third type q^n, and first data of a fourth type u^n, where the first data of the first type ū^n may be used to indicate the CT image in the iterative state.
Specifically, the CT image reconstruction apparatus may input the projection data into a first back projection network layer CNN_{A^T} of the j-th sub-network to obtain first back projection data (e.g., A^T g), input the first data of the first type ū^n into the forward projection network layer CNN_A of the j-th sub-network to obtain forward projection data (e.g., A ū^n), and input the first data of the first type ū^n into the first convolutional neural network layer CNN_Ψ of the j-th sub-network to obtain first sparse transformation data (e.g., Ψ ū^n). The CT image reconstruction apparatus may then input the forward projection data (e.g., A ū^n) into a second back projection network layer CNN_{A^T} of the j-th sub-network to obtain second back projection data (e.g., A^T A ū^n), and input the first sparse transformation data (e.g., Ψ ū^n) into the second convolutional neural network layer CNN_{Ψ^T} of the j-th sub-network to obtain second sparse transformation data (e.g., Ψ^T Ψ ū^n).
The CT image reconstruction apparatus may then input the first back projection data, the second back projection data and the first data of the second type p^n into the third convolutional neural network layer CNN_p of the j-th sub-network to obtain second data of the second type p^{n+1}, and input the second sparse transformation data and the first data of the third type q^n into the fourth convolutional neural network layer CNN_q of the j-th sub-network to obtain second data of the third type q^{n+1}. Next, the CT image reconstruction apparatus may input the second data of the second type p^{n+1}, the second data of the third type q^{n+1} and the first data of the fourth type u^n into the fifth convolutional neural network layer CNN_u of the j-th sub-network to obtain second data of the fourth type u^{n+1}. Finally, the CT image reconstruction apparatus may input the first data of the fourth type u^n and the second data of the fourth type u^{n+1} into the sixth convolutional neural network layer CNN_ū of the j-th sub-network to obtain the second data of the first type ū^{n+1}.
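The data flow just described can be sketched structurally as follows. The `ops` and `cnns` entries here are hypothetical scalar stand-ins, not the patent's layers: in the actual network the operators are ray-driven/pixel-driven projection layers and the cnn_* callables are trained shallow convolutional networks.

```python
# Structural sketch of one sub-network's forward pass (the j-th stage),
# wired in the order described in the text.

def subnet_forward(g, u_bar, p, q, u, ops, cnns):
    A, A_T = ops["A"], ops["A_T"]
    Psi, Psi_T = ops["Psi"], ops["Psi_T"]
    bp1 = A_T(g)       # first back projection data    A^T g
    fp = A(u_bar)      # forward projection data       A u_bar
    bp2 = A_T(fp)      # second back projection data   A^T A u_bar
    s1 = Psi(u_bar)    # first sparse-transform data   Psi u_bar
    s2 = Psi_T(s1)     # second sparse-transform data  Psi^T Psi u_bar
    p_new = cnns["p"](bp1, bp2, p)       # CNN_p -> p^{n+1}
    q_new = cnns["q"](s2, q)             # CNN_q -> q^{n+1}
    u_new = cnns["u"](p_new, q_new, u)   # CNN_u -> u^{n+1}
    u_bar_new = cnns["u_bar"](u, u_new)  # CNN_u_bar -> u_bar^{n+1}
    return u_bar_new, p_new, q_new, u_new

# Assumed toy stand-ins so the wiring can be executed:
ops = {"A": lambda x: 2 * x, "A_T": lambda y: 2 * y,
       "Psi": lambda x: x, "Psi_T": lambda y: y}
cnns = {"p": lambda b1, b2, p: 0.5 * (b2 - b1) + 0.5 * p,
        "q": lambda s, q: 0.5 * s + 0.5 * q,
        "u": lambda p, q, u: u - 0.1 * p - 0.1 * q,
        "u_bar": lambda u_old, u_new: 2 * u_new - u_old}
state = subnet_forward(1.0, 0.0, 0.0, 0.0, 0.0, ops, cnns)
```

Chaining m such calls, each with its own `cnns`, reproduces the serial structure of fig. 2.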
In addition, in the embodiment of the present application, the structure of the plurality of network layers included in any one sub-network (such as the j-th sub-network) may be as shown in fig. 4. The number of channels of each variable in the j-th sub-network can also be seen in fig. 4, where the initial value of each variable is u^0 = 0, p^0 = 0, q^0 = 0 and ū^0 = 0, and the output of the last sub-network is the reconstructed CT image.
It should be noted that, in the testing (inference) stage of the CT iterative expansion reconstruction network, the target CT image corresponding to the first object calculated by the network is directly output. In the training stage, a loss function is constructed, the difference between the output value (i.e., the predicted value) of the CT iterative expansion reconstruction network and the true value is calculated, the difference is fed back as gradient information to the compressed-sensing-based (or TV-regularization-based) CT iterative expansion reconstruction network, and the model parameter values of the network are updated. In one example, the loss function may be constructed from point-to-point pixel differences or from image structural similarity, such as an L1-norm loss function, an L2-norm loss function, or an SSIM (structural similarity index) loss function. In another example, a convolutional neural network that extracts image structural features may also be used to construct the loss function, such as a VGG loss function. In an embodiment of the application, the L2-norm loss function is used as the loss function of the entire network model.
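The L2-norm loss mentioned above reduces to a mean squared pixel-wise difference between the predicted and true images, as in this minimal sketch (the averaging convention is an assumption; the embodiment does not specify sum versus mean):

```python
# Mean squared (L2-norm style) loss between a predicted CT image and
# the ground-truth image, both flattened to pixel lists.

def l2_loss(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

pred = [0.9, 1.1, 2.0]    # network output (predicted pixel values)
target = [1.0, 1.0, 2.0]  # ground-truth pixel values
print(l2_loss(pred, target))
```

During training, the gradient of this scalar with respect to the network parameters is what gets fed back to update the model.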
The above embodiment shows that existing CT iterative expansion reconstruction networks mainly unfold only the fidelity term of the CT iterative reconstruction algorithm and directly replace the regularization term with a neural network, which limits the interpretability of the overall neural network. According to the technical scheme provided by the embodiment of the application, the compressed-sensing-based CT reconstruction problem is solved with a primal-dual algorithm, so that an iterative reconstruction algorithm that is easy to unfold into a neural network can be obtained. The CT iterative reconstruction algorithm is then unfolded into a neural network: in particular, the compressed sensing regularization term (or total variation TV regularization term) of the CT iterative reconstruction algorithm is unfolded directly, and together with the unfolding of the fidelity term, a complete CT iterative expansion reconstruction network is obtained. Therefore, by fully unfolding the CT iterative reconstruction algorithm, the interpretability of the whole CT reconstruction network can be improved, and by replacing the sparse transformation of the compressed sensing regularization term and its transpose with shallow neural networks, the visual quality of the reconstructed CT image can be further improved.
Based on the same technical concept, fig. 5 schematically illustrates a CT image reconstruction apparatus provided in an embodiment of the present application, which may perform a flow of a CT image reconstruction method. The CT image reconstruction device may be a computing device having a CT image reconstruction function, or may be another device having a CT image reconstruction function, or may be a functional component (such as a chip) having a CT image reconstruction function in the computing device or in another device.
As shown in fig. 5, the apparatus includes:
an acquisition module 501, configured to acquire projection data of a first object;
the processing module 502 is configured to input the projection data to a CT iterative unfolding reconstruction network, so as to obtain a target CT image corresponding to the first object; the CT iterative expansion reconstruction network is obtained by expanding a neural network of a fidelity term and a compressed sensing regularization term or a total variation TV regularization term in a CT iterative reconstruction algorithm.
In one possible implementation, the projection data is obtained by scanning the first object by the CT system according to a system parameter, or the projection data is obtained by simulating the first object by the computing device according to the system parameter.
In one possible implementation manner, the CT iterative expansion reconstruction network includes m sub-networks sequentially processed in series, the network structures of the m sub-networks are the same, and m is an integer greater than 1.
In one possible implementation, the processing module 502 is specifically configured to:
and sequentially passing the projection data through the m sub-networks to obtain the target CT image.
In a possible implementation manner, a first sub-network at a starting position included in the m sub-networks is used as an input end of the CT iterative expansion reconstruction network, an mth sub-network at an end position included in the m sub-networks is used as an output end of the CT iterative expansion reconstruction network, input data acquired by the first sub-network includes the projection data, and output data determined by the mth sub-network includes the target CT image;
The i-th sub-network is configured to determine a plurality of different types of second data according to the projection data and a plurality of different types of first data output by the (i−1)-th sub-network, where the plurality of different types of second data and the projection data are used as input data of the (i+1)-th sub-network, the plurality of different types of second data are in one-to-one correspondence with the plurality of different types of first data, and the i-th sub-network is any one of the m sub-networks other than the first sub-network and the m-th sub-network.
In one possible implementation, any one of the sub-networks includes a forward projection network layer, a plurality of back projection network layers and a plurality of convolutional neural network layers, where the plurality of back projection network layers are identical to each other, the plurality of convolutional neural network layers are different from each other, and the plurality of convolutional neural network layers include a first convolutional neural network layer, a second convolutional neural network layer, a third convolutional neural network layer, a fourth convolutional neural network layer, a fifth convolutional neural network layer and a sixth convolutional neural network layer;
for the ith sub-network, the plurality of different types of first data acquired by the ith sub-network include first type first data, second type first data, third type first data and fourth type first data, wherein the first type first data is used for indicating CT images in an iterative state;
A first back projection network layer included in the i-th sub-network is used for determining first back projection data according to the projection data, the forward projection network layer included in the i-th sub-network is used for determining forward projection data according to the first data of the first type, a second back projection network layer included in the i-th sub-network is used for determining second back projection data according to the forward projection data, the first convolutional neural network layer included in the i-th sub-network is used for determining first sparse transformation data according to the first data of the first type, the second convolutional neural network layer included in the i-th sub-network is used for determining second sparse transformation data according to the first sparse transformation data, the first back projection network layer is any one of the plurality of back projection network layers, and the second back projection network layer is any one of the back projection network layers other than the first back projection network layer;
the third convolutional neural network layer included in the i-th sub-network is used for determining second data of the second type according to the first back projection data, the second back projection data and the first data of the second type, and the fourth convolutional neural network layer included in the i-th sub-network is used for determining second data of the third type according to the second sparse transform data and the first data of the third type;
The fifth convolutional neural network layer included in the i-th sub-network is used for determining second data of a fourth type according to the second data of the second type, the second data of the third type and the first data of the fourth type;
the sixth convolutional neural network layer included in the i-th sub-network is configured to determine the second data of the first type according to the second data of the fourth type and the first data of the fourth type.
In one possible implementation, the system parameters include at least one of: detector position, detector size and sampling interval, radiation source position, rotation center position of CT system, projection data number and acquisition angle, and size and sampling interval of reconstructed phantom.
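The system parameters listed above can be gathered into a single configuration object, as in this hypothetical sketch; the field names and the example values are illustrative assumptions, not the patent's notation.

```python
# Hypothetical grouping of the CT system parameters enumerated in the
# text into one configuration object for scanning or simulation.

from dataclasses import dataclass

@dataclass
class CTSystemParams:
    detector_position: tuple   # detector position
    detector_size: tuple       # detector size (elements per row/column)
    detector_spacing: float    # detector sampling interval
    source_position: tuple     # radiation source position
    rotation_center: tuple     # rotation center position of the CT system
    num_projections: int       # number of projection views
    acquisition_angles: tuple  # acquisition angles (here, degrees)
    phantom_size: tuple        # size of the reconstructed phantom (pixels)
    phantom_spacing: float     # sampling interval of the phantom

params = CTSystemParams(
    detector_position=(0.0, 500.0), detector_size=(512, 1),
    detector_spacing=1.0, source_position=(0.0, -500.0),
    rotation_center=(0.0, 0.0), num_projections=360,
    acquisition_angles=tuple(range(360)), phantom_size=(256, 256),
    phantom_spacing=1.0)
```

Either the CT scanner or the simulation of the first object would consume the same parameter set, which is why the claim covers both data sources.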
Based on the same technical idea, an embodiment of the present application provides a computing device, including:
a memory for storing a computer program;
and the processor is used for calling the computer program stored in the memory and executing the steps of the CT image reconstruction method according to the obtained program.
Based on the same technical idea, an embodiment of the present application provides a computer-readable storage medium storing a computer-executable program for causing a computer to execute steps of a CT image reconstruction method.
Based on the same conception, the embodiments of the present application also provide a computer program product comprising a computer program or instructions for causing a computer to perform the steps of the CT image reconstruction method when the computer program or instructions are run on the computer.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, or may be loaded onto a computer or other programmable data processing apparatus such that a series of operational steps are performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the application has been described in conjunction with specific features and embodiments thereof, it is evident that those skilled in the art may make numerous modifications and variations to the application without departing from the spirit and scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (10)
1. A method for reconstructing a computed tomography CT image, comprising:
acquiring projection data of a first object;
inputting the projection data into a CT iterative unfolding reconstruction network to obtain a target CT image corresponding to the first object;
The CT iterative expansion reconstruction network is obtained by expanding a neural network of a fidelity term and a compressed sensing regularization term or a total variation TV regularization term in a CT iterative reconstruction algorithm.
2. The method of claim 1, wherein the projection data is obtained by a CT system scanning the first object according to system parameters or the projection data is obtained by a computing device simulating the first object according to the system parameters.
3. The method according to claim 1 or 2, wherein the CT iterative unfolding reconstruction network comprises m sub-networks processed in series in turn, the network structures of the m sub-networks are identical, and the m is an integer greater than 1.
4. A method according to claim 3, wherein inputting the projection data into a CT iterative expansion reconstruction network to obtain a target CT image corresponding to the first object comprises:
and sequentially passing the projection data through the m sub-networks to obtain the target CT image.
5. The method of claim 4, wherein a first sub-network at a start position included in the m sub-networks is used as an input end of the CT iterative expansion reconstruction network, an mth sub-network at an end position included in the m sub-networks is used as an output end of the CT iterative expansion reconstruction network, input data acquired by the first sub-network includes the projection data, and output data determined by the mth sub-network includes the target CT image;
The i-th sub-network is configured to determine a plurality of different types of second data according to the projection data and a plurality of different types of first data output by the (i−1)-th sub-network, where the plurality of different types of second data and the projection data are used as input data of the (i+1)-th sub-network, the plurality of different types of second data are in one-to-one correspondence with the plurality of different types of first data, and the i-th sub-network is any one of the m sub-networks other than the first sub-network and the m-th sub-network.
6. The method of claim 5, wherein any one of the sub-networks comprises a forward projection network layer, a plurality of backward projection network layers, and a plurality of convolutional neural network layers, the plurality of backward projection network layers being identical, the plurality of convolutional neural network layers being different, the plurality of convolutional neural network layers comprising a first convolutional neural network layer, a second convolutional neural network layer, a third convolutional neural network layer, a fourth convolutional neural network layer, a fifth convolutional neural network layer, and a sixth convolutional neural network layer;
for the ith sub-network, the plurality of different types of first data acquired by the ith sub-network include first type first data, second type first data, third type first data and fourth type first data, wherein the first type first data is used for indicating CT images in an iterative state;
A first back projection network layer included in the i-th sub-network is used for determining first back projection data according to the projection data, the forward projection network layer included in the i-th sub-network is used for determining forward projection data according to the first data of the first type, a second back projection network layer included in the i-th sub-network is used for determining second back projection data according to the forward projection data, the first convolutional neural network layer included in the i-th sub-network is used for determining first sparse transformation data according to the first data of the first type, the second convolutional neural network layer included in the i-th sub-network is used for determining second sparse transformation data according to the first sparse transformation data, the first back projection network layer is any one of the plurality of back projection network layers, and the second back projection network layer is any one of the back projection network layers other than the first back projection network layer;
the third convolutional neural network layer included in the i-th sub-network is used for determining second data of the second type according to the first back projection data, the second back projection data and the first data of the second type, and the fourth convolutional neural network layer included in the i-th sub-network is used for determining second data of the third type according to the second sparse transform data and the first data of the third type;
The fifth convolutional neural network layer included in the i-th sub-network is used for determining second data of a fourth type according to the second data of the second type, the second data of the third type and the first data of the fourth type;
the sixth convolutional neural network layer included in the i-th sub-network is configured to determine the second data of the first type according to the second data of the fourth type and the first data of the fourth type.
7. The method of any of claims 1-6, wherein the system parameters include at least one of: detector position, detector size and sampling interval, radiation source position, rotation center position of CT system, projection data number and acquisition angle, and size and sampling interval of reconstructed phantom.
8. A CT image reconstruction apparatus, comprising:
the acquisition module is used for acquiring projection data of the first object;
the processing module is used for inputting the projection data into a CT iterative expansion reconstruction network to obtain a target CT image corresponding to the first object; the CT iterative expansion reconstruction network is obtained by expanding a neural network of a fidelity term and a compressed sensing regularization term or a total variation TV regularization term in a CT iterative reconstruction algorithm.
9. A computing device, comprising:
a memory for storing a computer program;
a processor for invoking a computer program stored in said memory, performing the method according to any of claims 1 to 7 in accordance with the obtained program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer-executable program for causing a computer to execute the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311152367.0A CN117197349A (en) | 2023-09-07 | 2023-09-07 | CT image reconstruction method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117197349A true CN117197349A (en) | 2023-12-08 |
Family
ID=89001034
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311152367.0A Pending CN117197349A (en) | 2023-09-07 | 2023-09-07 | CT image reconstruction method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117197349A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117456038A (en) * | 2023-12-22 | 2024-01-26 | 合肥吉麦智能装备有限公司 | Energy spectrum CT iterative expansion reconstruction system based on low-rank constraint |
CN117456038B (en) * | 2023-12-22 | 2024-03-22 | 合肥吉麦智能装备有限公司 | Energy spectrum CT iterative expansion reconstruction system based on low-rank constraint |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110462689B (en) | Tomographic reconstruction based on deep learning | |
RU2709437C1 (en) | Image processing method, an image processing device and a data medium | |
US11026642B2 (en) | Apparatuses and a method for artifact reduction in medical images using a neural network | |
Boink et al. | A partially-learned algorithm for joint photo-acoustic reconstruction and segmentation | |
US12039637B2 (en) | Low dose Sinogram denoising and PET image reconstruction method based on teacher-student generator | |
Dong et al. | X-ray CT image reconstruction via wavelet frame based regularization and Radon domain inpainting | |
CN113272869B (en) | Method and system for reconstructing three-dimensional shape from positioning sheet in medical imaging | |
KR20190138292A (en) | Method for processing multi-directional x-ray computed tomography image using artificial neural network and apparatus therefor | |
CN113424222A (en) | System and method for providing stroke lesion segmentation using a conditional generation countermeasure network | |
Jiao et al. | A dual-domain CNN-based network for CT reconstruction | |
CN117011673B (en) | Electrical impedance tomography image reconstruction method and device based on noise diffusion learning | |
Lahiri et al. | Sparse-view cone beam CT reconstruction using data-consistent supervised and adversarial learning from scarce training data | |
CN117197349A (en) | CT image reconstruction method and device | |
CN115115736A (en) | Image artifact removing method, device and equipment and storage medium | |
Gothwal et al. | Computational medical image reconstruction techniques: a comprehensive review | |
Zhang et al. | Nonsmooth nonconvex LDCT image reconstruction via learned descent algorithm | |
CN116503506B (en) | Image reconstruction method, system, device and storage medium | |
Ma et al. | A neural network with encoded visible edge prior for limited‐angle computed tomography reconstruction | |
Guo et al. | Noise-resilient deep learning for integrated circuit tomography | |
CN110853113A (en) | TOF-PET image reconstruction algorithm and reconstruction system based on BPF | |
Zhong et al. | Super-resolution image reconstruction from sparsity regularization and deep residual-learned priors | |
Xie et al. | 3D few-view CT image reconstruction with deep learning | |
Bazrafkan et al. | To recurse or not to recurse: a low-dose CT study | |
Borrelli et al. | Deep learning for accelerating Radon inversion in single-cells tomographic phase imaging flow cytometry | |
Zhao et al. | Deep learning for medical image reconstruction: Focus on MRI, CT and PET |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||