WO2020237873A1 - Neural network-based spiral CT image reconstruction method, device, and storage medium - Google Patents
Neural network-based spiral CT image reconstruction method, device, and storage medium
- Publication number
- WO2020237873A1 (PCT/CN2019/103038)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- network
- neural network
- data
- spiral
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Definitions
- the present disclosure relates to radiation imaging, and in particular to a method and equipment for reconstructing a spiral CT image based on a neural network, and a storage medium.
- X-ray CT (Computed Tomography) imaging systems are widely used in medical diagnosis, security inspection, industrial non-destructive testing, and other fields.
- the ray source and the detector collect a series of projection data according to a certain trajectory, and the three-dimensional spatial distribution of the linear attenuation coefficient of the object under the ray energy can be obtained through the restoration of the image reconstruction algorithm.
- CT image reconstruction is to restore the linear attenuation coefficient distribution from the projection data collected by the detector, which is the core step of CT imaging.
- At present, analytical reconstruction algorithms such as Filtered Back-Projection (FBP) and Feldkamp-Davis-Kress (FDK), and iterative reconstruction methods such as the Algebraic Reconstruction Technique (ART) and Maximum A Posteriori (MAP), are mainly used in practice.
- the analytical reconstruction is fast, but it is limited to the traditional system architecture and cannot handle problems such as missing data and large noise well.
- the iterative reconstruction algorithms apply to a wide range of system architectures and can achieve better reconstruction results for non-standard scanning trajectories, low-dose high-noise data, and missing projection data. However, iterative algorithms often require many iterations, so reconstruction takes a long time; for three-dimensional spiral CT, with its much larger data scale, they are even harder to use in practice.
- Deep learning has made significant developments in computer vision and natural language processing.
- convolutional neural networks have become the mainstream network structure for applications such as image classification and detection because of their concise structure, effective feature extraction, and compressed parameter space. However, there has been no prior research on applying neural networks to spiral CT image reconstruction.
- a spiral CT image reconstruction device based on a neural network which includes:
- the memory is used to store instructions and three-dimensional projection data of the inspected object acquired by the spiral CT equipment, where the inspected object is preset as multi-layer sections;
- the processor is configured to execute the instructions in order to:
- Image reconstruction is performed on each layer section respectively, and the reconstruction of each layer section includes: inputting the three-dimensional projection data related to the section to be reconstructed to the trained neural network model to obtain the section reconstruction image;
- a three-dimensional reconstructed image is formed according to the reconstructed image of the multi-layer section.
- a spiral CT image reconstruction method which includes:
- the inspected object is preset as a multi-layer section
- Image reconstruction is performed on each layer section respectively, and the reconstruction of each layer section includes: inputting the three-dimensional projection data related to the section to be reconstructed to the trained neural network model to obtain the section reconstruction image;
- a three-dimensional reconstructed image is formed according to the reconstructed image of the multi-layer section.
- a method for training a neural network including:
- the projection domain sub-network is used to process the input spiral CT three-dimensional projection data related to the section to be reconstructed to obtain two-dimensional projection data;
- the domain conversion sub-network is used to analyze and reconstruct the two-dimensional projection data to obtain the cross-sectional image to be reconstructed;
- the image domain sub-network is used to process the cross-sectional image of the image domain to obtain an accurate reconstructed image of the cross-section to be reconstructed;
- the method includes:
- the parameters of the neural network are adjusted using a consistency cost function of a data model based on the input three-dimensional projection data, the ground-truth image, and the plane reconstruction image of the set section.
- a computer-readable storage medium in which computer instructions are stored, and when the instructions are executed by a processor, the spiral CT image reconstruction method as described above is realized.
- the neural network-based spiral CT image reconstruction device of the present disclosure combines the advantages of the deep network and the particularity of the spiral CT imaging problem, and the provided device can reconstruct the three-dimensional projection data into a more accurate three-dimensional image;
- the present disclosure trains the network through a targeted neural network model architecture, combined with simulated and actual data, so as to reliably, effectively, and comprehensively cover all system information and the collective information of the imaged objects, accurately reconstruct the object image, and suppress the noise caused by low dose and the artifacts caused by missing data;
- although the training process of the neural network model of the present disclosure requires a large amount of data and computation, the actual reconstruction process does not require iteration.
- the amount of calculation required for reconstruction is comparable to the analytical reconstruction method, and is much faster than the iterative reconstruction algorithm.
- Fig. 1 shows a schematic structural diagram of a spiral CT system according to an embodiment of the present disclosure
- FIG. 2A is a schematic diagram of the spiral movement of the detector relative to the object under inspection in the spiral CT system shown in FIG. 1;
- FIG. 2B is a schematic diagram of the three-dimensional projection data corresponding to the signals detected by the detector in the spiral CT system.
- FIG. 3 is a schematic diagram of the structure of the control and data processing device in the spiral CT system shown in FIG. 1;
- FIG. 4 shows a schematic diagram of the principle of a spiral CT image reconstruction device based on a neural network according to an embodiment of the present disclosure
- Fig. 5 shows a schematic structural diagram of a neural network according to an embodiment of the present disclosure
- Fig. 6 is a visual network structure diagram of a neural network according to an embodiment of the disclosure.
- Fig. 7 shows an exemplary network structure of the projection domain sub-network
- Fig. 8 is a schematic flowchart describing a method for reconstructing a spiral CT image according to an embodiment of the present disclosure.
- references to "one embodiment," "an embodiment," "an example," or "example" mean that a specific feature, structure, or characteristic described in conjunction with the embodiment or example is included in at least one embodiment of the present disclosure. Therefore, the phrases "in one embodiment," "in an embodiment," "an example," or "example" appearing in various places throughout the specification do not necessarily all refer to the same embodiment or example.
- specific features, structures or characteristics may be combined in one or more embodiments or examples in any suitable combination and/or subcombination.
- those of ordinary skill in the art should understand that the term “and/or” used herein includes any and all combinations of one or more related listed items.
- this disclosure proposes a convolutional-neural-network-based reconstruction method for spiral CT equipment under large-pitch scanning, which deeply mines the data information, incorporates the physical laws of the spiral CT system, and designs a unique network architecture and training method, so that higher-quality images can be reconstructed in a shorter time.
- the embodiments of the present disclosure propose a spiral CT image reconstruction method and device based on a neural network, and a storage medium.
- the neural network is used to process the three-dimensional projection data of the inspected object from the spiral CT equipment to obtain the volume distribution of the linear attenuation coefficient of the inspected object.
- the neural network may include: a projection domain sub-network, a domain conversion sub-network, and an image domain sub-network.
- the projection domain sub-network processes the input 3D projection data to obtain 2D projection data.
- the domain conversion sub-network analyzes and reconstructs the two-dimensional projection data to obtain a set cross-sectional image of the image domain.
- the image domain sub-network inputs a cross-sectional image, and through the action of a convolutional neural network containing several layers, the features of the data in the image domain are collected, and the image features are further extracted and coupled with each other to obtain an accurate reconstructed image of the set cross-section.
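The three cascaded stages described above can be sketched as composed callables. The operators below are crude placeholders (a mean over detector rows, a constant back-mapping, an identity refinement), chosen only to show the data flow and array shapes, not the patent's actual layers:

```python
import numpy as np

def projection_domain_net(P3d):
    """Placeholder: collapse 3-D spiral data (C x A' x R') to 2-D projections (C x A')."""
    return P3d.mean(axis=2)  # stand-in for the learned convolutional mapping

def domain_conversion_net(p2d):
    """Placeholder: analytic-style linear mapping from projections to an image."""
    n = p2d.shape[0]
    return np.full((n, n), p2d.mean())  # stand-in for weighted filtered back-projection

def image_domain_net(img):
    """Placeholder: image-domain refinement (identity plus a zero residual)."""
    return img + np.zeros_like(img)

def reconstruct(P3d):
    # Cascade: projection domain -> domain conversion -> image domain.
    return image_domain_net(domain_conversion_net(projection_domain_net(P3d)))

P3d = np.random.rand(16, 360, 4)   # hypothetical C x A' x R' input
img = reconstruct(P3d)
assert img.shape == (16, 16)
```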
- Fig. 1 shows a schematic structural diagram of a spiral CT system according to an embodiment of the present disclosure.
- the spiral CT system according to this embodiment includes an X-ray source 20, a mechanical motion device 30, and a detector and data acquisition system 10, and performs a spiral CT scan on the inspected object 60.
- the X-ray source 10 may be, for example, an X-ray machine, and a suitable X-ray machine focus size can be selected according to the imaging resolution.
- an X-ray machine may not be used, but a linear accelerator or the like may be used to generate the X-ray beam.
- the mechanical movement device includes a stage 60 and a frame 30.
- the stage can move along the axial direction of the cross section (the direction perpendicular to the paper surface), and the frame 30 can also rotate, and at the same time drive the detector on the frame and the X-ray source 10 to rotate synchronously.
- by translating the stage while synchronously rotating the detector and the X-ray source, the detector is made to move spirally relative to the inspected object.
- the detector and data acquisition system 10 includes an X-ray detector and a data acquisition circuit.
- the X-ray detector can use a solid detector, a gas detector or other detectors, and the embodiments of the present disclosure are not limited thereto.
- the data acquisition circuit includes a readout circuit, an acquisition trigger circuit, and a data transmission circuit.
- the detector usually acquires analog signals, which can be converted into digital signals through the data acquisition circuit.
- the detectors may be one row of detectors or multiple rows of detectors. For multiple rows of detectors, different row spacings can be set.
- the control and data processing device 60 includes, for example, a spiral CT image reconstruction device installed with a control program and a neural network. It is responsible for controlling the operation of the spiral CT system, including mechanical rotation, electrical control, and safety interlock control, for training the neural network (i.e., the machine learning process), and for using the trained neural network to reconstruct CT images from the projection data.
- Fig. 2A is a schematic diagram of the trajectory of the spiral movement of the detector relative to the inspected object in the CT spiral system shown in Fig. 1.
- the stage can be translated back and forth (that is, the direction perpendicular to the paper in Figure 1).
- the inspected object moves synchronously with it; at the same time, the detector moves in a circular motion around the central axis of the stage.
- the relative movement relationship between the detector and the set section of the inspected object corresponding to the image μ to be reconstructed is that the detector moves spirally around the set section.
- FIG. 3 shows a schematic structural diagram of the control and data processing device 60 shown in FIG. 1.
- the data collected by the detector and the data collection system 10 is stored in the storage device 310 through the interface unit 370 and the bus 380.
- the read-only memory (ROM) 320 stores configuration information and programs of the computer data processor.
- the random access memory (RAM) 330 is used to temporarily store various data during the working process of the processor 350.
- the storage device 310 also stores computer programs for data processing, such as a program for training a neural network, a program for reconstructing a CT image, and so on.
- the internal bus 380 connects the aforementioned storage device 310, read-only memory 320, random access memory 330, input device 340, processor 350, display device 360, and interface unit 370.
- the neural network-based spiral CT image reconstruction device and the control and data processing device 60 in the embodiments of the present disclosure share the storage device 310, internal bus 380, read-only memory (ROM) 320, display device 360, processor 350, etc., to realize the reconstruction of spiral CT images.
- the instruction code of the computer program commands the processor 350 to execute the algorithm for training the neural network and/or the algorithm for reconstructing the CT image.
- once the reconstruction result is obtained, it is displayed on a display device 360 such as an LCD display, or output directly in hard-copy form, such as printing.
- the above-mentioned system is used to perform a spiral CT scan on the inspected object to obtain the original attenuated signal.
- Such attenuation signal data is three-dimensional, denoted as P, where P is a matrix of size C×R×A: C is the number of detector columns (the column direction indicated in Figures 2A and 2B), R is the number of detector rows (the row direction indicated in Figures 2A and 2B, corresponding to the rows of a multi-row detector), and A is the number of projection angles collected by the detector (the dimension indicated in Figure 2B). That is, the spiral CT projection data is organized in matrix form.
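A minimal NumPy sketch of organizing raw attenuation readings into the C×R×A matrix P described above; the dimensions (16 columns, 4 rows, 720 angles) are hypothetical, chosen only to illustrate the layout:

```python
import numpy as np

# Hypothetical detector geometry: 16 columns, 4 rows, 720 projection angles.
C, R, A = 16, 4, 720

# Raw attenuation signals, one reading per detector cell per angle (angle-major, as acquired).
raw = np.random.rand(A, R, C)

# Organize into the C x R x A matrix P used in the text.
P = np.transpose(raw, (2, 1, 0))
assert P.shape == (C, R, A)
```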
- the original attenuated signal is preprocessed into three-dimensional projection data (see Figure 2B).
- the projection data can be obtained by preprocessing the raw signals acquired by the spiral CT system, for example with a negative logarithmic transformation.
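The negative logarithmic transformation follows from the Beer-Lambert law: for incident intensity I0 and measured intensity I, the line integral of the attenuation coefficient is p = -ln(I/I0). A minimal sketch (the intensity values are illustrative):

```python
import numpy as np

I0 = 10000.0                              # incident (unattenuated) intensity
I = np.array([8000.0, 5000.0, 2500.0])    # measured intensities behind the object

# Negative logarithmic transform: line integrals of the linear attenuation coefficient.
p = -np.log(I / I0)
```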
- the processor 350 in the control device executes the reconstruction program: it uses the trained neural network to process the projection data to obtain the two-dimensional projection data of the set section, analytically reconstructs that two-dimensional projection data to obtain the set cross-sectional image in the image domain, and then further processes the image-domain cross-sectional image to obtain a plane reconstruction image of the set section.
- the neural network may be a trained convolutional neural network, such as a U-net.
- the convolutional neural network may include convolutional layers, pooling, and fully connected layers.
- the convolutional layer extracts feature representations of the input data, and each convolutional layer applies a nonlinear activation function.
- the pooling layer refines the representation of features. Typical operations include average pooling and maximum pooling.
- One or more fully connected layers implement high-order signal nonlinear synthesis operations, and the fully connected layers also have nonlinear activation functions. Commonly used nonlinear activation functions are Sigmoid, Tanh, ReLU and so on.
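The common activation functions named above (Sigmoid, Tanh, ReLU) can be sketched directly in NumPy:

```python
import numpy as np

def sigmoid(x):
    # Squashes inputs into (0, 1); sigmoid(0) = 0.5.
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Squashes inputs into (-1, 1); tanh(0) = 0.
    return np.tanh(x)

def relu(x):
    # Clips negative inputs to zero, passes positives unchanged.
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 2.0])
s, t, r = sigmoid(x), tanh(x), relu(x)
```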
- the interpolation methods include but are not limited to linear interpolation and cubic spline interpolation.
- the X-rays emitted by the X-ray sources 20 located at different positions are transmitted through the inspected object 60 and received by the detector, converted into electrical signals and then into digital signals representing the attenuation values, which are preprocessed into projection data for reconstruction by the computer.
- Fig. 4 shows a schematic diagram of the principle of a spiral CT image reconstruction device based on a neural network according to an embodiment of the present disclosure.
- in the neural network-based spiral CT image reconstruction device of the embodiment of the present disclosure, a reconstructed image of the set section of the inspected object is obtained by inputting the three-dimensional projection data into the trained neural network model.
- the neural network model is trained and the parameters in the network are optimized.
- the network learns from the training set data through training and generalization: the parameters in the neural network model are trained and optimized on simulated and/or actual data, and the optimized parameters are then generalized on some actual data, where the generalization includes fine-tuning the parameters.
- Fig. 5 shows a schematic structural diagram of a neural network according to an embodiment of the present disclosure.
- the neural network of the embodiment of the present disclosure may include three cascaded sub-networks, which are independent neural networks, namely, the projection domain sub-network, the domain conversion sub-network and the image domain sub-network.
- Fig. 6 is a visual network structure diagram of a neural network according to an embodiment of the disclosure. It visually shows the types of data before and after processing by the sub-networks at each level.
- the three-level sub-network will be specifically described with reference to FIGS. 5 and 6.
- the projection domain sub-network inputs the three-dimensional projection data.
- the three-dimensional projection data is the data received by the detector in the spiral CT system.
- This sub-network is used as the first part of the neural network structure to convert the three-dimensional spiral projection to the two-dimensional planar projection.
- the network takes the spiral projection data related to a certain cross-section of the object to be reconstructed (the cross-section is set as the cross-section of the image to be reconstructed) as input.
- the projection domain sub-network may include several convolutional layers; after the spiral projection data passes through these layers, the equivalent two-dimensional fan-beam (or parallel-beam) projection data of the object is output.
- This part of the network aims to extract the characteristics of the original spiral CT projection data through the convolutional neural network, and then estimate the independent fan beam (or parallel beam) projections between sections.
- the main task is to simplify the high-complexity problem of spiral CT projection into two-dimensional in-plane projection, which can not only eliminate the impact of cone angle effect, but also simplify subsequent reconstruction problems.
- the resources and computation required for two-dimensional reconstruction are far less than spiral CT reconstruction.
- Fig. 7 shows an exemplary network structure of the projection domain sub-network.
- the projection data related to the section to be reconstructed is selected from the above projection data P and rearranged; the result, denoted P', serves as the input of the projection domain sub-network.
- the specific operation is as follows: taking the axial coordinate of the reconstructed section as the center, select the data covering the 180 degrees of spiral scanning angle before and after it, find the detector rows corresponding to the reconstructed section at each scan angle, and rearrange the data into a matrix of size C×A'×R'.
- A' represents the number of selected spiral projection angles, covering 360 degrees in total
- R' represents the maximum number of corresponding detector rows over all angles.
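A minimal sketch of the rearrangement into the C×A'×R' input P'. The row-selection logic is simplified: a fixed window of rows is taken at every angle, whereas the patent selects rows per angle according to the spiral geometry; all dimensions and the center angle index are hypothetical:

```python
import numpy as np

C, R, A = 16, 8, 1440          # full spiral data: columns x rows x angles (hypothetical)
P = np.random.rand(C, R, A)

A_prime = 360                  # angles covering 180 degrees before and after the section
R_prime = 4                    # max number of relevant detector rows per angle

center_angle = 700             # hypothetical angle index nearest the section's axial coordinate
angle_idx = np.arange(center_angle - A_prime // 2, center_angle + A_prime // 2)

# Rearranged input P' of size C x A' x R' (fixed row window as a simplification).
P_rearranged = np.transpose(P[:, :R_prime, angle_idx], (0, 2, 1))
assert P_rearranged.shape == (C, A_prime, R_prime)
```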
- the projection data of the linear attenuation coefficient distribution of the set section (the plane to be reconstructed) under the fan beam projection condition is denoted as p, and p is a matrix of size C ⁇ A'.
- analytical reconstruction methods, including but not limited to PI-original, can be used to reconstruct the cross-sectional image corresponding to the input spiral projection data
- let H represent the system matrix of fan-beam scanning; the forward projection through H is used to form the residual of the projection domain sub-network.
- this part of the network takes the rearranged projection data P' as input, and its function is to estimate the two-dimensional sectional projection p.
- This part of the network is composed of multiple convolutional layers, whose two-dimensional convolution kernels come in K scales.
- a convolution kernel of a given scale has two dimensions:
- the first dimension is defined as the detector direction
- the second dimension is the scanning angle direction.
- the lengths of the convolution kernel in the two dimensions need not be the same; for example, 3×1, 3×5, or 7×3 convolution kernels may be used.
- several convolution kernels can be set for each scale; all convolution kernels are network parameters to be determined. In the pooling part of the network, the outputs of the convolutional layers are pooled and the image scale is reduced layer by layer. In the up-sampling part, the outputs are up-sampled to restore the image scale layer by layer. To retain more image detail, feature maps of the same scale from the network output before the pooling part and the network output after the up-sampling part are stitched together in the third dimension, as shown in Figure 7.
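The pool / up-sample / concatenate pattern described above can be sketched in plain NumPy; 2×2 average pooling and nearest-neighbor up-sampling stand in for the learned layers:

```python
import numpy as np

def avg_pool2x2(x):
    # 2x2 average pooling: halves each spatial dimension.
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2x2(x):
    # Nearest-neighbor up-sampling: doubles each spatial dimension.
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

x = np.random.rand(8, 8)          # feature map before pooling
down = avg_pool2x2(x)             # encoder path: scale reduced 8x8 -> 4x4
up = upsample2x2(down)            # decoder path: scale restored 4x4 -> 8x8

# Skip connection: stitch same-scale maps together along a third (channel) dimension.
skip = np.stack([x, up], axis=2)
assert skip.shape == (8, 8, 2)
```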
- FIG. 7 exemplifies the projection domain sub-network as an example of a specific structure of a U-shaped network
- those skilled in the art can think of other structures that can also implement the technical solutions of the present disclosure.
- those skilled in the art can also contemplate using other networks as the image domain network, such as an Auto-Encoder or a Fully Convolutional Neural Network, which can also implement the technical solutions of the present disclosure.
- the domain conversion sub-network inputs the two-dimensional projection data output by the above-mentioned projection domain sub-network, and obtains a set cross-sectional image of the image domain after analysis and reconstruction.
- this sub-network is used for the domain conversion from the projection domain to the image domain.
- This part of the network realizes the operation from the two-dimensional fan beam (or parallel beam) projection domain data to the image domain cross-sectional image.
- the weight coefficients between network nodes (neurons) in the network can be determined by the scanning geometry in the two-dimensional fan beam (or parallel beam) CT scanning relationship.
- the input of this layer is the fan-beam (or parallel-beam) projection data output by the first part, and the output is the preliminary CT reconstruction image (that is, the set cross-sectional image in the image domain). Since the first sub-network has reduced the reconstruction problem to two dimensions, the domain conversion in this part can be completed directly using the matrix operators of two-dimensional analytical reconstruction. The operators in this part can also be implemented as a fully connected network, trained on pairs of simulated or actual projection data and reconstructed images. The output of this part of the network can be used as the final output, or it can be passed through the image domain sub-network before output.
- the domain conversion sub-network specifically obtains the image domain output by performing inverse calculation of the above-mentioned p from the projection domain to the image domain.
- W completes the weighting of the projection domain data
- F corresponds to a ramp filter convolution operation
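The weighting W and ramp-filter F steps can be sketched for the simpler parallel-beam case, where W reduces to the identity; the 1-D ramp filter is applied to each projection view via the FFT. The geometry and discretization below are illustrative, not the patent's operators:

```python
import numpy as np

def ramp_filter(p):
    """Apply the ramp filter F to each row of projections (views x detector bins)."""
    n = p.shape[1]
    freqs = np.fft.fftfreq(n)     # digital frequencies
    ramp = np.abs(freqs)          # |w| ramp filter response
    return np.real(np.fft.ifft(np.fft.fft(p, axis=1) * ramp, axis=1))

views, bins = 180, 64
p = np.random.rand(views, bins)   # 2-D projection data (hypothetical)

W = np.ones_like(p)               # fan-beam weighting; identity for the parallel-beam case
p_filtered = ramp_filter(W * p)   # weighted, ramp-filtered projections, ready to back-project
assert p_filtered.shape == (views, bins)
```

Note that the ramp filter zeroes the DC component, so a constant projection filters to zero.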
- the image domain sub-network inputs the set section image of the image domain, and after further extraction and fusion of the image features, a plane reconstruction image of the set section is formed.
- the image domain sub-network is the third part. It takes the set cross-sectional image in the image domain output by the domain conversion sub-network as input and, through a convolutional neural network containing several layers, collects the features of the image-domain data. Taking the target image as the learning goal, the image features are further extracted and coupled with each other to optimize image quality in the image domain. The output of this part is the final output of the entire network.
- the image domain sub-network adopts a U-net type neural network structure similar to the first part.
- the role is to optimize the image domain.
- the outputs of the convolutional layers are pooled, and the image scale is reduced layer by layer.
- the outputs of the convolutional layers are up-sampled to restore the image scale layer by layer.
- This part of the network can use residual training; that is, the output of the last convolutional layer is added to the input image to give the estimate of the two-dimensional reconstructed image μ.
- the image domain sub-network may, for example, use 3×3 convolution kernels, with both pooling and up-sampling using a size of 2×2, and ReLU as the activation function.
- a cost function can be used as the objective function to be optimized. The cost function of the overall network can use, but is not limited to, measures commonly used in the field such as the l-norm, RRMSE, and SSIM, or a combination of multiple cost functions.
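As one concrete example of such a cost term, RRMSE (relative root-mean-square error) between a reconstruction and the ground truth can be sketched as:

```python
import numpy as np

def rrmse(recon, truth):
    """Relative root-mean-square error between a reconstruction and the ground truth."""
    return np.sqrt(np.mean((recon - truth) ** 2)) / np.sqrt(np.mean(truth ** 2))

truth = np.ones((4, 4))
recon = truth + 0.1       # uniform 10% overestimate
err = rrmse(recon, truth)
# rrmse = sqrt(mean(0.01)) / sqrt(mean(1)) = 0.1
```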
- neural network parameters can be trained, and the training data includes simulated data and actual data.
- for the simulated data, a basic mathematical model of the scanned object is established, and the spiral projection data is generated according to a model of the actual system. After preprocessing, it is used as the network input, and the ground truth of the scanned object is used as the label to train the network parameters.
- the simulation data can be lung simulation data, including 30 cases, each containing 100 slices, for a total of 3000 samples, with data augmentation. Augmentation methods include but are not limited to rotation, flipping, and so on.
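The rotation and flipping augmentations mentioned above can be sketched for (image, label) training pairs; each source slice yields eight augmented pairs (four rotations, each with and without a horizontal flip):

```python
import numpy as np

def augment(image, label):
    """Yield rotated and flipped copies of an (image, label) training pair."""
    for k in range(4):                        # 0, 90, 180, 270 degree rotations
        rot_img = np.rot90(image, k)
        rot_lbl = np.rot90(label, k)
        yield rot_img, rot_lbl
        yield np.fliplr(rot_img), np.fliplr(rot_lbl)   # plus a horizontal flip

image = np.arange(16, dtype=float).reshape(4, 4)
label = image.copy()
pairs = list(augment(image, label))
assert len(pairs) == 8                        # 4 rotations x {identity, flip}
```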
- objects can be scanned on the actual system to obtain spiral projection data, which is preprocessed and input to this network for preliminary reconstruction.
- the sub-network that converts the spiral projection to the two-dimensional projection can be trained first, followed by overall training; alternatively, the whole network can be trained directly.
- the parameters of the domain conversion sub-network can be calculated in advance without subsequent training, or they can also be trained.
- a direct training method may be used.
- the convolution kernel weights of the projection domain sub-network and the image domain sub-network are randomly initialized, and an actually collected data set is used for training. After training is completed, another set of actually collected data is used as a test set to verify the training effect.
- Fig. 8 is a schematic flowchart describing a method for reconstructing a spiral CT image according to an embodiment of the present disclosure.
- in step S10, three-dimensional projection data is input; in step S20, the trained neural network model receives the three-dimensional projection data and obtains a plane reconstruction image of the set section of the inspected object.
- a neural network may include: a projection domain sub-network, a domain conversion sub-network, and an image domain sub-network.
- the projection domain sub-network processes the input 3D projection data to obtain 2D projection data.
- the domain conversion sub-network analyzes and reconstructs the two-dimensional projection data to obtain a set cross-sectional image of the image domain.
- the image domain sub-network inputs the cross-sectional image of the image domain, and through the action of a convolutional neural network containing several layers, the features of the data in the image domain are extracted, and the image features are further coupled to obtain a plane reconstruction image of a set cross-section.
- the machine learning may include: training and optimizing the parameters in the neural network model through simulated data and/or actual data; and generalizing the optimized parameters through part of the actual data, where the generalization includes fine-tuning the parameters.
- the method of the present disclosure can be flexibly applied to different CT scanning modes and system architectures, and can be used in the fields of medical diagnosis, industrial non-destructive testing, and security inspection.
- signal bearing media include, but are not limited to: recordable media such as floppy disks, hard drives, compact disks (CDs), digital versatile disks (DVDs), digital tapes, computer storage, etc.; and transmission media such as digital and/or analog communication media (for example, fiber optic cables, waveguides, wired communication links, wireless communication links, etc.).
Abstract
The present disclosure provides a neural network-based spiral CT image reconstruction device and method. The device includes: a memory for storing instructions and three-dimensional projection data of an inspected object from a spiral CT apparatus, the inspected object being preset as multi-layer sections; and a processor configured to execute the instructions so as to: perform image reconstruction on each layer section respectively, where the reconstruction of each layer section includes inputting the three-dimensional projection data related to the section to be reconstructed into a trained neural network model to obtain a section reconstruction image; and form a three-dimensional reconstructed image from the reconstructed images of the multi-layer sections. By combining the advantages of deep neural networks with the particularities of the spiral CT imaging problem, the device of the present disclosure can reconstruct three-dimensional projection data into a three-dimensional reconstructed image with more information and less noise.
Description
The present disclosure relates to radiation imaging, and in particular to a neural-network-based helical CT image reconstruction method and device, and a storage medium.
X-ray CT (Computed Tomography) imaging systems are widely used in medical care, security inspection, industrial non-destructive testing, and other fields. A ray source and a detector collect a series of projection data along a certain trajectory, and the three-dimensional spatial distribution of the object's linear attenuation coefficients at that ray energy can be recovered by an image reconstruction algorithm. CT image reconstruction, which recovers the linear attenuation coefficient distribution from the projection data collected by the detector, is the core step of CT imaging. In current practice, the main methods are analytic reconstruction algorithms such as Filtered Back-Projection (FBP) and Feldkamp-Davis-Kress (FDK), and iterative reconstruction methods such as the Algebraic Reconstruction Technique (ART) and Maximum A Posteriori (MAP) estimation.
As increasing attention is paid to radiation dose, obtaining images of conventional or higher quality under low-dose, fast-scan conditions has become a hot research topic in the field. Among reconstruction methods, analytic reconstruction is fast but limited to conventional system architectures, and it does not cope well with missing data or heavy noise. Compared with analytic algorithms, iterative reconstruction algorithms are applicable to a wide range of system architectures and achieve good results for non-standard scan trajectories, low-dose high-noise data, missing projection data, and similar problems. However, iterative algorithms usually require many iterations and take a long time to reconstruct, making them especially hard to apply in practice to three-dimensional helical CT with its larger data scale. For helical CT, widely used in medicine and industry, increasing the pitch shortens the scan time, improves scan efficiency, and reduces the radiation dose. However, a larger pitch means less valid data: images obtained by conventional analytic reconstruction are of poor quality, while iterative reconstruction is too time-consuming for practical use.
Deep learning has made major advances in computer vision, natural language processing, and other areas. Convolutional neural networks in particular have become the mainstream architecture for image classification, detection, and similar applications, thanks to their simple network structure, effective feature extraction, and compact parameter space. However, no prior work has applied neural networks to helical CT image reconstruction.
Summary of the Invention
According to embodiments of the present disclosure, a helical CT image reconstruction method and device, and a storage medium, are provided.
According to one aspect of the present disclosure, a neural-network-based helical CT image reconstruction device is provided, including:
a memory for storing instructions and three-dimensional projection data of an inspected object from a helical CT device, the inspected object being pre-divided into multiple cross-sectional slices; and
a processor configured to execute the instructions so as to:
reconstruct an image for each slice, where the reconstruction of each slice includes: inputting the three-dimensional projection data related to the slice to be reconstructed into a trained neural network model to obtain a reconstructed slice image; and
form a three-dimensional reconstructed image from the reconstructed images of the multiple slices.
According to another aspect of the present disclosure, a helical CT image reconstruction method is provided, including:
pre-dividing the inspected object into multiple cross-sectional slices;
reconstructing an image for each slice, where the reconstruction of each slice includes: inputting the three-dimensional projection data related to the slice to be reconstructed into a trained neural network model to obtain a reconstructed slice image; and
forming a three-dimensional reconstructed image from the reconstructed images of the multiple slices.
According to a further aspect of the present disclosure, a method for training a neural network is provided, the neural network including:
a projection domain sub-network for processing input helical CT three-dimensional projection data related to the slice to be reconstructed to obtain two-dimensional projection data;
a domain conversion sub-network for analytically reconstructing the two-dimensional projection data to obtain an image of the slice to be reconstructed; and
an image domain sub-network for processing the image-domain slice image to obtain an accurate reconstructed image of the slice to be reconstructed;
wherein the method includes:
adjusting the parameters of the neural network using a consistency cost function based on a data model over the input three-dimensional projection data, the ground-truth image, and the planar reconstructed image of the set slice.
According to yet another aspect of the present disclosure, a computer-readable storage medium is provided, storing computer instructions that, when executed by a processor, implement the helical CT image reconstruction method described above.
The neural-network-based helical CT image reconstruction device of the present disclosure combines the strengths of deep networks with the particularities of the helical CT imaging problem, and can reconstruct three-dimensional projection data into a fairly accurate three-dimensional image.
Through a targeted neural network architecture trained on both simulated and real data, the present disclosure can reliably, effectively, and comprehensively cover the full system information and the ensemble information of the imaged objects, reconstruct object images accurately, and suppress the noise caused by low dose and the artifacts caused by missing data.
Although training the neural network model of the present disclosure requires a large amount of data and computation, the actual reconstruction requires no iteration: its computational cost is comparable to that of analytic reconstruction methods and far lower than that of iterative reconstruction algorithms.
To explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present disclosure; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 shows a schematic structural diagram of a helical CT system according to an embodiment of the present disclosure;
Fig. 2A is a schematic diagram of the trajectory of the detector of the helical CT system of Fig. 1 moving helically relative to the inspected object; Fig. 2B is a schematic diagram of the three-dimensional projection data corresponding to the signals detected by the detector of the helical CT system;
Fig. 3 is a schematic structural diagram of the control and data processing apparatus in the helical CT system of Fig. 1;
Fig. 4 is a schematic diagram of the principle of a neural-network-based helical CT image reconstruction device according to an embodiment of the present disclosure;
Fig. 5 shows a schematic structural diagram of a neural network according to an embodiment of the present disclosure;
Fig. 6 is a visualized network structure diagram of the neural network of an embodiment of the present disclosure;
Fig. 7 shows an exemplary network structure of the projection domain sub-network;
Fig. 8 is a schematic flowchart describing a helical CT image reconstruction method according to an embodiment of the present disclosure.
Specific embodiments of the present disclosure are described in detail below. Note that the embodiments described here are illustrative only and do not limit the embodiments of the present disclosure. In the following description, numerous specific details are set forth to provide a thorough understanding of the embodiments of the present disclosure. However, it will be obvious to those of ordinary skill in the art that these specific details are not required to practice the embodiments. In other instances, well-known structures, materials, or methods are not described in detail so as not to obscure the embodiments of the present disclosure.
Throughout the specification, references to "one embodiment", "an embodiment", "one example", or "an example" mean that a particular feature, structure, or characteristic described in connection with the embodiment or example is included in at least one embodiment of the present disclosure. Thus, the phrases "in one embodiment", "in an embodiment", "one example", or "an example" appearing in various places throughout the specification do not necessarily all refer to the same embodiment or example. Furthermore, particular features, structures, or characteristics may be combined in one or more embodiments or examples in any suitable combination and/or sub-combination. In addition, those of ordinary skill in the art will understand that the term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
For helical CT, widely used in medicine and industry, increasing the pitch shortens the scan time, improves scan efficiency, and reduces the radiation dose. However, a larger pitch means less valid data: images obtained by conventional analytic reconstruction are of poor quality, while iterative reconstruction is too time-consuming for practical use.
From a deep learning perspective, for helical CT devices scanning at a large pitch, the present disclosure proposes a reconstruction method based on convolutional neural networks that deeply mines the data information. Combining this with the physical laws of the helical CT system, a unique network architecture and training method are designed, so that higher-quality images are reconstructed in a shorter time.
Embodiments of the present disclosure propose a neural-network-based helical CT image reconstruction method and device, and a storage medium, in which a neural network processes the three-dimensional projection data of the inspected object from a helical CT device to obtain the volumetric distribution of the object's linear attenuation coefficients. The neural network may include: a projection domain sub-network, a domain conversion sub-network, and an image domain sub-network. The projection domain sub-network processes the input three-dimensional projection data to obtain two-dimensional projection data. The domain conversion sub-network analytically reconstructs the two-dimensional projection data to obtain an image-domain image of a set slice. The image domain sub-network takes the slice image as input and, through a convolutional neural network with several layers, collects the features of the data in the image domain, further extracts and mutually couples the image features, and obtains an accurate reconstructed image of the set slice. With the solutions of the above embodiments of the present disclosure, higher-quality reconstruction results can be obtained from the three-dimensional projection data of the inspected object of a helical CT device.
Fig. 1 shows a schematic structural diagram of a helical CT system according to an embodiment of the present disclosure. As shown in Fig. 1, the helical CT system of this embodiment includes an X-ray source 20, a mechanical motion apparatus 30, and a detector and data acquisition system 10, and performs a helical CT scan of the inspected object 60.
The X-ray source 10 may be, for example, an X-ray machine, with a focal spot size selected according to the imaging resolution. In other embodiments, instead of an X-ray machine, a linear accelerator or the like may be used to generate the X-ray beam.
The mechanical motion apparatus includes a stage 60 and a gantry 30. The stage can move along the axial direction of the slices (the direction perpendicular to the page), and the gantry 30 can rotate, carrying the detector and the X-ray source 10 mounted on it in synchronous rotation. In this embodiment, the stage is translated while the detector and the X-ray source rotate synchronously, so that the detector moves helically relative to the inspected object.
The detector and data acquisition system 10 includes an X-ray detector, data acquisition circuitry, and so on. The X-ray detector may be a solid detector, a gas detector, or another type of detector; embodiments of the present disclosure are not limited in this respect. The data acquisition circuitry includes readout circuitry, acquisition trigger circuitry, data transmission circuitry, and so on; the detector typically acquires analog signals, which the data acquisition circuitry converts into digital signals. In one example, the detector may be a single-row or a multi-row detector; for a multi-row detector, different row spacings may be set.
The control and data processing apparatus 60 includes, for example, an installed control program and a neural-network-based helical CT image reconstruction device. It is responsible for controlling the operation of the helical CT system, including mechanical rotation, electrical control, and safety interlock control; for training the neural network (i.e., the machine learning process); and for reconstructing CT images from the projection data using the trained neural network.
Fig. 2A is a schematic diagram of the trajectory of the detector of the helical CT system of Fig. 1 moving helically relative to the inspected object. As shown in Fig. 2A, the stage can translate back and forth (i.e., perpendicular to the page in Fig. 1), carrying the inspected object with it; at the same time, the detector moves in a circle around the central axis of the stage. The relative motion between the detector and the set slice corresponding to the image μ to be reconstructed is therefore a helical motion of the detector around the set slice.
Fig. 3 shows a schematic structural diagram of the control and data processing apparatus 60 of Fig. 1. As shown in Fig. 3, the data acquired by the detector and data acquisition system 10 is stored in a storage device 310 through an interface unit 370 and a bus 380. A read-only memory (ROM) 320 stores the configuration information and programs of the computer data processor. A random access memory (RAM) 330 temporarily stores various data while the processor 350 is working. The storage device 310 also stores computer programs for data processing, such as a program for training the neural network and a program for reconstructing CT images. An internal bus 380 connects the storage device 310, the read-only memory 320, the random access memory 330, the input apparatus 340, the processor 350, the display device 360, and the interface unit 370. The neural-network-based helical CT image reconstruction device of embodiments of the present disclosure shares the storage device 310, the internal bus 380, the read-only memory (ROM) 320, the display device 360, the processor 350, and so on with the control and data processing apparatus 60 in order to perform helical CT image reconstruction.
After a user enters an operation command through an input apparatus 340 such as a keyboard and a mouse, the instruction code of the computer program directs the processor 350 to execute the neural network training algorithm and/or the CT image reconstruction algorithm. After the reconstruction result is obtained, it is displayed on a display device 360 such as an LCD display, or output directly in hard-copy form, such as a printout.
According to an embodiment of the present disclosure, the above system performs a helical CT scan of the inspected object to obtain raw attenuation signals. This attenuation signal data is three-dimensional; denoting it P, P is a matrix of size C×R×A, where C is the number of detector columns (the column direction marked in Figs. 2A and 2B), R is the number of detector rows (the row direction marked in Figs. 2A and 2B, corresponding to the rows of a multi-row detector), and A is the number of projection angles acquired by the detector (the dimension marked in Fig. 2B); that is, the helical CT projection data is organized into matrix form. The raw attenuation signals are preprocessed into the three-dimensional projection data (see Fig. 2B). For example, the helical CT system may obtain the projection data by preprocessing such as a negative-logarithm transform. The processor 350 in the control device then executes the reconstruction program: the trained neural network processes the projection data to obtain two-dimensional projection data of the set slice, which is analytically reconstructed to obtain the image-domain set-slice image; this image can then be further processed to obtain a planar reconstructed image of the set slice. For example, a trained convolutional neural network (e.g., a U-net type neural network) can process the image to obtain feature maps at different scales and merge them to produce the result.
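The preprocessing step described above (organizing raw attenuation signals into a C×R×A matrix and applying a negative-logarithm transform) can be sketched as follows. This is a minimal illustration, not the patent's actual pipeline; the flat-field intensity `i0`, the helper name, and the array sizes are assumptions for the example.

```python
import numpy as np

def preprocess_projections(raw_intensity, i0):
    """Convert raw detector intensities into line-integral projection data.

    raw_intensity: array of shape (C, R, A) -- detector columns x rows x angles.
    i0:            flat-field (unattenuated) intensity, scalar or same shape.
    Returns -log(I / I0), the standard Beer-Lambert preprocessing for CT.
    """
    raw_intensity = np.clip(raw_intensity, 1e-12, None)  # guard against log(0)
    return -np.log(raw_intensity / i0)

# Illustrative data: 4 detector columns, 3 rows, 5 projection angles.
C, R, A = 4, 3, 5
rng = np.random.default_rng(0)
I = 1000.0 * np.exp(-rng.uniform(0.0, 2.0, size=(C, R, A)))  # simulated intensities
P = preprocess_projections(I, 1000.0)
print(P.shape)  # (4, 3, 5)
```

The resulting matrix P then has exactly the C×R×A layout described in the text, ready for slice-wise selection and rearrangement.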
In a specific example, the convolutional neural network may include convolutional layers, pooling layers, and fully connected layers. The convolutional layers identify characteristic representations of the input data set, each convolutional layer carrying a nonlinear activation function. The pooling layers refine the feature representation; typical operations include average pooling and max pooling. One or more fully connected layers perform high-order nonlinear synthesis of the signal, and the fully connected layers also carry nonlinear activation functions. Commonly used nonlinear activation functions include Sigmoid, Tanh, ReLU, and so on.
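As a concrete illustration of these building blocks, the following plain-NumPy sketch (not the patent's actual network) chains a single 2-D convolution, a ReLU activation, and 2×2 max pooling; all sizes and values are illustrative assumptions.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 'valid' 2-D convolution (cross-correlation, as in CNN libraries)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit, one of the activation functions named in the text."""
    return np.maximum(x, 0.0)

def max_pool_2x2(x):
    """2x2 max pooling, one of the typical pooling operations named in the text."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)
feat = max_pool_2x2(relu(conv2d_valid(img, np.ones((3, 3)) / 9.0)))
print(feat.shape)  # (2, 2)
```

A fully connected layer would then flatten `feat` and apply a weight matrix plus another activation, completing the conv–pool–dense pattern described above.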
In helical CT scanning, increasing the pitch shortens the scan time, improves scan efficiency, and reduces the radiation dose, at the cost of less valid data. The helical CT projection data can therefore be interpolated: missing data is interpolated along the detector row direction, with interpolation methods including, but not limited to, linear interpolation and cubic spline interpolation.
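A hedged sketch of this row-direction interpolation, using linear interpolation (a cubic-spline variant would follow the same pattern with, e.g., `scipy.interpolate.CubicSpline`); the sparse row positions and helper name are assumptions for the example.

```python
import numpy as np

def interpolate_rows(sparse_data, measured_rows, full_rows):
    """Linearly interpolate missing detector rows of helical CT data.

    sparse_data:   array (C, len(measured_rows), A) holding only measured rows.
    measured_rows: increasing row indices at which data exists.
    full_rows:     row indices of the full (dense) detector grid.
    Returns an array of shape (C, len(full_rows), A).
    """
    C, _, A = sparse_data.shape
    dense = np.empty((C, len(full_rows), A))
    for c in range(C):
        for a in range(A):
            # 1-D linear interpolation along the detector row direction
            dense[c, :, a] = np.interp(full_rows, measured_rows,
                                       sparse_data[c, :, a])
    return dense

C, A = 2, 3
measured = np.array([0, 2, 4])                 # every other row measured
full = np.arange(5)                            # rows 0..4 wanted
# Toy data whose value equals its row index, per column and angle.
sparse = np.stack([np.tile(measured.astype(float), (A, 1)).T] * C)
dense = interpolate_rows(sparse, measured, full)
print(dense.shape)  # (2, 5, 3)
```

With values equal to their row index, the interpolated rows come out as 1 and 3, exactly the midpoints between the measured rows.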
Referring further to Fig. 1, the X-rays emitted by the X-ray source 20 at different positions pass through the inspected object 60, are received by the detector, converted into electrical signals and then into digital signals representing attenuation values, and, after preprocessing, serve as the projection data for reconstruction by the computer.
Fig. 4 is a schematic diagram of the principle of a neural-network-based helical CT image reconstruction device according to an embodiment of the present disclosure. As shown in Fig. 4, in this device, the three-dimensional projection data is input into a trained neural network model to obtain a reconstructed image of a set slice of the inspected object. The neural network model has been trained so that its parameters are optimized; the network learns from a training data set through training and generalization processing, including training and optimizing the parameters of the neural network model on simulated data and/or real data, and generalizing the optimized parameters on a portion of the real data, the generalization processing including fine-tuning of the parameters.
Fig. 5 shows a schematic structural diagram of the neural network of an embodiment of the present disclosure. As shown in Fig. 5, the neural network of the embodiment may include three cascaded sub-networks, each an independent neural network: the projection domain sub-network, the domain conversion sub-network, and the image domain sub-network. Fig. 6 is a visualized network structure diagram of the neural network of the embodiment, giving a visual understanding of the types of data before and after each sub-network. The three sub-networks are described in detail below with reference to Figs. 5 and 6.
The projection domain sub-network takes as input the three-dimensional projection data received by the detector of the helical CT system. As the first part of the neural network structure, it converts the three-dimensional helical projections into two-dimensional planar projections. This part of the network takes as input the helical projection data related to a certain slice of the object to be reconstructed (the set slice, i.e., the slice of the image to be reconstructed). In one example, the projection domain sub-network may include several convolutional layers; after the helical projection data passes through this convolutional neural network, the equivalent two-dimensional fan-beam (or parallel-beam) projection data of the object is output. This part of the network aims to extract the features of the raw helical CT projection data through a convolutional neural network and then estimate mutually independent fan-beam (or parallel-beam) projections of the slices. It mainly reduces the high-complexity helical CT projection problem to projections within a two-dimensional plane, which not only eliminates the effect of the cone angle but also simplifies the subsequent reconstruction problem. The resources and computation required for two-dimensional reconstruction are far smaller than those for helical CT reconstruction.
Fig. 7 shows an exemplary network structure of the projection domain sub-network. As shown in Fig. 7, for the set-slice image to be reconstructed, the projection data related to it is selected from the above projection data P and rearranged; the result, denoted P', is the input to the projection domain sub-network. The specific operation is as follows: take the axial coordinate of the slice to be reconstructed as the center, select the data covering 180 degrees of helical scan angle before and after it, find the detector rows corresponding to the slice at each scan angle, and rearrange the data into a matrix of size C×A'×R', where A' is the number of selected helical projection angles, i.e., 360 degrees, and R' is the maximum number of corresponding detector rows over all angles.
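The rearrangement above can be sketched roughly as follows. This is a simplified stand-in, not the patent's exact procedure: the mapping from the slice to a central angle index, the zero-padding of rows to R', and the helper name are all assumptions for the example.

```python
import numpy as np

def rearrange_for_slice(P, center_angle, angles_per_turn, max_rows):
    """Select the +/-180-degree angular window of helical data around a slice
    and rearrange it into a (C, A', R') block, zero-padding unused rows.

    P:               full helical projection data, shape (C, R, A).
    center_angle:    angle index at which the source passes the slice plane.
    angles_per_turn: number of projection angles covering 360 degrees (A').
    max_rows:        R', the maximum number of relevant detector rows.
    """
    C, R, A = P.shape
    half = angles_per_turn // 2
    lo, hi = max(center_angle - half, 0), min(center_angle + half, A)
    window = P[:, :, lo:hi]                       # (C, R, <=A')
    out = np.zeros((C, angles_per_turn, max_rows))
    rows = min(R, max_rows)
    # Swap the row and angle axes so the block is (C, A', R').
    out[:, :window.shape[2], :rows] = np.transpose(window, (0, 2, 1))[:, :, :rows]
    return out

P = np.arange(2 * 3 * 10, dtype=float).reshape(2, 3, 10)  # toy (C, R, A) data
Pp = rearrange_for_slice(P, center_angle=5, angles_per_turn=4, max_rows=3)
print(Pp.shape)  # (2, 4, 3)
```

The essential point this illustrates is the reshaping of the helical data into the C×A'×R' layout that the projection domain sub-network consumes.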
In addition, the projection data of the linear attenuation coefficient distribution of the set slice (the plane to be reconstructed) under fan-beam projection is denoted p; p is a matrix of size C×A'. Before training the network, an analytic reconstruction method, including but not limited to PI-original, can first be used to reconstruct the slice image μ₀ corresponding to the input helical projection data.
With H denoting the system matrix of the fan-beam scan, the forward projection Hμ₀ serves as the residual term of the projection domain sub-network. As shown in Fig. 7, a U-net type neural network structure, for example and without limitation, is adopted as the projection domain sub-network. This part of the network takes the rearranged projection data P' as input; its role is to estimate the fan-beam projection p of the linear attenuation coefficients μ within the two-dimensional slice. It consists of multiple convolutional layers configured with 2-D convolution kernels at K scales. A 2-D kernel at a given scale has two dimensions; here the first dimension is defined as the detector direction and the second as the scan-angle direction. The kernel lengths along the two dimensions need not be equal; for example, 3×1, 3×5, or 7×3 kernels may be used, and multiple kernels can be set at each scale. All kernels are network parameters to be determined. In the pooling part of the network, pooling between convolutional layers reduces the image scale layer by layer; in the upsampling part, upsampling between convolutional layers restores the image scale layer by layer. To retain more image detail, the network outputs of equal scale before the pooling part and after the upsampling part are concatenated along the third dimension (see Fig. 7 for details). With φ_P-net(·) denoting the operator corresponding to the projection domain sub-network, the output of the last convolutional layer is used in residual fashion:
p̂ = φ_P-net(P') + Hμ₀
If the projection domain sub-network is first trained separately, its cost function can be set to an l-norm; taking l = 2 as an example:
‖p̂ − p‖₂²
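The U-net-style data flow just described (pooling path, upsampling path, equal-scale concatenation along the third dimension, and a residual-style output that adds the forward-projected analytic reconstruction) can be sketched schematically. The "convolutions" are omitted here, since only the shape bookkeeping is being illustrated; all names and sizes are assumptions.

```python
import numpy as np

def avg_pool(x):
    """Halve both spatial dimensions of an (H, W, C) array by 2x2 averaging."""
    h, w, c = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour 2x upsampling back to the skip connection's scale."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def toy_unet(x):
    """Schematic U-net flow: down, up, concatenate equal scales channel-wise."""
    skip = x                                       # output before pooling
    up = upsample(avg_pool(x))                     # pooled then restored scale
    merged = np.concatenate([skip, up], axis=2)    # concat along third dimension
    return merged.mean(axis=2, keepdims=True)      # stand-in for the final conv

P_prime = np.random.default_rng(1).normal(size=(8, 8, 1))  # rearranged input P'
H_mu0 = np.zeros((8, 8, 1))   # forward projection of an initial analytic
                              # reconstruction (the residual base; zero here)
p_hat = toy_unet(P_prime) + H_mu0                  # residual-style output
print(p_hat.shape)  # (8, 8, 1)
```

In a real implementation each down/up step would also apply the multi-scale learned kernels (e.g. 3×1, 3×5, 7×3) mentioned in the text; here only the skip-and-residual wiring is shown.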
Although Fig. 7 illustrates the projection domain sub-network with a specific U-shaped network structure, those skilled in the art will appreciate that networks of other structures can also implement the technical solution of the present disclosure. Likewise, those skilled in the art will appreciate that other networks, such as an auto-encoder or a fully convolutional neural network, may be used as the image domain network and can also implement the technical solution of the present disclosure.
The domain conversion sub-network takes as input the two-dimensional projection data output by the projection domain sub-network and performs an analytic reconstruction to obtain the image-domain set-slice image. As the second part of the neural network structure, it performs the domain conversion from the projection domain to the image domain, implementing the operation from two-dimensional fan-beam (or parallel-beam) projection-domain data to the image-domain slice image. The weight coefficients between network nodes (neurons) in this sub-network can be determined by the scan geometry of the two-dimensional fan-beam (or parallel-beam) CT scan relationship. The input to this layer is the fan-beam (or parallel-beam) projection data output by the first part, and its output is a preliminary CT reconstruction (i.e., the image-domain set-slice image). Since the first sub-network has already reduced the reconstruction problem to two dimensions, the domain conversion network can directly use the matrix operator of two-dimensional analytic reconstruction. Alternatively, this operator can be implemented as a fully connected network, trained on pairs of simulated or real projection data and reconstructed images. The output of this part of the network can serve as the final output, or it can be further processed by the image domain sub-network before being output.
In an exemplary embodiment, the domain conversion sub-network obtains the image-domain output by performing the inverse computation from the projection domain to the image domain on the above p. The projection matrix is computed using Siddon's method or other methods known in the field, and the elements of this system matrix correspond to the connection weights of the analytic-reconstruction connection layer. Taking FBP fan-beam analytic reconstruction as an example,
μ̃ = B F W p̂
where W performs the weighting of the projection-domain data, F corresponds to a ramp-filter convolution, and B performs the weighted backprojection.
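The analytic operator the domain conversion sub-network encodes can be sketched for the simpler parallel-beam case, where the weighting W reduces to the identity and the backprojection is unweighted; the fan-beam version described in the text additionally needs the data weighting and a weighted backprojection. The sizes, filter discretization, and helper names here are illustrative assumptions.

```python
import numpy as np

def ramp_filter(sinogram):
    """Apply the ramp filter F along the detector axis (rows: detector bins,
    columns: angles) via the frequency domain."""
    n = sinogram.shape[0]
    ramp = np.abs(np.fft.fftfreq(n))               # |f| frequency response
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=0) *
                               ramp[:, None], axis=0))

def backproject(filtered, thetas, size):
    """Unweighted parallel-beam backprojection B onto a size x size grid,
    with nearest-neighbour sampling of the filtered projections."""
    img = np.zeros((size, size))
    xs = np.arange(size) - (size - 1) / 2.0
    X, Y = np.meshgrid(xs, xs)
    for k, th in enumerate(thetas):
        t = X * np.cos(th) + Y * np.sin(th)        # detector coordinate
        idx = np.clip(np.round(t + (size - 1) / 2.0).astype(int), 0, size - 1)
        img += filtered[idx, k]
    return img * np.pi / len(thetas)

thetas = np.linspace(0.0, np.pi, 60, endpoint=False)
sino = np.ones((16, len(thetas)))                  # toy constant sinogram
mu_tilde = backproject(ramp_filter(sino), thetas, 16)
print(mu_tilde.shape)  # (16, 16)
```

Because the ramp filter zeroes the DC component, a constant sinogram filters to (numerically) zero, which is a convenient sanity check on the operator chain.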
The image domain sub-network takes the image-domain set-slice image as input and, after further extraction and fusion of image features, forms the planar reconstructed image of the set slice. As the third part, this network takes the image-domain set-slice image output by the preceding domain conversion sub-network as input and, through a convolutional neural network with several layers, collects the features of the data in the image domain and, with the target image as the learning target, further extracts and mutually couples the image features, thereby optimizing image quality in the image domain. The output of this part is the final output of the whole network.
In an exemplary instance, the image domain sub-network adopts a U-net type neural network structure similar to the first part, taking μ̃ as input; its role is to perform image-domain optimization. As in the first part of the network, in the first half the image scale is reduced layer by layer through pooling between convolutional layers, and in the second half it is restored layer by layer through upsampling between them. This part of the network can be trained in residual fashion: the output of the last convolutional layer plus μ̃ equals the estimate μ of the two-dimensional reconstructed image. As in the projection domain sub-network, the image domain sub-network selects, for example and without limitation, 3×3 convolution kernels, with pooling and upsampling both of size 2×2, and ReLU as the activation function.
In embodiments of the present disclosure, a cost function can serve as the objective function to be optimized. The cost function of the overall network can use, but is not limited to, the l-norm, RRMSE, SSIM, and other measures common in the field, as well as combinations of several cost functions.
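The cost-function options just listed can be written down directly. The RRMSE definition used here (root of the squared error relative to the reference's energy) is one common convention and is an assumption of this sketch, as is the weighted combination.

```python
import numpy as np

def l2_loss(x, ref):
    """Squared l2-norm cost (the l-norm choice with l = 2)."""
    return np.sum((x - ref) ** 2)

def rrmse(x, ref):
    """Relative root-mean-square error against a reference image."""
    return np.sqrt(np.sum((x - ref) ** 2) / np.sum(ref ** 2))

def combined_cost(x, ref, w_l2=1.0, w_rrmse=1.0):
    """Weighted combination of several cost terms, as the text allows."""
    return w_l2 * l2_loss(x, ref) + w_rrmse * rrmse(x, ref)

ref = np.ones((4, 4))
est = ref + 0.1                    # uniform 10% offset
print(round(rrmse(est, ref), 6))   # 0.1
```

SSIM, the third option named in the text, needs local mean/variance statistics and is typically taken from an image-processing library rather than re-derived inline.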
In embodiments of the present disclosure, the neural network parameters can be trained on training data including simulated data and real data. For simulated data, a basic mathematical model of the scanned object is built, and helical projection data is generated by modeling the actual system; after preprocessing, it serves as the network input, with the ground-truth image of the scanned object as the label for training the network parameters. For example, the simulated data may be simulated lung data comprising 30 cases of 100 slices each, 3000 samples in total, with data augmentation applied; augmentation methods include, but are not limited to, rotation and flipping. For real data, objects can be scanned on an actual system to obtain helical projection data which, after preprocessing, is input to this network to obtain a preliminary reconstruction. Targeted image processing is then applied to these preliminary reconstructions, for example local smoothing of regions known to be locally smooth, to obtain label images. Label images may also be obtained by reconstruction with advanced iterative reconstruction methods in the field. The label images are used to train the network further, fine-tuning the network parameters. In some embodiments, the sub-network converting helical projections into two-dimensional planar projections may be trained first and the network then trained as a whole, or the whole network may be trained directly. The projection sub-network and the image domain sub-network can each be trained separately; the parameters of the domain conversion sub-network can be computed in advance without subsequent training, or they can also be trained.
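The rotation/flip augmentation of the simulated training slices can be sketched as follows. The eight-fold dihedral augmentation shown (four rotations, each optionally mirrored) is one common realization of "rotation and flipping" and is an assumption of this example.

```python
import numpy as np

def augment(image):
    """Return the 8 dihedral variants of a slice: 4 rotations x optional flip."""
    variants = []
    for k in range(4):
        rot = np.rot90(image, k)        # rotate by k * 90 degrees
        variants.append(rot)
        variants.append(np.fliplr(rot)) # mirrored copy of each rotation
    return variants

# Toy stand-ins for 5 simulated slices; real data would be 3000 lung slices.
slices = [np.arange(9, dtype=float).reshape(3, 3) for _ in range(5)]
augmented = [v for s in slices for v in augment(s)]
print(len(augmented))  # 40  (5 slices x 8 variants)
```

Applied to the 3000 simulated samples mentioned in the text, this kind of augmentation would multiply the effective training-set size by eight.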
When the network is trained against image labels, the cost function is
Σ_k ‖μ̂⁽ᵏ⁾ − μ*⁽ᵏ⁾‖₂²
where k is the training-sample index and μ* is the image label. Since such labels cannot be obtained in practical applications, the reconstruction results from complete data at small pitch, or the results of advanced iterative reconstruction methods in the field, are used as labels. If high-quality images can be obtained by other means, other labels may also be used.
According to one embodiment of the present invention, a direct training approach can be adopted. In direct training, the convolution kernel weights of the projection domain sub-network and the image domain sub-network are randomly initialized and trained on a set of actually acquired data; after training, another set of actually acquired data serves as the test set to verify the training effect of the network.
In an actual CT scan, the acquired data is input into the network trained by the above process (its parameters having undergone machine learning) to obtain the reconstructed image.
Fig. 8 is a schematic flowchart describing a helical CT image reconstruction method according to an embodiment of the present disclosure. As shown in Fig. 8, in step S10, the three-dimensional projection data is input; in step S20, the neural network model receives the three-dimensional projection data and outputs the planar reconstructed image of the set slice of the inspected object, the neural network model having been trained.
The neural network according to embodiments of the present disclosure may include: a projection domain sub-network, a domain conversion sub-network, and an image domain sub-network. The projection domain sub-network processes the input three-dimensional projection data to obtain two-dimensional projection data. The domain conversion sub-network analytically reconstructs the two-dimensional projection data to obtain the image-domain set-slice image. The image domain sub-network takes the image-domain slice image as input and, through a convolutional neural network with several layers, extracts the features of the data in the image domain and further couples the image features to obtain the planar reconstructed image of the set slice. With the solutions of the above embodiments of the present disclosure, higher-quality reconstruction results can be obtained from the three-dimensional projection data of the inspected object of a helical CT device.
Machine learning according to embodiments of the present disclosure may include: training and optimizing the parameters of the neural network model on simulated data and/or real data; and generalizing the optimized parameters on a portion of the real data, the generalization processing including fine-tuning of the parameters.
The method of the present disclosure can be applied flexibly to different CT scanning modes and system architectures, and can be used in the fields of medical diagnosis, industrial non-destructive testing, and security inspection.
The above detailed description has set forth numerous embodiments of the method and device for training a neural network through the use of diagrams, flowcharts, and/or examples. Where such diagrams, flowcharts, and/or examples contain one or more functions and/or operations, those skilled in the art will understand that each function and/or operation in such diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of structures, hardware, software, firmware, or virtually any combination thereof. In one embodiment, several portions of the subject matter of the embodiments of the present disclosure can be implemented by application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein can, in whole or in part, be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that, in light of the present disclosure, designing the circuitry and/or writing the software and/or firmware code would be well within the skill of one skilled in the art. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described in the present disclosure are capable of being distributed as a program product in a variety of forms, and that an exemplary embodiment of the subject matter described herein applies regardless of the particular type of signal-bearing medium actually used to carry out the distribution. Examples of signal-bearing media include, but are not limited to: recordable-type media such as floppy disks, hard disk drives, compact discs (CDs), digital versatile discs (DVDs), digital tape, and computer memory; and transmission-type media such as digital and/or analog communication media (e.g., fiber-optic cables, waveguides, wired communication links, and wireless communication links).
Although the embodiments of the present disclosure have been described with reference to several typical embodiments, it should be understood that the terms used are illustrative and exemplary rather than restrictive. Since the embodiments of the present disclosure can be embodied in many forms without departing from their spirit or essence, it should be understood that the above embodiments are not limited to any of the foregoing details but should be construed broadly within the spirit and scope defined by the appended claims; all changes and modifications falling within the claims or their equivalent scope should therefore be covered by the appended claims.
Claims (18)
- A neural-network-based helical CT image reconstruction device, including: a memory for storing instructions and three-dimensional projection data of an inspected object from a helical CT device, the inspected object being pre-divided into multiple cross-sectional slices; and a processor configured to execute the instructions so as to: reconstruct an image for each slice, where the reconstruction of each slice includes inputting the three-dimensional projection data related to the slice to be reconstructed into a trained neural network model to obtain a reconstructed slice image; and form a three-dimensional reconstructed image from the reconstructed images of the multiple slices.
- The neural-network-based helical CT image reconstruction device according to claim 1, wherein the neural network model includes: a projection domain sub-network for processing the input three-dimensional projection data related to the slice to obtain two-dimensional projection data.
- The neural-network-based helical CT image reconstruction device according to claim 2, wherein the neural network model further includes: a domain conversion sub-network for analytically reconstructing the two-dimensional projection data to obtain an image-domain slice image.
- The neural-network-based helical CT image reconstruction device according to claim 3, wherein the neural network model further includes: an image domain sub-network for taking the image-domain slice image as input and, through a convolutional neural network with several layers, extracting the features of the data in the image domain and further coupling the image features, finally obtaining the reconstructed slice image.
- The neural-network-based helical CT image reconstruction device according to claim 1, wherein the three-dimensional projection data related to the slice to be reconstructed that is input to the neural network model is projection data selected from the full projection data of the helical CT device as related to the slice to be reconstructed and then rearranged.
- The neural-network-based helical CT image reconstruction device according to claim 1, wherein the three-dimensional projection data of the inspected object stored in the memory is the full projection data of the helical CT device after interpolation preprocessing.
- The neural-network-based helical CT image reconstruction device according to claim 1, wherein the learning data used for the training includes simulated data and/or real data; the simulated data includes data obtained by helical projection of a numerically simulated scanned object, with the ground-truth image of the scanned object as the label; the real data includes helical projection data obtained by scanning an object with a helical CT device and a label image obtained by iterative reconstruction from that projection data, or projection data obtained by helically scanning a real phantom of known material and structure, with the known material and structure information forming the label image.
- The neural-network-based helical CT image reconstruction device according to claim 2, wherein the projection domain sub-network is a convolutional neural network structure that takes the three-dimensional projection data as input and estimates the fan-beam and/or parallel-beam projections of the linear attenuation coefficients in the set slice, the fan-beam and/or parallel-beam projections being the output of the projection domain sub-network.
- The neural-network-based helical CT image reconstruction device according to claim 4, wherein the image domain sub-network is a convolutional neural network structure that takes the output of the domain conversion sub-network as input and outputs an optimized reconstructed image.
- A helical CT image reconstruction method, including: pre-dividing the inspected object into multiple cross-sectional slices; reconstructing an image for each slice, where the reconstruction of each slice includes inputting the three-dimensional projection data related to the slice to be reconstructed into a trained neural network model to obtain a reconstructed slice image; and forming a three-dimensional reconstructed image from the reconstructed images of the multiple slices.
- The method according to claim 10, wherein the neural network model includes: a projection domain sub-network for processing the three-dimensional projection data related to the slice to be reconstructed to obtain two-dimensional projection data; a domain conversion sub-network for analytically reconstructing the two-dimensional projection data to obtain an image-domain slice image; and an image domain sub-network for taking the image-domain slice image as input and, through a convolutional neural network with several layers, extracting the features of the data in the image domain and further optimizing the image features to obtain an accurate reconstructed image of the slice to be reconstructed.
- The method according to claim 10, further including: acquiring attenuation signal data with a CT scanning device and processing the attenuation signal data to obtain the three-dimensional projection data; and selecting from the full three-dimensional projection data the projection data related to the slice to be reconstructed and rearranging it, as the input to the neural network model.
- The method according to claim 11, wherein the training includes: training and optimizing the parameters of the neural network model on simulated data and/or real data; and generalizing the optimized parameters on a portion of the real data, the generalization processing including fine-tuning of the parameters.
- The method according to claim 13, wherein the training includes: training the projection sub-network and the image domain sub-network separately, or training the neural network as a whole; or computing the parameters of the domain conversion sub-network in advance, or training the parameters of the domain conversion sub-network.
- The method according to claim 10, wherein the three-dimensional projection data is the overall projection data of the helical CT device after interpolation preprocessing.
- The method according to claim 11, wherein the projection domain sub-network is a convolutional neural network structure that takes the three-dimensional projection data as input and estimates the fan-beam and/or parallel-beam projections of the linear attenuation coefficients in the set slice, the fan-beam and/or parallel-beam projections being the output of the projection domain sub-network; and the image domain sub-network is a convolutional neural network structure that takes the image-domain set-slice image as input, where in the first half of the network structure the image scale is reduced layer by layer through pooling between convolutional layers, and in the second half it is restored layer by layer through upsampling between convolutional layers.
- A method for training a neural network, the neural network including: a projection domain sub-network for processing input helical CT three-dimensional projection data related to the slice to be reconstructed to obtain two-dimensional projection data; a domain conversion sub-network for analytically reconstructing the two-dimensional projection data to obtain an image of the slice to be reconstructed; and an image domain sub-network for processing the image-domain slice image to obtain an accurate reconstructed image of the slice to be reconstructed; wherein the method includes: adjusting the parameters of the neural network using a consistency cost function based on a data model over the input three-dimensional projection data, the ground-truth image, and the planar reconstructed image of the set slice.
- A computer-readable storage medium storing computer instructions that, when executed by a processor, implement the method according to any one of claims 10-16.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910448427.0 | 2019-05-27 | ||
CN201910448427.0A CN112085829A (zh) | 2019-05-27 | 2019-05-27 | 基于神经网络的螺旋ct图像重建方法和设备及存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020237873A1 true WO2020237873A1 (zh) | 2020-12-03 |
Family
ID=73552051
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/103038 WO2020237873A1 (zh) | 2019-05-27 | 2019-08-28 | 基于神经网络的螺旋ct图像重建方法和设备及存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112085829A (zh) |
WO (1) | WO2020237873A1 (zh) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113192155A (zh) * | 2021-02-04 | 2021-07-30 | 南京安科医疗科技有限公司 | 螺旋ct锥束扫描图像重建方法、扫描系统及存储介质 |
CN113689545A (zh) * | 2021-08-02 | 2021-11-23 | 华东师范大学 | 一种2d到3d端对端的超声或ct医学影像跨模态重建方法 |
CN113963132A (zh) * | 2021-11-15 | 2022-01-21 | 广东电网有限责任公司 | 一种等离子体的三维分布重建方法及相关装置 |
CN114255296A (zh) * | 2021-12-23 | 2022-03-29 | 北京航空航天大学 | 基于单幅x射线影像的ct影像重建方法及装置 |
CN114359317A (zh) * | 2021-12-17 | 2022-04-15 | 浙江大学滨江研究院 | 一种基于小样本识别的血管重建方法 |
CN114742771A (zh) * | 2022-03-23 | 2022-07-12 | 中国科学院高能物理研究所 | 一种电路板背钻孔尺寸自动化无损测量方法 |
CN115690255A (zh) * | 2023-01-04 | 2023-02-03 | 浙江双元科技股份有限公司 | 基于卷积神经网络的ct图像去伪影方法、装置及系统 |
CN116612206A (zh) * | 2023-07-19 | 2023-08-18 | 中国海洋大学 | 一种利用卷积神经网络减少ct扫描时间的方法及系统 |
CN117351482A (zh) * | 2023-12-05 | 2024-01-05 | 国网山西省电力公司电力科学研究院 | 一种用于电力视觉识别模型的数据集增广方法、系统、电子设备和存储介质 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114018962B (zh) * | 2021-11-01 | 2024-03-08 | 北京航空航天大学宁波创新研究院 | 一种基于深度学习的同步多螺旋计算机断层成像方法 |
CN117611750B (zh) * | 2023-12-05 | 2024-07-19 | 北京思博慧医科技有限公司 | 三维成像模型的构建方法、装置、电子设备和存储介质 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108898642A (zh) * | 2018-06-01 | 2018-11-27 | 安徽工程大学 | 一种基于卷积神经网络的稀疏角度ct成像方法 |
CN109102550A (zh) * | 2018-06-08 | 2018-12-28 | 东南大学 | 基于卷积残差网络的全网络低剂量ct成像方法及装置 |
CN109300167A (zh) * | 2017-07-25 | 2019-02-01 | 清华大学 | 重建ct图像的方法和设备以及存储介质 |
CN109300166A (zh) * | 2017-07-25 | 2019-02-01 | 同方威视技术股份有限公司 | 重建ct图像的方法和设备以及存储介质 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103714578A (zh) * | 2014-01-24 | 2014-04-09 | 中国人民解放军信息工程大学 | 针对半覆盖螺旋锥束ct的单层重排滤波反投影重建方法 |
CN105093342B (zh) * | 2014-05-14 | 2017-11-17 | 同方威视技术股份有限公司 | 螺旋ct系统及重建方法 |
CN109171793B (zh) * | 2018-11-01 | 2022-10-14 | 上海联影医疗科技股份有限公司 | 一种角度检测和校正方法、装置、设备和介质 |
-
2019
- 2019-05-27 CN CN201910448427.0A patent/CN112085829A/zh active Pending
- 2019-08-28 WO PCT/CN2019/103038 patent/WO2020237873A1/zh active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109300167A (zh) * | 2017-07-25 | 2019-02-01 | 清华大学 | 重建ct图像的方法和设备以及存储介质 |
CN109300166A (zh) * | 2017-07-25 | 2019-02-01 | 同方威视技术股份有限公司 | 重建ct图像的方法和设备以及存储介质 |
CN108898642A (zh) * | 2018-06-01 | 2018-11-27 | 安徽工程大学 | 一种基于卷积神经网络的稀疏角度ct成像方法 |
CN109102550A (zh) * | 2018-06-08 | 2018-12-28 | 东南大学 | 基于卷积残差网络的全网络低剂量ct成像方法及装置 |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113192155B (zh) * | 2021-02-04 | 2023-09-26 | 南京安科医疗科技有限公司 | 螺旋ct锥束扫描图像重建方法、扫描系统及存储介质 |
CN113192155A (zh) * | 2021-02-04 | 2021-07-30 | 南京安科医疗科技有限公司 | 螺旋ct锥束扫描图像重建方法、扫描系统及存储介质 |
CN113689545B (zh) * | 2021-08-02 | 2023-06-27 | 华东师范大学 | 一种2d到3d端对端的超声或ct医学影像跨模态重建方法 |
CN113689545A (zh) * | 2021-08-02 | 2021-11-23 | 华东师范大学 | 一种2d到3d端对端的超声或ct医学影像跨模态重建方法 |
CN113963132A (zh) * | 2021-11-15 | 2022-01-21 | 广东电网有限责任公司 | 一种等离子体的三维分布重建方法及相关装置 |
CN114359317A (zh) * | 2021-12-17 | 2022-04-15 | 浙江大学滨江研究院 | 一种基于小样本识别的血管重建方法 |
CN114255296A (zh) * | 2021-12-23 | 2022-03-29 | 北京航空航天大学 | 基于单幅x射线影像的ct影像重建方法及装置 |
CN114255296B (zh) * | 2021-12-23 | 2024-04-26 | 北京航空航天大学 | 基于单幅x射线影像的ct影像重建方法及装置 |
CN114742771A (zh) * | 2022-03-23 | 2022-07-12 | 中国科学院高能物理研究所 | 一种电路板背钻孔尺寸自动化无损测量方法 |
CN114742771B (zh) * | 2022-03-23 | 2024-04-02 | 中国科学院高能物理研究所 | 一种电路板背钻孔尺寸自动化无损测量方法 |
CN115690255A (zh) * | 2023-01-04 | 2023-02-03 | 浙江双元科技股份有限公司 | 基于卷积神经网络的ct图像去伪影方法、装置及系统 |
CN115690255B (zh) * | 2023-01-04 | 2023-05-09 | 浙江双元科技股份有限公司 | 基于卷积神经网络的ct图像去伪影方法、装置及系统 |
CN116612206A (zh) * | 2023-07-19 | 2023-08-18 | 中国海洋大学 | 一种利用卷积神经网络减少ct扫描时间的方法及系统 |
CN116612206B (zh) * | 2023-07-19 | 2023-09-29 | 中国海洋大学 | 一种利用卷积神经网络减少ct扫描时间的方法及系统 |
CN117351482A (zh) * | 2023-12-05 | 2024-01-05 | 国网山西省电力公司电力科学研究院 | 一种用于电力视觉识别模型的数据集增广方法、系统、电子设备和存储介质 |
CN117351482B (zh) * | 2023-12-05 | 2024-02-27 | 国网山西省电力公司电力科学研究院 | 一种用于电力视觉识别模型的数据集增广方法、系统、电子设备和存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN112085829A (zh) | 2020-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020237873A1 (zh) | 基于神经网络的螺旋ct图像重建方法和设备及存储介质 | |
RU2709437C1 (ru) | Способ обработки изображений, устройство обработки изображений и носитель данных | |
CN110660123B (zh) | 基于神经网络的三维ct图像重建方法和设备以及存储介质 | |
US10769821B2 (en) | Method and device for reconstructing CT image and storage medium | |
EP3435334B1 (en) | Method and device for reconstructing ct image and storage medium | |
EP3608877B1 (en) | Iterative image reconstruction framework | |
CN110462689B (zh) | 基于深度学习的断层摄影重建 | |
CN110544282B (zh) | 基于神经网络的三维多能谱ct重建方法和设备及存储介质 | |
US12112471B2 (en) | Systems and methods for multi-label segmentation of cardiac computed tomography and angiography images using deep neural networks | |
Banjak | X-ray computed tomography reconstruction on non-standard trajectories for robotized inspection | |
Xia et al. | Deep residual neural network based image enhancement algorithm for low dose CT images | |
Wang et al. | Sparse-view cone-beam CT reconstruction by bar-by-bar neural FDK algorithm | |
Miao | Comparative studies of different system models for iterative CT image reconstruction | |
Meaney et al. | Helsinki tomography challenge 2022: Description of the competition and dataset | |
Cheng et al. | Super-resolution reconstruction for parallel-beam SPECT based on deep learning and transfer learning: a preliminary simulation study | |
Buzmakov et al. | Efficient and effective regularised ART for computed tomography | |
JPH0824676B2 (ja) | X線ct装置 | |
Shen et al. | Exterior computed tomography image reconstruction based on anisotropic relative total variation in polar coordinates | |
Lékó et al. | Scale invariance in projection selection using binary tomography | |
JP7520802B2 (ja) | 放射線画像処理装置、放射線画像処理方法、画像処理装置、学習装置、学習データの生成方法、及びプログラム | |
Pereira | Development of a fast and cost-effective computed tomography system for industrial environments by incorporating priors into the imaging workflow | |
Valat et al. | Sinogram Inpainting with Generative Adversarial Networks and Shape Priors. Tomography 2023, 9, 1137–1152 | |
Rautio | Combining Deep Learning and Iterative Reconstruction in Dental Cone-Beam Computed Tomography | |
CN118736050A (zh) | 一种基于扩散模型的有限角度ct重建系统 | |
Pluta et al. | A New Statistical Approach to Image Reconstruction with Rebinning for the X-Ray CT Scanners with Flying Focal Spot Tube |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19930775 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19930775 Country of ref document: EP Kind code of ref document: A1 |