CN114004907A - Rapid pipeline computed tomography method based on deep learning - Google Patents
Rapid pipeline computed tomography method based on deep learning
- Publication number
- CN114004907A (application CN202111282604.6A)
- Authority
- CN
- China
- Prior art keywords
- projection
- projection sequence
- sequence
- complete
- sparse
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/008—Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
- G06T11/005—Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
- G06T3/4007—Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
- G06T7/11—Region-based segmentation
- G06T2207/10081—Computed x-ray tomography [CT]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Analysing Materials By The Use Of Radiation (AREA)
Abstract
The invention discloses a fast pipeline computed tomography (CT) method based on deep learning, which comprises the following steps: placing a plurality of objects on a pipeline CT detection table and performing synchronous CT scanning based on sparse sampling to obtain a pipeline CT sparse projection sequence; interpolating the pipeline CT sparse projection sequence into a full-size projection sequence with missing information; processing the information-missing projection sequence with a deep learning technique to supplement the missing projection information and obtain a projection sequence complete in both information and size; segmenting the complete projection sequence to obtain the projection sequence corresponding to each object; and reconstructing the projection sequence of each object with a filtered back projection reconstruction algorithm to obtain the final tomographic reconstruction images. The invention greatly shortens the imaging time while preserving the reconstruction quality of fast pipeline CT imaging, thereby meeting the requirements of high-quality, high-efficiency industrial nondestructive testing.
Description
Technical Field
The invention relates to the technical field of X-ray computed tomography and artificial intelligence, in particular to a fast pipeline computed tomography method based on deep learning.
Background
In an X-ray computed tomography (CT) system, an X-ray source emits X-rays that penetrate a region of the measured object from different angles, and a detector placed opposite the source receives the rays at the corresponding angles. From the attenuation of the rays at the different angles, a reconstruction algorithm running on a computer reconstructs an image mapping the distribution of the ray attenuation coefficient over the scanned region. The image is thus reconstructed from projections, nondestructively reproducing characteristics of the object in that region such as medium density, composition, and structural morphology.
Imaging efficiency has always been one of the major factors limiting the widespread use of CT. It is determined mainly by the scan time and the image reconstruction time. With the wide use of GPUs and CUDA, the image reconstruction time has already been greatly reduced. Therefore, further improving imaging efficiency requires the development of rapid scanning techniques.
However, fast scanning techniques typically result in missing projection information. Currently, the most common reconstruction algorithm is a Filtered Back Projection (FBP) algorithm, and when the FBP algorithm is applied to complete data, the FBP reconstruction speed is fast and the obtained image quality is good. However, when the projection data is not complete, the corresponding FBP reconstruction results in severe artifacts and noise.
Disclosure of Invention
The invention overcomes the defects of the prior art by providing a fast pipeline computed tomography method based on deep learning. Fast pipeline CT scanning is realized through a pipeline CT architecture and a sparse sampling strategy, greatly shortening the imaging time; meanwhile, a deep learning technique preserves the reconstruction quality of fast pipeline CT imaging, so that the requirements of high-quality, high-efficiency industrial nondestructive testing are met.
The technical scheme of the invention is as follows: a fast pipeline computed tomography method based on deep learning comprises the following steps:
step 1, placing a plurality of objects on a pipeline CT imaging detection table and synchronously performing sparse sampling scanning to obtain a pipeline CT sparse projection sequence;
step 2, interpolating the pipeline CT sparse projection sequence into a full-size projection sequence with missing information; the information loss arises because sparse-angle scanning omits the projection data of some angles, and interpolation restores the size of the projection sequence but cannot supplement the missing projection information;
step 3, processing the information-missing projection sequence with a deep learning technique, supplementing the projection information, and obtaining a projection sequence with complete and accurate projection information; a reconstructed image of this projection sequence no longer exhibits sparse-scanning artifacts; the deep learning technique is a convolutional-neural-network-based optimization of the pipeline CT sparse projection sequence;
step 4, segmenting the processed projection sequence to obtain an independent projection sequence for each object; because the pipeline CT images a plurality of objects simultaneously, the projection sequence contains the projections of all of them, so the position of each object in the projection sequence is calculated and the projections are segmented accordingly;
and step 5, respectively reconstructing the projection sequence of each object by using a filtered back projection reconstruction algorithm to obtain the final tomographic reconstruction images.
Further, the pipeline CT imaging in step 1 differs from conventional multi-object CT imaging in that the objects do not rotate around a common rotating shaft; instead, each object is provided with an independent rotating shaft. In addition, sparse sampling scanning further reduces the CT imaging time.
Further, in step 2 a bicubic interpolation algorithm expands the pipeline CT sparse-sampling projection sequence by interpolation so that its size matches that of a completely sampled projection sequence.
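The expansion of step 2 can be illustrated with a minimal sketch. The patent specifies a bicubic interpolation algorithm; for brevity this sketch uses 1-D linear interpolation along the angle axis only (the detector axis needs no resizing), and the function name and the 45-to-360 angle counts are taken from the figures purely as an illustration.

```python
import numpy as np

def expand_sinogram(sparse, n_full):
    """Interpolate a sparse-angle sinogram (n_sparse x n_det) up to
    n_full angles; only the angle axis is resized."""
    n_sparse, n_det = sparse.shape
    ang_sparse = np.linspace(0.0, 360.0, n_sparse, endpoint=False)
    ang_full = np.linspace(0.0, 360.0, n_full, endpoint=False)
    full = np.empty((n_full, n_det))
    for j in range(n_det):  # 1-D interpolation per detector column
        full[:, j] = np.interp(ang_full, ang_sparse, sparse[:, j])
    return full

sparse = np.random.rand(45, 1600)    # sparse projection sequence (fig. 4 a)
full = expand_sinogram(sparse, 360)  # full size, information still missing
print(full.shape)  # (360, 1600)
```

As the description stresses, the interpolated rows at the originally sampled angles are unchanged; the rows in between are merely approximations, which is why the deep learning step is still needed.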
Further, the projection optimization technique based on deep learning in step 3 is shown in formulas (1) to (4):
P̂(ω,φ)=F(Λ(f(P(ω,φ)))) (1)
f(P(ω,φ))=W^T·P(ω,φ)+Bias (2)
Error=‖P̂(ω,φ)−P_label(ω,φ)‖² (3)
Ŵ=W−η·∂Error/∂W (4)
wherein P(ω,φ) is the full-size projection sequence with missing information and P̂(ω,φ) is the projection sequence with complete and accurate projection information; (ω,φ) represent the detector-element position and the rotation angle of the projection sequence; f and F represent the coding and decoding networks based on the deep learning technique, used respectively to extract features from P(ω,φ) and to resolve the degree of missing information of the projection sequence from those features; Λ represents a nonlinear mapping function; Error represents the learning objective of the convolutional neural network, measuring the difference between the network output and the label P_label(ω,φ); W and Bias represent the learning parameters of the convolutional neural network, namely the weight and the bias, which are updated by using a gradient descent algorithm to take the partial derivative of the learning objective with respect to the parameters; η and Ŵ respectively represent the learning rate and the learned network parameters.
Furthermore, the coding network and the decoding network are both composed of several convolutional neural network layers. In the coding network the height and width of the feature maps are halved level by level while the number of feature maps doubles; in the decoding network the opposite holds. Feature maps of equal height and width in the coding and decoding networks are concatenated and then used as the input feature maps of the next decoding stage.
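The halving/doubling bookkeeping of the coding and decoding networks can be sketched as follows. The five coding levels and four decoding levels follow fig. 3, while the starting channel count of 64 and the 256x256 input size are assumptions chosen only for illustration.

```python
def unet_shapes(h, w, c=64, levels=5):
    """Feature-map sizes through the coding/decoding networks of fig. 3:
    the encoder halves height/width and doubles channels per level; each
    decoder stage concatenates the encoder map of matching size (skip
    connection), doubling its input channel count."""
    enc = [(h >> i, w >> i, c << i) for i in range(levels)]
    dec = [(eh, ew, 2 * ec) for (eh, ew, ec) in reversed(enc[:-1])]
    return enc, dec

enc, dec = unet_shapes(256, 256)
print(enc)  # [(256, 256, 64), (128, 128, 128), (64, 64, 256), (32, 32, 512), (16, 16, 1024)]
print(dec)  # [(32, 32, 1024), (64, 64, 512), (128, 128, 256), (256, 256, 128)]
```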
Further, the segmentation process described in step 4 needs to be based on accurate calculation of the position parameters corresponding to each object in the projection sequence, as shown in equations (5) to (6):
wherein S_A and S_B respectively represent the left and right edges of the projection of an object in the two-dimensional projection sequence, D is the distance from the ray source to the detector, s is the distance from the projection of the object's rotating-shaft center on the detector to the center of the detector, r is the radius of gyration of the object, E is the distance between the projection position of the object's rotating shaft on the detector and the position of the ray source, tan and tan⁻¹ respectively represent the tangent and arctangent operations, and sin and sin⁻¹ the sine and arcsine operations.
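A sketch of the segmentation geometry described above. Since equations (5) and (6) themselves appear only in the drawings, the formulas below are one plausible reading, not the patent's actual equations: the ray through the object's rotation axis makes an angle tan⁻¹(s/D) with the central ray, and the object of gyration radius r subtends a half-angle sin⁻¹(r/E) as seen from the source. The function name is likewise an assumption.

```python
import math

def projection_extent(D, s, r, E):
    """Assumed reading of eqs. (5)-(6): detector positions of the left
    and right edges of one object's projection (hypothetical formulas)."""
    alpha = math.atan(s / D)           # tan^-1: ray through the rotation axis
    beta = math.asin(r / E)            # sin^-1: half-angle subtended by the object
    S_A = D * math.tan(alpha - beta)   # left edge on the detector
    S_B = D * math.tan(alpha + beta)   # right edge on the detector
    return S_A, S_B

# A centered object (s = 0) projects symmetrically about the detector center.
print(projection_extent(1000.0, 0.0, 50.0, 800.0))
```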
Further, the Filtered Back-projection (FBP) reconstruction algorithm in step 5 is shown in formula (7):
wherein P̂(ω,φ) represents the complete projection sequence output by the network, R(r,θ) represents the reconstructed image, (r,θ) represents polar coordinates, U represents the projection weight matrix, D represents the distance from the ray source to the rotation center of the turntable, h represents a one-dimensional filter, and (ω,φ) respectively represent the detector-element coordinate and the turntable rotation angle.
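To illustrate the filtered back projection principle of equation (7), the sketch below implements the simpler parallel-beam case, in which the fan-beam weight 1/U² reduces to unity; the actual fan-beam weighting and geometry of the method are omitted, and all names are illustrative.

```python
import numpy as np

def fbp_parallel(sinogram, angles_deg):
    """Minimal parallel-beam FBP: ramp-filter each projection row in the
    Fourier domain, then back-project with linear interpolation."""
    n_ang, n_det = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))  # one-dimensional filter h
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    mid = n_det // 2
    xs = np.arange(n_det) - mid
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for p, ang in zip(filtered, np.deg2rad(angles_deg)):
        # Detector coordinate of each pixel at this view angle.
        t = np.clip(X * np.cos(ang) + Y * np.sin(ang) + mid, 0, n_det - 1.001)
        t0 = t.astype(int)
        w = t - t0                        # linear interpolation weight
        recon += (1.0 - w) * p[t0] + w * p[t0 + 1]
    return recon * np.pi / n_ang

# Sinogram of a centered disc: every parallel projection is 2*sqrt(R^2 - u^2).
u = np.arange(64) - 32
proj = 2.0 * np.sqrt(np.clip(10.0**2 - u**2, 0.0, None))
sino = np.tile(proj, (36, 1))
rec = fbp_parallel(sino, np.arange(0.0, 180.0, 5.0))
print(rec[32, 32] > rec[2, 2])  # True: the disc center reconstructs brighter
```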
Compared with the traditional computed tomography method, the embodiments of the invention realize fast pipeline CT scanning through a pipeline CT imaging architecture and a sparse sampling strategy, greatly shortening the imaging time, while a deep learning technique keeps the reconstruction result at high quality, meeting the industrial requirements for high-quality, high-efficiency production.
Drawings
FIG. 1 is a flow chart of a fast pipeline computed tomography method based on deep learning of the present invention;
FIG. 2 is a pipeline CT imaging architecture of the deep learning based fast pipeline computed tomography method of the present invention;
FIG. 3 is a network structure diagram of the deep learning technique of the fast pipeline computed tomography reconstruction method based on deep learning of the present invention;
FIG. 4 is a projection sequence of sparse angular data of pipeline CT imaging, an information missing projection sequence obtained by interpolation, an optimized projection sequence after deep learning technique processing, and a complete projection sequence image processed by the present invention; wherein a is a pipeline CT sparse angular projection sequence (45 × 1600); b is the complete projection sequence (360 × 1600) with missing information for bicubic interpolation; c is a complete projection sequence (360 multiplied by 1600) after the deep learning technology is optimized; d is the complete projection sequence (360 × 1600);
FIG. 5 is a projection sequence for each object segmented in accordance with the present invention corresponding to the projection sequence of FIG. 4; a is a projection sequence of the left object; b is a projection sequence of the intermediate object; c is the projection sequence of the right object;
fig. 6 is a reconstructed image corresponding to the projection sequence of fig. 5 processed by the present invention, where a is the sparse-projection reconstructed image, b is the reconstructed image based on the deep learning technique, and c is the reconstructed image of the complete projection sequence.
Detailed Description
The invention is further described with reference to the following figures and detailed description.
Fig. 1 is a flowchart of a fast pipeline computed tomography reconstruction method based on deep learning according to an embodiment of the present invention. The embodiment of the invention provides a reconstruction method based on deep learning aiming at multi-object incomplete projection data obtained by sparse sampling flow line computed tomography, which comprises the following specific steps:
s101, placing a plurality of objects on an assembly line CT imaging detection table, and synchronously performing sparse sampling scanning; in the production line CT imaging detection table, a cone-beam X-ray source is adopted as a ray source, and a plurality of independent turntables are arranged in a direction parallel to a detector, as shown in figure 2, so that a multi-object projection sequence which is not influenced mutually is obtained.
Step S102, interpolating the sparse-sampling pipeline CT projection sequence into a full-size projection sequence with missing information by using bicubic interpolation. Because this sequence is produced merely by interpolating the sparse-sampling projection sequence, it only restores the full set of sampling angles; the problem of missing information remains unsolved.
Step S103, processing the information-missing projection sequence with the deep learning technique, supplementing the projection information, and obtaining a projection sequence with complete and accurate projection information. The resulting sequence contains projection data at the complete set of sampling angles, with both the number of projections and their information complete.
Fig. 3 shows an example structure of the deep learning technique of the fast pipeline computed tomography method of the present invention. As shown in fig. 3, the convolutional neural network is composed of a 5-level coding stage and a 4-level decoding stage. The coding layers halve the height and width of the feature maps while doubling their number; the decoding layers do the opposite. Feature maps of equal height and width in the coding and decoding stages are concatenated and then used as the input feature maps of the next decoding stage.
Step S104, calculating the accurate position parameters of each object in the projection sequence and using them to segment the processed projection sequence, obtaining the projection sequence corresponding to each object in the pipeline CT scan. Segmentation is required because the pipeline CT images several objects simultaneously, so the projection sequence contains the projections of all of them.
And S105, respectively reconstructing the projection sequence of each object by using a filtering back projection reconstruction algorithm to obtain a final tomographic reconstruction image.
Compared with the traditional CT method, the invention has the advantages of two aspects: 1) the fast pipeline CT imaging technology is realized through a pipeline CT imaging framework and a sparse sampling strategy, so that the imaging time is greatly shortened; 2) the reconstruction framework based on the deep learning technology solves the problem that information is lost in a projection sequence obtained by the framework, and optimizes the imaging quality of a reconstructed image.
In order to demonstrate the effects of the above examples, the present invention performed the following experiments, which were conducted as follows:
(1) A fast pipeline CT experiment with multiple objects was performed. The pipeline CT imaging device scans several objects synchronously and sparsely, which greatly accelerates imaging while keeping the information of each object in the projection sequence independent and free of mutual interference. The experimental conditions were set as follows: pipeline CT imaging was used to scan three objects simultaneously, the sampling factor was set to 4, yielding projections over 360 degrees at 90 sampling angles; the complete projections were obtained with the sampling factor set to 1.
(2) The sparse projection sequence is interpolated to approximate the angle-complete projection sequence using a bicubic interpolation method.
(3) According to fig. 3 and equations (1) - (4), the complete projection sequence with missing information is processed to obtain a projection sequence with complete information and size.
(4) The projection sequence is segmented into a projection sequence of three objects according to equations (5), (6).
(5) And obtaining a final reconstruction result by using an FBP reconstruction algorithm.
Fig. 4 a to d are respectively the sparse-sampling projection sequence, the full-size projection sequence with missing information, the projection sequence after deep learning optimization, and the complete projection sequence of the pipeline CT imaging of three objects processed by the embodiment of the present invention. Fig. 5 a to c are the projection sequences of each object segmented by the embodiment, corresponding to projection sequence c in fig. 4. Fig. 6 shows the reconstructed images corresponding to projection sequence a in fig. 5: a to c in fig. 6 are the reconstructions of the sparse projection sequence, of projection sequence a in fig. 5 after deep learning processing, and of the complete-scan projection sequence, respectively.
As can be seen from fig. 4 to 6, b in fig. 6 eliminates sparse reconstruction artifacts compared with a in fig. 6, which illustrates that the fast pipeline computed tomography reconstruction method based on deep learning can effectively process data reconstruction under the condition of sparse sampling pipeline CT.
Compared with the traditional computed tomography method, the embodiment of the invention realizes fast pipeline CT scanning by combining the pipeline CT imaging framework and the sparse sampling strategy, thereby greatly shortening the imaging time; compared with the traditional reconstruction method based on incomplete data of deep learning, which aims at a single object, the embodiment of the invention directly acts on the projection sequences of a plurality of objects, the operation flow is simple and clear, and the optimization of the projection domain is more favorable for retaining image details so as to improve the image quality.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described therein may still be modified, or some or all of their technical features equivalently replaced, without such modifications or substitutions departing the corresponding technical solutions from the scope of the technical solutions of the embodiments of the present invention.
Claims (6)
1. A fast pipeline computed tomography method based on deep learning is characterized by comprising the following steps:
step 1, placing a plurality of objects on an assembly line CT imaging detection table, and synchronously performing sparse sampling scanning to obtain an assembly line CT sparse projection sequence; the assembly line CT imaging detection table is provided with a plurality of independent rotary tables parallel to the direction of the detector, and can independently image a plurality of objects, so that projection sequences of the objects are not interfered with each other;
step 2, interpolating the pipeline CT sparse projection sequence into a full-size projection sequence with missing information; the information loss means that sparse-angle scanning omits the projection data of some angles, and interpolation restores the size of the projection sequence but cannot supplement the missing projection information;
step 3, processing the projection sequence with missing information and complete size by utilizing a deep learning technology, supplementing projection information, and obtaining a complete projection sequence with complete and accurate projection information, wherein the reconstructed image of the complete projection sequence does not have sparse artifacts any more, and the deep learning technology is an optimization technology of a pipeline CT sparse projection sequence based on a convolutional neural network;
step 4, segmenting the processed complete projection sequence to obtain an independent projection sequence corresponding to each object; the independent projection sequence is that the assembly line CT images a plurality of objects at the same time, the projection sequence comprises the projection of the plurality of objects, and the positions of the objects in the projection sequence are calculated and subjected to projection segmentation;
and step 5, respectively reconstructing the independent projection sequence corresponding to each object by using a filtered back projection reconstruction algorithm to obtain the final tomographic reconstruction images.
2. The fast pipeline computed tomography method based on deep learning of claim 1, wherein: the fast computed tomography is realized by combining an assembly line CT imaging framework and a sparse sampling strategy, the assembly line CT imaging framework carries out synchronous scanning without crosstalk on a plurality of objects, the sparse sampling scanning reduces the time length of single scanning, and the combination of the assembly line CT imaging framework and the sparse sampling scanning greatly shortens the scanning time.
3. The fast pipeline computed tomography method based on deep learning of claim 1, wherein: and 2, interpolating the sparse projection sequence of the assembly line CT by using a bicubic interpolation method, and converting the sparse projection sequence into a projection sequence with missing information and complete size.
4. The fast pipeline computed tomography method based on deep learning of claim 1, wherein: in the step 3, the complete projection sequence with information missing is processed by using the convolutional neural network shown in the formulas (1) to (4), which is specifically as follows:
P̂(ω,φ)=F(Λ(f(P(ω,φ)))) (1)
f(P(ω,φ))=W^T·P(ω,φ)+Bias (2)
Error=‖P̂(ω,φ)−P_label(ω,φ)‖² (3)
Ŵ=W−η·∂Error/∂W (4)
wherein P(ω,φ) is the full-size projection sequence with missing information and P̂(ω,φ) is the projection sequence with complete and accurate projection information; (ω,φ) represent the detector probe-element position and the turntable rotation angle corresponding to each pixel in the projection sequence; f and F represent the coding network and the decoding network in the convolutional neural network, used respectively to extract features from P(ω,φ) and to resolve the missing-information condition of the projection sequence from those features; Λ represents a nonlinear mapping function; Error represents the learning objective of the convolutional neural network, measuring the difference between the network output and the label P_label(ω,φ); W and Bias represent the learning parameters of the convolutional neural network, namely the weight and the bias, updated by using a gradient descent algorithm to take the partial derivative of the learning objective with respect to the parameters; η and Ŵ respectively represent the learning rate and the learned network parameters.
5. The fast pipeline computed tomography method based on deep learning of claim 1, wherein: in the step 4, calculating the position of each object corresponding to the projection sequence by using formulas (5) to (6) so as to perform subsequent projection sequence segmentation;
wherein S_A and S_B respectively represent the left and right edges of the projection of an object in the two-dimensional projection sequence, D is the distance from the ray source to the detector, s is the distance from the projection of the turntable's rotating-shaft center on the detector to the center of the detector, r is the radius of gyration of the object, E is the distance between the projection position of the object's rotation center on the detector and the position of the ray source, tan and tan⁻¹ respectively represent the tangent and arctangent operations, and sin and sin⁻¹ the sine and arcsine operations.
6. The fast pipeline computed tomography method based on deep learning of claim 1, wherein: in step 5, the filtered back-projection reconstruction algorithm is as shown in formula (7):
wherein P̂(ω,φ) represents the complete projection sequence output by the convolutional neural network, R(r,θ) represents the reconstructed image, (r,θ) represents polar coordinates, U represents the projection weight matrix, D represents the distance from the ray source to the rotation center of the gantry, h represents a one-dimensional filter, and (ω,φ) respectively represent the detector probe-element coordinate and the turntable rotation angle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111282604.6A CN114004907A (en) | 2021-11-01 | 2021-11-01 | Rapid pipeline computed tomography method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114004907A true CN114004907A (en) | 2022-02-01 |
Family
ID=79926055
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||