CN114913259A - Truncation artifact correction method, CT image correction method, apparatus, and medium


Info

Publication number
CN114913259A
CN114913259A
Authority
CN
China
Prior art keywords
projection data
data
projection
truncation
correction
Prior art date
Legal status
Pending
Application number
CN202210480772.4A
Other languages
Chinese (zh)
Inventor
张峥
Current Assignee
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd filed Critical Shanghai United Imaging Healthcare Co Ltd
Priority to CN202210480772.4A priority Critical patent/CN114913259A/en
Publication of CN114913259A publication Critical patent/CN114913259A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G06T11/008 Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction

Abstract

The present application relates to a truncation artifact correction method, a CT image correction method, an apparatus, and a medium. The truncation artifact correction method comprises: acquiring truncation artifact data according to original projection data of a scanned object, the truncation artifact data being the projection data of each channel at the truncated projection view angles in the original projection data; correcting the truncation artifact data through a first correction network to obtain extended projection data; and generating a medical image of the scanned object based on the original projection data and the extended projection data. Because the method analyzes and corrects the projection data directly in the data domain, the corrected extended projection data are closer to the real projection data. Furthermore, when truncation artifact correction is performed in the data domain, only the projection data of each channel at the truncated projection view angles need to be corrected, which reduces the amount of computation while speeding up the correction, so that a complete medical image of the scanned object is acquired.

Description

Truncation artifact correction method, CT image correction method, apparatus, and medium
Technical Field
The present application relates to the field of medical image processing technologies, and in particular to a truncation artifact correction method, a CT image correction method, an apparatus, and a medium.
Background
In clinical diagnosis and radiotherapy, the detector size, the scanning field of view, and similar conditions are limited. If part of the scanned object lies outside the scanning range, the data acquired by the detector are incomplete, so that high-brightness truncation artifacts appear in the reconstructed medical image. The overall structure of the scanned object then cannot be judged from the reconstructed image, and the object cannot be accurately diagnosed or treated with radiotherapy.
In the related art, truncation artifacts may be corrected either by supplementing the projection data at the truncation with extrapolated data in the data domain, or by applying Artificial Intelligence (AI) techniques in the image domain to correct the truncation artifacts in the medical image.
However, both data-domain extrapolation and image-domain AI correction suffer from a poor correction effect on truncation artifacts.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide a truncation artifact correction method, a CT image correction method, an apparatus, and a medium capable of effectively correcting truncation artifacts so as to acquire a complete medical image.
In a first aspect, the present application provides a truncation artifact correction method, including:
acquiring truncation artifact data according to original projection data of a scanned object; the truncation artifact data is projection data of each channel under a truncated projection view angle in the original projection data;
correcting the truncation artifact data through a first correction network to obtain extended projection data;
generating a medical image of the scanned object based on the original projection data and the extended projection data.
In one embodiment, the first correction network is constructed by:
acquiring a plurality of first sample projection data, each of which comprises the projection data of each channel at at least one truncated projection view angle;
constructing a loss function of the first correction network according to the projection data of a plurality of channels at each projection view angle in the plurality of first sample projection data;
and training a first initial correction network according to the plurality of first sample projection data, a preset first gold standard, and the loss function of the first correction network to obtain the first correction network.
In one embodiment, constructing the loss function of the first correction network according to the projection data of the plurality of channels at each projection view angle in the plurality of first sample projection data includes:
acquiring, for each projection view angle, sample ideal channel cumulative sum data with the truncation artifacts removed, according to the projection data of the plurality of channels at that view angle in the plurality of first sample projection data;
and constructing the loss function of the first correction network from the plurality of sample ideal channel cumulative sums.
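As an illustrative sketch only (the patent does not disclose a concrete loss formula), a loss of the kind described above could combine a fidelity term against the gold standard with a consistency term against the ideal channel cumulative sums; the function name, the mean-squared-error form, and the weight are all assumptions:

```python
import numpy as np

def composite_loss(pred, gold, ideal_channel_sum, weight=0.1):
    # Fidelity term: predicted extended projections vs. the gold standard.
    fidelity = np.mean((pred - gold) ** 2)
    # Consistency term: the channel-wise sum of the prediction at each
    # projection view should match the ideal, truncation-free cumulative sum.
    consistency = np.mean((pred.sum(axis=1) - ideal_channel_sum) ** 2)
    return fidelity + weight * consistency
```

A prediction that matches the gold standard and whose channel sums match the ideal curve yields zero loss; any mismatch in either term increases it.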
In one embodiment, acquiring the sample ideal channel cumulative sum data with the truncation artifacts removed at each projection view angle, according to the projection data of the plurality of channels at each projection view angle in the plurality of first sample projection data, includes:
acquiring initial channel cumulative sum data of the projection data of the channels at each projection view angle according to the projection data of the channels at each projection view angle in the first sample projection data;
and inputting the initial channel cumulative sum data into a second correction network to obtain the sample ideal channel cumulative sum data with the artifacts removed at each projection view angle.
In one embodiment, the process of constructing the second correction network includes:
acquiring a plurality of second sample projection data, each of which comprises the projection data of each channel at at least one truncated projection view angle;
acquiring sample channel cumulative sum data of a plurality of channels at each projection view angle according to the second sample projection data;
and training a second initial correction network according to the plurality of sample channel cumulative sum data and a preset second gold standard to obtain the second correction network.
In one embodiment, generating a medical image of a scanned object from raw projection data and augmented projection data comprises:
acquiring target projection data after truncation artifact correction according to the original projection data and the extended projection data;
and carrying out image reconstruction on the target projection data to obtain a medical image of the scanned object.
In one embodiment, acquiring the truncation-artifact-corrected target projection data according to the original projection data and the extended projection data includes:
correspondingly supplementing the extended projection data into the original projection data to obtain corrected projection data;
and performing one correction or multiple iterative corrections on the corrected projection data to obtain the truncation-artifact-corrected target projection data.
In a second aspect, the present application further provides a CT image correction method, including:
acquiring CT projection data of a scanned object, the CT projection data comprising the projection data of each channel at at least one truncated projection view angle;
correcting the CT projection data through a first correction network to obtain target projection data, wherein the target projection data compensate the truncated portion of the CT projection data at the corresponding projection view angles;
a CT image of the scanned object is generated from the target projection data.
In a third aspect, the present application further provides a truncation artifact correction apparatus, including:
the acquisition module is used for acquiring truncation artifact data according to original projection data of a scanning object; the truncation artifact data is projection data of each channel under a truncated projection view angle in the original projection data;
the correction module is used for correcting the truncation artifact data through a first correction network to obtain extended projection data;
and the reconstruction module is used for generating a medical image of the scanned object according to the original projection data and the extended projection data.
In a fourth aspect, the present application further provides a CT image correction apparatus, including:
an acquisition module for acquiring CT projection data of a scanned object, the CT projection data comprising the projection data of each channel at at least one truncated projection view angle;
a correction module for correcting the CT projection data through a first correction network to obtain target projection data, wherein the target projection data compensate the truncated portion of the CT projection data at the corresponding projection view angles;
and the reconstruction module is used for generating a CT image of the scanned object according to the target projection data.
In a fifth aspect, the present application further provides a computer device, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps of any of the above method embodiments when executing the computer program.
In a sixth aspect, the present application further provides a computer-readable storage medium having a computer program stored thereon, which, when executed by a processor, performs the steps of any of the above-described method embodiments.
In a seventh aspect, the present application further provides a computer program product, which comprises a computer program, when executed by a processor, implements the steps of any of the method embodiments described above.
The truncation artifact correction method, the CT image correction method, the apparatus, and the medium acquire truncation artifact data according to the original projection data of a scanned object, the truncation artifact data being the projection data of each channel at the truncated projection view angles in the original projection data; correct the truncation artifact data through a first correction network to obtain extended projection data; and generate a medical image of the scanned object based on the original projection data and the extended projection data. Because the projection data in the data domain are closer to the tissue structure information of the scanned object, the method does not correct artifacts in the image domain on the scanned medical image; instead, the truncation artifact data in the original projection data are analyzed and corrected in the data domain, so that the corrected extended projection data are closer to the real projection data of the scanned object, ensuring the integrity and accuracy of the projection data. Furthermore, when truncation artifact correction is performed in the data domain, only the projection data of each channel at the truncated projection view angles are corrected: the first correction network predicts the variation trend of the projection data and thereby determines the extended projection data corresponding to the truncated region. This reduces the amount of computation while speeding up the correction.
The projection data at the truncation are thus corrected in the data domain, and the medical image is then generated from the corrected extended projection data, which improves the efficiency of truncation artifact correction and yields a complete, accurate medical image of the scanned object with better image quality.
Drawings
FIG. 1 is a diagram of an application environment of a truncation artifact correction method in one embodiment;
FIG. 2 is a flow diagram illustrating a method for truncation artifact correction in one embodiment;
FIG. 3 is a schematic flow chart of constructing a first correction network in one embodiment;
FIG. 4 is a graphical illustration of channel cumulative sum data corresponding to projection view angles in one embodiment;
FIG. 5 is a diagram illustrating training of a second correction network in one embodiment;
FIG. 6 is a diagram illustrating training of a first correction network in one embodiment;
FIG. 7 is a schematic diagram of truncation artifact data supplementation in one embodiment;
FIG. 8 is a schematic flow chart of acquiring extended projection data in one embodiment;
FIG. 9 is a schematic flow chart diagram of generating a medical image in one embodiment;
FIG. 10 is a flowchart illustrating a truncation artifact correction method according to another embodiment;
FIG. 11 is a flowchart illustrating a CT image correction method according to an embodiment;
FIG. 12 is a block diagram showing the structure of a truncation artifact correction apparatus according to an embodiment;
FIG. 13 is a block diagram showing the structure of a CT image correction apparatus according to an embodiment;
FIG. 14 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
With the development of Computed Tomography (CT) technology, various scanning methods have emerged. Regardless of the scanning modality used, the projection data of the scanned object are generally required to be low-noise and complete in order to obtain a high-quality medical image. However, in industry and medicine, the scanned object may extend beyond the scanning range of the detector because of the detector size, the scanning field of view, and similar limitations, so the projection data acquired by the detector may be incomplete. The discontinuity of the projection data then causes truncation artifacts at the edges of the finally generated medical image, which seriously affect the image quality; as a result, a physician cannot determine the complete tissue structure information of the scanned object from the medical image and cannot make an accurate diagnosis.
The processing related to the occurrence of truncation artifacts generally includes the following two approaches:
(1) truncation artifacts generated during filtering and back-projection filtering are suppressed in the data domain by means of extrapolating the data.
However, while this approach suppresses truncation artifacts and preserves image quality inside the scan Field of View (FOV), the imaging quality outside the scan field of view remains poor.
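As a concrete illustration of the extrapolation approach in (1), a detector row can be extended past both edges with a smooth taper that decays from the edge value to zero. This is a generic sketch of data-domain extrapolation, not the exact scheme used in any particular scanner; the cosine taper is an assumption:

```python
import numpy as np

def extrapolate_row(row, pad):
    # Extend one detector row past both edges with a cosine taper that
    # decays from the measured edge value down to zero, so that the
    # filtered back-projection filter sees no sharp cut-off.
    taper = 0.5 * (1 + np.cos(np.linspace(0, np.pi, pad)))  # 1 -> 0
    left = row[0] * taper[::-1]    # 0 -> left edge value
    right = row[-1] * taper        # right edge value -> 0
    return np.concatenate([left, row, right])
```

The measured samples in the middle are untouched; only the padded regions are synthetic, which is exactly why image content outside the FOV remains unreliable.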
(2) Truncation artifacts are processed in the image domain in conjunction with AI techniques. The image information learned in the image domain is limited to the training data set used to train the neural network. Moreover, when truncation artifacts in the medical image are corrected directly by AI techniques in the image domain, the corrected image quality improves, but some image details may be altered so that they differ slightly from the real object; if the corrected medical image is re-projected into the data domain, the deviation of these details becomes even more apparent.
To address at least one of the above drawbacks, the present application provides a truncation artifact correction method that corrects truncation artifact data in the data domain with a trained correction network, so that a medical image of the scanned object is generated from complete, continuous corrected projection data.
The truncation artifact correction method provided by the application can be applied to the application environment shown in fig. 1. As shown in fig. 1, the application environment may include an imaging device 110, an information processing device 120, and a storage device 130, and the respective devices may be connected and communicate with each other through a network. For example, the imaging device 110 and the information processing device 120 may be connected or communicate via a network; the imaging device 110 and the storage device 130 may also be connected or communicate via a network; the information processing apparatus 120 and the storage apparatus 130 may also be connected or communicate via a network.
In some embodiments, the imaging device 110 may be a non-invasive biomedical imaging apparatus, including, for example, a single modality scanner and/or a multi-modality scanner, for disease diagnosis or research purposes. The single modality scanner may include, for example, an ultrasound scanner, an X-ray scanner, a CT scanner, a Magnetic Resonance Imaging (MRI) scanner, an ultrasound examination apparatus, an Optical Coherence Tomography (OCT) scanner, an Ultrasound (US) scanner, an intravascular ultrasound (IVUS) scanner, a Near Infrared spectroscopy (NIRS) scanner, a Far Infrared (FIR) scanner, or the like, or any combination thereof. The multi-modality scanner may include, for example, an X-ray imaging-magnetic resonance imaging (X-ray-MRI) scanner, a single photon emission computed tomography-magnetic resonance imaging (SPECT-MRI) scanner, a positron emission tomography-computed tomography (PET-CT) scanner, a digital subtraction angiography-magnetic resonance imaging (DSA-MRI) scanner, and the like. It should be understood that the scanner provided above is provided for illustrative purposes only and is not intended to limit the scope of the present application.
As one example, the imaging device 110 may specifically include a gantry, a detector, an examination region, a scanning bed, and a radiation source. The stand can be used for supporting the detector and the ray source, and the scanning bed can be used for placing a scanning object to scan; the radiation source may emit radiation toward the scan object to irradiate the scan object; the detector may be configured to receive radiation that traverses the scanned object.
Further, imaging device 110 may also include modules and/or components for performing imaging and/or correlation analysis. For example, the imaging device 110 may include a processor, which may be part of the information processing device 120, and may perform the truncation artifact correction methods provided herein.
Optionally, the imaging device 110 may also include a display screen for observing data information of the imaging device 110 and/or of a scanned object scanned by it. For example, medical staff can observe lesion information of examined regions such as the chest, bones, and breasts of the scanned object on the display screen.
In some embodiments, the imaging device 110 may also send the acquired scan data (e.g., projection data of the scanned object) to the information processing device 120 via a network for further analysis, processing, and display, and/or send the acquired scan data to the storage device 130 via a network for storage.
The information processing device 120 may be embodied as at least one computer device other than the imaging device: for example, various personal computers, laptops, smart phones, tablets, portable wearable devices, or any combination thereof; or a single server or a server cluster, which may be centralized or distributed.
In some embodiments, storage device 130 may store data, instructions, and/or any other information. For example, the storage device 130 may store scan data of the scan object acquired by the imaging device 110, and/or a medical image of the scan object processed by the information processing device 120, and the like.
As one example, the storage device 130 may include one or more of mass storage, removable storage, volatile read-write memory, read-only memory, and the like.
It should be noted that the foregoing description is provided for illustrative purposes only, and is not intended to limit the scope of the present application. Many variations and modifications will occur to those skilled in the art in light of the teachings herein. The features, structures, methods, and other features of the example embodiments described herein may be combined in various ways to obtain additional and/or alternative example embodiments, such changes and modifications not departing from the scope of the present application. For example, the storage device 130 may be a data storage device including a cloud computing platform (such as a public cloud, a private cloud, a community and hybrid cloud, and so forth).
Next, the technical solutions of the embodiments of the present application, and how they solve the above technical problems, are described in detail with reference to the accompanying drawings. Several of the specific method embodiments below may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. It should be noted that the execution subject of the truncation artifact correction method provided in the embodiments of the present application may be the imaging device 110 shown in FIG. 1, a computer device other than the imaging device 110, or specifically a truncation artifact correction apparatus, which may be implemented by software, hardware, or a combination thereof as part or all of a processor. It is to be understood that the described embodiments are only some, not all, of the embodiments of the present application.
In an embodiment, as shown in fig. 2, a truncation artifact correction method is provided, which is described by taking an example that the method is applied to the information processing apparatus in fig. 1, and includes the following steps:
step 210: acquiring truncation artifact data according to original projection data of a scanned object; and the truncation artifact data is the projection data of each channel under the projection visual angle with truncation in the original projection data.
The original projection data may be projection data obtained by the imaging device scanning at least one portion of a scan object, and the scan object may include a biological object and/or a non-biological object. For example, the scan object may include a specific part of a human body, such as the head, chest, or abdomen, or one or more such parts. The scan object may also be an artificial composition of organic and/or inorganic matter, living or non-living, which this embodiment does not limit.
As an example, the projection data may be data acquired by a detector. Three directions, X, Y, and Z, are conventionally defined for the detector, as shown in FIG. 1: the X axis runs along the length of the detector and reflects the number of acquisition units in each detector row; the Z axis runs along the width of the detector and reflects the number of detector rows; the Y axis is the X-ray direction. The original projection data in the present application include the data collected by each channel (Channel) at each projection view angle (View). The projection view angle corresponds to the direction perpendicular to the data received by the detector and parallel to the moving direction of the patient bed (the Z-axis direction in FIG. 1); each channel corresponds to a direction along which the detector receives data (the X-axis direction in FIG. 1).
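The view/channel organization described above can be made concrete as a two-dimensional sinogram array. The sizes below are illustrative only, not taken from the patent:

```python
import numpy as np

# Hypothetical sinogram layout: rows index projection view angles (View),
# columns index detector channels along the detector length (Channel).
n_views, n_channels = 360, 512
sinogram = np.zeros((n_views, n_channels))

one_view = sinogram[42, :]       # all channels at a single projection view
one_channel = sinogram[:, 100]   # a single channel across all views
```

In this layout, "the projection data of each channel at a truncated projection view angle" is simply one row of the array.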
It should be noted that, when scanning an object to be scanned, due to the influence of the size of the detector, the field scanning space, the motion of the object to be scanned, etc., there may be channels in the scanning process that do not acquire projection data or only acquire a part of projection data, so that the original projection data obtained after scanning is discontinuous. In this case, truncation artifacts may be present in the medical image generated from the discontinuous raw projection data.
Based on this, in step 210, after the original projection data of the scanning object are acquired, the original projection data are analyzed, and projection data of each channel under a projection view angle with truncation are screened out from the original projection data, so as to obtain truncation artifact data.
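The screening in step 210 can be sketched with a simple heuristic: a view whose outermost detector channels record non-negligible attenuation is truncated, because the object evidently extends past the field of view there. The function, threshold, and heuristic itself are illustrative assumptions, not the patent's disclosed screening method:

```python
import numpy as np

def truncated_view_indices(sinogram, eps=1e-6):
    # If either outermost detector channel carries signal at a view,
    # the object extends past the field of view at that view, so that
    # view's projection data are truncated.
    edge = np.maximum(np.abs(sinogram[:, 0]), np.abs(sinogram[:, -1]))
    return np.flatnonzero(edge > eps)
```

The rows selected by these indices are the truncation artifact data; all other rows of the original projection data are left untouched.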
It should be understood that the truncation artifact data are a portion of the original projection data, not all of it. Considering that truncation artifact correction in the data domain involves large data sets and a heavy computational load, this embodiment corrects only the truncation artifact data in the original projection data rather than optimizing the whole of the original projection data, which greatly reduces the amount of data processed and improves the correction speed.
Step 220: and correcting the truncation artifact data through a first correction network to obtain extended projection data.
The truncated artifact data is partial projection data acquired by each channel of the detector at a truncation position under a projection view angle, and the expanded projection data is complete projection data which should be acquired by each channel under a corresponding projection view angle in an ideal state.
In other words, the truncation artifact data is corrected by the first correction network, and projection data which are not acquired by each channel under the projection view angle at the truncation artifact position are predicted by the first correction network, so that complete projection data are obtained.
In one possible implementation manner, the implementation procedure of step 220 may be: inputting the truncation artifact data into a first correction network trained in advance, and outputting extended projection data corresponding to the truncation artifact data through the first correction network.
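The shapes involved in this inference step can be illustrated with a stand-in for the trained network. The real first correction network predicts the unmeasured projections beyond each detector edge; the stub below merely ramps each row down to zero with `np.pad`, and exists only to make the input/output contract concrete (its name and the `extend` parameter are assumptions):

```python
import numpy as np

def first_correction_stub(truncated_rows, extend=16):
    # Stand-in for the trained first correction network: takes the
    # truncated views (views x channels) and returns wider rows
    # (views x channels + 2 * extend) whose padded regions play the
    # role of the predicted extended projection data.
    return np.pad(truncated_rows, ((0, 0), (extend, extend)),
                  mode="linear_ramp", end_values=0.0)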
Step 230: a medical image of the scanned object is generated based on the raw projection data and the augmented projection data.
The medical image generated from the original projection data and the extended projection data is free of truncation artifacts, and its image information is complete and accurate. The imaging quality is better than that of a medical image generated directly from the original projection data, or one corrected only after being generated from the original projection data.
In one possible implementation, step 230 may be implemented as follows: fusing the original projection data and the extended projection data to obtain complete, continuous target projection data of the scanned object, and then performing image reconstruction on the target projection data to obtain the medical image of the scanned object.
The fusion process includes at least two cases: first, the truncation artifact data in the original projection data are replaced with the extended projection data; second, the truncation artifact data in the original projection data are modified or supplemented according to the corrected extended projection data. This embodiment does not limit which case is used.
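The first fusion case above, replacement, can be sketched directly; the function assumes the corrected rows have already been cropped or aligned to the original channel count, and the names are illustrative:

```python
import numpy as np

def fuse(original, corrected_rows, view_idx):
    # Case 1 from the text: replace the projection data of the truncated
    # views with the corrected rows. `view_idx` lists the truncated
    # projection view angles; all other views keep their measured data.
    fused = original.copy()
    fused[view_idx, :] = corrected_rows
    return fused
```

The second case, modifying or supplementing the truncated rows rather than replacing them outright, would blend `corrected_rows` into the existing data instead of overwriting it.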
In the truncation artifact correction method, truncation artifact data are acquired according to the original projection data of a scanned object, the truncation artifact data being the projection data of each channel at the truncated projection view angles in the original projection data; the truncation artifact data are corrected through a first correction network to obtain extended projection data; and a medical image of the scanned object is generated based on the original projection data and the extended projection data. Because the projection data in the data domain are closer to the tissue structure information of the scanned object, the method does not correct artifacts in the image domain on the scanned medical image; instead, the truncation artifact data in the original projection data are analyzed and corrected in the data domain, so that the corrected extended projection data are closer to the real projection data of the scanned object, ensuring the integrity and accuracy of the projection data. Furthermore, when truncation artifact correction is performed in the data domain, only the projection data of each channel at the truncated projection view angles are corrected: the first correction network predicts the variation trend of the projection data and thereby determines the extended projection data corresponding to the truncated region. This reduces the amount of computation while speeding up the correction. The projection data at the truncation are thus corrected in the data domain, and the medical image is then generated from the corrected extended projection data, which improves the efficiency of truncation artifact correction and yields a complete, accurate medical image of the scanned object with better image quality.
Based on the above embodiments, the following will explain the construction process of the first calibration network with reference to fig. 3-6.
In one embodiment, as shown in fig. 3, the construction process of the first correction network includes the following steps:
Step 310: acquiring a plurality of first sample projection data; each of the first sample projection data includes the projection data of each channel at at least one truncated projection view angle.
The first sample projection data may be sample projection data acquired from a database, or may be real-time projection data acquired from an imaging device, and the data source of the first sample projection data is not limited in this embodiment.
It should be noted that, because of truncation, the sum of the projection data over all channels at a given projection view angle is significantly reduced in the data domain wherever a truncation region exists. Referring to fig. 4, (a) in fig. 4 shows the ideal channel accumulated sum data calculated from the complete projection data acquired by each channel at each projection view angle when no truncation region exists. When truncation occurs, the channel accumulated sum data exhibit a pronounced drop over the truncated views, as shown in fig. 4 (b).
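As a concrete illustration of this drop, the following sketch computes the channel accumulated sum per view on a toy parallel-beam sinogram and flags the views whose sum falls noticeably below the ideal level. All names, the sinogram itself, and the 5% threshold are illustrative assumptions, not part of the application:

```python
import numpy as np

def channel_sums(sinogram):
    """Accumulated sum of the projection data over all channels, per view.

    sinogram: (n_views, n_channels) projection data in the data domain.
    """
    return sinogram.sum(axis=1)

def truncated_views(sinogram, rel_drop=0.05):
    """Flag views whose channel sum drops noticeably below the median.

    In parallel-beam geometry the ideal channel sum is the same for every
    view, so a significant drop indicates a truncated view.
    """
    sums = channel_sums(sinogram)
    return np.flatnonzero(sums < (1.0 - rel_drop) * np.median(sums))

# Toy sinogram: 36 views x 64 channels; views 10-19 lose their edge channels.
sino = np.ones((36, 64))
sino[10:20, :8] = 0.0  # simulate data cut off at the detector edge
flagged = truncated_views(sino)
```

Here the flagged views are exactly the ones whose edge channels were zeroed, mirroring the drop shown in fig. 4 (b).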
If the projection data of the truncated region are corrected by linear interpolation, the result is still discontinuous. In some special cases the accumulated sum over the truncated region varies abnormally, and fitting it with linear interpolation then produces the opposite of the intended correction and degrades the imaging quality further.
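The residual discontinuity can be seen on a toy example: a disc projects to a semicircular profile, and a linear ramp over the missing channels cannot reproduce that curvature. The geometry and all numbers below are illustrative assumptions:

```python
import numpy as np

# Ideal (untruncated) projection of a centred disc: a semicircular profile.
c = np.arange(64)
ideal = np.sqrt(np.clip(30.0 ** 2 - (c - 32.0) ** 2, 0.0, None))

# Suppose channels 0-9 fell outside the detector (truncation on the left).
edge = ideal[10]  # first measured value at the truncation boundary

# Naive fix: ramp linearly from zero up to the edge value.
linear_fill = np.linspace(0.0, edge, 10, endpoint=False)

# The true profile in the missing region is curved, not linear, so the
# patched data deviate from the ideal data and keep a kink at the boundary.
max_err = float(np.max(np.abs(linear_fill - ideal[:10])))
```

The deviation `max_err` is large compared to the profile values near the boundary, which is exactly the discontinuity the learned correction is meant to avoid.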
For this reason, the present application corrects the truncation artifact data with a first correction network. To obtain a first correction network with a good correction effect, the projection data of each channel in the first sample projection data are used to train a first initial correction network, so that it acquires the ability to predict the ideal variation trend of the channel accumulated sum data at each projection view angle and to predict the complete projection data of each channel at each projection view angle.
Step 320: constructing a loss function of the first correction network according to the projection data of the plurality of channels at each projection view angle in the first sample projection data.
The loss function is used to train the first initial correction network; when the output of the first initial correction network satisfies the corresponding convergence condition, training ends and the first correction network is obtained.
It should be noted that the loss function of the first correction network may include the following two cases:
(1) The loss function of the first correction network is the sample ideal channel accumulated sum data corresponding to the projection data of the plurality of channels at each projection view angle in the first sample projection data.
Here, the sample ideal channel accumulated sum data represent, for each projection view angle in the first sample projection data, the accumulated sum of the projection data of all channels after the truncation artifact is removed. After the sample ideal channel accumulated sum data are determined, the first initial correction network is trained with these data as a constraint.
(2) The loss function of the first correction network includes both the sample ideal channel accumulated sum data and a preset loss function.
In this case, when the first initial correction network is trained, after its parameters are adjusted according to the loss value of the preset loss function, the training result is re-verified against the sample ideal channel accumulated sum data: for the same sample projection data, the sample ideal channel accumulated sum data before and after the adjustment should be the same.
Optionally, weight coefficients may be preset for the sample ideal channel accumulated sum data and the preset loss function, and the weight ratio of the two terms in the loss function of the first correction network is determined according to these coefficients.
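A minimal sketch of case (2), assuming a mean-squared-error preset loss; the function name, arguments, and default weights are hypothetical stand-ins for the weight coefficients described above:

```python
import numpy as np

def combined_loss(pred_sino, target_sino, ideal_sums, w_data=1.0, w_sum=0.5):
    """Weighted loss: preset data term plus a channel-sum consistency term.

    pred_sino, target_sino: (n_views, n_channels) predicted / reference
    projection data; ideal_sums: (n_views,) sample ideal channel
    accumulated sum data. w_data and w_sum are the preset weight
    coefficients of the two terms.
    """
    data_term = np.mean((pred_sino - target_sino) ** 2)        # preset loss
    sum_term = np.mean((pred_sino.sum(axis=1) - ideal_sums) ** 2)
    return w_data * data_term + w_sum * sum_term
```

The loss is zero only when the prediction matches the reference data and its per-view channel sums match the sample ideal channel accumulated sum data, so both constraints act on training simultaneously.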
Further, constructing the loss function from the sample ideal channel accumulated sum data may be implemented as follows: acquiring, from the projection data of the plurality of channels at each projection view angle in the plurality of first sample projection data, the sample ideal channel accumulated sum data after the truncation artifact is removed at each projection view angle; and constructing the loss function of the first correction network from the plurality of sample ideal channel accumulated sum data.
The obtaining of the sample ideal channel accumulation sum data corresponding to the first sample projection data may be implemented by a specific computer algorithm, or may be implemented by a trained neural network in combination with an AI technology, which is not limited in this embodiment.
In one embodiment, the sample ideal channel accumulated sum data may be obtained as follows: acquiring the initial channel accumulated sum data of the projection data of the plurality of channels at each projection view angle in the first sample projection data; and inputting the initial channel accumulated sum data into a second correction network to obtain the sample ideal channel accumulated sum data with the truncation artifact removed at each projection view angle.
The network types and training modes of the second correction network and the first correction network may be the same or different, and are not limited thereto. "first" and "second" are used only to distinguish them, and are not intended to limit their functions.
In one possible implementation, the construction of the second correction network includes: acquiring a plurality of second sample projection data, each of which includes the projection data of each channel at at least one projection view angle at which there is truncation; acquiring the sample channel accumulated sum data of the plurality of channels at each projection view angle according to the second sample projection data; and training a second initial correction network according to the plurality of sample channel accumulated sum data and a preset second gold standard to obtain the second correction network.
The second gold standard is the labelled ideal channel accumulated sum data, calculated in advance, that correspond to the second sample projection data in the absence of truncation.
That is, when the second correction network is constructed, the second sample projection data are input into the second initial correction network, and whether the second initial correction network has converged is determined from the sample ideal channel accumulated sum data it outputs. When the error between the sample ideal channel accumulated sum data output by the second initial correction network and the second gold standard is smaller than a preset error value, the second initial correction network is considered to have converged, training ends, and the trained second correction network is obtained.
As an example, referring to the training diagram of the second correction network shown in fig. 5, when the second initial correction network is trained with the second sample projection data, its learning progress is judged against the second gold standard. In addition, the loss of the second initial correction network is calculated in real time during training, and its network parameters are adjusted according to the loss, so that the sample ideal channel accumulated sum data it outputs approach the second gold standard.
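The training loop of fig. 5 can be sketched with a deliberately tiny stand-in for the second initial correction network: a single scalar gain fitted by gradient descent on an MSE loss against the second gold standard. A real implementation would use a neural network; the 1.25x relationship and all numbers here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Initial channel accumulated sum data (inputs) and the second gold
# standard: the labelled ideal sums, here simply 1.25x the inputs.
x = rng.uniform(50.0, 100.0, size=200)
gold = 1.25 * x

w = 0.0        # the lone "network parameter" of the stand-in model
lr = 1e-4
for _ in range(500):
    pred = w * x                              # model output
    grad = 2.0 * np.mean((pred - gold) * x)   # d(MSE)/dw
    w -= lr * grad                            # adjust the parameter

# After training, the predicted sums approach the second gold standard.
max_err = float(np.max(np.abs(w * x - gold)))
```

The loop mirrors the description above: the loss against the gold standard is computed on each pass and the parameter is adjusted until the predicted ideal channel accumulated sum data match the labels.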
In addition, the trained second correction network can be used alone or combined into other networks. The second correction network may be configured to predict ideal channel summation data corresponding to the projection data of the plurality of channels at each projection view.
Step 330: and training the first initial correction network according to the plurality of first sample projection data, a preset first gold standard and a loss function of the first correction network to obtain the first correction network.
Each of the first sample projection data includes the projection data of each channel at at least one projection view angle at which there is truncation, and the first gold standard is the labelled ideal channel accumulated sum data, calculated in advance, that correspond to the first sample projection data in the absence of the truncation artifact.
Further, when the first correction network is constructed, the plurality of first sample projection data are input into the first initial correction network, and whether the first initial correction network has converged is determined from the extended projection data it outputs. When the error between the sample channel accumulated sum data corresponding to the sample extended projection data output by the first initial correction network and the first gold standard is smaller than the preset error value, the first initial correction network has converged, training ends, and the trained first correction network is obtained.
During training, the loss function of the first correction network is needed to constrain the sample extended projection data output by the first initial correction network to be consistent with the sample ideal channel accumulated sum data. That is, the sample extended projection data predicted by the first initial correction network should match the sample ideal channel accumulated sum data at the corresponding projection view angle and must not exceed their range.
As an example, referring to the training diagram of the first correction network shown in fig. 6, when the first initial correction network is trained with the first sample projection data, its learning progress is judged against the first gold standard. Meanwhile, the sample channel accumulated sum data corresponding to the sample extended projection data output by the first initial correction network are calculated, and it is checked whether they satisfy the ideal channel accumulated sum data. In addition, the loss of the first initial correction network is calculated in real time during training, and its network parameters are adjusted according to the loss, so that the sample channel accumulated sum data of the sample extended projection data it outputs approach the first gold standard.
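The convergence check described above can be written down directly; the function below (names hypothetical) compares the channel accumulated sums of the sample extended projection data against the first gold standard at every projection view angle:

```python
import numpy as np

def has_converged(sample_extended, gold_standard_sums, preset_error=1e-2):
    """True when, at every projection view angle, the channel accumulated
    sum of the sample extended projection data is within the preset error
    value of the first gold standard.

    sample_extended: (n_views, n_channels); gold_standard_sums: (n_views,).
    """
    sums = sample_extended.sum(axis=1)   # sample channel accumulated sums
    return bool(np.all(np.abs(sums - gold_standard_sums) < preset_error))
```

Training stops when this predicate first holds, at which point the first initial correction network is taken as the trained first correction network.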
In this embodiment, the loss function of the first correction network is constructed from the projection data of the plurality of channels at each projection view angle, and the first initial correction network is then trained with the first sample projection data and this loss function to obtain the first correction network. When the trained first correction network is applied, it can therefore predict, from the input truncation artifact data, the ideal channel accumulated sum data of each channel at each projection view angle with the truncation artifact removed, and predict the complete projection data of the truncated region from the ideal channel accumulated sum data to obtain the extended projection data. In this way, the truncation artifact data are corrected in the data domain by the first correction network, the complete projection data of the truncated region are obtained, and the completeness and accuracy of the projection data are ensured.
Based on the foregoing embodiments, in one embodiment, the second correction network may be merged into the first correction network as a sub-network or an intermediate layer of the first correction network, and is used to output the ideal channel accumulated sum data corresponding to the truncated region when the first correction network predicts the expanded projection data, so as to limit the prediction range of the expanded projection data.
As shown in fig. 7 (a), when the projection data in the truncated region are corrected in the conventional manner, the unacquired projection data are supplemented by ellipse fitting in the data domain to obtain the extended projection data. In some cases, however, the real image edge is not elliptical, so a neural network is needed to learn the data characteristics of the truncated region and thereby predict the variation trend of the projection data corresponding to the correct image edge.
Referring to fig. 7 (b), after the first correction network is trained by using the first sample projection data, the present application can predict the complete projection data corresponding to the truncated artifact data through the first correction network, i.e., determine the extended projection data that is more consistent with the actual information of the truncated region.
Referring to fig. 8, in one possible implementation, the first correction network predicts the extended projection data as follows: the truncation artifact data are extracted from the original projection data and input into the first correction network, which analyzes them and predicts the ideal channel accumulated sum data corresponding to the projection data of each channel at the truncated projection view angles in the ideal state. Meanwhile, based on the truncation artifact data, the complete projection data of each channel at the truncated projection view angles, i.e., the extended projection data, are predicted. The extended projection data are then corrected through the ideal channel accumulated sum data, so that the output of the first correction network is more accurate.
It should be noted that the ideal channel accumulated sum data are intermediate information of the first correction network: they may be output directly as an output item of the first correction network, or the network may output only the final extended projection data without the ideal channel accumulated sum data, which is not limited in this embodiment.
In this embodiment, the truncation artifact data are corrected through the trained first correction network to obtain the extended projection data, which improves the data processing efficiency; moreover, the extended projection data obtained by correction in the data domain are closer to the real situation of the truncated region of the scanned object, and the imaging quality is better.
In one embodiment, after determining the extended projection data corresponding to the truncation artifact in the original projection data through the first correction network, the complete projection data can be determined according to the original projection data and the extended projection data, so as to generate the medical image of the scanned object.
In this embodiment, as shown in fig. 9, generating a medical image of the scanned object from the original projection data and the extended projection data (i.e., step 230) may include the following steps:
Step 910: acquiring the target projection data after truncation artifact correction according to the original projection data and the extended projection data.
The target projection data are the projection data with the truncation artifact removed, and are used to generate the medical image of the scanned object.
In one possible implementation, step 910 may be implemented as follows: supplementing the extended projection data correspondingly into the original projection data to obtain corrected projection data; and performing a single correction or multiple iterative corrections on the corrected projection data to obtain the target projection data after truncation artifact correction.
If the error between the channel accumulated sum data at each projection view angle in the corrected projection data and the ideal channel accumulated sum data is smaller than the preset error value, correction ends and the target projection data are obtained. The ideal channel accumulated sum data represent, for each projection view angle in the original projection data, the accumulated sum of the projection data of all channels after the truncation artifact is removed.
As an example, the ideal channel accumulated sum data corresponding to the original projection data may be obtained through the second correction network: the original projection data are input into the second correction network, which predicts the ideal channel accumulated sum data at each projection view angle with the truncation artifact removed.
Further, the iterative correction of the corrected projection data may be implemented as follows: back-project the corrected projection data to obtain a first medical image with the truncation artifact removed, then project the first medical image back into the data domain by forward projection to obtain new corrected projection data, and correct the new corrected projection data again with the first correction network. The corrected projection data are then back-projected again to obtain a second medical image without the truncation artifact, and so on, repeating the iteration until the error between the channel accumulated sum data at each projection view angle in the corrected projection data and the ideal channel accumulated sum data is smaller than the preset error value, at which point the corrected projection data are taken as the target projection data.
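The iteration just described can be sketched as a loop over a back-projection / forward-projection pair, with the extension step passed in as a callable. Everything below is a hypothetical stand-in: the toy extension rule and the identity operators only exercise the loop's structure, not a real reconstruction:

```python
import numpy as np

def iterative_correction(raw, extend, backproject, project,
                         ideal_sums, preset_error=1e-3, max_iter=10):
    """Iterate: extend the truncated data, reconstruct, re-project, and
    re-extend, until the channel accumulated sums at every projection
    view angle are within the preset error value of the ideal sums."""
    corrected = extend(raw)
    for _ in range(max_iter):
        sums = corrected.sum(axis=1)
        if np.max(np.abs(sums - ideal_sums)) < preset_error:
            break                              # correction finished
        image = backproject(corrected)         # medical image, artifact reduced
        corrected = extend(project(image))     # back to the data domain
    return corrected                           # target projection data

# Toy run: fill each view's missing (zero) channels with the mean of its
# measured channels; back/forward projection are identity stand-ins here.
def toy_extend(s):
    out = s.copy()
    for row in out:
        missing = row == 0.0
        if missing.any():
            row[missing] = row[~missing].mean()
    return out

raw = np.ones((4, 8))
raw[0, :4] = 0.0                               # view 0 is truncated
target = iterative_correction(raw, toy_extend, lambda s: s, lambda s: s,
                              ideal_sums=np.full(4, 8.0))
```

In the application's setting, `extend` would be the first correction network, `ideal_sums` would come from the second correction network, and the back/forward projections would be real reconstruction operators.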
Step 920: and carrying out image reconstruction on the target projection data to obtain a medical image of the scanned object.
Specifically, the medical image of the scanned object can be obtained by back-projection reconstruction of the target projection data. The medical image is free of the truncation artifact, its tissue-structure information is complete and accurate, and its edge contours are clearer.
In this embodiment, after the extended projection data are correspondingly supplemented into the original projection data to obtain the corrected projection data, repeated iteration brings the corrected projection data closer to the true values. Image reconstruction from the target projection data obtained after repeated correction therefore yields a medical image of better quality that clearly and comprehensively reflects the real tissue-structure information of the scanned object.
Combining the foregoing method embodiments, in one embodiment, as shown in fig. 10, the present application further provides another truncation artifact correction method, again described as applied to the information processing apparatus in fig. 1, and including the following steps:
step 1010: truncation artifact data and ideal channel summation data are obtained from raw projection data of the scanned object.
The ideal channel accumulated sum data represent, for each projection view angle in the original projection data, the accumulated sum of the projection data of all channels after the truncation artifact is removed.
Further, the ideal channel accumulated sum data corresponding to the original projection data may be obtained by the second correction network trained in the foregoing embodiments.
Step 1020: and correcting the truncation artifact data through a first correction network to obtain extended projection data.
Step 1030: correspondingly supplementing the extended projection data into the original projection data to obtain corrected projection data.
Step 1040: performing iterative correction on the corrected projection data according to the ideal channel accumulated sum data to obtain the target projection data after truncation artifact correction.
In one possible implementation, step 1040 is implemented as follows: iteratively correct the corrected projection data, and end the iteration when the error between the channel accumulated sum data at each projection view angle in the corrected projection data and the ideal channel accumulated sum data is smaller than the preset error value, taking the corrected projection data as the target projection data.
Step 1050: and carrying out image reconstruction on the target projection data to obtain a medical image of the scanned object.
It should be noted that, in the steps of the truncation artifact correction method provided in this embodiment, the implementation principle and technical effect of the steps may refer to the steps in any of the above embodiments, and details are not described herein again.
In an embodiment, based on the first correction network and the second correction network that are constructed, specifically in the field of CT scanning, the present application further provides a CT image correction method, see fig. 11, which may be applied to a CT scanner and may also be applied to the information processing apparatus in fig. 1, including the following steps:
step 1110: acquiring CT projection data of a scanned object; the CT projection data includes projection data for each channel at least one projection view angle at which there is truncation.
The CT projection data are projection data obtained by a CT scanner scanning at least one part of the scanned object. Because the scanning field of view that the CT scanner can image is smaller than the scanned object, part of the scanned object may lie outside the scanning field of view during scanning, and the acquired CT projection data then include projection data at the truncation.
Step 1120: correcting the CT projection data through a first correction network to obtain target projection data; the portions of the target projection data corresponding to the projection view angles at which the CT projection data are truncated are compensated.
It should be noted that the first correction network may be constructed as in the embodiment corresponding to fig. 3 above; the constructed first correction network can predict the ideal variation trend of the channel accumulated sum data at each projection view angle and predict the complete projection data of each channel at each projection view angle.
Optionally, the first correction network may employ the second correction network as a sub-network or an intermediate layer, and predict the ideal channel accumulated sum data corresponding to the CT projection data through the second correction network.
In one possible implementation, the correction principle of the first correction network is as follows: the CT projection data are input into the first correction network, which analyzes them and predicts the ideal channel accumulated sum data corresponding to the projection data of each channel at each projection view angle. Meanwhile, based on the CT projection data, the complete projection data of each channel at the truncated projection view angles are predicted so as to compensate the projection data at the truncation. The compensated projection data are then corrected through the ideal channel accumulated sum data, and the first correction network finally outputs the target projection data.
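The three-part correction principle can be condensed into a sketch. The sum predictor passed in stands in for the second correction network, and the constant-fill compensation strategy, like all names here, is an illustrative assumption rather than the application's actual network:

```python
import numpy as np

def correct_ct_projection(ct_sino, truncated_mask, predict_ideal_sums):
    """1) predict ideal channel accumulated sums per view, 2) compensate
    the truncated channels, 3) use the ideal sums to fix the compensated
    part so each view matches its predicted sum exactly."""
    ideal = predict_ideal_sums(ct_sino)   # stand-in for the second network
    target = ct_sino.copy()
    for v in range(target.shape[0]):
        missing = truncated_mask[v]
        if missing.any():
            deficit = ideal[v] - target[v, ~missing].sum()
            target[v, missing] = deficit / missing.sum()
    return target

# Toy example: 3 views x 6 channels, view 1 missing its first two channels.
sino = np.ones((3, 6))
sino[1, :2] = 0.0
mask = np.zeros_like(sino, dtype=bool)
mask[1, :2] = True
out = correct_ct_projection(sino, mask, lambda s: np.full(s.shape[0], 6.0))
```

By construction, every view of the output matches its predicted ideal channel accumulated sum, which is the constraint the ideal sums impose on the compensated data.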
That is, the target projection data obtained in this step is complete projection data obtained by compensating the projection data at the truncation in the CT projection data.
Step 1130: a CT image of the scanned object is generated from the target projection data.
Specifically, the CT image of the scanned object can be obtained by back-projection reconstruction of the target projection data. The CT image generated from the target projection data is free of the truncation artifact; its tissue-structure information is complete and accurate, its edge contours are clearer, and its image quality is better.
In this embodiment, the constructed first correction network effectively compensates the projection data at the truncation in the CT projection data and outputs the corrected target projection data. Because the target projection data are complete data obtained after compensation, the CT image generated from them is of better quality. The data at the truncation are thus effectively compensated in the data domain by the first correction network, and the efficiency of truncation artifact correction is improved.
It should be understood that, although the steps in the flowcharts of the above embodiments are displayed sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to the order illustrated and may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different times, and whose execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the present application further provides a truncation artifact correction apparatus for implementing the above-mentioned truncation artifact correction method. The solution to the problem provided by the apparatus is similar to the solution described in the above method, so the specific limitations in one or more embodiments of the truncation artifact correction apparatus provided below can be referred to the limitations of the truncation artifact correction method in the above, and are not described herein again.
In one embodiment, as shown in fig. 12, there is provided a truncation artifact correction apparatus including: an acquisition module 1210, a correction module 1220, and a reconstruction module 1230, wherein:
an obtaining module 1210, configured to obtain truncation artifact data according to original projection data of a scanned object; the truncation artifact data is projection data of each channel under a projection view angle with truncation in the original projection data;
the correction module 1220 is configured to correct the truncation artifact data through a first correction network to obtain extended projection data;
a reconstruction module 1230, configured to generate a medical image of the scanned object from the original projection data and the extended projection data.
In one embodiment, the first correction network is constructed by:
acquiring a plurality of first sample projection data; each first sample projection data includes the projection data of each channel at at least one projection view angle at which there is truncation;
constructing a loss function of a first correction network according to the projection data of the plurality of channels at each projection view angle in the plurality of first sample projection data;
and training the first initial correction network according to the plurality of first sample projection data, a preset first gold standard and a loss function of the first correction network to obtain the first correction network.
In one embodiment, constructing the loss function of the first correction network according to the projection data of the plurality of channels at each projection view angle in the plurality of first sample projection data includes:
acquiring the sample ideal channel accumulated sum data after the truncation artifact is removed at each projection view angle according to the projection data of the plurality of channels at each projection view angle in the plurality of first sample projection data;
constructing the loss function of the first correction network from the plurality of sample ideal channel accumulated sum data.
In one embodiment, acquiring the sample ideal channel accumulated sum data after the truncation artifact is removed at each projection view angle according to the projection data of the plurality of channels at each projection view angle in the plurality of first sample projection data includes:
acquiring the initial channel accumulated sum data of the projection data of the plurality of channels at each projection view angle in the first sample projection data;
inputting the initial channel accumulated sum data into the second correction network to obtain the sample ideal channel accumulated sum data with the truncation artifact removed at each projection view angle.
In one embodiment, the construction of the second correction network includes:
acquiring a plurality of second sample projection data; each second sample projection data includes the projection data of each channel at at least one projection view angle at which there is truncation;
acquiring the sample channel accumulated sum data of the plurality of channels at each projection view angle according to the second sample projection data;
training the second initial correction network according to the plurality of sample channel accumulated sum data and a preset second gold standard to obtain the second correction network.
In one embodiment, the reconstructing module 1230 includes:
the data acquisition unit is used for acquiring target projection data after truncation artifact correction according to the original projection data and the extended projection data;
and the image reconstruction unit is used for carrying out image reconstruction on the target projection data to obtain a medical image of the scanned object.
In one embodiment, the data acquisition unit includes:
a supplement subunit, configured to correspondingly supplement the extended projection data into the original projection data to obtain corrected projection data;
a correction subunit, configured to perform a single correction or multiple iterative corrections on the corrected projection data to obtain the target projection data after truncation artifact correction.
The modules in the truncation artifact correction apparatus may be implemented in whole or in part by software, hardware, and a combination thereof. The modules may be embedded in a hardware form or independent from a processor in a computer device (for example, any information processing device shown in fig. 1), or may be stored in a memory in the computer device in a software form, so that the processor calls to execute operations corresponding to the above modules.
Similarly, for the above CT image correction method, the present application also provides a CT image correction apparatus. In one embodiment, as shown in fig. 13, the CT image correction apparatus 1300 includes: an acquisition module 1310, a correction module 1320, and a reconstruction module 1330, wherein:
an acquisition module 1310, configured to acquire CT projection data of a scanned object; the CT projection data include the projection data of each channel at at least one projection view angle at which there is truncation;
a correction module 1320, configured to correct the CT projection data through a first correction network to obtain target projection data; the portions of the target projection data corresponding to the projection view angles at which the CT projection data are truncated are compensated;
a reconstruction module 1330 configured to generate a CT image of the scanned object according to the target projection data.
It should be noted that the CT image correction apparatus solves the problem on a principle similar to that of the method described above; for specific limitations, reference may be made to the above description of the CT image correction method, and details are not repeated here.
In addition, all or part of the modules in the CT image correction apparatus may be implemented by software, by hardware, or by a combination thereof. The modules may be embedded, in hardware form, in or independent of a processor in a computer device (for example, any information processing device shown in fig. 1), or may be stored, in software form, in a memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, the internal structure of which may be as shown in FIG. 12. The computer device comprises a processor, a memory, a communication interface, a display screen and an input device which are connected through a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for communicating with an external terminal in a wired or wireless manner, and the wireless manner can be realized through Wi-Fi, an operator network, near field communication (NFC) or other technologies. The computer program, when executed by the processor, implements the truncation artifact correction method and the CT image correction method provided by the present application. The display unit of the computer device can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device can be a touch layer covering the display screen, a key, a trackball or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 12 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring truncation artifact data according to original projection data of a scanned object; the truncation artifact data are the projection data of each channel under a projection view angle with truncation in the original projection data;
correcting the truncation artifact data through a first correction network to obtain extended projection data;
and generating a medical image of the scanned object based on the original projection data and the extended projection data.
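As a minimal sketch only, the three steps above might look like the following, with `network` standing in for the trained first correction network (whose architecture is not specified in this excerpt). The truncation test used here, signal still present at an edge channel of the detector, is a common heuristic assumed for illustration rather than taken from the application:

```python
import numpy as np

def find_truncated_views(sinogram, threshold=1e-3):
    """Flag a view as truncated when signal remains at either edge
    channel of the detector (assumed heuristic)."""
    edges = np.maximum(np.abs(sinogram[:, 0]), np.abs(sinogram[:, -1]))
    return np.flatnonzero(edges > threshold)

def correct_truncation(sinogram, network):
    """Run the (hypothetical) first correction network on the
    truncated views only and merge the result back, yielding the
    target projection data."""
    views = find_truncated_views(sinogram)
    artifact_data = sinogram[views]        # truncation artifact data
    extended = network(artifact_data)      # network-corrected views
    target = sinogram.copy()
    target[views] = extended
    return target
```

Image reconstruction of the target projection data (for example by filtered back projection) would then yield the medical image; that final step is omitted here.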
In addition, when executing the computer program, the processor can also implement the following steps:
acquiring CT projection data of a scanned object; the CT projection data comprise projection data of each channel under at least one projection view angle with truncation;
correcting the CT projection data through a first correction network to obtain target projection data; in the target projection data, the portion corresponding to the part of the CT projection data that is truncated at the projection view angle is compensated;
a CT image of the scanned object is generated from the target projection data.
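The claims of this application also refer to "channel accumulated sum data" used in training the correction networks. As an illustrative sketch only (the exact definition is not given in this excerpt), such data can be computed as the per-view sum over all detector channels; for untruncated parallel-beam data this total is approximately constant across views, so a per-view deficit is a plausible indicator of how much the truncation removed:

```python
import numpy as np

def channel_accumulated_sum(sinogram):
    """Per-view sum over all detector channels. For ideal untruncated
    parallel-beam data the total attenuation integral is approximately
    the same at every view angle."""
    return sinogram.sum(axis=1)

def truncation_deficit(sinogram):
    """How far each view's channel sum falls below the largest view
    sum -- an assumed, simplified measure of truncation severity."""
    sums = channel_accumulated_sum(sinogram)
    return sums.max() - sums
```

A view whose sum falls below its neighbours' is the kind of view the second correction network would restore to the "ideal" accumulated sum.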
The implementation principle and technical effect of the computer device provided by the above embodiment are similar to those of the above method embodiment, and are not described herein again.
In one embodiment, a computer readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring truncation artifact data according to original projection data of a scanned object; the truncation artifact data are the projection data of each channel under a projection view angle with truncation in the original projection data;
correcting the truncation artifact data through a first correction network to obtain extended projection data;
and generating a medical image of the scanned object based on the original projection data and the extended projection data.
In addition, the computer program, when executed, may further implement the following steps:
acquiring CT projection data of a scanned object; the CT projection data comprise projection data of each channel under at least one projection view angle with truncation;
correcting the CT projection data through a first correction network to obtain target projection data; in the target projection data, the portion corresponding to the part of the CT projection data that is truncated at the projection view angle is compensated;
a CT image of the scanned object is generated from the target projection data.
The implementation principle and technical effect of the computer-readable storage medium provided by the above embodiments are similar to those of the above method embodiments, and are not described herein again.
In one embodiment, a computer program product is provided, the computer program product comprising a computer program that when executed by a processor performs the steps of:
acquiring truncation artifact data according to original projection data of a scanned object; the truncation artifact data are the projection data of each channel under a projection view angle with truncation in the original projection data;
correcting the truncation artifact data through a first correction network to obtain extended projection data;
and generating a medical image of the scanned object based on the original projection data and the extended projection data.
In addition, the computer program, when executed, may further implement the following steps:
acquiring CT projection data of a scanned object; the CT projection data comprise projection data of each channel under at least one projection view angle with truncation;
correcting the CT projection data through a first correction network to obtain target projection data; in the target projection data, the portion corresponding to the part of the CT projection data that is truncated at the projection view angle is compensated;
a CT image of the scanned object is generated from the target projection data.
The foregoing embodiments provide a computer program product, which has similar implementation principles and technical effects to those of the foregoing method embodiments, and will not be described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include random access memory (RAM) or an external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered to be within the scope of the present specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A truncation artifact correction method, the method comprising:
acquiring truncation artifact data according to original projection data of a scanned object; the truncation artifact data are the projection data of each channel under the projection view angle with truncation in the original projection data;
correcting the truncation artifact data through a first correction network to obtain extended projection data;
generating a medical image of the scanned object from the original projection data and the extended projection data.
2. The method of claim 1, wherein the first correction network is constructed by:
acquiring a plurality of first sample projection data; each first sample projection data comprises projection data of each channel under at least one projection view angle with truncation;
constructing a loss function of the first correction network according to the projection data of a plurality of channels under each projection view angle in the plurality of first sample projection data;
and training a first initial correction network according to the plurality of first sample projection data, a preset first gold standard and the loss function of the first correction network to obtain the first correction network.
3. The method of claim 2, wherein constructing the loss function of the first correction network according to the projection data of the plurality of channels under each projection view angle in the plurality of first sample projection data comprises:
acquiring sample ideal channel accumulated sum data, from which the truncation artifacts are removed, at each projection view angle according to the projection data of the plurality of channels under each projection view angle in the plurality of first sample projection data;
and constructing the loss function of the first correction network according to the plurality of sample ideal channel accumulated sum data.
4. The method of claim 3, wherein acquiring the sample ideal channel accumulated sum data, from which the truncation artifact is removed, at each projection view angle according to the projection data of the plurality of channels under each projection view angle in the plurality of first sample projection data comprises:
acquiring initial channel accumulated sum data of the projection data of the plurality of channels at each projection view angle according to the projection data of the plurality of channels under each projection view angle in the plurality of first sample projection data;
and inputting the initial channel accumulated sum data into a second correction network to obtain the sample ideal channel accumulated sum data from which the truncation artifact is removed at each projection view angle.
5. The method of claim 4, wherein the second calibration network is constructed by:
acquiring a plurality of second sample projection data; each second sample projection data comprises projection data of each channel under at least one projection view angle with truncation;
acquiring sample channel accumulated sum data of a plurality of channels under each projection view angle according to the plurality of second sample projection data;
and training a second initial correction network according to the plurality of sample channel accumulated sum data and a preset second gold standard to obtain the second correction network.
6. The method of any of claims 1-5, wherein generating a medical image of the scanned object from the original projection data and the extended projection data comprises:
acquiring target projection data after truncation artifact correction according to the original projection data and the extended projection data;
and performing image reconstruction on the target projection data to obtain the medical image of the scanned object.
7. The method of claim 6, wherein acquiring the target projection data after truncation artifact correction according to the original projection data and the extended projection data comprises:
correspondingly supplementing the extended projection data into the original projection data to obtain corrected projection data;
and performing a single correction or multiple iterative corrections on the corrected projection data to obtain the target projection data after the truncation artifact correction.
8. A method for CT image correction, the method comprising:
acquiring CT projection data of a scanned object; the CT projection data comprise projection data of each channel under at least one projection view angle with truncation;
correcting the CT projection data through a first correction network to obtain target projection data; in the target projection data, the portion corresponding to the part of the CT projection data that is truncated at the projection view angle is compensated;
generating a CT image of the scanned object from the target projection data.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 8.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN202210480772.4A 2022-05-05 2022-05-05 Truncation artifact correction method, CT image correction method, apparatus, and medium Pending CN114913259A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210480772.4A CN114913259A (en) 2022-05-05 2022-05-05 Truncation artifact correction method, CT image correction method, apparatus, and medium

Publications (1)

Publication Number Publication Date
CN114913259A true CN114913259A (en) 2022-08-16

Family

ID=82767063

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination