CN117058042A - Image correction method, device, electronic equipment and storage medium - Google Patents

Image correction method, device, electronic equipment and storage medium

Info

Publication number
CN117058042A
Authority
CN
China
Prior art keywords: image, data, attenuation, attenuation data, scan
Legal status
Pending
Application number
CN202311117690.4A
Other languages
Chinese (zh)
Inventor
李运达
李明
Current Assignee
Shenyang Zhihe Medical Technology Co ltd
Original Assignee
Shenyang Zhihe Medical Technology Co ltd
Application filed by Shenyang Zhihe Medical Technology Co ltd filed Critical Shenyang Zhihe Medical Technology Co ltd
Priority to CN202311117690.4A priority Critical patent/CN117058042A/en
Publication of CN117058042A publication Critical patent/CN117058042A/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10104 Positron emission tomography [PET]
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses an image correction method and device, an electronic device, and a storage medium. The image correction method comprises: acquiring a positioning image, wherein the positioning image is obtained by image acquisition of a scanned object and is used to determine the scanning range when the scanned object is scanned by a first scanning mode; determining initial attenuation data based on the positioning image using a trained deep learning model; and performing image correction on an image to be corrected based on the initial attenuation data, wherein the image to be corrected is obtained by processing scan data acquired by a second scanning mode. The invention reduces the error in the attenuation data and improves the image correction effect.

Description

Image correction method, device, electronic equipment and storage medium
Technical Field
The present invention relates to the technical fields of image processing and deep learning, and in particular to an image correction method, an image correction device, an electronic device, and a storage medium.
Background
In positron emission tomography (Positron Emission Tomography, PET), the photons generated when a radionuclide decays and undergoes an annihilation reaction are attenuated, absorbed, or Compton-scattered during propagation as data are acquired, changing the photon paths and degrading the imaging. To obtain PET images that meet clinical diagnostic requirements, attenuation correction (Attenuation Correction, AC) of the scan data is required. For example, the PET image reconstructed from the scanned PET data is biased, and it must be corrected using attenuation data. However, the attenuation data used for image correction in the related art has a large error, resulting in a poor image correction effect.
Disclosure of Invention
Embodiments of the present application aim to solve at least one of the technical problems in the related art to some extent. To this end, an object of an embodiment of the present application is to provide an image correction method, an apparatus, an electronic device, a storage medium, and a program product.
The embodiment of the application provides an image correction method, which comprises the following steps: acquiring a positioning image, wherein the positioning image is obtained by carrying out image acquisition on a scanned object, and the positioning image is used for determining a scanning range when the scanned object is scanned by a first scanning mode; determining initial attenuation data based on the positioning image using a trained deep learning model; and carrying out image correction on the image to be corrected based on the initial attenuation data, wherein the image to be corrected is obtained by processing the scanning data obtained by using a second scanning mode.
Another embodiment of the present application provides an image correction apparatus including: the device comprises an acquisition module, a determination module and a correction module. The acquisition module is used for acquiring a positioning image, wherein the positioning image is obtained by carrying out image acquisition on a scanned object, and the positioning image is used for determining a scanning range when the scanned object is scanned by a first scanning mode; a determination module for determining initial attenuation data based on the localization image using a trained deep learning model; and the correction module is used for carrying out image correction on the image to be corrected based on the initial attenuation data, wherein the image to be corrected is obtained by processing the scanning data obtained by the second scanning mode.
Another embodiment of the application provides an electronic device comprising a memory storing a computer program and a processor that implements the steps of the method according to any of the embodiments above when executing the computer program.
Another embodiment of the application provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the method according to any of the above embodiments.
Another embodiment of the application provides a computer program product comprising instructions which, when executed by a processor of a computer device, enable the computer device to perform the steps of the method according to any one of the embodiments above.
In the embodiment, the positioning image is acquired, the initial attenuation data is determined based on the positioning image by using the trained deep learning model, and the image to be corrected is subjected to image correction based on the initial attenuation data, so that the error of the attenuation data is reduced, and the image correction effect is improved.
Drawings
Fig. 1 is a schematic flow chart of an image correction method according to an embodiment of the present application;
Fig. 2 is a training schematic diagram of a deep learning model according to an embodiment of the present application;
Fig. 3 is a flow chart illustrating a method for acquiring attenuation data according to an embodiment of the present application;
Fig. 4 is a schematic diagram of an image correction device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present application and should not be construed as limiting the application.
In positron emission tomography (Positron Emission Tomography, PET), the photons generated when a radionuclide decays and undergoes an annihilation reaction are attenuated, absorbed, or Compton-scattered during propagation as data are acquired, changing the photon paths and degrading the imaging. To obtain PET images that meet clinical diagnostic requirements, attenuation correction (Attenuation Correction, AC) of the scan data is required. For example, the PET image reconstructed from the scanned PET data is biased, and it must be corrected using attenuation data. However, the attenuation data used for image correction in the related art has a large error, resulting in a poor image correction effect.
To obtain a PET image that meets clinical diagnostic requirements, accurate attenuation correction of the scan data is a necessary computational step, and it is important for subsequent correction, quantitative accuracy, and the like.
In one example, when correcting PET images, the PET system is typically paired with another imaging modality system to obtain anatomical images, including computed tomography (Computed Tomography, CT), magnetic resonance imaging (Magnetic Resonance Imaging, MRI), and the like. The tissue density distribution of the scanned object is calculated from these images, from which an estimate of the attenuation coefficient map at a photon energy of 511 keV is computed; this estimate can be used in PET image reconstruction to realize attenuation correction and scatter correction (Scatter Correction, SC).
However, when data are collected through a multi-modality system, the following factors come into play:
(1) Longer scan time: a CT scan or additional MRI sequence scans lengthen the overall acquisition;
(2) For a PET/CT system, the PET and CT images are not spatially matched because they are acquired at different time points and over different durations, which causes correction artifacts and other errors. Different acquisition time points introduce errors between the two systems through intentional or unintentional body movement of the scanned object and motion of internal organs: a CT scan can be completed in a short time, but scanning each bed position with the PET system usually takes longer than the CT scan. Different acquisition durations introduce errors between the two systems through the heartbeat and respiratory motion of the scanned object;
(3) CT scanning introduces an additional radiation dose. In practical clinical applications, PET images acquired at multiple time points may be needed to better diagnose whether lesions in the scanned subject are benign or malignant. However, if a CT scan is performed each time PET data are acquired at multiple time points to ensure accurate attenuation and scatter correction, the additional radiation dose to the scanned object increases greatly;
(4) The radial scanning range of the PET system is generally larger than that of the other modality systems. When the arm or another body part of a heavy scanned object, or of one positioned unusually for other reasons, extends beyond the CT or MRI scanning range, the calculated attenuation coefficient map is truncated, producing correction artifact errors;
(5) For PET/MR systems, the attenuation map derived from the Dixon sequence may be used directly, but such a derived attenuation map is not very accurate.
In one example, an initial linear attenuation coefficient map distribution may be generated from data of another modality (CT or MRI), modeled with prior knowledge of the linear attenuation coefficients; an objective function over the attenuation coefficients is constructed and then iterated to obtain an attenuation coefficient estimate, and the PET data are corrected with these coefficients to solve the attenuation artifact problem. However, this approach requires computing the attenuation coefficient map from images of another modality (CT or MRI) as the initial input to the iterative calculation, which entails additional CT radiation or additional MRI sequence scans.
In one example, a positioning image of a scanned object, i.e. an image obtained before the scan starts that is used to position the scanning range, and a first PET image are acquired. The scout image and the first PET image are input into a deep learning model to obtain a CT scan image of the scanned object, and the PET data are attenuation-corrected with this CT scan image to obtain an attenuation-corrected PET image. However, this method feeds the scout image and the first PET image directly into the deep learning model to obtain a CT image and uses that CT image directly for attenuation correction, so correction errors are introduced because the CT and PET do not match completely due to movement of the scanned object, respiration, cardiac motion, and the like. In addition, the model inputs comprise a positioning image and a first PET image, which belong to different modalities and may deviate from each other, so the CT scan image output by the model does not match the first PET image, affecting the accuracy of the attenuation correction.
In one example, a non-attenuation-corrected PET image is first acquired and passed through a pre-trained deep learning network to estimate an initial linear attenuation coefficient; then, based on a scanning-bed position parameter and the initial linear attenuation coefficient, a final attenuation coefficient map is obtained by iterating a constructed objective function, and the PET data are attenuation-corrected with this final map. However, estimating the initial attenuation coefficient from the non-attenuation-corrected PET image and iterating with added scanning-bed information cannot accurately estimate the attenuation of anything other than the scanned object, because besides the scanning bed board there is also attenuation from items such as the patient's clothes, worn metal, and the scanning mattress. In addition, the PET image is a superposition over many breaths, so respiratory motion produces motion artifacts at small lesions, the diaphragm, and the like; the estimated attenuation map distribution is therefore also inaccurate, and image correction based on it introduces correction errors.
In one example, background radiation data are acquired at the same time as the PET system collects data, an initial background-radiation attenuation map is computed from them, a final attenuation map is obtained through a trained model, and the PET scan data are attenuation-corrected to obtain the final PET image. However, because the background-radiation counts are relatively low and insensitive to movement of the scanned object, the resulting attenuation coefficient map is biased and the attenuation correction accuracy is relatively low.
In view of this, it is highly desirable to obtain attenuation coefficient maps (attenuation data) at a lower radiation dose, more conveniently, and with wider applicability, while still ensuring an accurate correction result, and to use such attenuation data for image correction. Accordingly, embodiments of the present application provide an optimized image correction method.
Fig. 1 is a flowchart of an image correction method according to an embodiment of the present application.
As shown in fig. 1, the image correction method 100 provided in the embodiment of the present application includes steps S110 to S130, for example.
Step S110, a positioning image is acquired.
Step S120, determining initial attenuation data based on the localization image using the trained deep learning model.
Step S130, performing image correction on the image to be corrected based on the initial attenuation data.
The positioning image is obtained by image acquisition of a scanned object and is used to determine the scanning range when the scanned object is scanned by a first scanning mode. For example, when scanning with the first scanning mode, the scanning range differs for different objects. Therefore, before a given object is scanned, a positioning image is first acquired; it indicates the positions of the feet, head, and so on of the scanned object, which mark the scanning limits. The scanning range of the scanned object is derived from the positioning image so that the determined range covers the whole object, preventing body parts from being cut off by an incomplete scan. The first scanning mode includes, for example, scanning by a PET/CT system, a PET/MR system, or the like.
For a PET/CT system, the positioning image may be a digital radiograph obtained by an initial scan before the PET/CT scan starts, which the operator can use to position the scanning range of the scanned object. The scout image may be a digital photograph taken by a camera, but is more often a two-dimensional image acquired by a CT scan, also known as a scout view or topogram. For a PET/MR system, similarly, a scout scan of three whole-body planes is performed before the PET/MR scan starts to determine the scanning range, and the image obtained by this scout scan is the positioning image.
After the positioning image is obtained, it is input into a trained deep learning model to determine initial attenuation data, including an attenuation coefficient map. Image correction is then performed on an image to be corrected, which is obtained by processing the scan data acquired by a second scanning mode. The second scanning mode includes, for example, scanning by a PET system: after the PET system scans and produces PET scan data, the data are reconstructed into the image to be corrected, which includes an uncorrected PET image.
It can be appreciated that the embodiment of the application predicts the initial attenuation data by inputting the positioning image into the trained deep learning model. The predicted initial attenuation data are more accurate, so correcting the image to be corrected with them improves the image correction effect; moreover, acquiring the positioning image does not require a long scan of the scanned object, which reduces the radiation dose applied to it.
By way of example, the deep learning model includes, for example, a convolutional neural network (Convolutional Neural Network, CNN), a generative adversarial network (Generative Adversarial Network, GAN), and the like. For ease of understanding, the embodiments of the present application are illustrated with a convolutional neural network, as shown in Fig. 2.
Fig. 2 is a training schematic diagram of a deep learning model according to an embodiment of the present application.
As shown in Fig. 2, taking the deep learning model to be a convolutional neural network as an example, the network comprises, for example, a plurality of convolutional layers and a plurality of activation functions. The positioning sample image is a sample for training the model and includes a positioning image. A positioning sample image used for model training is input into the deep learning model to be trained, first attenuation data are output, and the model parameters are updated based on the error between the first attenuation data and first reference attenuation data. The first attenuation data are the prediction output by the model during training, the first reference attenuation data are the label, and the model parameters are updated by backpropagating the error between prediction and label.
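As an illustration of such a network, the following is a minimal sketch of a scout-image-to-attenuation-map model, assuming PyTorch and a plain fully convolutional encoder-decoder; the layer widths, depth, and single-channel input are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of a CNN that maps a positioning image to an attenuation
# coefficient map. Architecture details are assumed for illustration only.
import torch
import torch.nn as nn

class AttenuationNet(nn.Module):
    def __init__(self, in_channels: int = 1, out_channels: int = 1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, scout: torch.Tensor) -> torch.Tensor:
        # scout: (N, 1, H, W) positioning image; returns a predicted
        # attenuation coefficient map of the same spatial size.
        return self.decoder(self.encoder(scout))
```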
The positioning sample image used as a training sample may be a scout view or positioning image of a volunteer (scanned object) acquired by a PET/CT or PET/MR system, and the first reference attenuation data used as the label may be obtained by processing images from another scanning mode (another modality). Because the scout view or positioning image includes the attenuation information of the scanning bed, the mattress, articles worn by the volunteer, and the like, and the first reference attenuation data obtained from the other-modality data also include this information, the first attenuation data (attenuation map) predicted from the scout view or positioning image likewise include it; this completeness of the attenuation information makes the subsequent correction result relatively more accurate.
After model training, the model needs to be tested. For example, a positioning sample image for testing is input into the deep learning model, second attenuation data are output, and a test result for the model is determined based on the error between the second attenuation data and second reference attenuation data. The second attenuation data are analogous to the first attenuation data, and the second reference attenuation data to the first reference attenuation data. If the test result shows that the model predictions meet the requirements, training can stop, yielding the trained model; otherwise, training continues.
In the model training and testing stages, data are first scanned by a PET/CT or PET/MR system to obtain a scout view or positioning image of a volunteer; the other-modality system is scanned as well, and the corresponding attenuation coefficient map is computed as the first and second reference attenuation data. Multiple groups of data scanned in this way form the required training and test sets. Training and testing on the scout view or positioning image and the corresponding attenuation coefficient maps (the first and second reference attenuation data) teaches the model a feature representation of the "data conversion" from positioning image to attenuation coefficient map, through which the attenuation data can be obtained.
In the model training stage, the model input is a scout view or positioning image of a volunteer acquired by a PET/CT or PET/MR system, and the corresponding label is the attenuation coefficient map (first reference attenuation data) computed from the other-modality images of the same scan. After a training sample is input into the model, the currently predicted first attenuation data are obtained through the model's forward computation. The error between the predicted first attenuation data and the real result (the first reference attenuation data) is then backpropagated to compute the gradient of each parameter in the model. Finally, each network parameter is updated by stochastic gradient descent, and training iterates until the model error no longer decreases and a stable state is reached.
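A sketch of this loop under the same PyTorch assumptions as the network sketch above; the L1 loss and learning rate are arbitrary illustrative choices, and `AttenuationNet` is the hypothetical model defined earlier.

```python
# One training iteration: forward pass, error against the reference
# attenuation map (the label), backpropagation, SGD parameter update.
import torch

model = AttenuationNet()  # hypothetical network from the sketch above
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = torch.nn.L1Loss()

def train_step(scout_batch: torch.Tensor, ref_mu_batch: torch.Tensor) -> float:
    optimizer.zero_grad()
    pred_mu = model(scout_batch)           # predicted first attenuation data
    loss = loss_fn(pred_mu, ref_mu_batch)  # error vs. first reference attenuation data
    loss.backward()                        # backpropagate to get parameter gradients
    optimizer.step()                       # stochastic gradient descent update
    return loss.item()
```

Training would call `train_step` over the training set until the loss stops decreasing, as described above.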
In the model testing stage, a scout view or positioning image of a volunteer acquired by the PET/CT or PET/MR system is input into the trained model as a test sample, and the forward computation produces the final prediction (second attenuation data). A test result for the model is determined from the error between the second attenuation data and the second reference attenuation data, so as to decide whether the model accuracy meets the requirement.
After the deep learning model is trained, it can be used to predict initial attenuation data from the positioning image. It will be appreciated that once the initial attenuation data are predicted by the trained model, subsequent calculations use them directly, without acquiring additional CT images or a dedicated MRI image sequence to estimate the attenuation data; this reduces the radiation dose applied to the scanned object or shortens the acquisition time.
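Prediction is then a single forward pass; a minimal sketch reusing the hypothetical `AttenuationNet` above, assuming the positioning image is a single-channel tensor.

```python
import torch

def predict_initial_attenuation(model: torch.nn.Module,
                                scout: torch.Tensor) -> torch.Tensor:
    """Predict initial attenuation data from a (1, H, W) positioning image."""
    model.eval()
    with torch.no_grad():
        return model(scout.unsqueeze(0))  # add a batch dimension before inference
```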
In one example, after the initial attenuation data is obtained by the trained deep learning model, the image to be corrected may be corrected directly based on the initial attenuation data. Alternatively, in the case where the accuracy of the initial attenuation data is not yet high enough, the initial attenuation data may be further processed and then subjected to image correction.
In one case, the image to be corrected obtained by the PET scan may cover only part of the scanned object's body; that is, the axial length of the PET scan is smaller than the axial length corresponding to the positioning image. Since the axial length of the initial attenuation data produced by the deep learning model matches that of the positioning image, the initial attenuation data are axially longer than the image to be corrected. The part of the initial attenuation data corresponding to the PET scan range can therefore be cut out, according to the subsequently determined PET scan range, for the following correction calculation.
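A small sketch of this axial cropping, assuming the attenuation data is a NumPy volume ordered (z, y, x) and that the PET axial range is known as slice indices; both are assumptions, not from the patent.

```python
import numpy as np

def crop_to_pet_axial_range(mu_0: np.ndarray, z_start: int, z_stop: int) -> np.ndarray:
    # Keep only the axial slices of the initial attenuation data that
    # fall inside the PET scan range.
    return mu_0[z_start:z_stop]
```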
In another case, the radial scan field of view of CT or MRI is typically smaller than the radial field of view of PET; that is, the scan field of the first scanning mode above is smaller than that of the second. When a heavy scanned object, or a body part (such as an arm or the trunk) positioned unusually because of illness or other reasons, extends beyond the radial CT or MRI field of view, the initial attenuation data generated from the other-modality (CT or MRI) positioning image are biased by truncation. The initial attenuation data are then truncated relative to the image to be corrected: their radial extent is smaller, part of the data is missing, and they can only correct a local region of the image to be corrected. It follows that using the initial attenuation data obtained from the deep learning model on the CT or MRI positioning image directly in subsequent calculations would introduce additional errors. To solve this problem, the initial attenuation data output by the deep learning model must be compensated.
For example, compensation correction is performed on the initial attenuation data based on the reference image to obtain compensated attenuation data, and then image correction is performed on the image to be corrected based on the compensated attenuation data.
Illustratively, the reference image is obtained by processing the scan data acquired by the second scanning mode. The reference image is, for example, a PET image and can serve as prior information for compensating the initial attenuation data. It may be a PET image without attenuation correction, an intermediate attenuation-corrected image, or the like; the embodiment of the present application is described taking a non-attenuation-corrected PET image as the reference image. The information contained in the reference image is generally consistent with that in the image to be corrected, so compensating the initial attenuation data based on the reference image yields compensated attenuation data that are free of truncation error and better matched to the image to be corrected; correcting the image with the compensated attenuation data therefore improves the correction accuracy.
How the compensation correction is performed on the initial attenuation data will be specifically described below.
For example, the object region where the scanned object is located is determined from the reference image; that is, the contour of the scanned object is found in the reference image, and the region it encloses is the object region. Then, based on the scan-field data of the first scanning mode, an inner peripheral region and an outer peripheral region of the object region are determined: the inner peripheral region lies within the scan field of the first scanning mode, and the outer peripheral region lies outside it, i.e. the outer peripheral region is the image region occupied by the body parts of the scanned object that extend beyond the radial CT or MRI scan field. The scan field of the first scanning mode is the radial scan field of the CT or MRI; in other words, the scout image used to generate the initial attenuation data is acquired within the radial CT or MRI field of view.
When the region size of the outer peripheral region exceeds a preset region size, the values in the initial attenuation data corresponding to the outer peripheral region are compensated to obtain the compensated attenuation data. For example, those values are replaced with a preset initial attenuation value. It can be understood that when the outer peripheral region exceeds the preset size, a body part of the scanned object in the reference image extends beyond the radial CT or MRI field of view, which indirectly indicates that the initial attenuation data are truncated: the current initial attenuation data cannot correct the part of the image to be corrected that lies beyond that field of view. In one case, the initial attenuation data may be an attenuation image whose size is smaller than that of the image to be corrected. During compensation, the attenuation image may be expanded to the same size as the image to be corrected, the expanded part containing the outer peripheral region with null or zero pixel values; the null or zero values corresponding to the outer peripheral region are then replaced with the preset initial attenuation value. The preset initial attenuation value may be a soft-tissue attenuation value, such as that of human soft tissue. It should be appreciated that other preset attenuation values may also be used, chosen to suit the actual application.
For example, let the reference image be P_PET and the initial attenuation data (initial attenuation image) obtained by the deep learning model be μ_0. The reference image P_PET is processed by threshold segmentation or the like to obtain a mask image P_mask. Since the imaging field of view fFOV of the CT or MRI system is known in advance (fFOV is the scan field of the first scanning mode), it can be determined whether the pixels of P_PET corresponding to the scanned object lie outside fFOV, and thus whether the initial attenuation data are truncated can be judged automatically.
As shown in formula (1), the reference image P_PET is threshold-segmented with a preset pixel threshold fThreshold to obtain its mask image P_mask:

$$P_{mask}(x,y,z)=\begin{cases}1, & P_{PET}(x,y,z)>fThreshold\\ 0, & P_{PET}(x,y,z)\le fThreshold\end{cases}\tag{1}$$

The mask image P_mask contains a first pixel value 1 and a second pixel value 0: the first pixel value 1 indicates that the corresponding pixel of the reference image exceeds the preset pixel threshold fThreshold, and the second pixel value 0 indicates that it is less than or equal to fThreshold. The pixel value here may be a gray value, and fThreshold a preset gray value.
As shown in formula (2), among the pixels of P_mask with the first pixel value 1, the pixels belonging to the outer peripheral region, i.e. those outside the field of view fFOV, are counted to obtain the pixel number f_sum of the outer peripheral region:

$$f_{sum}=\sum_{x,y,z} P_{mask}(x,y,z)\cdot\mathbf{1}\!\left[\sqrt{(x-xcenter)^{2}+(y-ycenter)^{2}}>\tfrac{fFOV}{2}\right]\tag{2}$$

where x, y, and z are the coordinates (positions) of a pixel in the three directions and (xcenter, ycenter, zcenter) denotes the center of the field of view fFOV; for the radial field, only the in-plane distance from (xcenter, ycenter) matters.
It can be seen from formulas (1) and (2) that the region formed by the pixels of P_PET whose values exceed the preset pixel threshold fThreshold is taken as the object region; fThreshold serves to distinguish the pixels of P_PET belonging to the scanned object from those belonging to the surroundings, i.e. to segment the scanned object from the environment. The object region comprises the inner and outer peripheral regions, and when the size of the outer peripheral region exceeds the preset region size, a body part of the scanned object in P_PET extends beyond the radial CT or MRI field of view fFOV, indirectly indicating that the initial attenuation data are truncated. In this embodiment, the preset region size may be expressed as a preset number of pixels; that is, the outer peripheral region exceeds the preset size when its pixel count f_sum exceeds the preset pixel number.
When the pixel count f_sum of the outer peripheral region exceeds a certain preset threshold, the actual body of the scanned object can be considered to extend beyond the imaging field of view fFOV of the other-modality system (CT or MRI), and the initial attenuation data μ_0 must be compensated to eliminate the influence of truncation artifacts. In general, the truncated part may be assigned a preset initial attenuation value, such as the soft-tissue attenuation value μ_softtissue, giving the compensated attenuation data μ_1:

$$\mu_{1}(x,y,z)=\begin{cases}\mu_{0}(x,y,z), & \text{inside } fFOV\\ \mu_{softtissue}, & \text{outside } fFOV \text{ and } P_{mask}(x,y,z)=1\end{cases}\tag{3}$$

As formula (3) shows, the compensated attenuation data μ_1 retain the values of the initial attenuation data μ_0 where the field of view fFOV is not exceeded, and are assigned the preset initial attenuation value, such as the soft-tissue attenuation value μ_softtissue, outside fFOV.
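The following is a minimal sketch of formulas (1)-(3), assuming NumPy volumes ordered (z, y, x) and treating fFOV as a radial field of view of known radius around a known center; the threshold, geometry parameters, and soft-tissue value are illustrative assumptions.

```python
import numpy as np

MU_SOFT_TISSUE = 0.0096  # approx. soft tissue at 511 keV, per mm (assumed value)

def compensate_truncation(p_pet: np.ndarray, mu_0: np.ndarray,
                          f_threshold: float, fov_radius: float,
                          center_xy: tuple, min_outside_pixels: int):
    # Formula (1): binary mask of the scanned object.
    p_mask = (p_pet > f_threshold).astype(np.uint8)

    # Radial distance of every pixel position from the FOV center.
    zz, yy, xx = np.indices(p_pet.shape)
    r = np.sqrt((xx - center_xy[0]) ** 2 + (yy - center_xy[1]) ** 2)
    outside = r > fov_radius

    # Formula (2): number of object pixels lying outside the FOV.
    f_sum = int(np.sum(p_mask[outside]))

    # Formula (3): keep mu_0 inside the FOV; assign the preset value to the
    # truncated object pixels outside it.
    mu_1 = mu_0.copy()
    if f_sum > min_outside_pixels:
        mu_1[outside & (p_mask == 1)] = MU_SOFT_TISSUE
    return mu_1, f_sum
```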
After the compensated attenuation data μ_1 are obtained, the image to be corrected can be corrected directly based on μ_1, eliminating the truncation error introduced by the initial attenuation data μ_0 and improving the image correction effect.
In another example, the compensated attenuation data μ_1 compensate the truncation error to a certain extent, but in practice the body part exceeding the other-modality imaging field of view fFOV may be the trunk or an arm, which contains bone and other high-density tissue, so the accuracy of attenuation correction after assigning a preset initial attenuation value can be improved further. In addition, because the positioning image is acquired first, it is offset in time from the subsequent PET scan, and the acquisition times of the first scanning mode (CT scan) and the second scanning mode (PET scan) differ. Since the attenuation data used as training labels are usually derived from other-modality images (such as CT or MRI) during model training, those labels may also be biased, so the initial attenuation data output by the model still leave room for optimization. The compensated attenuation data μ_1 obtained by the above steps may therefore deviate somewhat in how well they match the image to be corrected, and corresponding measures are needed to further update μ_1 into updated attenuation data μ_new that meet the requirements; image correction is then performed on the image to be corrected based on μ_new.
Specifically, the compensated attenuation data μ_1 are updated based on the scan data (PET scan data) obtained by the second scanning mode, yielding the updated attenuation data μ_new.
For example, radioactivity distribution data are obtained from the scan data (PET scan data) of the second scanning mode, and the compensated attenuation data μ_1 are then updated based on both the scan data and the radioactivity distribution data.
More specifically, after the radioactivity distribution data are obtained from the scan data (PET scan data) of the second scanning mode, the compensated attenuation data μ_1 are first held fixed, and the radioactivity distribution data are updated a first time based on the scan data and μ_1. Then the once-updated radioactivity distribution data are held fixed, and μ_1 is updated a first time based on the scan data and the once-updated radioactivity distribution data.
If this update of the compensated attenuation data μ_1 does not satisfy the update condition, the iteration continues. For example, the once-updated μ_1 is held fixed and the radioactivity distribution data are updated a second time based on the scan data of the second scanning mode and the once-updated μ_1; then the twice-updated radioactivity distribution data are held fixed and μ_1 is updated a second time based on the scan data and the twice-updated radioactivity distribution data. The updates alternate in this way over multiple iterations until the update condition is satisfied, for example when the number of iterations exceeds a preset count or the deviation between two successive versions of μ_1 falls below a preset value; the μ_1 obtained by the last update is taken as the updated attenuation data μ_new.
When jointly updating the compensated attenuation data μ_1 with the scan data (PET data) obtained by the second scanning mode, alternative methods include, but are not limited to: the maximum likelihood reconstruction of attenuation and activity (MLAA) algorithm, the maximum likelihood attenuation correction factor reconstruction (MLACF) algorithm, the maximum likelihood transmission reconstruction (MLTR) algorithm, the maximum a posteriori (MAP) reconstruction algorithm, and the like.
The embodiment of the application is described taking the activity-and-attenuation maximum likelihood reconstruction (MLAA) algorithm as an example. With the compensated attenuation data μ_1 as the initial value, μ_1 is first held fixed and the radioactivity distribution data (activity image) are updated using μ_1 and the scan data (PET scan data) of the second scanning mode. Then the activity image is held fixed and μ_1 is updated. Alternating the two updates iteratively yields the updated attenuation data μ_new. The scan data obtained by the second scanning mode may also be called TOF (Time of Flight) PET scan data, acquired with a TOF-PET imaging technique.
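A structural sketch of this alternating scheme follows, assuming NumPy arrays; `update_activity` and `update_attenuation` are hypothetical placeholders for the MLAA sub-iterations, which the patent does not spell out, and the iteration cap and tolerance are arbitrary.

```python
import numpy as np

def mlaa_alternate(tof_data, mu_1, update_activity, update_attenuation,
                   max_iters=20, tol=1e-4):
    mu = mu_1.copy()               # compensated attenuation data as the initial value
    activity = np.ones_like(mu_1)  # uniform initial radioactivity distribution
    for _ in range(max_iters):
        activity = update_activity(tof_data, activity, mu)    # mu held fixed
        mu_next = update_attenuation(tof_data, activity, mu)  # activity held fixed
        if np.max(np.abs(mu_next - mu)) < tol:  # update condition satisfied
            return mu_next
        mu = mu_next
    return mu                      # updated attenuation data mu_new
```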
At this point the updated attenuation data μ_new match the image to be corrected (PET image) more closely: μ_new is free of truncation error, eliminates the metal artifacts caused by metal implants, buttons, and the like during the other-modality scan, and also resolves the mismatch of the organs and tissues inside the scanned object caused by different acquisition times and durations. Therefore μ_new can be regarded as the final target attenuation data μ_final and used to correct the image to be corrected, producing the final corrected image (PET reconstructed image).
Fig. 3 is a flowchart illustrating a method for acquiring attenuation data according to an embodiment of the present application.
As shown in Fig. 3, a positioning image is obtained by scanning with a PET/CT or PET/MR system, input into the trained deep learning model, and the initial attenuation data μ_0 are output. For both systems, the CT scout view or the whole-body three-plane scout scan image serves as the positioning image, which is then fed into the trained model to predict the corresponding initial attenuation data μ_0. If the axial length of the subsequent PET scan is smaller than that of the positioning image, the part of μ_0 matching the axial PET scan range can be cut out as the required initial attenuation data.
If the radial length of the PET scan is greater than that corresponding to the scout image, the initial attenuation data μ_0 contain a truncation error. In that case, TOF PET scan data (i.e. the scan data obtained by the second scanning mode mentioned above) are acquired with the PET system and reconstructed without attenuation correction to obtain a PET non-attenuation-corrected image (i.e. the reference image above). This image is thresholded to obtain a PET mask image (the mask image above), and based on the mask image the initial attenuation data μ_0 are truncation-compensated to obtain the compensated attenuation data μ_1. That is, by acquiring the whole contour of the scanned object's body, e.g. by thresholding, and determining whether any body part lies outside the other-modality imaging field of view, μ_0 is compensated automatically to reduce the error produced by truncation.
Next, using the activity-and-attenuation maximum likelihood reconstruction (MLAA) algorithm or another algorithm, the compensated attenuation data μ_1 are iteratively updated based on the TOF PET scan data to obtain the updated attenuation data μ_new, which can be used as the final target attenuation data μ_final for correcting the image to be corrected. That is, iterating μ_1 against the TOF PET scan data corrects the attenuation values in μ_1 that do not match the PET scan data, whether the mismatch is with the true values, caused by metal artifacts, or caused by different acquisition times and durations.
In another example, when the PET scan data have low statistics, i.e. little valid data, because of a short acquisition time, a low injected dose, too long a delay before scanning, or injection of a particular radiopharmaceutical, obtaining μ_new by iteratively updating μ_1 against the TOF PET scan data as above may introduce noise into μ_new and affect the accuracy of the subsequent attenuation correction. Therefore, to reduce the calculation error caused by low statistics, the embodiment of the application combines the updated attenuation data μ_new with the compensated attenuation data μ_1 after computing μ_new, improving the quality of the attenuation data and obtaining the final target attenuation data μ_final.
For example, the updated attenuation data μ_new and the compensated attenuation data μ_1 are fused to obtain the target attenuation data μ_final, specifically as shown in formula (4):

$$\mu_{final}(x,y,z)=\begin{cases}\mu_{new}(x,y,z), & \text{outside } fFOV\\ \mu_{1}(x,y,z), & \text{inside } fFOV,\ \mu_{new}\ \text{has no value}\\ \mu_{1}(x,y,z), & \text{inside } fFOV,\ |\mu_{new}-\mu_{1}|\le fDiff\\ \mu_{new}(x,y,z), & \text{inside } fFOV,\ |\mu_{new}-\mu_{1}|> fDiff\end{cases}\tag{4}$$

where fDiff denotes the preset difference value.
attenuating data μ from updated new A first attenuation value outside the fov of the scan field for the first scan mode (as in the first case of equation (4)). The first attenuation value can substantially reflect a specific attenuation value for a portion of the body tissue that is outside of the field of view.
A second attenuation value within the scan field fFOV of the first scanning mode is determined from the compensated attenuation data μ_1 (the second case of formula (4)). For some regions within the radial field of view, such as parts of the scanning bed board, the mattress, and the clothing of the scanned object, the compensated attenuation data μ_1 have specific values while the updated attenuation data μ_new have no specific value, or a value of zero, because these regions carry no metabolic information: during the iterative update that produces μ_new, their attenuation values cannot be acquired accurately from the TOF PET data and errors may be introduced. For attenuation data within the scan field fFOV it is therefore more suitable to adopt the compensated attenuation data μ_1.
Intermediate attenuation data are then obtained from the first and second attenuation values; for example, the two sets of values are merged and stitched together to form the intermediate attenuation data.
Next, the target attenuation data μ_final are obtained from the intermediate attenuation data. For example, a third attenuation value is determined from the updated attenuation data μ_new (the fourth case of formula (4)), where the image region corresponding to the third attenuation value lies within the scan field fFOV of the first scanning mode. The third attenuation value corresponds to a motion region of the scanned object, or is a value of μ_new whose difference from the corresponding value of the compensated attenuation data μ_1 exceeds the preset difference value. The corresponding attenuation value in the intermediate attenuation data (here a value of μ_1) is replaced with the third attenuation value, and the intermediate attenuation data obtained after the replacement are taken as the target attenuation data μ_final. In motion regions caused by respiratory motion and the heartbeat, such as shifts of the mediastinum and the dome of the liver, body movement of the scanned object, and intestinal peristalsis, μ_1 and μ_new are inconsistent. For such motion regions, or regions where μ_1 and μ_new disagree, the corresponding μ_1 values lack sufficient motion-attenuation information, so the application adopts the updated attenuation data μ_new, which contain this information, as the attenuation data for these regions; μ_new matches the TOF PET scan data better and reflects the true state of the scanned object during the PET scan, making the correction result more accurate.
Next, among the intermediate attenuation data within the scan field fFOV of the first scanning mode, the partial attenuation values corresponding to the non-moving regions of the scanned object are determined, or the partial attenuation values whose difference from the corresponding values of μ_new is less than or equal to the preset difference value; the image regions corresponding to these values also lie within the scan field fFOV, and these values keep the compensated attenuation data μ_1 (the third case of formula (4)). This is because, for regions of the scanned object's body with little motion, or where μ_1 and μ_new are essentially equal, there is no motion bias, and the updated attenuation data may have picked up noise from the iterative update; for such regions the original compensated attenuation data μ_1 are more suitable and reduce the correction error.
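A sketch of the fusion in formula (4), under the assumptions of the earlier sketches (NumPy volumes and a boolean `inside_fov` mask); the test `mu_new > 0` stands in for "μ_new has no value" in the second case, and `f_diff` is an assumed tuning threshold.

```python
import numpy as np

def fuse_attenuation(mu_1: np.ndarray, mu_new: np.ndarray,
                     inside_fov: np.ndarray, f_diff: float) -> np.ndarray:
    # Cases 1-3: take mu_new outside the FOV, start from mu_1 inside it
    # (this also keeps mu_1 where mu_new has no value, e.g. bed or mattress).
    mu_final = np.where(inside_fov, mu_1, mu_new)

    # Case 4: inside the FOV, where mu_new is defined and disagrees strongly
    # with mu_1 (motion regions), trust the iteratively updated values.
    motion = inside_fov & (mu_new > 0) & (np.abs(mu_new - mu_1) > f_diff)
    mu_final[motion] = mu_new[motion]
    return mu_final
```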
It can be appreciated that the embodiment of the application requires no conventional CT or MRI scan: the corresponding initial attenuation data are computed from a positioning image by the deep learning model; then, with the initial attenuation data as the initial input, compensation and iterative-update calculations are performed against the prior information (the PET non-attenuation-corrected image and the TOF PET scan data) to obtain target attenuation data that meet the correction accuracy requirement, and the image to be corrected (PET image) is attenuation- and scatter-corrected with these target attenuation data, yielding a clinical image that meets diagnostic requirements. Because no conventional CT or MRI scan is needed, the method suits lower-dose clinical applications such as shorter or longer scans, repeated scans at multiple time points, pediatric applications, theranostics, and other scenarios.
Fig. 4 is a schematic diagram of an image correction device according to an embodiment of the present application.
Referring to fig. 4, an image correction apparatus 400 according to an embodiment of the present application includes: an acquisition module 410, a determination module 420, and a correction module 430.
Illustratively, the acquiring module 410 is configured to acquire a positioning image, where the positioning image is obtained by image acquisition of a scanned object and is used to determine a scanning range when the scanned object is scanned by the first scanning mode.
Illustratively, the determination module 420 is configured to determine initial attenuation data based on the localization image using a trained deep learning model.
The correction module 430 is used for performing image correction on an image to be corrected based on the initial attenuation data, wherein the image to be corrected is obtained by processing the scan data obtained by the second scan mode.
It is to be understood that, regarding the specific description of the image correction apparatus 400, reference may be made to the description of the image correction method hereinabove.
Illustratively, image correction of the image to be corrected based on the initial attenuation data includes: compensating and correcting the initial attenuation data based on a reference image to obtain compensated attenuation data, wherein the reference image is obtained by processing scanning data obtained by a second scanning mode; based on the compensated attenuation data, image correction is performed on the image to be corrected.
Illustratively, compensating for the initial attenuation data based on the reference image, the deriving compensated attenuation data includes: determining an object area where a scanned object is located from a reference image; determining an inner peripheral region and an outer peripheral region in the object region based on the scanning view data of the first scanning mode, wherein the inner peripheral region is in the scanning view of the first scanning mode, and the outer peripheral region is outside the scanning view of the first scanning mode; and under the condition that the area size of the peripheral area exceeds the preset area size, compensating and correcting the numerical value corresponding to the peripheral area in the initial attenuation data to obtain compensated attenuation data.
Illustratively, compensating for values in the initial attenuation data corresponding to the peripheral region includes: and replacing the numerical value corresponding to the peripheral area in the initial attenuation data with the preset initial attenuation value.
Illustratively, determining an object region in which the scanned object is located from the reference image includes: and determining an area where pixels with pixel values larger than a preset pixel threshold value are located from the reference image as an object area, wherein the preset pixel threshold value is used for distinguishing pixels corresponding to the scanned object in the reference image from pixels corresponding to the environment.
The predetermined region size is illustratively characterized by a predetermined number of pixels, and the region size of the peripheral region exceeding the predetermined region size includes the number of pixels of the peripheral region exceeding the predetermined number of pixels.
Illustratively, the number of pixels of the peripheral region is determined based on: dividing the reference image based on a preset pixel threshold value to obtain a mask image corresponding to the reference image, wherein the mask image comprises a first pixel value and a second pixel value, the first pixel value represents that the pixel value of a corresponding pixel in the reference image is larger than the preset pixel threshold value, and the second pixel value represents that the pixel value of the corresponding pixel in the reference image is smaller than or equal to the preset pixel threshold value; and determining pixels belonging to the peripheral area from pixels corresponding to the first pixel value in the mask image so as to obtain the number of pixels of the peripheral area.
Illustratively, performing image correction on the image to be corrected based on the compensated attenuation data includes: updating the compensated attenuation data based on the scanning data obtained by the second scanning mode to obtain updated attenuation data; and performing image correction on the image to be corrected based on the updated attenuation data.
Illustratively, updating the compensated attenuation data based on the scanning data obtained by the second scanning mode includes: obtaining radioactivity distribution data based on the scanning data obtained by the second scanning mode; and updating the compensated attenuation data based on the scanning data obtained by the second scanning mode and the radioactivity distribution data.
Illustratively, updating the compensated attenuation data based on the scanning data obtained by the second scanning mode and the radioactivity distribution data includes: updating the radioactivity distribution data a first time based on the scanning data obtained by the second scanning mode and the compensated attenuation data; and updating the compensated attenuation data a first time based on the scanning data obtained by the second scanning mode and the first-updated radioactivity distribution data.
Illustratively, updating the compensated attenuation data based on the scanning data obtained by the second scanning mode and the radioactivity distribution data further includes: in a case that the update of the compensated attenuation data does not satisfy an update condition, updating the first-updated radioactivity distribution data a second time based on the scanning data obtained by the second scanning mode and the first-updated compensated attenuation data; and updating the first-updated compensated attenuation data a second time based on the scanning data obtained by the second scanning mode and the second-updated radioactivity distribution data.
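The alternating update of the radioactivity distribution and the attenuation data resembles a joint-estimation (MLAA-style) scheme. The loop below is a schematic sketch only: `update_activity` and `update_attenuation` are hypothetical stand-ins for the reconstruction operators, which this disclosure does not specify, and the convergence test is one possible reading of the update condition.

```python
# Schematic sketch of the alternating update; the operator functions are
# hypothetical stand-ins, and the stopping rule is an assumption.
import numpy as np

def alternate_updates(scan_data, mu, update_activity, update_attenuation,
                      max_iters=10, tol=1e-4):
    # First update of the radioactivity distribution from the scan data
    # and the compensated attenuation data.
    activity = update_activity(scan_data, mu)
    for _ in range(max_iters):
        # Update the attenuation data from the scan data and the most
        # recently updated radioactivity distribution.
        new_mu = update_attenuation(scan_data, activity, mu)
        if np.max(np.abs(new_mu - mu)) < tol:  # update condition satisfied
            return new_mu, activity
        mu = new_mu
        # Next (second, third, ...) update of the radioactivity distribution.
        activity = update_activity(scan_data, mu)
    return mu, activity
```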
Illustratively, performing image correction on the image to be corrected based on the updated attenuation data includes: performing data fusion on the updated attenuation data and the compensated attenuation data to obtain target attenuation data; and performing image correction on the image to be corrected based on the target attenuation data.
Illustratively, performing data fusion on the updated attenuation data and the compensated attenuation data to obtain the target attenuation data includes: determining, from the updated attenuation data, a first attenuation value outside the scan field of view of the first scanning mode; determining, from the compensated attenuation data, a second attenuation value within the scan field of view of the first scanning mode; obtaining intermediate attenuation data based on the first attenuation value and the second attenuation value; and obtaining the target attenuation data based on the intermediate attenuation data.
Illustratively, obtaining the target attenuation data based on the intermediate attenuation data includes: determining a third attenuation value from the updated attenuation data, wherein the third attenuation value corresponds to a motion region of the scanned object, or the difference between the third attenuation value and the corresponding attenuation value in the compensated attenuation data is greater than a preset difference, and the image region corresponding to the third attenuation value lies within the scan field of view of the first scanning mode; and replacing the corresponding attenuation value in the intermediate attenuation data with the third attenuation value, and taking the intermediate attenuation data obtained after the replacement as the target attenuation data.
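One way to read this fusion step is sketched below: outside the first scanning mode's field of view the updated values are kept, inside it the compensated values are kept, and third attenuation values (motion regions, or large differences, within the field of view) are then copied in from the updated data. The masks and the preset difference are assumptions for illustration.

```python
# Hedged sketch of the fusion step; all masks and the preset difference
# are illustrative assumptions.
import numpy as np

def fuse(updated_mu, compensated_mu, fov_mask, motion_mask, max_diff=0.002):
    # Intermediate data: first attenuation values (updated) outside the
    # scan field of view, second attenuation values (compensated) inside it.
    intermediate = np.where(fov_mask, compensated_mu, updated_mu)
    # Third attenuation values: inside the field of view, in motion regions
    # or where updated and compensated values differ by more than max_diff.
    large_diff = np.abs(updated_mu - compensated_mu) > max_diff
    third = fov_mask & (motion_mask | large_diff)
    target = intermediate.copy()
    target[third] = updated_mu[third]  # replace with third attenuation values
    return target
```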
Illustratively, the trained deep learning model is trained by: inputting a positioning sample image for training into the deep learning model to be trained and outputting first attenuation data; and updating model parameters of the deep learning model to be trained based on the error between the first attenuation data and first reference attenuation data.
Illustratively, the training of the deep learning model further includes: inputting a positioning sample image for testing into the deep learning model to be trained and outputting second attenuation data; and determining a test result for the deep learning model based on the error between the second attenuation data and second reference attenuation data.
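A minimal PyTorch-style sketch of the training and testing steps described above, under the assumptions that the model maps a positioning image to an attenuation map, that paired reference attenuation data are available, and that a mean-squared error stands in for the unspecified error measure; all names are illustrative.

```python
# Minimal sketch of the training/testing steps; the MSE loss and all
# names are assumptions, since the disclosure does not fix the error
# measure or the network architecture.
import torch
import torch.nn as nn

def train_step(model, optimizer, positioning_img, reference_mu):
    model.train()
    first_mu = model(positioning_img)   # first attenuation data
    loss = nn.functional.mse_loss(first_mu, reference_mu)
    optimizer.zero_grad()
    loss.backward()                     # error between output and reference
    optimizer.step()                    # update model parameters
    return loss.item()

@torch.no_grad()
def test_step(model, positioning_img, reference_mu):
    model.eval()
    second_mu = model(positioning_img)  # second attenuation data
    # The test result is the error against the second reference data.
    return nn.functional.mse_loss(second_mu, reference_mu).item()
```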
An embodiment of the present application provides an electronic device including a memory and a processor, the memory storing a computer program; when the processor executes the computer program, the steps of the method of any of the above embodiments are implemented.
An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the method of any of the above embodiments are implemented.
An embodiment of the present application provides a computer program product including instructions which, when executed by a processor of a computer device, enable the computer device to perform the steps of the method of any of the above embodiments.
It should be noted that the logic and/or steps represented in the flowcharts or otherwise described herein may be considered an ordered listing of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this disclosure, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Additionally, the computer-readable medium may even be paper or another suitable medium upon which the program is printed, as the program may be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
In the description of the present application, reference to the terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In the present application, the schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
In the description of the present application, it should be understood that terms such as "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", "axial", "radial", and "circumferential" indicate orientations or positional relationships based on those shown in the drawings; they are used merely for convenience in describing the present application and to simplify the description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the present application.
Furthermore, the terms "first," "second," and the like, as used in embodiments of the present application, are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any particular number of the indicated features. Thus, a feature defined by "first," "second," and the like may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, for example two, three, or four, unless otherwise explicitly defined in the embodiments.
In the present application, unless otherwise explicitly stated or limited in the embodiments, the terms "mounted," "connected," and "fixed" should be interpreted broadly; for example, a connection may be a fixed connection, a removable connection, or an integral connection; it may be a mechanical connection or an electrical connection; and it may be a direct connection, an indirect connection through an intermediate medium, a communication between the interiors of two elements, or an interaction between two elements. The specific meanings of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific embodiments.
In the present application, unless otherwise expressly stated or limited, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that they are in indirect contact via an intervening medium. Moreover, a first feature being "above," "over," or "on" a second feature may mean that the first feature is directly above or obliquely above the second feature, or simply that the first feature is at a higher level than the second feature. A first feature being "under," "below," or "beneath" a second feature may mean that the first feature is directly below or obliquely below the second feature, or simply that the first feature is at a lower level than the second feature.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and are not to be construed as limiting the application, and that changes, modifications, substitutions, and variations may be made to the above embodiments by those of ordinary skill in the art within the scope of the application.

Claims (19)

1. An image correction method, the method comprising:
acquiring a positioning image, wherein the positioning image is obtained by performing image acquisition on a scanned object, and the positioning image is used for determining a scanning range when the scanned object is scanned by a first scanning mode;
determining initial attenuation data based on the positioning image using a trained deep learning model; and
performing image correction on the image to be corrected based on the initial attenuation data, wherein the image to be corrected is obtained by processing scanning data obtained by a second scanning mode.
2. The method of claim 1, wherein performing image correction on the image to be corrected based on the initial attenuation data comprises:
performing compensation correction on the initial attenuation data based on a reference image to obtain compensated attenuation data, wherein the reference image is obtained by processing the scanning data obtained by the second scanning mode; and
performing image correction on the image to be corrected based on the compensated attenuation data.
3. The method of claim 2, wherein performing compensation correction on the initial attenuation data based on the reference image to obtain the compensated attenuation data comprises:
determining an object region where the scanned object is located from the reference image;
determining an inner peripheral region and an outer peripheral region within the object region based on scan field-of-view data of the first scanning mode, wherein the inner peripheral region lies within the scan field of view of the first scanning mode and the outer peripheral region lies outside the scan field of view of the first scanning mode; and
in a case that the region size of the outer peripheral region exceeds a preset region size, performing compensation correction on the values corresponding to the outer peripheral region in the initial attenuation data to obtain the compensated attenuation data.
4. The method of claim 3, wherein performing compensation correction on the values corresponding to the outer peripheral region in the initial attenuation data comprises:
replacing the values corresponding to the outer peripheral region in the initial attenuation data with a preset initial attenuation value.
5. The method of claim 3, wherein determining the object region where the scanned object is located from the reference image comprises:
determining, from the reference image, a region where pixels whose pixel values are greater than a preset pixel threshold are located as the object region, wherein the preset pixel threshold is used to distinguish pixels corresponding to the scanned object from pixels corresponding to the environment in the reference image.
6. The method of claim 5, wherein the preset region size is characterized by a preset number of pixels, and the region size of the outer peripheral region exceeding the preset region size comprises the number of pixels in the outer peripheral region exceeding the preset number of pixels.
7. The method of claim 6, wherein the number of pixels in the outer peripheral region is determined by:
segmenting the reference image based on the preset pixel threshold to obtain a mask image corresponding to the reference image, wherein the mask image comprises first pixel values and second pixel values, a first pixel value indicating that the pixel value of the corresponding pixel in the reference image is greater than the preset pixel threshold, and a second pixel value indicating that the pixel value of the corresponding pixel in the reference image is less than or equal to the preset pixel threshold; and
determining the pixels belonging to the outer peripheral region from the pixels corresponding to the first pixel value in the mask image, so as to obtain the number of pixels in the outer peripheral region.
8. The method according to any one of claims 2-7, wherein performing image correction on the image to be corrected based on the compensated attenuation data comprises:
updating the compensated attenuation data based on the scanning data obtained by the second scanning mode to obtain updated attenuation data; and
performing image correction on the image to be corrected based on the updated attenuation data.
9. The method of claim 8, wherein updating the compensated attenuation data based on the scanning data obtained by the second scanning mode comprises:
obtaining radioactivity distribution data based on the scanning data obtained by the second scanning mode; and
updating the compensated attenuation data based on the scanning data obtained by the second scanning mode and the radioactivity distribution data.
10. The method of claim 9, wherein updating the compensated attenuation data based on the scanning data obtained by the second scanning mode and the radioactivity distribution data comprises:
updating the radioactivity distribution data a first time based on the scanning data obtained by the second scanning mode and the compensated attenuation data; and
updating the compensated attenuation data a first time based on the scanning data obtained by the second scanning mode and the first-updated radioactivity distribution data.
11. The method of claim 10, wherein updating the compensated attenuation data based on the scanning data obtained by the second scanning mode and the radioactivity distribution data further comprises:
in a case that the update of the compensated attenuation data does not satisfy an update condition, updating the first-updated radioactivity distribution data a second time based on the scanning data obtained by the second scanning mode and the first-updated compensated attenuation data; and
updating the first-updated compensated attenuation data a second time based on the scanning data obtained by the second scanning mode and the second-updated radioactivity distribution data.
12. The method of claim 8, wherein performing image correction on the image to be corrected based on the updated attenuation data comprises:
performing data fusion on the updated attenuation data and the compensated attenuation data to obtain target attenuation data; and
performing image correction on the image to be corrected based on the target attenuation data.
13. The method of claim 12, wherein performing data fusion on the updated attenuation data and the compensated attenuation data to obtain the target attenuation data comprises:
determining, from the updated attenuation data, a first attenuation value outside the scan field of view of the first scanning mode;
determining, from the compensated attenuation data, a second attenuation value within the scan field of view of the first scanning mode;
obtaining intermediate attenuation data based on the first attenuation value and the second attenuation value; and
and obtaining the target attenuation data based on the intermediate attenuation data.
14. The method of claim 13, wherein obtaining the target attenuation data based on the intermediate attenuation data comprises:
determining a third attenuation value from the updated attenuation data, wherein the third attenuation value corresponds to a motion region of the scanned object, or the difference between the third attenuation value and the corresponding attenuation value in the compensated attenuation data is greater than a preset difference, and the image region corresponding to the third attenuation value lies within the scan field of view of the first scanning mode; and
replacing the corresponding attenuation value in the intermediate attenuation data with the third attenuation value, and taking the intermediate attenuation data obtained after the replacement as the target attenuation data.
15. The method of any one of claims 1-7, wherein the trained deep learning model is trained by:
inputting a positioning sample image for training into a deep learning model to be trained, and outputting first attenuation data; and
updating model parameters of the deep learning model to be trained based on an error between the first attenuation data and first reference attenuation data.
16. The method of claim 15, wherein the training of the deep learning model further comprises:
inputting the positioning sample image for testing into a deep learning model to be trained, and outputting second attenuation data; and
determining a test result for the deep learning model based on an error between the second attenuation data and second reference attenuation data.
17. An image correction apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a positioning image, wherein the positioning image is obtained by performing image acquisition on a scanned object, and the positioning image is used for determining a scanning range when the scanned object is scanned by a first scanning mode;
a determination module, configured to determine initial attenuation data based on the positioning image using a trained deep learning model; and
a correction module, configured to perform image correction on an image to be corrected based on the initial attenuation data, wherein the image to be corrected is obtained by processing the scanning data obtained by a second scanning mode.
18. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1-16.
19. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1-16.
Application: CN202311117690.4A | Filed: 2023-08-31 | Title: Image correction method, device, electronic equipment and storage medium | Status: Pending | Publication: CN117058042A

Priority Applications (1)

Application Number: CN202311117690.4A | Priority Date: 2023-08-31 | Filing Date: 2023-08-31 | Title: Image correction method, device, electronic equipment and storage medium
Publications (1)

Publication Number: CN117058042A | Publication Date: 2023-11-14

Family ID: 88657169

Family Applications (1)

Application Number: CN202311117690.4A | Status: Pending | Publication: CN117058042A | Title: Image correction method, device, electronic equipment and storage medium

Country Status (1): CN (CN117058042A)

Legal Events

Code: PB01 | Description: Publication
Code: SE01 | Description: Entry into force of request for substantive examination