CN107330949B - Artifact correction method and system - Google Patents

Artifact correction method and system

Info

Publication number: CN107330949B
Authority: CN (China)
Prior art keywords: projection data, phantom, correction, neural network, corrected
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN201710508431.2A
Other languages: Chinese (zh)
Other versions: CN107330949A (en)
Inventor: 刘炎炎
Current Assignee: Shanghai United Imaging Healthcare Co Ltd (the listed assignees may be inaccurate)
Original Assignee: Shanghai United Imaging Healthcare Co Ltd
Application filed by Shanghai United Imaging Healthcare Co Ltd
Priority to CN201710508431.2A
Publication of CN107330949A
Priority to US15/954,953 (US10977843B2)
Application granted
Publication of CN107330949B
Priority to US17/228,690 (US11908046B2)
Status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/003 - Reconstruction from projections, e.g. tomography
    • G06T11/005 - Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an artifact correction method and system. The method comprises: scanning a phantom and acquiring first projection data of the phantom through a detector; processing the first projection data of the phantom, or obtaining second projection data of the phantom through simulation; constructing a neural network model and training it with the first projection data and the second projection data of the phantom to obtain a correction coefficient; and correcting third projection data of a detection target to be corrected with the correction coefficient, then reconstructing a medical image of the detection target from the corrected projection data. Based on deep learning with neural networks, the invention uses training on large amounts of data to improve correction accuracy and achieve a better ring artifact correction effect.

Description

Artifact correction method and system
Technical Field
The present invention relates to the field of medical image processing, and in particular, to an artifact correction method and system.
Background
CT has been continuously improved since the 1970s: from the first to the fifth generation of scanners, scanning times have been shortened and image quality has been improved, and CT is now widely used in many areas of medical diagnosis. In CT, the cumulative attenuation coefficients (also called projections) of X-rays passing through a slice of the human body are measured in multiple directions; a computer then calculates the distribution of the X-ray attenuation coefficients over the whole cross section and displays it as an image to assist the clinical diagnosis of disease. CT provides higher soft-tissue resolution than plain X-ray imaging and avoids the overlap of three-dimensional structures, which gives it epoch-making significance for medical imaging. Throughout the development of CT, artifacts have always been an important limiting factor. An artifact is a component of the reconstructed image that does not exist in the scanned object, and is one of the main reasons a CT reconstruction can lose diagnostic value. The ring artifact is an important artifact affecting the quality of CT reconstructed images: it appears as a series of concentric rings, centered on the reconstruction center, whose gray levels differ from those of the surrounding pixels. Ring artifacts reduce the quality of the reconstructed image and hinder subsequent processing and quantitative analysis. They are formed mainly by inconsistencies in the response of the detector elements to the intensity of the radiation field, and may also be caused by deviations in the detector mounting position and by signal crosstalk during scanning.
Clinically, overlap of artifacts with pathological tissue easily blurs the image and causes false or missed diagnoses, so it is necessary to correct or reduce artifacts as much as possible. Studying ring artifacts and their solutions is one of the key topics and hot spots in current CT imaging.
At present, most ring artifact correction methods target a single factor that produces the artifact and apply a correction algorithm to the image to reduce or eliminate that factor's influence.
However, correcting ring artifacts in an image with such correction algorithms has the following problems:
1. not all, or even most, of the factors influencing ring artifacts can be corrected at the same time;
2. correction accuracy depends on the applicability of the correction algorithm, and an ill-suited algorithm may yield poor correction effect and accuracy.
Disclosure of Invention
In view of the problems with correcting ring artifacts in the image domain using a correction algorithm, the invention aims to improve the effect and accuracy of ring artifact correction and to solve the prior-art problem that a CT device cannot correct all, or even most, of the factors influencing ring artifacts at the same time.
In order to achieve the purpose of the invention, the technical scheme provided by the invention is as follows:
A method of artifact correction, the method comprising: scanning a phantom, and acquiring first projection data of the phantom through a detector; processing the first projection data of the phantom, or obtaining second projection data of the phantom through simulation; constructing a neural network model, and training the neural network model by using the first projection data and the second projection data of the phantom to obtain a correction coefficient; and correcting third projection data of a detection target to be corrected by using the correction coefficient, and reconstructing a medical image of the detection target by using the corrected projection data.
In the present invention, the training the neural network model by using the first projection data and the second projection data of the phantom to obtain a correction coefficient includes: and inputting a plurality of groups of first projection data and second projection data of the phantom under different scanning conditions into the neural network model to obtain correction coefficients corresponding to the plurality of groups of different scanning conditions.
In the present invention, the processing the first projection data of the phantom to obtain the second projection data of the phantom includes: reconstructing the first projection data of the phantom to obtain a first image of the phantom; smoothing the first image of the phantom to obtain a second image of the phantom; and forward projecting the second image of the phantom to obtain the second projection data of the phantom.
In the present invention, the scanning conditions include: the size of the phantom, the scanning protocol and/or the eccentric arrangement of the phantom.
In the present invention, the optimization goal of the deep learning neural network is that the correction coefficient desired to be output is close to the known correction coefficient.
In the present invention, the training the neural network model by using the first projection data and the second projection data of the phantom to obtain a correction coefficient further includes: and acquiring initial correction parameters, and initializing the neural network model by using the initial correction parameters.
In the present invention, the processing the first projection data of the phantom to obtain the second projection data of the phantom includes: and smoothing the first projection data of the phantom to obtain second projection data of the phantom.
In the present invention, the correcting the third projection data of the detection target to be corrected by using the correction coefficient includes: constructing a correction model according to the correction coefficient; and inputting the third projection data into the correction model to obtain corrected projection data.
In the present invention, the correction model comprises a first and/or second derivative of the detector response.
In order to achieve the above object, the present invention further provides another artifact correction method, including: obtaining at least one training sample, wherein each training sample comprises first projection data and second projection data of a phantom, the first projection data is obtained by scanning the phantom, and the second projection data is obtained based on the first projection data of the phantom; constructing a neural network model, and training the neural network model by using the at least one training sample to obtain a correction parameter; correcting the projection data of the scanning object to be corrected by using the correction parameters to obtain corrected projection data; and reconstructing a medical image of the scanned object using the corrected projection data.
To achieve the above object, the present invention further provides an artifact correction system, including: at least one device including a detector, the device configured to scan a phantom and acquire first projection data of the phantom via the detector; a processor coupled to the at least one device via a network, the processor operable to: obtaining second projection data of the phantom according to the first projection data of the phantom; constructing a neural network model, and training the neural network model by using the first projection data and the second projection data of the model body to obtain a correction coefficient; and correcting the third projection data of the detection target to be corrected by using the correction coefficient, and reconstructing the medical image of the detection target by using the corrected projection data.
Compared with the prior art, the invention has the following beneficial effects:
1. Multiple factors influencing image ring artifacts are considered comprehensively and corrected at the same time.
2. Based on deep learning with neural networks, training on large amounts of data improves correction accuracy and yields a better ring artifact correction effect.
Drawings
FIG. 1 is a schematic diagram of an artifact correction system provided in accordance with the present invention;
FIG. 2 is a schematic diagram of a processor provided in accordance with the present invention;
FIG. 3 is an exemplary flow chart for obtaining a parametric model provided in accordance with the present invention;
FIG. 4 is a schematic diagram of training a neural network model provided in accordance with the present invention;
FIG. 5 is a schematic diagram of a neural network model provided in accordance with the present invention;
FIG. 6 is an exemplary flow chart for acquiring ring artifact corrected data of a detection target according to the present invention;
FIG. 7 shows the ring artifact correction effect on a water phantom obtained by the method provided by the present invention; and
FIG. 8 shows the ring artifact correction effect on a human body image obtained by the method provided by the present invention.
FIG. 1 labels: 110 is a processor, 120 is a network, 130 is a device;
FIG. 2 labels: 210 is a data acquisition module, 220 is a data processing module, 230 is a training module, and 240 is an output module.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, specific embodiments accompanied with figures and examples are described in detail below.
Fig. 1 is a schematic diagram of an artifact correction system 100 provided in accordance with the present invention. Artifact correction system 100 may include a processor 110, a network 120, and a device 130. The processor 110 and the device 130 may be connected or in communication via the network 120. In some embodiments, the artifact correction system 100 may comprise an imaging system, and may perform artifact correction, including but not limited to ring artifact correction, on imaging data of the imaging system. The imaging system may be a single-modality imaging system, such as a digital subtraction angiography (DSA) system, a magnetic resonance imaging (MRI) system, a computed tomography angiography (CTA) system, a positron emission tomography (PET) system, a single photon emission computed tomography (SPECT) system, a computed tomography (CT) system, or a digital radiography (DR) system. In some embodiments, the imaging system may be a multi-modality imaging system, such as a PET-CT, PET-MRI, SPECT-PET, or DSA-MRI system. It should be noted that the artifact correction system 100 described below is for illustrative purposes only and does not limit the scope of the present invention.
The processor 110 is a device that processes received data and outputs the processing result. The processor 110 may receive data from the device 130 and, based on the data, obtain device configuration information as well as scan information. For example, the processor 110 may obtain information from a detector scan of the device 130 and further process the scan data, e.g., by reconstruction and correction. In some embodiments, the processor 110 may be a computer, a smart phone, a laptop computer, an intelligent medical instrument, or another electronic device with CPU functionality.
Network 120 may be any connection that connects two or more devices. For example, the network 120 may be a wired network or a wireless network. In some embodiments, the network 120 may be a single network or a combination of networks. For example, the network 120 may include one or a combination of local area networks, wide area networks, public networks, private networks, wireless local area networks, virtual networks, public telephone networks, intranets, Zigbee networks, near field communication networks, fiber optic networks, the internet, and the like. The modules or units in the artifact correction system 100 may interact with each other via the network 120. For example, the processor 110 may be connected to multiple medical devices via the network 120 and receive scan data from them simultaneously.
The device 130 may be a device for data acquisition of a detection target. The apparatus may include a scanning device, an imaging device, a patient bed device, and the like. In some embodiments, the device 130 may scan for detection targets and send the scan data to the processor 110 over the network 120. The apparatus 130 may include one or more of a CT device, an MRI device, a SPECT device, a PET device, a CTA device, a DR device, a PET-CT device, a PET-MRI device, a SPECT-PET device, and the like. In some embodiments, the scan data may be raw projection data of the detected object and sent to the processor 110 for further data processing and manipulation.
In some embodiments, processor 110 may communicate directly with device 130 without going through network 120. In some embodiments, the artifact correction system 100 may further include a display device and/or monitoring device.
Fig. 2 is a schematic diagram of a processor 110 provided in accordance with the present invention. The processor 110 may include a data acquisition module 210, a data processing module 220, a training module 230, and an output module 240. The connections between the modules within the system may be wired, wireless, or a combination of both. Any one of the modules may be local, remote, or a combination of the two. The correspondence between the modules may be one-to-one, or one-to-many.
The data acquisition module 210 may acquire scan information. The scan information may include raw projection data or raw image data of the detection target. In some embodiments, the data acquisition module 210 may scan the detection target through one or more detectors on the device 130 and obtain its scan information. The scan information is the data obtained by scanning everything in the detector's scanning field of view, including the patient bed, the patient, and other equipment in the field of view. In some embodiments, the data acquisition module 210 may acquire raw projection data of a water phantom through a detector and send it to the data processing module 220 for further processing.
The data processing module 220 may process the collected information. For example, the data processing module 220 may classify or combine the collected data, or perform operations such as analyzing, screening, classifying, filtering, and denoising on the data or signals. In some embodiments, the data processing module 220 may process the acquired raw projection data of the water phantom to obtain processed projection data. The raw and processed projection data of the water phantom can then serve as input to the neural network model and be sent to the training module 230 for further operation. In some embodiments, the data processing module 220 may display the data information on a user interface. In some embodiments, the data processing module 220 may include one or more interconnected processing units, which may be in communication with or connected to some or all of the modules or devices in the artifact correction apparatus.
The training module 230 may train the neural network model and output the training result. In some embodiments, the training module 230 trains the neural network model using multiple sets of water phantom projection data and obtains the training result through this training. The training result can then be applied to scans of new detection targets to correct their ring artifacts. In some embodiments, the neural network model may include, but is not limited to, a BP neural network, a feedback neural network, a radial basis neural network, a self-organizing neural network, a convolutional neural network, a perceptron neural network, a linear neural network, and the like.
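Among the model types listed above, a BP (back-propagation) network is the simplest to sketch from scratch. The following is a minimal illustration only, not the patent's implementation: the layer sizes, the random "projection-derived" inputs, and the target coefficients are all invented for demonstration.

```python
import numpy as np

# Toy BP network with one hidden layer, trained by plain gradient descent.
rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 2
W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))

def forward(x):
    h = np.tanh(x @ W1)          # hidden-layer activations
    return h, h @ W2             # hidden state and network output

# Pretend training pair: features -> "ideal" correction coefficients.
x = rng.normal(size=(16, n_in))
target = x @ rng.normal(size=(n_in, n_out))

def mse(y, t):
    return float(np.mean((y - t) ** 2))

mse_before = mse(forward(x)[1], target)

lr = 0.05
for _ in range(500):
    h, y = forward(x)
    err = (y - target) / len(x)          # gradient of the squared error
    gW2 = h.T @ err
    dh = (err @ W2.T) * (1.0 - h ** 2)   # back-propagate through tanh
    gW1 = x.T @ dh
    W2 -= lr * gW2
    W1 -= lr * gW1

mse_after = mse(forward(x)[1], target)
```

After training, the mean squared error between the network output and the target coefficients should have dropped substantially, which is the whole point of the reverse (back-propagated) parameter adjustment described later in the text.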
Output module 240 may output the training results of training module 230 from a processor or send information generated by artifact correction system 100 to a user. In some embodiments, the information output by the output module 240 may include text, tables, images, sounds, codes, and the like. For example, the output module 240 may output training results, reconstruction results, instructions, etc. of the artifact correction system 100. In some embodiments, output module 240 may also include one or more physical elements or devices, such as a touch display screen, LED indicator lights, speakers, microphones, and the like.
It will be appreciated by those skilled in the art that the above block diagrams are merely illustrative of the correction of the projection values and that other variations are possible. For example, in some embodiments, a storage module may be added. In addition, the information exchange and transmission modes between the modules are not unique.
Fig. 3 is an exemplary flowchart 300 for obtaining correction coefficients according to the present invention.
In step 310, the processor may acquire first projection data of a water phantom via a detector on the device. The first projection data may be the raw projection data of the water phantom. In some embodiments, the processor may scan several water phantoms of different sizes, or scan one water phantom placed at different eccentricities. After the raw projection data are acquired, they need to be processed.
In step 320, the processor may process the acquired sets of first projection data to obtain second projection data of the water phantom. The second projection data may be the ideal projection data of the water phantom. In some embodiments, the ideal projection data may be obtained by correcting the first projection data with a conventional correction method, or by physical simulation. Since the structure and form of the water phantom are fixed and known, the artifacts in the raw projection data obtained by scanning can be corrected by calculation or simulation to obtain ideal water phantom projection data.
In some embodiments, the second projection data of the water phantom may be obtained in several ways. For example: acquire the raw projection data, smooth the reconstructed image, and finally forward project it to obtain ideal projection data. Because the structure of the phantom is generally simple, the phantom can be modeled and the ideal projection data calculated from an analytic equation of the X-ray transmission process. Alternatively, the structure of the phantom may be digitized and the ideal projection data obtained by simulating the transmission paths of X-ray photons, e.g., by Monte Carlo simulation. The raw projection data may even be smoothed directly. It should be noted that the method of obtaining ideal projection data from the raw projection data of the water phantom is not limited to these examples.
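The last option above, smoothing the raw projection data directly, can be sketched in a few lines. This is a hedged toy example: the "defect" is a fixed per-channel offset (which would reconstruct to a ring), the smoothing is a plain moving average along the detector-channel axis, and all names are ours, not the patent's.

```python
import numpy as np

def smooth_channels(proj, size=5):
    """Moving average along the detector-channel axis (axis 1)."""
    kernel = np.ones(size) / size
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, proj)

# Toy sinogram: flat background plus one defective detector channel.
views, channels = 32, 64
proj = np.ones((views, channels))
proj[:, 20] += 0.5                 # inconsistent detector element

ideal = smooth_channels(proj)      # stand-in "second projection data"
residual = proj - ideal            # channel-wise deviation to correct
```

The residual isolates the channel-dependent deviation; a real pipeline would of course use a more careful smoothing so that genuine object structure is not removed along with the defect.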
In step 330, the first projection data and the second projection data of the multiple sets of water phantoms are used to train the neural network model, and the training result is output after training on large amounts of data. In step 340, a parametric model is output. In some embodiments, the initial neural network model is a parameter-initialized neural network model; for example, the parameters may be initialized randomly. Training the initial model means inputting a training sample, running the model to obtain an output, and then adjusting the current parameters of the model in reverse using an optimization function. In some embodiments, this reverse adjustment is an iterative process: after each training sample is used, the parameters in the neural network model change and serve as the "initialization parameters" for the next training input. The objective of the optimization function is to adjust the parameters of the model so as to minimize the difference between the actual output value of the model and the expected output value. After training with a sufficient number of training samples, the parameter values reach their optimum, i.e., the difference between the actual and expected output values is minimal, and training ends. When training of the initial neural network model is finished, a parametric model with fixed parameter values is obtained.
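The iterate-and-reuse mechanics described in step 330 reduce to a short sketch. A linear model stands in for the neural network here and the data are synthetic; the point is only that the parameters are repeatedly updated so the actual output approaches the expected output, with each step's parameters serving as the next step's initialization.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 3))            # training-sample inputs (made up)
theta_true = np.array([0.5, -1.0, 2.0])
y_expected = X @ theta_true             # known expected output values

theta = np.zeros(3)                     # initialized parameters
lr = 0.1
for _ in range(300):
    y_actual = X @ theta
    grad = X.T @ (y_actual - y_expected) / len(X)
    theta -= lr * grad                  # updated parameters become the
                                        # "initialization" of the next step
```

After enough iterations the parameters settle at the values that minimize the difference between actual and expected output, which is the fixed-parameter "parametric model" of step 340.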
The artifact correction described in the present invention is performed directly on the raw projection data. Let the raw projection data be x; the corrected projection data y can then be obtained by the following formula:

y = Σ_{i=1}^{M} a_i·x^i + Σ_{i=1}^{N} b_i·dx^i + Σ_{i=1}^{P} c_i·d²x^i    (1)

where a_i, b_i, c_i are the correction coefficients, dx is the first derivative of the detector response, d²x is the second derivative of the detector response, and M, N, P are the correction orders, generally not more than three, as determined by the correction effect. As the formula shows, once the correction coefficients are obtained, the corrected projection data follow directly. Through training on large amounts of data, the neural network model can derive the correction coefficients from the ideal projection data and the measured projection data of the water phantom, thereby achieving a better ring artifact correction effect.
In some embodiments, the first derivative dx of the detector response in formula (1) can be understood as a data correction based on detector position, and the second derivative d²x as a data correction based on signal crosstalk. The corrected projection data y obtained from formula (1) thus comprehensively accounts for the influence of detector response, detector position, signal crosstalk, and similar factors on ring artifacts, and corrects these factors together to produce the final corrected data. This multivariate correction method achieves a better ring artifact correction effect.
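Formula (1) can be applied channel by channel as a short routine. This is our reading of the formula, with `np.gradient` standing in for the detector-direction derivatives and placeholder coefficient values; the real coefficients would come from the trained network.

```python
import numpy as np

def correct_projection(x, a, b, c):
    """Evaluate formula (1): y = sum a_i x^i + sum b_i dx^i + sum c_i d2x^i."""
    dx = np.gradient(x)      # first derivative along the detector direction
    d2x = np.gradient(dx)    # second derivative (crosstalk-related term)
    y = np.zeros_like(x, dtype=float)
    for i, ai in enumerate(a, start=1):   # sum_{i=1}^{M} a_i x^i
        y += ai * x ** i
    for i, bi in enumerate(b, start=1):   # sum_{i=1}^{N} b_i dx^i
        y += bi * dx ** i
    for i, ci in enumerate(c, start=1):   # sum_{i=1}^{P} c_i d2x^i
        y += ci * d2x ** i
    return y

raw = np.array([1.0, 1.2, 0.9, 1.1, 1.0])
# With a = (1,) and b = c = (0,), the correction reduces to the identity.
corrected = correct_projection(raw, a=(1.0,), b=(0.0,), c=(0.0,))
```

The identity case is a useful sanity check: non-trivial coefficients then bend the output away from the raw data in proportion to the local derivatives.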
Fig. 4 is a schematic diagram of training a neural network model according to the present invention. In 401, a water phantom of one or several sizes is scanned, and projection data of the water phantom are obtained in 405. The projection data are reconstructed in 402, and raw water phantom images are obtained in 406. In 403, the raw image is smoothed, and an ideal water phantom image without ring artifacts is obtained in 407. In 404, the ideal water phantom image is forward projected, and ideal water phantom projection data are obtained in 408. In 409, the raw projection data and the ideal projection data of the water phantom are input together into the neural network, which through training outputs detector-related parameters (i.e., the correction coefficients) in 410. To ensure the accuracy of the correction coefficients, several water phantoms of different sizes may be used, or one water phantom may be scanned at different eccentric placements.
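The 402-408 portion of this pipeline (reconstruct, smooth, forward project) can be sketched as follows. This is a minimal toy illustration, not the patent's implementation: the reconstruction step (402) is assumed to have already produced a first image, the smoothing (403) is a simple box filter, and the "forward projection" (404) takes line integrals at 0° and 90° only, i.e., row and column sums. All names are ours.

```python
import numpy as np

def smooth_image(image, size=3):
    """Box-filter smoothing of a reconstructed phantom image (step 403)."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (size * size)

def forward_project(image):
    """Toy parallel-beam forward projection (step 404) at 0° and 90° only:
    each view is a set of line integrals, here row/column sums."""
    return np.stack([image.sum(axis=0), image.sum(axis=1)])

# A first image of the phantom is assumed to come from reconstructing the
# raw projection data (e.g., by filtered back-projection).
first_image = np.ones((8, 8))
second_image = smooth_image(first_image)
second_projection = forward_project(second_image)  # "ideal" projection data
```

A real implementation would forward project over many view angles; the two-angle version is only meant to make the data flow of Fig. 4 concrete.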
The correction coefficients can be obtained as shown in the following formula:

[a_i, b_i, c_i, …] = f(raw_m, raw_ideal, kV, …)    (2)

where a_i, b_i, c_i are the correction coefficients, the function f is the trained neural network model, raw_m is the raw water phantom projection data obtained by scanning, raw_ideal is the ideal water phantom projection data obtained by calculation, and kV is a scanning condition. Multiple sets of raw water phantom projection data obtained under different scanning conditions, together with the corresponding sets of calculated ideal projection data, are input into the neural network model, which outputs the correction coefficients after training and learning. The obtained correction coefficients are then applied to the scan correction of a new detection target on the same CT system: from the projection data of the new scan, the ring-artifact-corrected projection data are calculated according to formula (1).
Fig. 5 is a schematic diagram of a neural network model 500 provided in accordance with the present invention. As shown, the inputs are the raw projection data raw_m, the ideal projection data raw_ideal, and scanning conditions such as kV. Through the mutual weighting of many connected neuron nodes and transformation through a function (in this invention, the function of formula (2)), the model finally outputs the correction coefficient set {coef_i}. The training targets of the neural network model are ideal correction coefficients obtained under several protocols; these can be obtained by a conventional correction method or by physical simulation. After training on large amounts of data, the neural network model can derive correction coefficients from ideal projection data and measured projection values, achieving a better ring artifact correction effect. The optimization goal of the neural network model is min ‖{coef_i} − {coef_i}_ideal‖, where {coef_i} is the set of correction coefficients output by the model and {coef_i}_ideal is the known ideal correction coefficient set. In some embodiments, the optimization goal may be the most common least squares, Σ|coef_i − coef_ideal| or Σ(coef_i − coef_ideal)². The optimization target may also carry a regularization term, for example requiring all coefficients to lie within a certain value range while satisfying the formula above. The optimization objective may also be weighted to ensure that certain coefficients are fitted preferentially, such as Σ w_i(coef_i − coef_ideal)². It should be noted that the optimization goals are not limited to those mentioned in the above embodiments. The closer the correction coefficients output by the neural network model are to the ideal correction coefficients, the better the correction effect.
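The three optimization targets named above can be written out directly. The function names and example values are ours; only the formulas come from the text.

```python
import numpy as np

def l2_loss(coef, coef_ideal):
    """Least squares: sum of (coef_i - coef_i_ideal)^2."""
    return float(np.sum((coef - coef_ideal) ** 2))

def l1_loss(coef, coef_ideal):
    """Absolute-difference variant: sum of |coef_i - coef_i_ideal|."""
    return float(np.sum(np.abs(coef - coef_ideal)))

def weighted_l2_loss(coef, coef_ideal, w):
    """Weighted least squares: prioritizes coefficients with larger w_i."""
    return float(np.sum(w * (coef - coef_ideal) ** 2))

coef = np.array([1.0, 2.0, 3.0])    # model output (made up)
ideal = np.array([1.0, 2.5, 2.0])   # "ideal" coefficients (made up)
```

A regularized variant would simply add a penalty term (e.g., on the coefficient magnitudes) to any of these before minimizing.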
In some embodiments, the neural network model may be obtained in the following manner. First, a training sample set is obtained; those skilled in the art will appreciate that training a neural network model requires a large number of training samples. The training sample set comprises a plurality of training samples, which in the present invention should include the raw projection data of the water phantom and the calculated ideal projection data of the ideal water phantom. A training sample whose expected output value is known is then obtained from the training sample set. The input value of the training sample is fed into an initial neural network model, which performs operations on it, including feature extraction, feature identification, and the like, and finally produces a set of output values. During the training process, the parameters of the model are adjusted backward using an optimization function. For example, in some embodiments, the optimization function min ‖{coef_i} − {coef_i}_ideal‖ is used to adjust the parameters of the model backward, where {coef_i} is the actual output value of the neural network and {coef_i}_ideal is the expected output value of the training sample. The objective of the optimization function is to adjust the parameters of the model so as to minimize the difference between the actual output value and the expected output value of the neural network model. After each training iteration, the updated parameters of the neural network model serve as the initialization parameters for the next training sample. Finally, it is determined whether the trained neural network model satisfies a preset condition, which may be specified by the user.
For example, in some embodiments, the preset condition may be that the number of training samples already used reaches a preset value; in other embodiments, the preset condition may be that the trained neural network model passes a test. If the judgment result is yes, the parameter model is obtained directly and is output to the user or stored in the processor. If the judgment result is no, a further training sample is acquired and the above steps are repeated.
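The training procedure described above, sample by sample until a preset condition is met, can be sketched as follows. All names are illustrative assumptions; the one-parameter gradient step stands in for the patent's (unspecified) network update, and the preset condition here is either a sample budget or a loss tolerance, matching the two embodiments just mentioned.

```python
import numpy as np

def train_until_preset(model_step, samples, max_samples=1000, tol=1e-6):
    """Draw training samples one at a time; each model_step call runs a
    forward pass and adjusts the parameters toward the expected output.
    Stop once a preset condition holds: the number of samples used
    reaches max_samples, or the loss falls below tol."""
    n, loss = 0, float("inf")
    for n, (x, expected) in enumerate(samples, start=1):
        loss = model_step(x, expected)
        if n >= max_samples or loss < tol:
            break
    return n, loss

# toy usage: fit a single coefficient w so that w*x matches the target y
w = np.array([0.0])
def step(x, y, lr=0.1):
    grad = 2.0 * (w[0] * x - y) * x   # gradient of (w*x - y)**2 w.r.t. w
    w[0] -= lr * grad                  # backward parameter adjustment
    return (w[0] * x - y) ** 2

data = [(1.0, 3.0)] * 200
n_used, final_loss = train_until_preset(step, data, max_samples=200)
```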
Fig. 6 is an exemplary flowchart 600 for acquiring ring-artifact-corrected data of a detection target according to the present invention. In step 610, the detector acquires third projection data of the detected object, which may be the raw projection data of the detected object, and sends the third projection data to the processor. In step 620, based on the correction coefficients obtained from the neural network model, the processor processes the third projection data of the detection target using formula (I). In step 630, the corrected third projection data of the detection target are output. In some embodiments, the processor may further reconstruct the corrected projection data of the detection target to obtain a corrected image of the detection target and feed the corrected image back to the user, thereby facilitating diagnosis of the patient by a doctor.
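The three steps of the flowchart can be composed as a simple pipeline; the callables below are illustrative placeholders (not APIs from the patent) standing in for acquisition by the detector, correction by formula (I) with the learned coefficients, and reconstruction.

```python
import numpy as np

def ring_artifact_pipeline(acquire, correct, reconstruct):
    """Sketch of flowchart 600 as three composed stages."""
    raw = acquire()                # step 610: detector -> processor
    corrected = correct(raw)       # step 620: apply learned correction
    return reconstruct(corrected)  # step 630: corrected image to the user

# toy usage with stand-in callables
img = ring_artifact_pipeline(
    acquire=lambda: np.full((2, 2), 2.0),
    correct=lambda p: p - 1.0,       # stand-in correction
    reconstruct=lambda p: p.mean(),  # stand-in reconstruction
)
```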
FIG. 7 shows the water phantom ring artifact correction effect obtained by the method provided by the present invention; FIG. 8 shows the artifact correction effect on a human body image obtained by the same method. In FIGS. 7 and 8, image (a) is the original image with ring artifacts, and image (b) is the image obtained by correcting the original image with the above model. It is evident that the ring artifacts in images (b) have been corrected. As can be seen from FIGS. 7 and 8, the ring artifact correction effect on the water phantom and that on the human body image are very close. Therefore, with the aid of the neural network, all correction coefficients related to the artifacts can be trained from the raw projection data and the ideal projection data of the phantom, and applying these correction coefficients to human body projection data effectively corrects the ring artifacts in the human body image.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method of artifact correction, the method comprising:
scanning a phantom, and acquiring first projection data of the phantom through a detector;
processing the first projection data of the phantom or obtaining second projection data of the phantom through simulation;
constructing a neural network model, and training the neural network model by using a plurality of groups of first projection data and second projection data of the phantom under different scanning conditions to obtain a correction coefficient; and
correcting the third projection data of the detection target to be corrected using the correction coefficient, and reconstructing the medical image of the detection target using the corrected projection data.
2. The method of claim 1, wherein training the neural network model using the first projection data and the second projection data of the phantom under the plurality of sets of different scan conditions to obtain correction coefficients comprises:
inputting a plurality of sets of first projection data and second projection data of the phantom under different scanning conditions into the neural network model to obtain correction coefficients corresponding to the plurality of sets of different scanning conditions.
3. The method of claim 2, wherein the processing the first projection data of the phantom to obtain the second projection data of the phantom comprises:
reconstructing first projection data of the phantom to obtain a first image of the phantom;
smoothing the first image of the phantom to obtain a second image of the phantom; and
performing forward projection on the second image of the phantom to obtain second projection data of the phantom.
4. The method of claim 2, wherein the scanning conditions comprise a size of the phantom, a scanning protocol, and/or an eccentric placement of the phantom.
5. The method of claim 1, wherein the training the neural network model using the first projection data and the second projection data of the phantom to obtain correction coefficients further comprises: acquiring initial correction parameters, and initializing the neural network model with the initial correction parameters.
6. The method of claim 1, wherein the processing the first projection data of the phantom to obtain the second projection data of the phantom comprises:
smoothing the first projection data of the phantom to obtain second projection data of the phantom.
7. The method of claim 1, wherein the correcting the third projection data of the detection target to be corrected using the correction coefficient comprises:
constructing a correction model according to the correction coefficient; and
inputting the third projection data into the correction model to obtain corrected projection data.
8. The method of claim 7, wherein the calibration model comprises a first or second derivative of the detector response.
9. A method of artifact correction, the method comprising:
obtaining at least one training sample, wherein each training sample comprises first projection data and second projection data of a phantom, the first projection data is obtained by scanning the phantom, and the second projection data is obtained based on the first projection data of the phantom;
constructing a neural network model, and training the neural network model by using a plurality of groups of the at least one training sample under different scanning conditions to obtain correction parameters;
correcting the projection data of the scanning object to be corrected by using the correction parameters to obtain corrected projection data; and
reconstructing a medical image of the scanned object using the corrected projection data.
10. An artifact correction system, the system comprising:
at least one device including a detector, the device configured to scan a phantom and acquire first projection data of the phantom via the detector;
a processor coupled to the at least one device via a network, the processor operable to:
obtaining second projection data of the phantom according to the first projection data of the phantom;
constructing a neural network model, and training the neural network model by using a plurality of groups of first projection data and second projection data of the phantom under different scanning conditions to obtain a correction coefficient; and
correct the third projection data of the detection target to be corrected using the correction coefficient, and reconstruct the medical image of the detection target using the corrected projection data.
CN201710508431.2A 2017-06-28 2017-06-28 Artifact correction method and system Active CN107330949B (en)






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB02 Change of applicant information

Address after: 201807 Shanghai City, north of the city of Jiading District Road No. 2258

Applicant after: Shanghai Lianying Medical Technology Co., Ltd

Address before: 201807 Shanghai City, north of the city of Jiading District Road No. 2258

Applicant before: SHANGHAI UNITED IMAGING HEALTHCARE Co.,Ltd.