CN111862258A - Image metal artifact suppression method - Google Patents

Image metal artifact suppression method

Info

Publication number: CN111862258A
Authority: CN (China)
Prior art keywords: image, domain, suppression method, identifier, artifact
Prior art date
Legal status: Granted
Application number: CN202010717335.0A
Other languages: Chinese (zh)
Other versions: CN111862258B (en)
Inventor
李彦明
郑海荣
江洪伟
万丽雯
Current Assignee: Shenzhen National Research Institute of High Performance Medical Devices Co Ltd
Original Assignee: Shenzhen National Research Institute of High Performance Medical Devices Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen National Research Institute of High Performance Medical Devices Co Ltd
Priority to CN202010717335.0A
Publication of CN111862258A
Application granted
Publication of CN111862258B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G06T 11/008 Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application belongs to the technical field of images, and particularly relates to an image metal artifact suppression method. In the high- and low-energy images of dual-energy CT imaging, metal artifacts differ in form and severity; existing methods do not exploit the intrinsic relationship between the two images during reconstruction, and their performance still leaves considerable room for improvement. The application provides an image metal artifact suppression method comprising the following steps: dividing the image domain into different sub-domains; extracting an input image and the domain identifier corresponding to the image, and converting the image into a target-domain image according to a target domain identifier to obtain a generated image; obtaining a reconstructed image from the generated image and calculating a reconstruction loss; inputting the image and the reconstructed image into a discriminator to obtain a discrimination result and a domain classification result; calculating the adversarial loss and the domain classification loss, and training the deep neural network; and obtaining an artifact-suppressed image with the trained neural network. The method is applicable to various types of metal artifacts and is more robust.

Description

Image metal artifact suppression method
Technical Field
The application belongs to the technical field of images, and particularly relates to an image metal artifact suppression method.
Background
In computed tomography (CT), metal implants in a patient, such as dental fillings, hip prostheses, and coils, may cause metal artifacts in the images. Metal objects strongly attenuate, or even completely block, the X-rays, so that the detector receives corrupted or incomplete projection data; when such data are used to reconstruct an image, bright and dark radial streaks appear, important structural information is lost, and this can mislead the physician or bias measurements of the target region. Suppressing metal artifacts in CT images with a fast and effective algorithm to improve image quality is therefore of great clinical significance. Dual-energy CT imaging can solve many problems of conventional CT imaging, such as motion artifacts, beam hardening, streak artifacts caused by incomplete scans, and noise under low-dose conditions; it is also more convenient to operate and delivers a relatively low radiation dose to the patient, and is now widely used in the clinic.
Currently, existing metal artifact reduction (MAR) algorithms can be divided into three categories: metal artifact suppression based on projection-domain interpolation, metal artifact suppression based on iterative reconstruction, and metal artifact suppression based on deep learning. Because metal artifacts usually appear as non-local bright and dark streaks, they are very difficult to model in the image domain, so before the rise of deep learning most work was carried out in the projection domain: the metal-affected regions are treated as missing data in the projection domain, and different interpolation methods are used to fill in the missing data. However, since the projections are taken from a single object under a fixed geometry, the corrected sinogram must satisfy physical constraints; otherwise severe secondary artifacts are introduced into the reconstructed CT image. The metal artifact suppression algorithm based on iterative reconstruction uses an optimization algorithm to minimize the error between the image and the true result in the image domain, yielding a high-quality artifact-removed image. Such algorithms can generally suppress metal artifacts effectively, but the computational cost is very high, the hardware requirements are demanding, and the timeliness is poor.
Recently, deep learning has also made great progress in metal artifact suppression. Wang et al. applied a pix2pix model to reduce metal artifacts of CT images in the image domain. Zhang et al. first estimate a prior image with a convolutional neural network (CNN) and then, based on the prior image, fill in replacement data for the metal-damaged regions of the sinogram to reduce secondary artifacts. Park et al. applied U-Net to directly restore the metal-damaged sinogram.
The metal artifact suppression method based on projection-domain interpolation has the advantages of a simple theory, fast computation, and easy implementation, but it can only handle simple metal objects; metals with special shapes can hardly satisfy the physical constraints, and severe secondary artifacts are introduced into the reconstructed CT image. The metal artifact suppression algorithm based on iterative reconstruction can effectively suppress artifacts and noise, but the computation is very heavy and slow, which limits its practicality. Existing deep-learning MAR algorithms perform artifact suppression on single-energy CT images and have not been applied to dual-energy CT images. In the high- and low-energy images of dual-energy CT imaging, metal artifacts differ in form and severity; existing methods do not exploit the intrinsic relationship between the two images during reconstruction, and their performance still leaves considerable room for improvement.
Disclosure of Invention
1. Technical problem to be solved
In the high- and low-energy images of dual-energy CT imaging, metal artifacts differ in form and severity; existing methods do not exploit the intrinsic relationship between the two images during reconstruction, and their performance still leaves considerable room for improvement.
2. Technical scheme
In order to achieve the above object, the present application provides an image metal artifact suppression method, comprising the following steps:
Step 1: dividing the image domain of an image into different sub-domains;
Step 2: extracting an input image and the domain identifier corresponding to the image, and converting the image into a target-domain image according to a target domain identifier to obtain a generated image;
Step 3: obtaining a reconstructed image from the generated image, and calculating a reconstruction loss;
Step 4: inputting the image and the reconstructed image into a discriminator to obtain a discrimination result and a domain classification result; calculating the adversarial loss and the domain classification loss, and training a deep neural network;
Step 5: obtaining an artifact-suppressed image with the trained neural network.
Another embodiment provided by the present application is: in the step 1, the dual-energy CT image domain is divided into four sub-domains according to the energy state and whether metal artifacts exist or not: high energy state-with artifact, low energy state-with artifact, high energy state-without artifact, low energy state-without artifact.
Another embodiment provided by the present application is: in step 2, the input image and the domain identifier corresponding to the image are extracted, and a target domain is selected, i.e., the domain to which the image is expected to be converted; the image is then input into the generator network and, according to the target domain identifier, converted into a target-domain image to obtain a generated image.
Another embodiment provided by the present application is: said step 2 comprises the embedding of the identifier.
Another embodiment provided by the present application is: in step 3, the error between the original input image and the reconstructed image is constrained by a loss function.
Another embodiment provided by the present application is: the identifier embedding expands the domain identifier and the target domain identifier into two-channel maps of the same size as the input image, which are then concatenated to the input image along the channel dimension.
Another embodiment provided by the present application is: the domain identifier is a binary identifier, and the target domain identifier is a binary identifier.
Another embodiment provided by the present application is: the two-channel map consists of two pure-black channels, two pure-white channels, or one black channel and one white channel.
Another embodiment provided by the present application is: a model for multi-domain conversion is obtained by training the generator through back-propagation of the reconstruction loss and training the discriminator through back-propagation of the adversarial loss and the domain classification loss.
Another embodiment provided by the present application is: the generator in the model is a commonly used network, and the discriminator has a dual-output structure.
3. Advantageous effects
Compared with the prior art, the image metal artifact suppression method provided by the application has the beneficial effects that:
the application provides an image metal artifact suppression method, which is a novel CT image metal artifact suppression technology based on multi-space image conversion.
The image metal artifact suppression method provided by the application is based on a metal artifact suppression technology of multi-space image conversion, and can be used for improving the image quality of dual-energy CT imaging.
Compared with the traditional projection-domain interpolation MAR algorithm, the image metal artifact suppression method provided by the application is applicable to various types of metal artifacts and is more robust.
Compared with the traditional metal artifact suppression algorithm based on iterative reconstruction, the image metal artifact suppression method provided by the application runs very fast once offline learning is completed, while also achieving better image quality.
Compared with existing deep learning methods, the image metal artifact suppression method provided by the application can exploit the correlated information in the high- and low-energy images of dual-energy CT imaging, further improving the MAR effect.
The image metal artifact suppression method is a novel generative adversarial network based on multi-space image conversion, used for metal artifact suppression in dual-energy CT images.
According to the image metal artifact suppression method provided by the application, the image domain of dual-energy CT can be divided into four sub-domains according to the two attributes of energy state (high or low) and the presence or absence of metal artifacts: high energy state-with artifact, low energy state-with artifact, high energy state-without artifact, low energy state-without artifact.
In the image metal artifact suppression method provided by the application, the network adopts the idea of adversarial learning: the generator performs domain transformation on an image from any input sub-domain to generate images of the other domains, the discriminator judges the generated results, and the two play an adversarial game against each other until the conversion between domains works well, so that the generator can take an image from any sub-domain and generate the corresponding artifact-suppressed image of that energy state.
The image metal artifact suppression method provided by the application is based on the idea of generative adversarial networks and introduces the concept of multi-domain image translation; only one pair of generator and discriminator needs to be trained to realize the mutual conversion of dual-energy CT images among the different domains, thereby achieving metal artifact suppression in CT images.
Drawings
Fig. 1 is a schematic diagram illustrating the principle of image metal artifact suppression according to the present application.
Detailed Description
Hereinafter, specific embodiments of the present application will be described in detail with reference to the accompanying drawings, and it will be apparent to those skilled in the art from this detailed description that the present application can be practiced. Features from different embodiments may be combined to yield new embodiments, or certain features may be substituted for certain embodiments to yield yet further preferred embodiments, without departing from the principles of the present application.
Referring to fig. 1, the present application provides an image metal artifact suppression method, including the steps of:
Step 1: dividing the image domain of the dual-energy CT into four sub-domains, and defining a domain identifier for each sub-domain;
Step 2: extracting the input image and the domain identifier corresponding to the image, selecting a target domain, i.e., the domain to which the image is expected to be converted, and then inputting the image into the generator network and converting it into a target-domain image according to the target domain identifier, obtaining a generated image;
Step 3: taking the domain to which the input image belongs as the target domain, inputting the generated image from step 2 into the generator network to generate a reconstructed image, and comparing the reconstructed image with the original image to calculate the reconstruction loss;
Step 4: inputting the input image and the reconstructed image into the discriminator network to obtain a discrimination result and a domain classification result; calculating the adversarial loss and the domain classification loss, and training the deep neural network;
Step 5: obtaining an artifact-suppressed image with the trained neural network.
The first four steps train a usable network model and constitute the offline learning process; once the model has been learned, an image with metal artifacts is input into the model and an artifact-free image is generated, which is how artifact removal is realized in practical applications.
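As an illustration of this offline-training and online-use split, the following minimal sketch shows how a trained generator might be applied at test time. The names generator, ct_image, source_id and target_id are hypothetical; this is a sketch of the usage pattern, not the patent's reference implementation.

    # Hypothetical test-time use of the trained generator (illustrative only).
    import torch

    def remove_artifacts(generator, ct_image, source_id, target_id):
        """Map an artifact-containing image to its artifact-suppressed counterpart.

        source_id: identifier of the domain the input belongs to,
                   e.g. "low energy state - with artifact".
        target_id: identifier of the artifact-free domain with the same energy state.
        """
        generator.eval()
        with torch.no_grad():
            return generator(ct_image, source_id, target_id)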
Further, in the step 1, the dual-energy CT image domain is divided into four sub-domains according to the energy state and whether there is a metal artifact or not: high energy state-with artifact, low energy state-with artifact, high energy state-without artifact, low energy state-without artifact.
A dual-energy CT system images an object with X-rays of two different energies and can accurately obtain the material composition of the object.
It has two main advantages:
First, it overcomes a limitation of previous single-energy imaging: objects of different composition may exhibit similar attenuation characteristics, making it difficult to distinguish different materials using CT values alone.
Second, thanks to advances in the technology, the radiation dose of current dual-energy CT is lower than that of traditional single-energy CT, making it safer.
Further, in step 2, the input image and the domain identifier corresponding to the image are extracted by the generator.
The input image is fed to the generator, and the domain identifier of the input image is paired with it. For example, if the identifiers of the four sub-domains are 00, 01, 10 and 11, and a low-energy image with artifacts is input, then its domain identifier is 01, the identifier corresponding to "low energy state-with artifact".
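A minimal sketch of this identifier bookkeeping, following the domain ordering and the binary(i-1) convention given later in the detailed description; the code is illustrative, not the patent's reference implementation.

    # Sketch of the domain-identifier bookkeeping (illustrative only).
    # Domains are ordered as in the detailed description, and c(x) = binary(i - 1)
    # yields a two-bit identifier for domain index i.
    DOMAINS = [
        "high energy state - with artifact",     # i = 1 -> "00"
        "low energy state - with artifact",      # i = 2 -> "01"
        "high energy state - without artifact",  # i = 3 -> "10"
        "low energy state - without artifact",   # i = 4 -> "11"
    ]

    def domain_identifier(i: int) -> str:
        """Two-bit binary identifier c(x) for the domain with index i in {1, 2, 3, 4}."""
        return format(i - 1, "02b")

    assert domain_identifier(2) == "01"  # the low-energy, with-artifact example above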
Further, the step 2 includes embedding of the identifier.
Further, in step 3, the error between the original input image and the reconstructed image is constrained by a loss function.
Further, the identifier embedding expands the domain identifier of the input image and the target domain identifier into two-channel maps of the same size as the input image, which are then concatenated to the input image along the channel dimension.
Further, the domain identifier is a binary identifier, and the target domain identifier is a binary identifier.
Further, the two-channel map, obtained by expanding the binary identifier, consists of two pure-black channels, two pure-white channels, or one black channel and one white channel.
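One possible reading of this identifier embedding in PyTorch, assuming single-channel images of shape (N, 1, H, W) and encoding black as 0 and white as 1; a sketch under these assumptions, not the patent's reference implementation.

    import torch

    def embed_identifiers(image: torch.Tensor, src_id: str, tgt_id: str) -> torch.Tensor:
        """Expand two 2-bit identifiers into constant channels and concatenate them.

        image:  (N, 1, H, W) CT image batch.
        src_id: domain identifier of the input image, e.g. "01".
        tgt_id: target domain identifier, e.g. "11".
        Returns an (N, 5, H, W) tensor: the image plus two channels per identifier,
        where bit 0 becomes an all-black (zeros) map and bit 1 an all-white (ones) map.
        """
        n, _, h, w = image.shape
        planes = []
        for bit in src_id + tgt_id:  # four bits in total
            value = float(bit)       # "0" -> 0.0 (black), "1" -> 1.0 (white)
            planes.append(torch.full((n, 1, h, w), value,
                                     dtype=image.dtype, device=image.device))
        return torch.cat([image] + planes, dim=1)  # concatenate along channels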
The image domain of dual-energy CT can be divided into four sub-domains according to the two attributes of energy state (high or low) and the presence of metal artifacts: high energy state-with artifact, low energy state-with artifact, high energy state-without artifact, and low energy state-without artifact, denoted Domain_i, i = 1, 2, 3, 4. When training the network, the training data should contain data of similar magnitude from all four domains. The image input to the generator G is denoted x, and the image that the generator G is desired to produce is denoted x̃. The domain identifier is c(x) = binary(i-1), where i is the domain index corresponding to x and binary(·) is an operator that converts its input into a two-bit binary representation. The generator G extracts the input image x and its corresponding domain identifier c(x), and then, according to the target domain identifier c(x̃), converts x into the target domain image x̃; this process is represented by Fig. 1(a). At the same time, a reverse reconstruction process is introduced, similar to the idea of CycleGAN: a reconstructed image x̂ is generated from the generated image x̃, and a loss function constrains the error between x and x̂ to increase the robustness of the network, which is the part shown in Fig. 1(b). The identifier embedding in the figure expands the binary identifiers c(x) and c(x̃) into two-channel maps (two pure-black channels, two pure-white channels, or one black and one white) of the same size as the input image, which are then concatenated to the input image along the channel dimension.
Further, a model for multi-domain conversion is obtained by training the generator through back-propagation of the reconstruction loss and training the discriminator through back-propagation of the adversarial loss and the domain classification loss.
Furthermore, the generator in the model is a commonly used network, and the discriminator has a dual-output structure. In the network model, the generator can be implemented with a commonly used network such as U-Net, or a similar network can be designed according to the data; the discriminator adopts a dual-output structure that shares a series of convolutional layers and, once the feature vector is obtained, uses two different fully connected layers to produce the discrimination result and the domain classification result, respectively.
The data input to the discriminator D include not only real images but also images output by the generator G. The discriminator has two tasks: it judges whether an input image is a real image or a generated image, and at the same time it must output the domain identifier to which the input image belongs. This step is illustrated in Fig. 1(c).
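A sketch of a dual-output discriminator of the kind described here: a shared convolutional trunk followed by two heads, one for the real/generated decision (D_src) and one for the four-way domain classification (D_cls). The layer sizes, the global pooling, and the fully connected heads are illustrative assumptions, not the patent's reference architecture.

    import torch
    import torch.nn as nn

    class DualOutputDiscriminator(nn.Module):
        """Shared convolutional trunk with two heads: real/fake score and domain class."""

        def __init__(self, in_channels: int = 1, num_domains: int = 4, width: int = 64):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Conv2d(in_channels, width, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2),
                nn.Conv2d(width, width * 2, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2),
                nn.Conv2d(width * 2, width * 4, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool2d(1),   # -> (N, width*4, 1, 1) feature vector
                nn.Flatten(),
            )
            self.src_head = nn.Linear(width * 4, 1)            # D_src: real vs generated
            self.cls_head = nn.Linear(width * 4, num_domains)  # D_cls: domain logits

        def forward(self, x):
            features = self.trunk(x)
            return self.src_head(features), self.cls_head(features)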
In one iteration of network training, a pair of high- and low-energy images x1, x2 is first extracted from the dataset together with their domain identifiers c(x1), c(x2), and two target domain identifiers c̃1, c̃2 are generated at random. (x1, c(x1), c̃1) and (x2, c(x2), c̃2) are input into the generator to obtain the generated images x̃1, x̃2; (x̃1, c̃1, c(x1)) and (x̃2, c̃2, c(x2)) are then input into the generator to obtain the reconstructed images x̂1, x̂2, and the reconstruction loss L_rec is calculated. Next, x1, x2, x̃1, x̃2 are input into the discriminator to obtain the discrimination results D_src(x1), D_src(x2), D_src(x̃1), D_src(x̃2) and the domain classification results D_cls(x1), D_cls(x2), D_cls(x̃1), D_cls(x̃2), from which the adversarial loss L_adv and the domain classification losses L_cls^r and L_cls^f are calculated, where L_cls^r denotes the domain classification loss on real data and L_cls^f the domain classification loss on generated (synthetic) data. The loss functions for training the generator G and the discriminator D are assembled from these adversarial, domain classification, and reconstruction terms. For each set of inputs x1, x2, the losses are calculated through the above steps and back-propagated to train the generator G and the discriminator D, respectively, yielding the model G for multi-domain conversion. When the model is tested and used, only an image x with metal artifacts, its original domain identifier c(x), and a target domain identifier c̃ need to be input into the generator G, where c̃ is the identifier of the artifact-free domain with the same energy state as c(x) (for example, high energy state-with artifact corresponds to high energy state-without artifact), and the artifact-suppressed image of that energy state is obtained.
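The iteration described above can be summarized in the following PyTorch-style sketch. It uses a StarGAN-like formulation; the loss weights (lambda_cls, lambda_rec), the binary cross-entropy form of the adversarial loss, and the conversion of binary identifiers to class indices are assumptions made for illustration rather than the patent's exact formulas.

    import torch
    import torch.nn.functional as F

    def train_step(G, D, opt_G, opt_D, x1, x2, c1, c2, ct1, ct2,
                   lambda_cls=1.0, lambda_rec=10.0):
        """One training iteration on a high/low-energy pair (x1, x2).

        c1, c2   : two-bit source-domain identifiers, e.g. "00", "01".
        ct1, ct2 : randomly drawn target-domain identifiers.
        """
        def cls_index(identifier):           # "01" -> class index 1, repeated per batch
            idx = int(identifier, 2)
            return torch.full((x1.size(0),), idx, dtype=torch.long, device=x1.device)

        # Generator forward: domain conversion and cycle reconstruction.
        x1_fake = G(x1, c1, ct1)
        x2_fake = G(x2, c2, ct2)
        x1_rec = G(x1_fake, ct1, c1)
        x2_rec = G(x2_fake, ct2, c2)
        loss_rec = F.l1_loss(x1_rec, x1) + F.l1_loss(x2_rec, x2)

        # Discriminator update: adversarial loss plus classification of real data.
        d_src_r1, d_cls_r1 = D(x1)
        d_src_r2, d_cls_r2 = D(x2)
        d_src_f1, _ = D(x1_fake.detach())
        d_src_f2, _ = D(x2_fake.detach())
        loss_d_adv = (F.binary_cross_entropy_with_logits(d_src_r1, torch.ones_like(d_src_r1))
                      + F.binary_cross_entropy_with_logits(d_src_r2, torch.ones_like(d_src_r2))
                      + F.binary_cross_entropy_with_logits(d_src_f1, torch.zeros_like(d_src_f1))
                      + F.binary_cross_entropy_with_logits(d_src_f2, torch.zeros_like(d_src_f2)))
        loss_d_cls = (F.cross_entropy(d_cls_r1, cls_index(c1))
                      + F.cross_entropy(d_cls_r2, cls_index(c2)))
        opt_D.zero_grad()
        (loss_d_adv + lambda_cls * loss_d_cls).backward()
        opt_D.step()

        # Generator update: fool D, be classified as the target domain, reconstruct.
        g_src1, g_cls1 = D(x1_fake)
        g_src2, g_cls2 = D(x2_fake)
        loss_g_adv = (F.binary_cross_entropy_with_logits(g_src1, torch.ones_like(g_src1))
                      + F.binary_cross_entropy_with_logits(g_src2, torch.ones_like(g_src2)))
        loss_g_cls = (F.cross_entropy(g_cls1, cls_index(ct1))
                      + F.cross_entropy(g_cls2, cls_index(ct2)))
        opt_G.zero_grad()
        (loss_g_adv + lambda_cls * loss_g_cls + lambda_rec * loss_rec).backward()
        opt_G.step()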
The network is trained with the high- and low-energy images of dual-energy CT, so that it can obtain more feature information and achieve metal artifact suppression; four domains are constructed from the two attributes of energy state and artifact state, and the artifact suppression problem is solved by domain conversion; during domain conversion, the domain identifier allows a single generator and a single discriminator to be shared, so that a separate generator-discriminator pair is not needed for every pair of domains; the training procedure using dual-energy CT data and the loss design are likewise part of the scheme.
Other deep learning methods aim at removing metal artifacts from a single image and are not designed for a dual-energy CT scheme; for a different energy state, a new model must be retrained with data of the corresponding energy state.
Other deep learning methods cannot fully exploit the information of the two energy states. Here, the images of the four domains share one generator, which can fully learn the characteristics of the different energy states and artifact states, thereby achieving a better artifact suppression effect.
Although the present application has been described above with reference to specific embodiments, those skilled in the art will recognize that many changes may be made in the configuration and details of the present application within the principles and scope of the present application. The scope of protection of the application is determined by the appended claims, and all changes that come within the meaning and range of equivalency of the technical features are intended to be embraced therein.

Claims (10)

1. An image metal artifact suppression method, characterized in that the method comprises the following steps:
Step 1: dividing the image domain of an image into different sub-domains;
Step 2: extracting an input image and the domain identifier corresponding to the image, and converting the image into a target-domain image according to a target domain identifier to obtain a generated image;
Step 3: obtaining a reconstructed image from the generated image, and calculating a reconstruction loss;
Step 4: inputting the image and the reconstructed image into a discriminator to obtain a discrimination result and a domain classification result; calculating the adversarial loss and the domain classification loss, and training a deep neural network;
Step 5: obtaining an artifact-suppressed image with the trained neural network.
2. The image metal artifact suppression method as claimed in claim 1, wherein: in the step 1, the dual-energy CT image domain is divided into four sub-domains according to the energy state and whether metal artifacts exist or not: high energy state-with artifact, low energy state-with artifact, high energy state-without artifact, low energy state-without artifact.
3. The image metal artifact suppression method as claimed in claim 1, wherein: in step 2, the input image and the domain identifier corresponding to the image are extracted, and a target domain is selected, i.e., the domain to which the image is expected to be converted; the image is then input into the generator network and, according to the target domain identifier, converted into the target-domain image to obtain a generated image.
4. The image metal artifact suppression method as claimed in claim 1, wherein: said step 2 comprises the embedding of the identifier.
5. The image metal artifact suppression method as claimed in claim 1, wherein: in step 3, the error between the original input image and the reconstructed image is constrained by a loss function.
6. The image metal artifact suppression method as claimed in claim 5, wherein: the identifier embedding expands the domain identifier of the input image and the target domain identifier into two-channel maps of the same size as the input image, which are then concatenated to the input image along the channel dimension.
7. The image metal artifact suppression method as claimed in claim 6, wherein: the domain identifier is a binary identifier, and the target domain identifier is a binary identifier.
8. The image metal artifact suppression method as claimed in claim 7, wherein: the two-channel map, formed by expanding the binary identifiers, consists of two pure-black channels, two pure-white channels, or one black channel and one white channel.
9. The image metal artifact suppression method as claimed in any one of claims 1 to 8, wherein: a model for multi-domain conversion is obtained by training the generator through back-propagation of the reconstruction loss and training the discriminator through back-propagation of the adversarial loss and the domain classification loss.
10. The image metal artifact suppression method as claimed in claim 9, wherein: the generator in the model is a commonly used network, and the discriminator has a dual-output structure.
CN202010717335.0A 2020-07-23 2020-07-23 Image metal artifact inhibition method Active CN111862258B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010717335.0A CN111862258B (en) 2020-07-23 2020-07-23 Image metal artifact inhibition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010717335.0A CN111862258B (en) 2020-07-23 2020-07-23 Image metal artifact inhibition method

Publications (2)

Publication Number Publication Date
CN111862258A true CN111862258A (en) 2020-10-30
CN111862258B CN111862258B (en) 2024-06-28

Family

ID=72950764

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010717335.0A Active CN111862258B (en) 2020-07-23 2020-07-23 Image metal artifact inhibition method

Country Status (1)

Country Link
CN (1) CN111862258B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102067170A (en) * 2007-08-31 2011-05-18 皇家飞利浦电子股份有限公司 Uncertainty maps for segmentation in the presence of metal artifacts
CN108334904A (en) * 2018-02-07 2018-07-27 深圳市唯特视科技有限公司 A kind of multiple domain image conversion techniques based on unified generation confrontation network
US20190369191A1 (en) * 2018-05-31 2019-12-05 The Board Of Trustees Of The Leland Stanford Junior University MRI reconstruction using deep learning, generative adversarial network and acquisition signal model
US20190377047A1 (en) * 2018-06-07 2019-12-12 Siemens Healthcare Gmbh Artifact Reduction by Image-to-Image Network in Magnetic Resonance Imaging
WO2020033355A1 (en) * 2018-08-06 2020-02-13 Vanderbilt University Deep-learning-based method for metal reduction in ct images and applications of same
CN109472754A (en) * 2018-11-06 2019-03-15 电子科技大学 CT image metal artifact removing method based on image repair
CN110675461A (en) * 2019-09-03 2020-01-10 天津大学 CT image recovery method based on unsupervised learning
CN110728727A (en) * 2019-09-03 2020-01-24 天津大学 Low-dose energy spectrum CT projection data recovery method
CN110570492A (en) * 2019-09-11 2019-12-13 清华大学 Neural network training method and apparatus, image processing method and apparatus, and medium
CN110728729A (en) * 2019-09-29 2020-01-24 天津大学 Unsupervised CT projection domain data recovery method based on attention mechanism
CN111292386A (en) * 2020-01-15 2020-06-16 中国人民解放军战略支援部队信息工程大学 CT projection metal trace completion metal artifact correction method based on U-net

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
何剑华; 龙法宁; 朱晓姝: "Unpaired image-to-image translation based on an improved CycleGAN model" (基于改进的CycleGAN模型非配对的图像到图像转换), Journal of Yulin Normal University (玉林师范学院学报), no. 02, 1 April 2018 (2018-04-01) *
肖文; 曾理: "A review of metal artifact correction methods for CT images" (CT图像的金属伪影校正方法综述), Chinese Journal of Stereology and Image Analysis (中国体视学与图像分析), no. 01, 25 March 2019 (2019-03-25) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115170424A (en) * 2022-07-07 2022-10-11 北京安德医智科技有限公司 Heart ultrasonic image artifact removing method and device

Also Published As

Publication number Publication date
CN111862258B (en) 2024-06-28

Similar Documents

Publication Publication Date Title
Hong et al. End-to-end unpaired image denoising with conditional adversarial networks
WO2022016461A1 (en) Image metal artifact reduction method
CN110675461A (en) CT image recovery method based on unsupervised learning
CN112347850A (en) Infrared image conversion method, living body detection method, device and readable storage medium
CN103034989B (en) A kind of low dosage CBCT image de-noising method based on high-quality prior image
CN112802046B (en) Image generation system for generating pseudo CT from multi-sequence MR based on deep learning
Zhou et al. Limited angle tomography reconstruction: synthetic reconstruction via unsupervised sinogram adaptation
Peng et al. A cross-domain metal trace restoring network for reducing X-ray CT metal artifacts
Cui et al. Toothpix: Pixel-level tooth segmentation in panoramic x-ray images based on generative adversarial networks
CN116664710A (en) CT image metal artifact unsupervised correction method based on transducer
Mostafavi et al. E2sri: Learning to super-resolve intensity images from events
CN110060315A (en) A kind of image motion artifact eliminating method and system based on artificial intelligence
Du et al. Reduction of metal artefacts in CT with Cycle-GAN
CN115100044A (en) Endoscope super-resolution method and system based on three-generator generation countermeasure network
CN111862258B (en) Image metal artifact inhibition method
CN117333751A (en) Medical image fusion method
CN116342414A (en) CT image noise reduction method and system based on similar block learning
CN110176045A (en) A method of dual-energy CT image is generated by single energy CT image
Zhu et al. CT metal artifact correction assisted by the deep learning-based metal segmentation on the projection domain
CN113298900B (en) Processing method based on low signal-to-noise ratio PET image
Hu et al. Parallel sinogram and image framework with co-training strategy for metal artifact reduction in tooth CT images
CN113269846B (en) CT full-scan image reconstruction method and device and terminal equipment
CN117409100B (en) CBCT image artifact correction system and method based on convolutional neural network
Kushwaha et al. Development of Advanced Noise Filtering Techniques for Medical Image Enhancement
Shi et al. Enhanced CT Image Generation by GAN for Improving Thyroid Anatomy Detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant