CN113554570B - Double-domain CT image ring artifact removal method based on deep learning


Info

Publication number
CN113554570B
Authority
CN
China
Prior art keywords
domain
image
projection
artifacts
network
Prior art date
Legal status
Active
Application number
CN202110892449.3A
Other languages
Chinese (zh)
Other versions
CN113554570A (en)
Inventor
陈希
马劲
常少杰
Current Assignee
Xi'an Jiaotong University
Original Assignee
Xi'an Jiaotong University
Priority date
Filing date
Publication date
Application filed by Xi'an Jiaotong University
Priority to CN202110892449.3A
Publication of CN113554570A
Application granted
Publication of CN113554570B
Legal status: Active


Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T5/00 Image enhancement or restoration
                    • G06T5/70 Denoising; Smoothing
                    • G06T5/80 Geometric correction
                • G06T3/00 Geometric image transformations in the plane of the image
                    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
                        • G06T3/4007 Scaling based on interpolation, e.g. bilinear interpolation
                • G06T2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T2207/10 Image acquisition modality
                        • G06T2207/10072 Tomographic images
                            • G06T2207/10081 Computed x-ray tomography [CT]
                    • G06T2207/20 Special algorithmic details
                        • G06T2207/20081 Training; Learning
                    • G06T2207/30 Subject of image; Context of image processing
                        • G06T2207/30168 Image quality inspection
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N3/00 Computing arrangements based on biological models
                    • G06N3/02 Neural networks
                        • G06N3/04 Architecture, e.g. interconnection topology
                            • G06N3/045 Combinations of networks
                        • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
                • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
                    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a deep-learning-based double-domain method for removing ring artifacts from CT images. For the ring artifacts of a CT image, a hybrid projection-domain and image-domain correction scheme is used: deep neural networks correct the artifacts in the projection domain and the image domain respectively, the two corrected images are combined into a two-channel image set and divided into image blocks, a deep neural network for image quality evaluation scores each block, the channel with the higher score is kept, and finally all retained blocks are histogram-matched to form the final corrected image. Because deep neural networks process the artifact image separately in the projection domain and the image domain, the method is faster than traditional post-processing algorithms such as regularized iterative algorithms and introduces fewer artifacts.

Description

Double-domain CT image ring artifact removal method based on deep learning
Technical Field
The invention belongs to the field of CT image ring artifact removal, and particularly relates to a double-domain CT image ring artifact removal method based on deep learning.
Background
Computed tomography (CT) is widely used in the medical field. Ring artifacts are often introduced during imaging by changes in the physical state of the detectors in a CT scanner. When detector elements are displaced, damaged or occluded, the corresponding detector channels in the projection data acquired by the scan form stripe artifacts that are discontinuous with the data from the other detectors; when such projections are reconstructed into a CT image by filtered back projection, the stripes become artifacts distributed as rings centered on the center of the reconstructed image, i.e. ring artifacts. Ring artifacts can severely degrade the quality of the reconstructed image and interfere with the doctor's diagnosis. Suppressing ring artifacts is therefore a major problem in the CT field.
Traditional methods for removing ring artifacts fall into three main categories: flat-field correction, hardware correction and post-processing correction. Flat-field correction is a common algorithm for removing ring artifacts: background images are measured without the sample before and after data acquisition. The resulting flat field, which includes the non-uniformity of the incident x-ray beam and the response of the scintillator and detector pixels, can be used to correct the scanned data and thus reduce artifacts. However, because different detectors have different response functions, the ring artifacts cannot be completely eliminated. Hardware correction requires adjusting the position of the detector during data acquisition; this accounts for the different responses of the individual detectors and effectively averages the characteristics of all detectors to reduce artifacts.
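As a point of reference for the flat-field correction just described, here is a minimal sketch in Python; it assumes separately acquired dark-field and flat-field frames, and the function name and the small epsilon guard are illustrative rather than taken from the patent.

```python
import numpy as np

# Minimal sketch of conventional flat-field correction (not the method of this
# patent): dark- and flat-field frames acquired without the sample normalize the
# per-detector response of the raw projection.
def flat_field_correct(raw, flat, dark, eps=1e-6):
    # Guard against detector pixels where flat == dark to avoid division by zero.
    return (raw - dark) / np.maximum(flat - dark, eps)
```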
Post-processing correction can be divided mainly into projection-domain and image-domain algorithms. In the projection domain a ring artifact appears as a stripe artifact, and projection-domain algorithms exploit the frequency characteristics of the stripes in the sinogram to process it in the frequency domain. An example is the wavelet-Fourier method, which decomposes the projection map with a two-dimensional wavelet transform and then filters the vertical detail band coefficients. Owing to the properties of the wavelet decomposition, wavelet-Fourier filtering effectively removes ideal stripe artifacts that span the entire sinogram; for real projections containing non-ideal stripes, however, it may leave some artifacts behind and even introduce secondary artifacts. Image-domain algorithms usually convert the ring artifacts into stripe artifacts by a polar-coordinate transform and then remove the stripes by filtering. Since stripe artifacts are easier to filter out than ring artifacts, such methods suppress ring artifacts effectively, but small residual ring artifacts often remain.
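To make the image-domain idea concrete, the following is a rough sketch of the classical polar-coordinate approach mentioned above (again, not the method of this patent). It resamples the slice to polar coordinates with SciPy's map_coordinates, subtracts the per-radius ring profile, and resamples back; the profile filter width and angular sampling are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates, median_filter

def remove_rings_polar(img, profile_kernel=9, n_theta=720):
    """Classical image-domain ring suppression: resample to polar coordinates,
    estimate the per-radius ring profile (constant over angle), subtract it,
    and resample back to Cartesian coordinates."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    n_r = int(np.ceil(np.hypot(cy, cx))) + 1

    # Cartesian -> polar sampling grid
    r = np.arange(n_r)
    t = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")            # shape (n_r, n_theta)
    polar = map_coordinates(img, [cy + rr * np.sin(tt), cx + rr * np.cos(tt)],
                            order=1, mode="nearest")

    # A ring shows up as a radial profile that is constant over the angle:
    # take the angular mean per radius and keep only its high-frequency part.
    mean_profile = polar.mean(axis=1)
    ring_profile = mean_profile - median_filter(mean_profile, size=profile_kernel)
    polar_corr = polar - ring_profile[:, None]

    # Polar -> Cartesian resampling of the corrected image
    yy, xx = np.mgrid[0:h, 0:w]
    rad = np.hypot(yy - cy, xx - cx)
    ang_idx = np.mod(np.arctan2(yy - cy, xx - cx), 2.0 * np.pi) / (2.0 * np.pi) * n_theta
    return map_coordinates(polar_corr, [rad, ang_idx], order=1, mode="nearest")
```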
With the development of deep learning, deep convolutional neural networks are increasingly applied in medical imaging, and many new network architectures and algorithms have been proposed for CT ring artifact correction. Strictly speaking, deep-learning-based ring artifact correction belongs to the post-processing category. These algorithms can likewise be divided into three broad classes: image-domain, projection-domain and hybrid-domain. The deep neural networks used by projection-domain and image-domain algorithms are mostly encoder-decoder networks similar to U-net. Compared with traditional correction algorithms, deep-learning-based correction is fast, convenient and often more effective, but it depends more heavily on training data and generalizes less well. Image-domain deep learning algorithms often remove fine details such as bone, while projection-domain deep learning algorithms, like their traditional counterparts, tend to introduce secondary artifacts. Hybrid-domain correction algorithms based on deep learning typically use deep neural networks as the tool connecting the projection domain and the image domain for reinforced dual-domain correction, which reduces the artifact residues and newly introduced artifacts of single-domain algorithms. A hybrid-domain algorithm can therefore combine the advantages of projection-domain and image-domain algorithms, further suppress ring artifacts and achieve a better artifact removal result.
Disclosure of Invention
The invention aims to overcome the above shortcomings and provide a deep-learning-based double-domain CT image ring artifact removal method that, building on dual-domain artifact removal, further reduces residual artifacts and secondary artifacts.
In order to achieve the above object, the present invention comprises the following steps:
S1, building a deep neural network for removing artifacts in the projection domain; the projection-domain network is set up as a residual learning network and uses max unpooling for upsampling;
S2, building a deep neural network for removing artifacts in the image domain, with bilinear interpolation used for upsampling;
S3, building a mixed-domain deep neural network that uses a no-reference image quality evaluation network to select image blocks from the dual-domain corrections and finally forms a complete image;
S4, preparing training data for the image domain, the projection domain and the mixed domain;
S5, training the image-domain, projection-domain and mixed-domain networks respectively and completing verification.
The architecture of both the projection domain network and the image domain network is based on U-net.
In S4, the input data for the projection domain are projection maps containing stripe artifacts, and the label data are the corresponding stripe artifacts.
In S4, the input data for the image domain are CT images containing ring artifacts, and the label data are the corresponding reference images. In S4, the input data for the mixed domain are image blocks containing artifacts, the label data are image quality evaluation index values, and the structural similarity index (SSIM) is used as the index.
For both the image domain and the projection domain, the training data are generated with a blocking method: each complete image is divided into a plurality of overlapping image blocks, and the blocks with severe artifacts are selected as training data.
Compared with the prior art, the invention addresses ring artifacts in CT images with a hybrid projection-domain and image-domain correction method: deep neural networks correct the artifacts in the projection domain and the image domain respectively, the two corrected images are then combined into a two-channel image set and divided into image blocks, a deep neural network for image quality evaluation scores the blocks, the channel with the higher score is kept, and finally all retained blocks are histogram-matched to form the final corrected image. This method has the following advantages:
First, the deep neural networks process the artifact image separately in the projection domain and the image domain; compared with traditional post-processing algorithms such as regularized iterative algorithms, this improves processing speed and reduces the introduction of artifacts.
Second, a new mixed-domain deep neural network based on image quality evaluation selects between the two domain-corrected images; this effectively combines the advantages of both corrections and further reduces residual and secondary artifacts.
Third, the proposed double-domain hybrid correction architecture is highly modular: because the mixed-domain network does not correct the artifact image directly but selects from the results of the dual-domain correction, any improvement to the projection-domain network or to the image-domain network improves the overall method.
Fourth, the label setting of the mixed-domain network can be adjusted for different target requirements. With a mixed-domain neural network based on image quality evaluation, the label index can be designed around the image characteristics of the target. Since this technique targets ring artifacts, whose structural signature is pronounced, the label can be set to the structural similarity between the reference image and the training image, giving a more targeted effect; for noisier images, the label can instead be set to an index such as the mean squared error, so that noise is suppressed in a targeted manner.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram of a network architecture of a projection domain in the present invention;
FIG. 3 is a diagram of a hybrid domain network architecture in accordance with the present invention;
FIG. 4 shows verification results of the present invention, where (a) is the ring artifact image, (b) is the image-domain corrected image, (c) is the projection-domain corrected image, (d) is the mixed-domain corrected image, and (e) is the reference image.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1, the present invention includes the steps of:
and step 1, constructing a depth neural network for removing artifacts in a projection domain. Because the projection domain data is more sensitive to continuity than the image domain, secondary artifacts are easily introduced due to improper processing, in order to better maintain the continuity of the projection domain, the projection domain network is set as a residual learning network, namely, the output of the network is a learned bar artifact, and the input minus the output of the network is a corrected projection graph. The architecture of the projection domain network is based on U-net, using maximum anti-pooling as upsampling.
Step 2: construct a deep neural network for removing artifacts in the image domain. The image-domain network is not a residual learning network but a direct de-artifacting network. It is also based on U-net, but unlike the projection-domain network its upsampling uses bilinear interpolation, because more image information needs to be preserved.
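For the image-domain decoder, a sketch of the bilinear upsampling stage described above; the exact layer composition, kernel size and channel widths are assumptions, since the patent only specifies bilinear interpolation instead of max unpooling.

```python
import torch.nn as nn

# One decoder stage of the image-domain network: bilinear upsampling followed by a
# convolution with BN + LeakyReLU (channel counts are illustrative).
def bilinear_up_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(inplace=True),
    )
```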
Step 3: build the mixed-domain deep neural network. Unlike existing hybrid-domain networks that rely on a Radon transform layer or an enhancement network, the mixed-domain neural network of the invention uses a no-reference image quality evaluation network to pick image blocks from the dual-domain corrections and finally assemble a complete image.
Step 4: prepare the training data for the two single domains and the mixed domain. The input data for the projection domain are projection maps containing stripe artifacts, with the corresponding stripe artifacts as labels. The input data for the image domain are CT images containing ring artifacts, with the corresponding reference images as labels. To augment the training data, a blocking method is used in both the image domain and the projection domain: each complete image is divided into many overlapping image blocks, and the blocks with severe artifacts are selected as training data. Since the mixed domain uses a no-reference image quality evaluation network, its input data are image blocks containing artifacts and its labels are image quality evaluation index values; the invention uses the structural similarity index (SSIM) as the label, because the structural similarity between the artifact image and the reference image is strongly affected by the artifacts and is easy to compare.
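A small sketch of the patch-and-label preparation for the mixed-domain network, assuming scikit-image is available for SSIM; patch size and stride are illustrative choices, not values given in the patent.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def make_patch_labels(corrected, reference, patch=64, stride=32):
    """Cut a corrected slice and its reference into overlapping patches and
    compute per-patch SSIM as the quality label for the mixed-domain network."""
    data_range = float(reference.max() - reference.min())
    patches, labels = [], []
    h, w = corrected.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            p = corrected[y:y + patch, x:x + patch]
            r = reference[y:y + patch, x:x + patch]
            patches.append(p)
            labels.append(ssim(r, p, data_range=data_range))
    return np.stack(patches), np.asarray(labels)
```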
Step 5: train the image-domain, projection-domain and mixed-domain networks respectively and complete verification.
The invention is a post-processing algorithm based on dual-domain deep neural networks; the overall workflow is shown in Fig. 1. The ring-artifact image is forward projected to obtain a projection image; the ring-artifact image and its projection are each corrected by the corresponding domain network, and the results form a two-channel image, which is subdivided into image blocks; the mixed-domain network then selects and rearranges the blocks to form a single-channel, hybrid-corrected image. The mixed-domain network is a no-reference deep convolutional network: it evaluates the quality of the blocks after dual-domain correction, picks the blocks with fewer artifacts and better structure preservation, and reassembles the final corrected image. Note that correcting directly in the projection domain easily introduces secondary artifacts because of the strong continuity between projection image pixels; the projection-domain network used here is therefore a residual network whose output is the stripe artifact, so the network output must be subtracted from the input to obtain the corrected projection map. In addition, because the contrast of the images differs after the two corrections, histogram matching is applied during rearrangement so that the hybrid-corrected image shows no obvious blocking effect. The network architecture of each sub-part is described below.
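The workflow can be sketched end to end as follows, under several assumptions: proj_net, img_net and quality_net are hypothetical wrappers around the three trained networks operating on NumPy arrays, the geometry is parallel-beam so scikit-image's radon/iradon stand in for the real forward projector and filtered back projection, and histogram matching is applied once globally here rather than block by block. Patch size and angle sampling are illustrative.

```python
import numpy as np
from skimage.transform import radon, iradon
from skimage.exposure import match_histograms

def dual_domain_correct(artifact_img, proj_net, img_net, quality_net,
                        patch=64, n_angles=720):
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)

    # Projection-domain branch: forward project, subtract the learned stripe
    # artifact, then reconstruct back to the image domain with FBP.
    sino = radon(artifact_img, theta=theta)
    sino_corr = sino - proj_net(sino)                     # residual correction
    proj_branch = iradon(sino_corr, theta=theta,
                         output_size=artifact_img.shape[0])

    # Image-domain branch: direct de-artifacting network.
    img_branch = img_net(artifact_img)

    # Mixed-domain selection: compare both branches patch by patch and keep the
    # one the quality network scores higher; histogram-match first so the
    # reassembled image shows no obvious blocking effect.
    proj_branch = match_histograms(proj_branch, img_branch)
    out = np.empty_like(img_branch)
    h, w = img_branch.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            p_img = img_branch[y:y + patch, x:x + patch]
            p_prj = proj_branch[y:y + patch, x:x + patch]
            out[y:y + patch, x:x + patch] = (
                p_img if quality_net(p_img) >= quality_net(p_prj) else p_prj
            )
    return out
```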
1. Projection domain and image domain network architecture.
The network architecture of both domains is an improved U-net. The projection-domain architecture is shown in Fig. 2. It contains four types of convolution modules: convolution block 1 in the first half, convolution block 2 in the middle, convolution block 3 in the second half, and a final convolution block 4; every convolution layer except the last is followed by a BN + LeakyReLU layer. The number of channels grows from 1 to 512 through the successive convolution blocks 1 of the first half and shrinks from 512 back to 1 through the convolution blocks 3 of the second half. 2 x 2 max pooling is used for downsampling and 2 x 2 max unpooling for upsampling; together they form the encoding-decoding process of the network. The image-domain network is similar, except that, since it is not a residual network, its upsampling uses bilinear interpolation instead of max unpooling; this is the only difference.
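A simplified PyTorch sketch of the projection-domain layout just described (convolution stages with BN + LeakyReLU, 2 x 2 max pooling for downsampling and 2 x 2 max unpooling for upsampling). It omits the skip concatenations of a full U-net for brevity, and the per-stage channel widths are assumptions beyond the stated 1-to-512-to-1 progression.

```python
import torch
import torch.nn as nn

def conv_bn_lrelu(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(inplace=True),
    )

class ProjectionDomainNet(nn.Module):
    def __init__(self, widths=(64, 128, 256, 512)):
        super().__init__()
        self.enc = nn.ModuleList()
        in_ch = 1
        for w in widths:
            self.enc.append(conv_bn_lrelu(in_ch, w))
            in_ch = w
        self.pool = nn.MaxPool2d(2, return_indices=True)   # indices reused for unpooling
        self.unpool = nn.MaxUnpool2d(2)
        self.dec = nn.ModuleList()
        for w in reversed(widths[:-1]):
            self.dec.append(conv_bn_lrelu(in_ch, w))
            in_ch = w
        self.out_conv = nn.Conv2d(in_ch, 1, 3, padding=1)   # last conv, no BN/LReLU

    def forward(self, x):
        indices = []
        for block in self.enc[:-1]:
            x = block(x)
            x, idx = self.pool(x)
            indices.append(idx)
        x = self.enc[-1](x)
        for block in self.dec:
            x = block(x)                       # reduce channels first ...
            x = self.unpool(x, indices.pop())  # ... to match the stored pooling indices
        return self.out_conv(x)                # estimated stripe artifact (residual learning)
```

For the image-domain variant, the unpooling steps would be replaced by the bilinear upsampling block sketched earlier, and the network output would be the corrected image itself rather than an artifact estimate.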
2. Hybrid domain network architecture.
The mixed-domain network is a no-reference image quality evaluation network; its architecture is shown in Fig. 3. It extracts image features with 2 x 2 max pooling and 2 x 2 min pooling and computes an image quality score with several fully connected layers. Its input is a two-channel image block composed of the two domain-corrected images. After a block is fed into the network, its quality is evaluated channel by channel, each channel receives a score, and after comparison the block from the higher-scoring channel is output. The output blocks, in order, make up the final corrected image.
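A rough sketch of such a no-reference quality network and of the channel-wise selection, with layer counts and widths as assumptions; the patent only fixes the 2 x 2 max-pooling and 2 x 2 min-pooling feature extraction and the fully connected scoring head.

```python
import torch
import torch.nn as nn

class PatchQualityNet(nn.Module):
    def __init__(self, patch=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.LeakyReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.LeakyReLU(inplace=True),
        )
        self.max_pool = nn.MaxPool2d(2)
        feat = 2 * 32 * (patch // 2) ** 2          # max- and min-pooled features
        self.fc = nn.Sequential(
            nn.Linear(feat, 256), nn.LeakyReLU(inplace=True),
            nn.Linear(256, 1),                     # predicted quality score (e.g. SSIM)
        )

    def forward(self, patch):                      # patch: (B, 1, H, W)
        f = self.conv(patch)
        f_max = self.max_pool(f)
        f_min = -self.max_pool(-f)                 # 2x2 min-pooling via negated max-pooling
        f = torch.cat([f_max, f_min], dim=1).flatten(1)
        return self.fc(f).squeeze(1)               # one score per patch

def select_channel(net, dual_patches):
    """dual_patches: (B, 2, H, W); channel 0 = image-domain result, channel 1 =
    projection-domain result. Keep, per patch, the channel with the higher score."""
    s0 = net(dual_patches[:, 0:1])
    s1 = net(dual_patches[:, 1:2])
    keep_first = (s0 >= s1).view(-1, 1, 1, 1)
    return torch.where(keep_first, dual_patches[:, 0:1], dual_patches[:, 1:2])
```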
Experimental verification gives the results shown in Fig. 4. The image corrected only in the projection domain introduces shading artifacts, while the image corrected only in the image domain retains part of the ring artifacts. In contrast, the mixed-domain correction method proposed here leaves the fewest residual artifacts, introduces no new artifacts, and achieves a balance between preserving structural detail and removing ring artifacts.

Claims (3)

1. A double-domain CT image ring artifact removal method based on deep learning, characterized by comprising the following steps:
S1, building a deep neural network for removing artifacts in the projection domain, the projection-domain network being set up as a residual learning network and using max unpooling for upsampling;
S2, building a deep neural network for removing artifacts in the image domain, with bilinear interpolation used for upsampling;
S3, combining the images after dual-domain correction into a two-channel image, subdividing the two-channel image into image blocks, building a mixed-domain deep neural network, selecting image blocks from the dual-domain corrections with a no-reference image quality evaluation network, and finally forming a complete image;
S4, preparing training data for the image domain, the projection domain and the mixed domain;
the input data for the projection domain being projection maps containing stripe artifacts, with the corresponding stripe artifacts as label data;
the input data for the image domain being CT images containing ring artifacts, with the corresponding reference images as label data;
the input data for the mixed domain being image blocks containing artifacts, with image quality evaluation index values as label data;
S5, training the image-domain, projection-domain and mixed-domain networks respectively and completing verification.
2. The double-domain CT image ring artifact removal method based on deep learning of claim 1, wherein the projection-domain network and the image-domain network are both based on U-net.
3. The double-domain CT image ring artifact removal method based on deep learning of claim 1, wherein the training data for both the image domain and the projection domain are generated with a blocking method: each complete image is divided into a plurality of overlapping image blocks, the structural similarity index SSIM is used as the image quality evaluation index, and the image blocks with severe artifacts are selected as training data.
CN202110892449.3A 2021-08-04 2021-08-04 Double-domain CT image ring artifact removal method based on deep learning Active CN113554570B (en)

Priority Applications (1)

Application Number: CN202110892449.3A; Priority Date: 2021-08-04; Filing Date: 2021-08-04; Title: Double-domain CT image ring artifact removal method based on deep learning (CN113554570B)

Applications Claiming Priority (1)

Application Number: CN202110892449.3A; Priority Date: 2021-08-04; Filing Date: 2021-08-04; Title: Double-domain CT image ring artifact removal method based on deep learning (CN113554570B)

Publications (2)

Publication Number Publication Date
CN113554570A CN113554570A (en) 2021-10-26
CN113554570B (en) 2023-12-19

Family

ID=78134125

Family Applications (1)

Application Number: CN202110892449.3A; Title: Double-domain CT image ring artifact removal method based on deep learning; Priority Date: 2021-08-04; Filing Date: 2021-08-04; Status: Active (CN113554570B)

Country Status (1)

Country Link
CN (1) CN113554570B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10789696B2 (en) * 2018-05-24 2020-09-29 Tfi Digital Media Limited Patch selection for neural network based no-reference image quality assessment

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019173452A1 (en) * 2018-03-07 2019-09-12 Rensselaer Polytechnic Institute Deep neural network for ct metal artifact reduction
CN108648188A (en) * 2018-05-15 2018-10-12 南京邮电大学 A kind of non-reference picture quality appraisement method based on generation confrontation network
WO2019236560A1 (en) * 2018-06-04 2019-12-12 The Regents Of The University Of California Pair-wise or n-way learning framework for error and quality estimation
CN109145939A (en) * 2018-07-02 2019-01-04 南京师范大学 A kind of binary channels convolutional neural networks semantic segmentation method of Small object sensitivity
WO2020033355A1 (en) * 2018-08-06 2020-02-13 Vanderbilt University Deep-learning-based method for metal reduction in ct images and applications of same
CN109816742A (en) * 2018-12-14 2019-05-28 中国人民解放军战略支援部队信息工程大学 Cone-Beam CT geometry artifact minimizing technology based on full connection convolutional neural networks
CN109741315A (en) * 2018-12-29 2019-05-10 中国传媒大学 A kind of non-reference picture assessment method for encoding quality based on deeply study
CN110211194A (en) * 2019-05-21 2019-09-06 武汉理工大学 A method of sparse angular CT imaging artefacts are removed based on deep learning
CN112288668A (en) * 2020-09-22 2021-01-29 西北工业大学 Infrared and visible light image fusion method based on depth unsupervised dense convolution network
CN112508808A (en) * 2020-11-26 2021-03-16 中国人民解放军战略支援部队信息工程大学 CT (computed tomography) dual-domain joint metal artifact correction method based on generation countermeasure network

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
No Reference Video Quality Assessment Based on Parametric Analysis of HEVC Bitstream; Kosuke Izumi et al.; 2014 Sixth International Workshop on Quality of Multimedia Experience (QoMEX); full text *
Removing Ring Artefacts for Photon-Counting Detectors Using Neural Networks in Different Domains; Wei Fang et al.; IEEE Access; 2020-03-11; abstract, Sections II.A, II.D, II.E, Figures 2-5 *
Dual-energy CT iterative algorithm based on spectrum estimation and image reconstruction; 常少杰, 牟轩沁; CT Theory and Applications; Vol. 27, No. 1; full text *
Dual-domain learning based compression artifact reduction for JPEG compressed images; 王新欢, 任超, 何小海, 王正勇, 李兴龙; Information Technology and Network Security, No. 12; full text *
Low-illumination image enhancement for water surface scenes based on local generative adversarial networks; 刘文 et al.; Computer Engineering; Vol. 47, No. 5; full text *
朱肇光 et al. Digital Photogrammetry. In: Photogrammetry. Surveying and Mapping Press, 1995, 163-168. *

Also Published As

Publication number Publication date
CN113554570A (en) 2021-10-26

Similar Documents

Publication Publication Date Title
WO2022047625A1 (en) Image processing method and system, and computer storage medium
US8542944B2 (en) Method and apparatus for multi-scale based dynamic range compression and noise suppression
CN110490832B (en) Magnetic resonance image reconstruction method based on regularized depth image prior method
CN110717956B (en) L0 norm optimization reconstruction method guided by limited angle projection superpixel
EP3186954B1 (en) Image processing apparatus, image processing method, recording medium, and program
CN111539893A (en) Bayer image joint demosaicing denoising method based on guided filtering
CN112396672B (en) Sparse angle cone-beam CT image reconstruction method based on deep learning
CN101915901A (en) Magnetic resonance imaging method and device
CN112270646B (en) Super-resolution enhancement method based on residual dense jump network
CN116645283A (en) Low-dose CT image denoising method based on self-supervision perceptual loss multi-scale convolutional neural network
CN113592745B (en) Unsupervised MRI image restoration method based on antagonism domain self-adaption
CN113554570B (en) Double-domain CT image ring artifact removal method based on deep learning
CN109816747A (en) A kind of metal artifacts reduction method of Cranial Computed Tomography image
Kannan et al. Optimal decomposition level of discrete wavelet transform for pixel based fusion of multi-focused images
CN116630198A (en) Multi-scale fusion underwater image enhancement method combining self-adaptive gamma correction
KR20100097858A (en) Super-resolution using example-based neural networks
CN116823662A (en) Image denoising and deblurring method fused with original features
CN114926452B (en) NSST and beta divergence non-negative matrix factorization-based remote sensing image fusion method
CN107845081B (en) Magnetic resonance image denoising method
Lew et al. Adaptive Gaussian Wiener Filter for CT-Scan Images with Gaussian Noise Variance
Song et al. Unsupervised denoising for satellite imagery using wavelet subband cyclegan
Kadri et al. Colour Image Denoising using Curvelets and Scale Dependent Shrinkage
Tun et al. Joint Training of Noisy Image Patch and Impulse Response of Low-Pass Filter in CNN for Image Denoising
CN116523810B (en) Ultrasonic image processing method, device, equipment and medium
Kamarthi et al. Multimodal Medical Image Fusion Based on Intuitionistic Fuzzy Sets and Weighted Activity Measure in NSST Domain

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant