CN116993845A - CT image artifact removal method based on integrated depth network DnCNN - Google Patents

Info

Publication number: CN116993845A (application CN202310681091.9A); granted version CN116993845B (in Chinese)
Authority: CN (China)
Prior art keywords: dncnn, training, image, artifact, models
Legal status: Granted; Active
Inventors: 靖稳峰, 刘盼盼, 李星, 许鑫, 张雪松, 李欣雨
Current and original assignee: Xi'an Jiaotong University
Application filed by Xi'an Jiaotong University; priority to CN202310681091.9A

Classifications

    • G06T11/00 — 2D [Two Dimensional] image generation (G06T: image data processing or generation, in general)
        • G06T11/003 — Reconstruction from projections, e.g. tomography
        • G06T11/008 — Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G06N3/02 — Neural networks (G06N3/00: computing arrangements based on biological models)
        • G06N3/0464 — Convolutional networks [CNN, ConvNet]
        • G06N3/08 — Learning methods
    • Y02T10/40 — Engine management systems (Y02T: climate change mitigation technologies related to transportation)


Abstract

The invention relates to a CT image artifact removal method based on an integrated depth network DnCNN. CT cross-section data with artifacts are collected, the artifact regions are filled and repaired using the surrounding pixel values, and the repaired CT images serve as label data. The data are randomly divided into a training set and a test set, and each training sample is given an initial weight. Several DnCNN models are then trained to learn the residual between the original data and the label data; after each round of training, the weight coefficient of that round's model is calculated and the sample weights are updated. The weight coefficients are normalized and the trained DnCNN models are integrated into an ensemble. Finally, the artifact-removal performance of the integrated network model is evaluated on the test set. The method alleviates, to a certain extent, the difficulty of removing severe streak artifacts from CT images.

Description

CT image artifact removal method based on integrated depth network DnCNN
Technical Field
The invention belongs to the technical field of CT artifact correction, and particularly relates to a CT image artifact removal method based on an integrated depth network DnCNN.
Background
Since the 1970s, CT has been widely used in medical clinical diagnosis, disease screening, image-guided radiotherapy, surgery, and other fields, owing to its high resolution, high sensitivity, multi-directional imaging, and clear cross-sectional anatomy. In routine CT use, however, artifact faults occur frequently, producing abnormal image content unrelated to the scanned structures; such abnormalities degrade image quality and can even affect clinical diagnosis.
CT images exhibit many kinds of artifacts; those appearing as stripes of alternating brightness are called streak artifacts. Their causes are complex: studies have shown that beam hardening, scatter, and noise can all produce them. Some streak artifacts are very pronounced and seriously degrade image quality. Traditional methods mostly fill the artifact region with various filters, which also alters CT values in artifact-free regions, so severe streak artifacts can leave the restored image inconsistent overall or simply fail to be removed well. On the deep-learning side, large artifact-removal networks have many parameters and weak generalization, making clinical deployment difficult, while small networks have limited expressive power and handle severe artifacts poorly.
Disclosure of Invention
The invention aims to provide a CT image artifact removal method based on an integrated depth network DnCNN, which removes artifacts quickly while well preserving the tissue structures of artifact-free regions.
The technical scheme adopted by the invention is a CT image artifact removal method based on the integrated depth network DnCNN, implemented according to the following steps:
step 1, acquiring a CT image with an artifact, filling and repairing an image artifact part by using surrounding pixel point values of an image artifact region, and taking the repaired CT image as labeling data;
step 2, randomly dividing the labeling data into a training set and a testing set, and initializing the weight of each training sample;
step 3, constructing a plurality of DnCNN models, training the DnCNN models, calculating the weight coefficient of the round of models after each round of training is completed, and updating the weight of the sample; normalizing the weight coefficient, and integrating the plurality of trained DnCNN models;
and 4, finally, performing artifact removal performance evaluation on the integrated network model on the test set to obtain an artifact-removal ensemble network, and removing artifacts from artifact-bearing CT images through the artifact-removal ensemble network.
The invention is also characterized in that:
the specific process of the step 1 is as follows:
step 1.1, frame the artifact regions in the artifact-bearing CT image with bounding boxes and connect them into a closed artifact region;
step 1.2, select suitable pixel points in an artifact-free region of the image, and fill the closed artifact region with region pixel values starting from those points;
step 1.3, save the repaired image; repeat steps 1.1-1.2 for each artifact region, and save the final repaired image in dcm format as the label data.
The specific process of the step 2 is as follows:
step 2.1, converting the labeling data into Tensor data types applicable to a Pytorch framework, and dividing the data into a training set and a testing set;
step 2.2, initialize the weight of each training sample as
ω_1i = 1/n, i = 1, 2, …, n,
where n represents the total number of training samples.
The specific process in the step 3 is as follows:
step 3.1, construct a plurality of DnCNN models with epoch = 150, batch size = 1, 5×5 convolution kernels, and 17 layers; the last convolution layer has 1 convolution kernel and every other convolution layer has 64;
step 3.2, training the kth DnCNN model through training samples and the weight of each training sample to obtain a plurality of trained DnCNN models and corresponding weight coefficients thereof;
and 3.3, normalizing the weight coefficient, and integrating the plurality of trained DnCNN models.
The specific process of the step 3.2 is as follows:
step 3.2.1, input the training samples of the training set into a DnCNN model for training; during training, the DnCNN model with the minimum sample average weighted loss is taken as the model m_k obtained in this round. The sample average weighted loss is
loss_k = Σ_i ω_ki · (1/N) · Σ_{j=1}^N ( output_ki(j) − target_i(j) )²,
where ω_ki represents the weight of the ith training sample when training the kth model, output_ki represents the output for the ith sample in each epoch, target_i is the ith label image, and N represents the total number of pixels in each CT image;
step 3.2.2, for model m_k, calculate the relative loss of each training sample i as
l_ki = L_ki / max_j L_kj,
where L_ki = (1/N) · Σ_j ( output_ki(j) − target_i(j) )² denotes the loss of model m_k on training sample i;
step 3.2.3, from the relative losses and the current weights of the training samples, calculate the weighted error rate e_k of model m_k:
e_k = Σ_{i=1}^n ω_ki · l_ki;
step 3.2.4, based on the weighted error rate e_k, calculate the weight coefficient α_k of model m_k:
α_k = ln( (1 − e_k) / e_k );
step 3.2.5, update the weight of each training sample i from its relative loss and the weight coefficient:
ω_{k+1,i} = ω_ki · β_k^{(1 − l_ki)} / Z_k, with β_k = e_k / (1 − e_k),
where Z_k is a normalization factor that makes the updated weights sum to 1.
Steps 3.1-3.2 are cycled K times to obtain K trained DnCNN models and their corresponding weight coefficients.
The specific process in the step 3.3 is as follows:
Normalize the weight coefficients:
ᾱ_k = α_k / Σ_{j=1}^K α_j.
Integrate the trained DnCNN models according to
m(y) = Σ_{k=1}^K ᾱ_k · m_k(y).
the integrated model m is a final CT image artifact removal model.
The beneficial effects of the invention are as follows:
1) DnCNN is chosen for residual learning; it has few parameters, strong generalization ability, and small storage requirements, and removes artifacts in CT images better than competing methods;
2) the network first learns the residual between the artifact image and the label image, and the corrected image is finally obtained by subtracting the residual image output by the network from the artifact image; this minimizes damage to artifact-free regions and preserves their tissue structure;
3) by introducing training-sample weights, the invention emphasizes artifact images that are difficult to repair, so that even these images finally achieve a good artifact-removal effect.
Drawings
FIG. 1 is a flow chart of a CT image artifact removal method based on an integrated depth network DnCNN;
FIG. 2 is a flow chart of labeling data provided by the present invention;
FIG. 3 is a cross-sectional CT artifact image (a) and an artificially labeled artifact repair map (b) of an embodiment of the present invention;
FIG. 4 is a schematic diagram of an overall structure of DnCNN de-artifacting according to an embodiment of the present invention;
FIG. 5 shows CT artifact images (a1) and (a2), integrated network output images (b1) and (b2), and labeled label images (c1) and (c2) of embodiment 1 of the present invention;
FIG. 6 (a) is a CT artifact image of embodiment 2 of the present invention;
FIG. 6 (b) is an integrated network output image of embodiment 2 of the present invention;
FIG. 6 (c) is a single network output image of embodiment 2 of the present invention;
FIG. 7 (a) is a CT artifact image of embodiment 3 of the present invention;
fig. 7 (b) is an integrated network output image of embodiment 3 of the present invention;
fig. 7 (c) is a single network output image of embodiment 3 of the present invention.
Detailed Description
The invention will be described in detail below with reference to the drawings and the detailed description.
The invention discloses a CT image artifact removal method based on an integrated depth network DnCNN, which is implemented as shown in figure 1, and specifically comprises the following steps:
step 1, acquire clinical CT images with streak artifacts, fill and repair the artifact regions using the surrounding pixel values, and take the repaired CT images as label data; the repair workflow is shown in FIG. 2. The specific process is as follows:
step 1.1, frame the artifact regions in the artifact-bearing CT image with bounding boxes and connect them into a closed artifact region; a cross-sectional CT artifact image is shown in FIG. 3(a), and the manually labeled artifact repair image in FIG. 3(b);
step 1.2, select suitable pixel points in an artifact-free region of the image, and fill the closed artifact region with region pixel values starting from those points;
step 1.3, save the repaired image; if the CT image contains several artifact regions, repeat the framing, repair, and saving for each of them, and save the final repaired image in dcm format as the label data.
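The pixel-fill repair in steps 1.1-1.3 is described only loosely, so the sketch below is a minimal stand-in rather than the patented procedure: it fills a rectangular artifact region with the mean of the one-pixel ring of surrounding pixels. The function name and the rectangular-region assumption are illustrative; a practical pipeline would use a proper inpainting routine (e.g. OpenCV's cv2.inpaint) on an arbitrary mask.

```python
import numpy as np

def fill_region_with_border_mean(img, r0, r1, c0, c1):
    """Replace img[r0:r1, c0:c1] with the mean of the one-pixel ring around it."""
    out = img.astype(float).copy()
    mask = np.zeros(img.shape, dtype=bool)
    mask[r0:r1, c0:c1] = True                      # the (closed) artifact region
    ring = np.zeros_like(mask)
    ring[max(r0 - 1, 0):r1 + 1, max(c0 - 1, 0):c1 + 1] = True
    ring &= ~mask                                  # border pixels just outside it
    out[mask] = out[ring].mean()                   # fill from surrounding values
    return out

# Toy 6x6 slice: an artifact block of value 50 surrounded by tissue value 1.
img = np.ones((6, 6))
img[2:4, 2:4] = 50.0
repaired = fill_region_with_border_mean(img, 2, 4, 2, 4)
```

After the call, the artifact block takes the mean of its surrounding ring (here 1.0) while pixels outside the region are untouched.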
Step 2, randomly dividing the labeling data into a training set and a testing set, and initializing the weight of each training sample; the specific process is as follows:
step 2.1, converting the labeling data into Tensor data types applicable to a Pytorch framework, and dividing the data into a training set and a testing set;
step 2.2, to place more emphasis on samples with large training errors, each training sample is given a weight before training, and after each DnCNN model finishes training the sample weights are updated according to the sample-weighted error. Each training sample is initialized with weight
ω_1i = 1/n, i = 1, 2, …, n,
where n represents the total number of training samples.
Step 3, construct a plurality of DnCNN models. The network is a cascade structure containing only Conv, BN, and ReLU layers, with no skip connections. From the artifact-bearing image y, the network estimates the clean image x, but its direct output is the residual image v = y − x between the two; the final clean image is recovered as x = y − v. This approach is called residual learning (Residual Learning).
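The architecture just described — a plain Conv-BN-ReLU cascade whose direct output is the residual image v, with the clean image recovered as x = y − v — can be sketched in PyTorch as follows. The class name and the shallow depth used in the demo call are illustrative, and training details (loss, optimizer) are omitted.

```python
import torch
import torch.nn as nn

class DnCNN(nn.Module):
    """Conv-BN-ReLU cascade that predicts the residual (artifact) image v."""
    def __init__(self, depth=17, channels=64, kernel_size=5):
        super().__init__()
        pad = kernel_size // 2
        layers = [nn.Conv2d(1, channels, kernel_size, padding=pad),
                  nn.ReLU(inplace=True)]
        for _ in range(depth - 2):       # middle layers: Conv + BN + ReLU
            layers += [nn.Conv2d(channels, channels, kernel_size, padding=pad, bias=False),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        # last layer has a single convolution kernel: one-channel residual image
        layers.append(nn.Conv2d(channels, 1, kernel_size, padding=pad))
        self.body = nn.Sequential(*layers)

    def forward(self, y):
        v = self.body(y)                 # direct network output: residual image v
        return y - v                     # final clean image x = y - v

# Shape check on a dummy single-channel CT slice (shallow depth for speed):
model = DnCNN(depth=5).eval()
with torch.no_grad():
    x = model(torch.randn(1, 1, 32, 32))
```

With the patent's stated hyperparameters one would instead instantiate DnCNN(depth=17, channels=64, kernel_size=5); padding of kernel_size // 2 keeps the output the same size as the input.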
Training a plurality of DnCNN models, calculating the weight coefficient of the round of models after each round of training is completed, and updating the sample weight; normalizing the weight coefficient, and integrating the plurality of trained DnCNN models; the specific process is as follows:
step 3.1, construct a plurality of DnCNN models with epoch = 150, batch size = 1, 5×5 convolution kernels, and 17 layers; the last convolution layer has 1 convolution kernel and every other convolution layer has 64;
step 3.2, training the kth DnCNN model through training samples and the weight of each training sample to obtain a plurality of trained DnCNN models and corresponding weight coefficients thereof; the specific process is as follows:
step 3.2.1, input the training samples of the training set into a DnCNN model for training; during training, the DnCNN model with the minimum sample average weighted loss is taken as the model m_k obtained in this round. The sample average weighted loss is
loss_k = Σ_i ω_ki · (1/N) · Σ_{j=1}^N ( output_ki(j) − target_i(j) )²,
where ω_ki represents the weight of the ith training sample when training the kth model, output_ki represents the output for the ith sample in each epoch, target_i is the ith label image, and N represents the total number of pixels in each CT image;
In the training process, to emphasize samples with large training errors, the weights of such samples are increased after each DnCNN network finishes training, and the weights of samples with small training errors are decreased. Step 3.2.2, for the obtained model m_k, calculate the relative loss of each training sample i as
l_ki = L_ki / max_j L_kj,
where L_ki = (1/N) · Σ_j ( output_ki(j) − target_i(j) )² denotes the loss of model m_k on training sample i;
step 3.2.3, from the relative losses and the current weights of the training samples, calculate the weighted error rate e_k of model m_k:
e_k = Σ_{i=1}^n ω_ki · l_ki.
The weighted error rate measures the performance of model m_k over the entire training set: the greater the weighted error rate, the worse the model's performance, and vice versa.
Step 3.2.4, based on the weighted error rate e_k, calculate the weight coefficient α_k of model m_k:
α_k = ln( (1 − e_k) / e_k ).
The formula for the weight coefficient shows that a model with a large weighted error rate receives a small weight coefficient, and vice versa, which is reasonable in practical application.
Step 3.2.5, update the weight of each training sample i from its relative loss and the weight coefficient:
ω_{k+1,i} = ω_ki · β_k^{(1 − l_ki)} / Z_k, with β_k = e_k / (1 − e_k),
where Z_k is a normalization factor that makes the updated weights sum to 1.
Steps 3.1-3.2 are cycled K times to obtain K trained DnCNN models and their corresponding weight coefficients.
And 3.3, normalizing the weight coefficient, and integrating the plurality of trained DnCNN models. The specific process is as follows:
Normalize the weight coefficients:
ᾱ_k = α_k / Σ_{j=1}^K α_j.
Integrate the trained DnCNN models according to
m(y) = Σ_{k=1}^K ᾱ_k · m_k(y).
the integrated model m is a final CT image artifact removal model.
As the weight-update formula shows, a sample with large relative loss receives a larger weight and is emphasized when the next DnCNN model is trained, while a sample with small relative loss receives a smaller weight and becomes relatively unimportant in the next round; this helps the ensemble learn the samples with large training errors better.
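The per-round bookkeeping of steps 3.2.2-3.2.5 can be sketched numerically. The exact formulas are not reproduced in this text, so the AdaBoost.R2-style forms below (relative loss against the round's maximum loss, β_k = e_k/(1 − e_k), weight coefficient α_k = ln(1/β_k)) are an assumption chosen to match the behavior described above: a model with a larger weighted error rate receives a smaller coefficient, and samples with larger relative loss receive larger weights in the next round.

```python
import numpy as np

def boosting_round(weights, losses):
    """One AdaBoost.R2-style round (assumed formulas): returns (alpha_k, new weights)."""
    rel = losses / losses.max()            # relative loss l_ki in [0, 1]
    e_k = np.sum(weights * rel)            # weighted error rate of model m_k
    beta = e_k / (1.0 - e_k)               # beta_k < 1 when e_k < 0.5
    alpha = np.log(1.0 / beta)             # larger e_k -> smaller coefficient
    new_w = weights * beta ** (1.0 - rel)  # weights of hard samples shrink least
    return alpha, new_w / new_w.sum()      # renormalize so weights sum to 1

# Three samples; the third is "hard" (largest loss), so its weight should grow:
w0 = np.full(3, 1 / 3)
alpha_k, w1 = boosting_round(w0, np.array([0.1, 0.2, 1.0]))
# After K rounds, the alphas are normalized and the K models are averaged:
# m(y) = sum_k (alpha_k / sum_j alpha_j) * m_k(y)
```

Running this, the hard sample ends up with the largest normalized weight, which is exactly the emphasis mechanism the paragraph above describes.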
And 4, finally, performing artifact removal performance evaluation on the integrated network model on the test set to obtain an artifact-removal ensemble network, and removing artifacts from artifact-bearing CT images through the artifact-removal ensemble network.
Example 1
713 clinical CT images with streak artifacts were acquired; 672 were used as the training set and 41 as the test set. The DnCNN network structure used in training is shown in FIG. 4. The CT artifact images are shown in FIG. 5(a1) and (a2), the integrated network outputs in FIG. 5(b1) and (b2), and the labeled label images in FIG. 5(c1) and (c2).
Comparing the network inputs, outputs, and corresponding label images in FIG. 5 shows that for CT images with less severe streak artifacts, e.g. (a1), the ensemble output differs little from the label, i.e. a good artifact-removal effect is achieved; for relatively severe artifact images, e.g. (a2), most artifacts are successfully removed and the network output is a clear visual improvement over the input. Moreover, the network removes artifacts without altering CT values in artifact-free areas, which retain their original structure; compared with filtering-based methods, the algorithm preserves artifact-free information well while removing artifacts. The ensemble network therefore removes streak artifacts effectively.
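The text does not specify which metric the "artifact removal performance evaluation" of step 4 uses; PSNR against the labeled repair image is a common choice and is used here only as an illustrative assumption.

```python
import numpy as np

def psnr(pred, target, data_range=1.0):
    """Peak signal-to-noise ratio (dB) between a network output and its label."""
    mse = np.mean((pred.astype(float) - target.astype(float)) ** 2)
    if mse == 0:
        return float("inf")                # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# A uniform 0.1 error over a unit-range image gives MSE = 0.01, i.e. 20 dB:
target = np.zeros((4, 4))
pred = np.full((4, 4), 0.1)
score = psnr(pred, target)
```

For CT, data_range would be set to the Hounsfield window actually used; higher PSNR on the test set indicates more faithful artifact removal relative to the labels.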
Example 2
The CT artifact image is shown in FIG. 6(a), the integrated network output in FIG. 6(b), and the single-network output in FIG. 6(c); in both (b) and (c), I denotes the repaired image and II the residual image.
Example 3
The CT artifact image is shown in FIG. 7(a), the integrated network output in FIG. 7(b), and the single-network output in FIG. 7(c); in both (b) and (c), I denotes the repaired image and II the residual image.
The two examples of FIG. 6 and FIG. 7 compare the streak-removal effect of a single network with that of the ensemble. Comparing the outputs shows that the ensemble removes artifacts more thoroughly than the single network, as is clear from the residual images: the ensemble learns a larger residual, i.e. removes more of the artifact in the original image. The residual images also confirm the advantage of residual-learning artifact removal: the influence on CT values of artifact-free regions is minimized and the original tissue structure is well preserved. In addition, for the severely artifacted image of FIG. 7, the ensemble improves markedly on the single network, so the algorithm alleviates, to a certain extent, the difficulty of removing severe artifacts.
Through the mode, the invention discloses a CT image artifact removal method based on an integrated depth network DnCNN, which improves the image quality after artifact repair so as to meet the actual use requirements.

Claims (6)

1. The CT image artifact removal method based on the integrated depth network DnCNN is characterized by comprising the following steps of:
step 1, acquiring a CT image with an artifact, filling and repairing an image artifact part by using surrounding pixel point values of an image artifact region, and taking the repaired CT image as labeling data;
step 2, randomly dividing the labeling data into a training set and a testing set, and initializing the weight of each training sample;
step 3, constructing a plurality of DnCNN models, training the DnCNN models, calculating the weight coefficient of the round of models after each round of training is completed, and updating the weight of the sample; normalizing the weight coefficient, and integrating the plurality of trained DnCNN models;
and 4, finally, performing artifact removal performance evaluation on the integrated network model on the test set to obtain an artifact-removal ensemble network, and removing artifacts from artifact-bearing CT images through the artifact-removal ensemble network.
2. The method for removing artifacts from CT images based on integrated depth network DnCNN according to claim 1, wherein the specific process of step 1 is as follows:
step 1.1, frame the artifact regions in the artifact-bearing CT image with bounding boxes and connect them into a closed artifact region;
step 1.2, select suitable pixel points in an artifact-free region of the image, and fill the closed artifact region with region pixel values starting from those points;
step 1.3, save the repaired image; repeat steps 1.1-1.2 for each artifact region, and save the final repaired image in dcm format as the label data.
3. The method for removing artifacts from CT images based on integrated depth network DnCNN according to claim 1, wherein step 2 comprises the following specific procedures:
step 2.1, converting the labeling data into Tensor data types applicable to a Pytorch framework, and dividing the data into a training set and a testing set;
step 2.2, initialize the weight of each training sample as
ω_1i = 1/n, i = 1, 2, …, n,
where n represents the total number of training samples.
4. The method for removing artifacts from CT images based on integrated depth network DnCNN according to claim 1, wherein the specific process in step 3 is as follows:
step 3.1, construct a plurality of DnCNN models with epoch = 150, batch size = 1, 5×5 convolution kernels, and 17 layers; the last convolution layer has 1 convolution kernel and every other convolution layer has 64;
step 3.2, training the kth DnCNN model through training samples and the weight of each training sample to obtain a plurality of trained DnCNN models and corresponding weight coefficients thereof;
and 3.3, normalizing the weight coefficient, and integrating the plurality of trained DnCNN models.
5. The method for removing artifacts from CT images based on integrated depth network DnCNN according to claim 4, wherein the specific procedure in step 3.2 is as follows:
step 3.2.1, input the training samples of the training set into a DnCNN model for training; during training, the DnCNN model with the minimum sample average weighted loss is taken as the model m_k obtained in this round. The sample average weighted loss is
loss_k = Σ_i ω_ki · (1/N) · Σ_{j=1}^N ( output_ki(j) − target_i(j) )²,
where ω_ki represents the weight of the ith training sample when training the kth model, output_ki represents the output for the ith sample in each epoch, target_i is the ith label image, and N represents the total number of pixels in each CT image;
step 3.2.2, for model m_k, calculate the relative loss of each training sample i as
l_ki = L_ki / max_j L_kj,
where L_ki = (1/N) · Σ_j ( output_ki(j) − target_i(j) )² denotes the loss of model m_k on training sample i;
step 3.2.3, from the relative losses and the current weights of the training samples, calculate the weighted error rate e_k of model m_k:
e_k = Σ_{i=1}^n ω_ki · l_ki;
step 3.2.4, based on the weighted error rate e_k, calculate the weight coefficient α_k of model m_k:
α_k = ln( (1 − e_k) / e_k );
step 3.2.5, update the weight of each training sample i from its relative loss and the weight coefficient:
ω_{k+1,i} = ω_ki · β_k^{(1 − l_ki)} / Z_k, with β_k = e_k / (1 − e_k),
where Z_k is a normalization factor that makes the updated weights sum to 1.
Steps 3.1-3.2 are cycled K times to obtain K trained DnCNN models and their corresponding weight coefficients.
6. The method for removing artifacts from CT images based on integrated depth network DnCNN according to claim 4, wherein the specific procedure in step 3.3 is as follows:
Normalize the weight coefficients:
ᾱ_k = α_k / Σ_{j=1}^K α_j.
Integrate the trained DnCNN models according to
m(y) = Σ_{k=1}^K ᾱ_k · m_k(y).
the integrated model m is a final CT image artifact removal model.
Application CN202310681091.9A, filed 2023-06-09 (priority date 2023-06-09): CT image artifact removal method based on integrated depth network DnCNN — Active, granted as CN116993845B.


Publications (2)

CN116993845A (application) — published 2023-11-03
CN116993845B (grant) — published 2024-03-15



Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204467A (en) * 2016-06-27 2016-12-07 深圳市未来媒体技术研究院 Image denoising method based on cascaded residual neural network
CN107480772A (en) * 2017-08-08 2017-12-15 浙江大学 License plate super-resolution processing method and system based on deep learning
CN108492269A (en) * 2018-03-23 2018-09-04 西安电子科技大学 Low-dose CT image denoising method based on gradient-regularized convolutional neural network
US20190304069A1 (en) * 2018-03-29 2019-10-03 Pixar Denoising Monte Carlo renderings using neural networks with asymmetric loss
CN108764281A (en) * 2018-04-18 2018-11-06 华南理工大学 Cross-task deep network image classification method based on semi-supervised self-paced learning
CN108596861A (en) * 2018-05-14 2018-09-28 南方医科大学 CT image metal artifact removal method based on deep residual learning
CN110390646A (en) * 2019-06-12 2019-10-29 西南科技大学 Detail-preserving image denoising method
CN110648292A (en) * 2019-09-11 2020-01-03 昆明理工大学 High-noise image denoising method based on deep convolutional network
CN110738616A (en) * 2019-10-12 2020-01-31 成都考拉悠然科技有限公司 Image denoising method with detail information learning capability
US20210150674A1 (en) * 2019-11-15 2021-05-20 Disney Enterprises, Inc. Techniques for robust image denoising
WO2021232653A1 (en) * 2020-05-21 2021-11-25 浙江大学 Pet image reconstruction algorithm combining filtered back-projection algorithm and neural network
US20220107378A1 (en) * 2020-10-07 2022-04-07 Hyperfine, Inc. Deep learning methods for noise suppression in medical imaging
WO2022083026A1 (en) * 2020-10-21 2022-04-28 华中科技大学 Ultrasound image denoising model establishing method and ultrasound image denoising method
CN112509094A (en) * 2020-12-22 2021-03-16 西安交通大学 JPEG image compression artifact elimination algorithm based on cascaded residual encoder-decoder network
KR20220135683A (en) * 2021-03-31 2022-10-07 연세대학교 산학협력단 Apparatus for Denoising Low-Dose CT Images and Learning Apparatus and Method Therefor
US20220414453A1 (en) * 2021-06-28 2022-12-29 X Development Llc Data augmentation using brain emulation neural networks
CN114387236A (en) * 2021-12-31 2022-04-22 浙江大学嘉兴研究院 Low-dose Sinogram denoising and PET image reconstruction method based on convolutional neural network
CN115330615A (en) * 2022-08-09 2022-11-11 腾讯医疗健康(深圳)有限公司 Method, apparatus, device, medium, and program product for training artifact removal model
CN115641443A (en) * 2022-12-08 2023-01-24 北京鹰瞳科技发展股份有限公司 Method for training image segmentation network model, method for processing image and product
CN116167947A (en) * 2023-04-13 2023-05-26 西南石油大学 Image noise reduction method based on noise level estimation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Dahim Choi et al.: "Integration of 2D iteration and a 3D CNN-based model for multi-type artifact suppression in C-arm cone-beam CT", Machine Vision and Applications, pages 1-14 *
Man M. Ho et al.: "RR-DnCNN v2.0: Enhanced Restoration-Reconstruction Deep Neural Network for Down-Sampling-Based Video Coding", IEEE Transactions on Image Processing, vol. 30, pages 1-14 *
Jia Huixing; Zhang Yujin: "Fast AdaBoost training algorithm based on dynamic weight trimming", Chinese Journal of Computers, no. 02 *

Also Published As

Publication number Publication date
CN116993845B (en) 2024-03-15

Similar Documents

Publication Publication Date Title
CN110232383B (en) Focus image recognition method and focus image recognition system based on deep learning model
CN109816742B (en) Cone beam CT geometric artifact removing method based on fully-connected convolutional neural network
CN109584254A (en) Heart left ventricle segmentation method based on deep fully convolutional neural networks
CN107203989A (en) End-to-end chest CT image segmentation method based on fully convolutional neural networks
CN110503614B (en) Magnetic resonance image denoising method based on sparse dictionary learning
CN111429379B (en) Low-dose CT image denoising method and system based on self-supervision learning
CN109949235A (en) Chest X-ray film denoising method based on deep convolutional neural networks
Riddell et al. The watershed algorithm: a method to segment noisy PET transmission images
CN110874860B (en) Target extraction method of symmetrical supervision model based on mixed loss function
CN112017131B (en) CT image metal artifact removing method and device and computer readable storage medium
CN110827232B (en) Cross-modality MRI (magnetic resonance imaging) synthesis method based on morphological characteristics GAN (gamma GAN)
CN109064521A (en) CBCT artifact removal method using deep learning
CN111161182B (en) MR structure information constrained non-local mean guided PET image partial volume correction method
CN109816747A (en) Metal artifact reduction method for cranial CT images
CN116993845B (en) CT image artifact removal method based on integrated depth network DnCNN
CN114187235A (en) Artifact-insensitive medical image deformation field extraction method, and registration method and device
CN117333750A (en) Spatial registration and local global multi-scale multi-modal medical image fusion method
CN116894783A (en) Metal artifact removal method based on a time-varying-constrained generative adversarial network model
CN116630342A (en) Abdominal MRI image segmentation system, method, electronic device, and storage medium
CN116071373A (en) Automatic tongue segmentation method based on a U-net model with fused PCA
CN113838161B (en) Sparse projection reconstruction method based on graph learning
CN115861172A (en) Wall motion estimation method and device based on self-adaptive regularized optical flow model
CN112488125B (en) Reconstruction method and system based on high-speed visual diagnosis and BP neural network
Roy et al. MDL-IWS: multi-view deep learning with iterative watershed for pulmonary fissure segmentation
CN111091514A (en) Oral cavity CBCT image denoising method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant