WO2022204868A1 - Method for correcting image artifacts on basis of multi-constraint convolutional neural network - Google Patents

Method for correcting image artifacts on basis of multi-constraint convolutional neural network Download PDF

Info

Publication number
WO2022204868A1
WO2022204868A1 (PCT/CN2021/083561, CN2021083561W)
Authority
WO
WIPO (PCT)
Prior art keywords
loss
mse
ker
image
per
Prior art date
Application number
PCT/CN2021/083561
Other languages
French (fr)
Chinese (zh)
Inventor
郑海荣
李彦明
万丽雯
胡战利
邓富权
Original Assignee
深圳高性能医疗器械国家研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳高性能医疗器械国家研究院有限公司 filed Critical 深圳高性能医疗器械国家研究院有限公司
Priority to PCT/CN2021/083561 priority Critical patent/WO2022204868A1/en
Publication of WO2022204868A1 publication Critical patent/WO2022204868A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/60Image enhancement or restoration using machine learning, e.g. neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Definitions

  • the invention relates to the technical field of medical image processing, and more particularly, to a method for correcting image artifacts based on a convolutional neural network with multiple constraints.
  • taking Coronary Computed Tomography Angiography (CCTA) as an example, it is a non-invasive imaging method that uses computers and X-rays to obtain tomographic images of a patient's heart after intravenous injection of an appropriate contrast agent.
  • the detection method has the advantages of a short scanning time, extensive compositional information and non-invasive visualization of the vessel wall, and is suitable for the diagnosis of suspected coronary heart disease, the follow-up of coronary artery bypass surgery, the assessment of valvular heart disease and the assessment of cardiac mass.
  • CCTA-acquired images may exhibit motion artifacts and require re-examination.
  • the formation of motion artifacts during coronary CT imaging is due to the displacement of image pixels while the CT system acquires projection data from different angles.
  • the degree of motion artifact depends on the rate of motion and on the correction capability of the image reconstruction algorithm.
  • the elimination of motion artifacts generally starts from two aspects: the first is to control the heart rate, i.e., to reduce the heart rate of the subject, prolong the cardiac cycle, slow the movement of the coronary arteries and prolong the period of low-speed coronary movement, which reduces the demand on temporal resolution during imaging.
  • the second is to improve the temporal resolution.
  • temporal resolution can be improved in two ways: through hardware and through software.
  • in terms of hardware, temporal resolution is improved by increasing the rotation speed of the X-ray tube, using a wide-body detector and adopting dual-detector technology.
  • however, increasing the rotational speed of the tube is limited by physical properties, multi-detector technology is limited by space, and wide-body detector technology is limited by economic cost.
  • in terms of software, multi-sector reconstruction, image reconstruction based on compressed sensing (Prior Image Constrained Compressed Sensing, PICCS), motion estimation and compensation algorithms, and motion correction technology (Snap Shot Freeze, SSF) can effectively improve temporal resolution.
  • the purpose of the present invention is to overcome the above-mentioned defects of the prior art, and to provide a method for correcting image artifacts based on a convolutional neural network with multiple constraints.
  • a method for correcting image artifacts based on a convolutional neural network with multiple constraints includes the following steps:
  • Constructing a convolutional neural network with a backbone structure layer and a plurality of branch structure layers, wherein the backbone structure layer takes the original image as input, the branch structure layers respectively take as input images of different resolution levels obtained by downsampling the original image, and the feature image matrix of the last convolutional layer of each branch structure layer is restored by upsampling to a matrix image of the same size as the target image;
  • the convolutional neural network is trained with the goal of convergence of a set overall loss function, learning the mapping relationship from a low-dose original image to a standard-dose target image; during training, a loss value is set for the backbone structure layer and for each of the branch structure layers.
  • a medical image processing method includes: down-sampling the image to be processed multiple times to obtain images of different resolution levels; and inputting the images of different resolution levels together with the image to be processed into a convolutional neural network trained by the above method of the present invention to obtain an output image.
  • the advantage of the present invention is that a convolutional neural network is used to eliminate motion artifacts in the image: in terms of hardware, the method is not limited by physical characteristics, space or economic cost; in terms of software, it does not require complex calculation, is not affected by changes in the patient's heart rate, and yields an artifact-free image; in terms of deep learning, it can serve as a tool that directly removes motion artifacts from medical images, so traditional methods are no longer needed for denoising.
  • the invention realizes image noise reduction based on a multi-constrained multi-level convolutional neural network, improves the image peak signal-to-noise ratio and structural similarity, and enhances image detail information, thereby obtaining medical images that better meet diagnostic requirements.
  • FIG. 1 is a flowchart of a method for correcting image artifacts based on a convolutional neural network with multiple constraints according to an embodiment of the present invention
  • FIG. 2 is a schematic structural diagram of a densely connected residual block according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a stacked dense residual block according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an attention mechanism according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a residual block according to the present invention.
  • FIG. 6 is a schematic diagram of a multi-constrained multi-level convolutional neural network according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of an experimental result according to an embodiment of the present invention.
  • the present invention can be applied to motion artifact correction, noise reduction, etc. of CCTA images or other types of medical images, or can be applied to the field of image super-resolution after proper modification.
  • the artifact correction of CT images is taken as an example for description below.
  • the present invention designs a multi-level convolutional neural network based on multiple constraints to improve low-dose CT imaging quality.
  • the reuse of low-dimensional and high-dimensional information and the fusion of local and non-local information can be greatly improved through the attention mechanism, thereby enhancing the performance of traditional convolution operations and largely eliminating the noise and artifacts of low-dose CT images.
  • a joint loss function is specially designed to improve the quality of CT images, and the generated CT images are further guaranteed to meet the requirements of medical diagnosis by combining multiple loss functions.
  • the provided method for correcting image artifacts based on a convolutional neural network with multiple constraints includes the following steps.
  • Step S110 design a stacked dense residual block structure.
  • the traditional convolutional neural network stacks multiple convolutional layers, and realizes the feature extraction of the image through these convolutional layers, thereby enhancing the features and details of the image.
  • the DCR (densely connected and residual) block proposed by Bumjun Park in "Densely Connected Hierarchical Network for Image Denoising" (CVPR 2019) is used to extract CCTA image features.
  • the DCR structure shown in Figure 2 has the advantage that this module has a better convergence speed than the same number of stacked convolutional layers.
  • the present invention regards the DCR as an ordinary convolution module and stacks it densely to build a stacked dense residual block structure (DDCR), thereby obtaining a large number of features from low-dimensional to high-dimensional; compared with the same number of plainly stacked convolutional layers, its convergence is better.
  • DDCR stacked dense residual block structure
  • Step S120 designing an attention mechanism.
  • an attention mechanism is designed to assist the main branch of the network to learn the local features of CCTA, as shown in Figure 4.
  • the use of attention mechanism can improve the utilization of image feature information.
  • the attention mechanism can fuse the local and global features of the image, thereby improving the artifact removal effect.
  • the attention mechanism can be thought of as a plug-and-play module that can be added to any traditional convolutional neural network workflow to improve the performance of the network.
  • Step S130 designing the residual block of the convolutional neural network.
  • residual blocks are a common way to extract low-dimensional to high-dimensional information from images; compared with ordinary convolutional layers, they are less likely to lose low-dimensional features. In image de-artifacting, this approach preserves the overall characteristics of the image while also learning clear coronary artery features, so as to remove artifacts from the CCTA image.
  • the residual block structure is shown in Figure 5.
  • Step S140 designing a multi-constrained multi-level convolutional neural network.
  • the usual calculation method of the loss function is to calculate the loss value between the input image and the output image.
  • the calculation method adopted here is to set a loss value for each processing layer, used to constrain the optimization and convergence of each processing layer so that no processing layer is ignored by the backbone network.
  • the constructed convolutional neural network includes a backbone structure layer and a plurality of branch structure layers, wherein the backbone structure layer takes the original image as input, the branch structure layers respectively take as input images of different resolution levels obtained by downsampling the original image (e.g., using the pixel un-shuffle function), and the feature image matrix of the last convolutional layer of each branch structure layer is restored by upsampling (e.g., pixel shuffle) to a matrix image of the same size as the target image.
  • the image features extracted by each branch structure layer are fused into the backbone structure layer from top to bottom (i.e., from low-resolution images to higher-resolution images) and processed by the attention mechanism in the backbone structure layer, thereby learning the mapping relationship between the input image and the output target image.
  • three loss functions are used: mean square error (MSE), perceptual loss and kernel loss.
  • the mean square error improves the detail of the matrix pixels, the perceptual loss improves the similarity of the overall image features, and the kernel loss improves the convergence speed of the network.
  • the input/output backbone network is taken as the first layer, the image downsampled once by pixel un-shuffle forms the second layer, and so on; the loss weight decreases with each additional processing layer.
  • the MSE loss is calculated as: Loss_MSE = loss_mse_1 + w_mse_2 × loss_mse_2 + w_mse_3 × loss_mse_3 + w_mse_4 × loss_mse_4, where w_mse_2, w_mse_3 and w_mse_4 are the weights of the MSE loss of each processing layer, and loss_mse_1, loss_mse_2, loss_mse_3 and loss_mse_4 are the MSE losses of each processing layer.
  • the perceptual loss is calculated as: Loss_per = loss_per_1 + w_per_2 × loss_per_2 + w_per_3 × loss_per_3 + w_per_4 × loss_per_4, where w_per_2, w_per_3 and w_per_4 are the weights of the perceptual loss of each processing layer, and loss_per_1, loss_per_2, loss_per_3 and loss_per_4 are the perceptual losses of each processing layer.
  • the kernel loss is calculated as: Loss_ker = loss_ker_1 + w_ker_2 × loss_ker_2 + w_ker_3 × loss_ker_3 + w_ker_4 × loss_ker_4, where w_ker_2, w_ker_3 and w_ker_4 are the weights of the kernel loss of each processing layer, and loss_ker_1, loss_ker_2, loss_ker_3 and loss_ker_4 are the kernel losses of each processing layer.
  • the overall loss function of the convolutional neural network is expressed as: Loss = w_mse × Loss_MSE + w_per × Loss_per + w_ker × Loss_ker, where w_mse, w_per and w_ker are the weights of the MSE, perceptual and kernel losses, respectively; the weights can be adjusted according to needs or simulation results.
  • with a larger number of branch structure layers, the MSE loss can be generally expressed as: Loss_MSE = loss_mse_1 + w_mse_2 × loss_mse_2 + w_mse_3 × loss_mse_3 + ... + w_mse_k × loss_mse_k, where k denotes the branch structure layer index; the other loss terms are set correspondingly and are not repeated here.
  • Step S150 train the convolutional neural network with the set loss function convergence as the goal.
  • the entire network is optimized using the Adam optimizer.
  • the training optimization process takes the artifact-affected data in the CCTA image data as input and the coronary motion-artifact-free data of the same patient as reference data, until the network reaches a convergence state, e.g., the loss function value falls below a set threshold.
  • Step S160 using the trained convolutional neural network to perform image processing.
  • the collected actual images can be de-artifacted and denoised in real time to obtain reconstructed images, that is, the images to be processed are down-sampled multiple times to obtain images with different levels of resolution;
  • the images of different levels of resolution and the images to be processed are input into the above-mentioned trained convolutional neural network to obtain an output image.
  • the beneficial effects of the present invention are as follows: 1) the multi-constraint, multi-level joint loss function strengthens the convergence and optimization of the network's multiple processing layers, enhances local coronary artery details and improves the artifact correction effect; 2) DCR and DDCR modules replace ordinary convolutional layer stacking, reducing network parameters and improving network convergence; 3) motion artifact correction is performed directly on CCTA images by the convolutional neural network: an image of a coronary artery containing motion artifacts is input, and the motion-artifact-corrected image is obtained directly.
  • the present invention may be a system, method and/or computer program product.
  • the computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for causing a processor to implement various aspects of the present invention.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer readable storage media includes: portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, mechanically encoded devices such as punched cards with instructions stored thereon or raised structures in grooves, and any suitable combination of the above.
  • RAM random access memory
  • ROM read only memory
  • EPROM erasable programmable read only memory
  • SRAM static random access memory
  • CD-ROM compact disk read only memory
  • DVD digital versatile disk
  • Computer-readable storage media, as used herein, are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber optic cables), or electrical signals transmitted through electrical wires.
  • the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from a network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
  • the computer program instructions for carrying out the operations of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk, C++ and Python, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • LAN local area network
  • WAN wide area network
  • in some embodiments, electronic circuits, such as programmable logic circuits, field programmable gate arrays (FPGAs) or programmable logic arrays (PLAs), can be personalized with state information of the computer readable program instructions, and these electronic circuits can execute the computer readable program instructions to implement various aspects of the present invention.
  • FPGAs field programmable gate arrays
  • PLAs programmable logic arrays
  • These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer readable program instructions may also be stored in a computer readable storage medium and may cause a computer, a programmable data processing apparatus and/or other equipment to operate in a specific manner, so that the computer readable medium storing the instructions constitutes an article of manufacture that includes instructions implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus or other equipment, causing a series of operational steps to be performed on the computer, other programmable apparatus or other equipment to produce a computer-implemented process, so that the instructions executed on the computer, other programmable apparatus or other equipment implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in the flowcharts or block diagrams may represent a module, segment or portion of instructions that comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or actions, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation in hardware, implementation in software, and implementation in a combination of software and hardware are all equivalent.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

Disclosed is a method for correcting image artifacts on the basis of a multi-constraint convolutional neural network. The method comprises: constructing a convolutional neural network having a trunk structure layer and a plurality of branch structure layers, wherein the trunk structure layer uses an original image as input, the plurality of branch structure layers respectively use images having different levels of resolutions obtained by down-sampling the original image as input, and the feature image matrix of the last convolutional layer of each branch structure layer is restored to a matrix image of the same size as a target image by means of up-sampling; and training the convolutional neural network with the goal of convergence of a set total loss function, learning a mapping relationship from a low-dose original image to a standard-dose target image, and setting a loss value for each of the trunk structure layer and the plurality of branch structure layers in the training process. The present invention enhances image detail information while improving the peak signal to noise ratio and the structural similarity of images.

Description

A Method for Correcting Image Artifacts Based on a Convolutional Neural Network with Multiple Constraints

Technical Field

The invention relates to the technical field of medical image processing, and more particularly to a method for correcting image artifacts based on a convolutional neural network with multiple constraints.

Background Art

Research and development on removing motion artifacts from artifact-affected images is of great scientific significance for the current field of medical diagnosis. Taking Coronary Computed Tomography Angiography (CCTA) as an example, it is a non-invasive imaging examination that uses computers and X-rays to obtain tomographic images of a patient's heart after intravenous injection of an appropriate contrast agent. It has the advantages of a short scanning time, extensive compositional information and non-invasive visualization of the vessel wall, and is suitable for the diagnosis of suspected coronary heart disease, the follow-up of coronary artery bypass surgery, the assessment of valvular heart disease and the assessment of cardiac mass. However, CCTA images may exhibit motion artifacts and require re-examination, and repeated X-ray exposure produces a cumulative radiation dose, which greatly increases the possibility of various diseases, affects physiological function, damages tissues and organs, and may even endanger the patient's life. Therefore, artifact correction of medical images has broad application prospects in the field of medical diagnosis.

The formation of motion artifacts during coronary CT imaging is due to the displacement of image pixels while the CT system acquires projection data from different angles, and the degree of motion artifact depends on the rate of motion and on the correction capability of the image reconstruction algorithm. Motion artifacts are generally eliminated from two aspects. The first is to control the heart rate: reducing the subject's heart rate, prolonging the cardiac cycle, slowing coronary motion and prolonging the period of low-speed coronary motion reduce the demand on temporal resolution during imaging. The second is to improve the temporal resolution: based on the basic principles of MDCT coronary imaging and the causes of motion artifacts, achieving CCTA imaging while avoiding motion artifacts requires matching the highest temporal resolution obtainable with a specific scanning mode to the cardiac phase with the smallest coronary motion.

Temporal resolution can be improved in two ways, through hardware and through software. In terms of hardware, temporal resolution is improved by increasing the rotation speed of the X-ray tube, using a wide-body detector and adopting dual-detector technology. However, increasing the rotation speed of the tube is limited by physical properties, multi-detector technology is limited by space, and wide-body detector technology is limited by economic cost. In terms of software, multi-sector reconstruction, image reconstruction based on compressed sensing (Prior Image Constrained Compressed Sensing, PICCS), motion estimation and compensation algorithms, and motion correction technology (Snap Shot Freeze, SSF) can effectively improve temporal resolution. However, multi-sector reconstruction requires a stable patient heart rate and is limited by the tube rotation time and the scan pitch; image reconstruction based on compressed sensing has not yet been validated; motion estimation and compensation algorithms rely on a large amount of computation and evaluation; and motion correction technology needs to acquire the cardiac phase with relatively small motion artifacts and good image quality, and then eliminates the motion artifacts through complex calculation.

In addition, among existing technical solutions that use deep learning to remove image artifacts, two methods proposed by Philips Research for removing CCTA motion artifacts use deep learning models to identify and estimate the level of motion artifacts and to assist traditional methods in motion compensation and estimation. However, the effectiveness of such schemes in removing motion artifacts still needs improvement.
Summary of the Invention

The purpose of the present invention is to overcome the above-mentioned defects of the prior art and to provide a method for correcting image artifacts based on a convolutional neural network with multiple constraints.

According to a first aspect of the present invention, a method for correcting image artifacts based on a convolutional neural network with multiple constraints is provided. The method includes the following steps:

constructing a convolutional neural network having a backbone structure layer and a plurality of branch structure layers, wherein the backbone structure layer takes the original image as input, the branch structure layers respectively take as input images of different resolution levels obtained by downsampling the original image, and the feature image matrix of the last convolutional layer of each branch structure layer is restored by upsampling to a matrix image of the same size as the target image;

training the convolutional neural network with the goal of convergence of a set overall loss function, learning the mapping relationship from a low-dose original image to a standard-dose target image, and setting, during training, a loss value for the backbone structure layer and for each of the branch structure layers.

According to a second aspect of the present invention, a medical image processing method is provided. The method includes: down-sampling the image to be processed multiple times to obtain images of different resolution levels; and inputting the images of different resolution levels and the image to be processed into a convolutional neural network trained according to the above method of the present invention to obtain an output image.

Compared with the prior art, the advantage of the present invention is that a convolutional neural network is used to eliminate motion artifacts in the image. In terms of hardware, the method is not limited by physical characteristics, space or economic cost; in terms of software, it does not require complex calculation, is not affected by changes in the patient's heart rate, and yields an artifact-free image; in terms of deep learning, it can serve as a tool that directly removes motion artifacts from medical images, so traditional methods are no longer needed for denoising. The invention realizes image noise reduction based on a multi-constrained multi-level convolutional neural network, improves the image peak signal-to-noise ratio and structural similarity, and enhances image detail information, thereby obtaining medical images that better meet diagnostic requirements.

Other features and advantages of the present invention will become apparent from the following detailed description of exemplary embodiments of the present invention with reference to the accompanying drawings.
Description of the Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.

FIG. 1 is a flowchart of a method for correcting image artifacts based on a convolutional neural network with multiple constraints according to an embodiment of the present invention;

FIG. 2 is a schematic structural diagram of a densely connected residual block according to an embodiment of the present invention;

FIG. 3 is a schematic structural diagram of a stacked dense residual block according to an embodiment of the present invention;

FIG. 4 is a schematic diagram of an attention mechanism according to an embodiment of the present invention;

FIG. 5 is a schematic diagram of a residual block according to the present invention;

FIG. 6 is a schematic diagram of a multi-constrained multi-level convolutional neural network according to an embodiment of the present invention;

FIG. 7 is a schematic diagram of experimental results according to an embodiment of the present invention.

In the drawings, conv denotes a convolutional layer and res denotes a residual block.
Detailed Description of Embodiments

Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that, unless specifically stated otherwise, the relative arrangement of components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the invention.

The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.

Techniques, methods and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods and apparatus should be considered part of the specification.

In all examples shown and discussed herein, any specific values should be construed as merely illustrative and not limiting. Accordingly, other instances of the exemplary embodiments may have different values.

It should be noted that like numerals and letters refer to like items in the following figures; therefore, once an item is defined in one figure, it does not require further discussion in subsequent figures.
The present invention can be applied to motion artifact correction and noise reduction of CCTA images or other types of medical images, or, with appropriate modification, to the field of image super-resolution. For clarity, artifact correction of CT images is taken as an example below.

To solve the problem of poor CT imaging quality and numerous noise artifacts under low-dose conditions, the present invention designs a multi-level convolutional neural network based on multiple constraints to improve low-dose CT imaging quality. In addition, the attention mechanism greatly improves the reuse of low-dimensional and high-dimensional information and the fusion of local and non-local information, thereby enhancing the performance of traditional convolution operations and largely eliminating the noise and artifacts of low-dose CT images. On the other hand, a joint loss function is specially designed to improve CT image quality, and combining multiple loss functions further ensures that the generated CT images meet the requirements of medical diagnosis.
Specifically, as shown in FIG. 1, the provided method for correcting image artifacts based on a convolutional neural network with multiple constraints includes the following steps.

Step S110: design a stacked dense residual block structure.

A traditional convolutional neural network stacks multiple convolutional layers and extracts image features through these layers, thereby enhancing the features and details of the image. In one embodiment of the present invention, the DCR (densely connected and residual) block proposed by Bumjun Park in "Densely Connected Hierarchical Network for Image Denoising" (CVPR 2019) is used to extract CCTA image features. The advantage of the DCR structure shown in FIG. 2 is that this module converges faster than the same number of plainly stacked convolutional layers.

Further, the present invention regards the DCR as an ordinary convolution module and stacks it densely to build a stacked dense residual block structure (DDCR), thereby obtaining a large number of features from low-dimensional to high-dimensional; compared with the same number of plainly stacked convolutional layers, its convergence is better. The DDCR structure is shown in FIG. 3. By using DCR and DDCR modules instead of ordinary convolutional layer stacking, feature extraction is more comprehensive and computational efficiency is improved.
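As an illustration only, the following PyTorch sketch shows one way such densely connected residual (DCR) blocks and their dense stacking (DDCR) could be implemented; the channel counts, growth rate and number of layers are assumptions for illustration and are not specified in the patent.

```python
import torch
import torch.nn as nn

class DCRBlock(nn.Module):
    """Densely connected residual block: each convolution sees the concatenation of
    all previous feature maps, and the fused output is added back to the input."""
    def __init__(self, channels=64, growth=32, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers - 1):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
            in_ch += growth  # dense connectivity: channels accumulate
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=3, padding=1)

    def forward(self, x):
        feats = x
        for layer in self.layers:
            feats = torch.cat([feats, layer(feats)], dim=1)
        return x + self.fuse(feats)  # residual connection

class DDCRBlock(nn.Module):
    """Stacked dense residual block (DDCR): DCR blocks are themselves densely
    stacked, then fused back to the original width with a final skip connection."""
    def __init__(self, channels=64, num_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList(
            DCRBlock(channels * 2 ** i) for i in range(num_blocks))
        self.fuse = nn.Conv2d(channels * 2 ** num_blocks, channels, kernel_size=1)

    def forward(self, x):
        feats = x
        for block in self.blocks:
            feats = torch.cat([feats, block(feats)], dim=1)
        return x + self.fuse(feats)

features = DDCRBlock(channels=64)(torch.randn(1, 64, 64, 64))  # (1, 64, 64, 64)
```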
Step S120: design an attention mechanism.

In CCTA coronary motion artifact correction, the most difficult part is correcting local features into an artifact-free image. To enable the network to correct its local features, especially coronary artery features, in one embodiment an attention mechanism is designed to assist the main branch of the network in learning the local features of CCTA, as shown in FIG. 4.

Using an attention mechanism improves the utilization of image feature information. Applied to CCTA coronary motion de-artifacting, the attention mechanism can fuse the local and global features of the image, thereby improving the artifact removal effect. In addition, the attention mechanism can be regarded as a plug-and-play module that can be added to any traditional convolutional neural network workflow to improve the performance of the network.
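The exact attention design appears only schematically in FIG. 4, so the sketch below uses a generic squeeze-and-excitation style channel attention purely to illustrate how such a plug-in module reweights convolutional features with global context; the reduction ratio is an assumed value.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Generic channel attention: global context is pooled, squeezed through a
    small bottleneck and used to reweight each feature channel of the input."""
    def __init__(self, channels=64, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)               # global (non-local) context
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid())                                  # per-channel weights in (0, 1)

    def forward(self, x):
        return x * self.mlp(self.pool(x))                  # local features scaled by global weights

attended = ChannelAttention(64)(torch.randn(1, 64, 32, 32))  # same shape as input
```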
Step S130: design the residual blocks of the convolutional neural network.

In convolutional neural networks, residual blocks are a common way to extract low-dimensional to high-dimensional information from images; compared with ordinary convolutional layers, they are less likely to lose low-dimensional features. In image de-artifacting, this approach preserves the overall characteristics of the image while also learning clear coronary artery features, providing the ability to remove artifacts from the CCTA image. An example residual block structure is shown in FIG. 5.
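FIG. 5 shows the actual block used; as a stand-in, the following is a generic two-convolution residual block sketch with assumed channel width and kernel size.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Plain residual block: two 3x3 convolutions plus an identity skip connection,
    so low-dimensional features are carried through unchanged."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1))

    def forward(self, x):
        return x + self.body(x)

out = ResidualBlock(64)(torch.randn(1, 64, 32, 32))  # same shape as input
```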
Step S140: design a multi-constrained multi-level convolutional neural network.

In a traditional convolutional neural network, the loss function is usually computed as a single loss value between the input image and the output image. In one embodiment of the present invention, a loss value is instead set for every processing layer, constraining the optimization and convergence of each layer so that no processing layer is neglected by the backbone network. When the loss function is computed, the feature image matrix of the last convolutional layer of each layer can be restored to a matrix image of the same size as the target image through the pixel shuffle function of the PyTorch library.

As shown in FIG. 6, the constructed convolutional neural network includes a backbone structure layer and a plurality of branch structure layers. The backbone structure layer takes the original image as input, and the branch structure layers respectively take as input images of different resolution levels obtained by downsampling the original image (e.g., with the pixel un-shuffle function). The feature image matrix of the last convolutional layer of each branch structure layer is restored by upsampling (e.g., pixel shuffle) to a matrix image of the same size as the target image, and the image features extracted by each branch structure layer are fused into the backbone structure layer from top to bottom (i.e., from low-resolution images to higher-resolution images) and processed by the attention mechanism in the backbone structure layer, thereby learning the mapping relationship between the input image and the output target image.
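A minimal sketch of how the multi-resolution branch inputs and the size-restoring upsampling can be produced with PyTorch's pixel_unshuffle / pixel_shuffle functions, which the description names explicitly; the image size and the downscale factors per level are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 512, 512)                       # original image (batch, channel, H, W)

# Branch inputs: pixel_unshuffle moves 2x2 (4x4, 8x8) spatial blocks into the
# channel dimension, producing lower-resolution inputs without discarding pixels.
level2 = F.pixel_unshuffle(x, downscale_factor=2)     # (1, 4, 256, 256)
level3 = F.pixel_unshuffle(x, downscale_factor=4)     # (1, 16, 128, 128)
level4 = F.pixel_unshuffle(x, downscale_factor=8)     # (1, 64, 64, 64)

# For the per-layer loss, the last feature map of a branch is restored to the
# target image size with pixel_shuffle (shown here on the raw level-4 input).
restored = F.pixel_shuffle(level4, upscale_factor=8)  # back to (1, 1, 512, 512)
assert restored.shape == x.shape
```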
For example, three loss functions are used: mean square error (MSE), perceptual loss and kernel loss. The mean square error improves the detail of the matrix pixels; the perceptual loss improves the similarity of the overall image features; and the kernel loss improves the convergence speed of the network.

Still referring to FIG. 6, the input/output backbone network is taken as the first layer, the image downsampled once by pixel un-shuffle forms the second layer, and so on. The higher the layer, the more the learned features tend toward the whole image. Moreover, to improve the convergence of every processing layer, the loss weight is reduced for each additional processing layer. Therefore, the MSE loss is calculated as follows:

Loss_MSE = loss_mse_1 + w_mse_2 × loss_mse_2 + w_mse_3 × loss_mse_3 + w_mse_4 × loss_mse_4

where w_mse_2, w_mse_3 and w_mse_4 are the weights of the MSE loss of each processing layer, and loss_mse_1, loss_mse_2, loss_mse_3 and loss_mse_4 are the MSE losses of each processing layer.
Similarly, the perceptual loss is calculated as follows:

Loss_per = loss_per_1 + w_per_2 × loss_per_2 + w_per_3 × loss_per_3 + w_per_4 × loss_per_4

where w_per_2, w_per_3 and w_per_4 are the weights of the perceptual loss of each processing layer, and loss_per_1, loss_per_2, loss_per_3 and loss_per_4 are the perceptual losses of each processing layer.
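The description does not specify which feature extractor the perceptual loss uses, so the sketch below assumes a common choice (fixed VGG-16 features) purely to illustrate the idea of comparing overall image features rather than pixels.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    """Compare feature maps of a fixed pretrained network instead of raw pixels;
    the VGG-16 backbone and the cut-off layer are assumptions for illustration."""
    def __init__(self):
        super().__init__()
        self.features = vgg16(weights=None).features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad = False                     # the extractor stays fixed
        self.mse = nn.MSELoss()

    def forward(self, pred, target):
        # CT slices are single-channel; repeat to 3 channels for the VGG input.
        pred3 = pred.repeat(1, 3, 1, 1)
        target3 = target.repeat(1, 3, 1, 1)
        return self.mse(self.features(pred3), self.features(target3))

loss = PerceptualLoss()(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
```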
Similarly, the kernel loss is calculated as follows:

Loss_ker = loss_ker_1 + w_ker_2 × loss_ker_2 + w_ker_3 × loss_ker_3 + w_ker_4 × loss_ker_4

where w_ker_2, w_ker_3 and w_ker_4 are the weights of the kernel loss of each processing layer, and loss_ker_1, loss_ker_2, loss_ker_3 and loss_ker_4 are the kernel losses of each processing layer.
The overall loss function of the convolutional neural network is expressed as:

Loss = w_mse × Loss_MSE + w_per × Loss_per + w_ker × Loss_ker

where w_mse, w_per and w_ker are the weights of the MSE, perceptual and kernel losses, respectively; the weights can be adjusted according to needs or simulation results.
It should be noted that, in practical applications, a larger number of branch structure layers can be set according to the requirements on image reconstruction quality or processing speed. In that case, for example, the MSE loss can be generally expressed as:

Loss_MSE = loss_mse_1 + w_mse_2 × loss_mse_2 + w_mse_3 × loss_mse_3 + w_mse_4 × loss_mse_4 + ... + w_mse_k × loss_mse_k

where k denotes the branch structure layer index. The other loss terms are set correspondingly and are not repeated here.
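A small, runnable sketch of how the per-layer losses could be combined into the overall loss; the dummy loss values and all weight values are illustrative assumptions, since the description leaves the weights to be tuned by need or simulation.

```python
import torch

def multilevel_loss(per_layer_losses, layer_weights):
    """Loss_X = loss_x_1 + w_x_2 * loss_x_2 + ...; the backbone layer is unweighted."""
    total = per_layer_losses[0]
    for loss_k, w_k in zip(per_layer_losses[1:], layer_weights):
        total = total + w_k * loss_k
    return total

# Dummy per-layer loss values standing in for the MSE / perceptual / kernel losses
# computed between each layer's upsampled output and the target image.
mse_layers = [torch.tensor(v) for v in (0.9, 0.7, 0.6, 0.5)]
per_layers = [torch.tensor(v) for v in (0.4, 0.3, 0.3, 0.2)]
ker_layers = [torch.tensor(v) for v in (0.2, 0.2, 0.1, 0.1)]

w_layers = [0.5, 0.25, 0.125]            # assumed w_*_2, w_*_3, w_*_4 (decreasing per layer)
loss_mse = multilevel_loss(mse_layers, w_layers)
loss_per = multilevel_loss(per_layers, w_layers)
loss_ker = multilevel_loss(ker_layers, w_layers)

# Loss = w_mse * Loss_MSE + w_per * Loss_per + w_ker * Loss_ker
w_mse, w_per, w_ker = 1.0, 0.1, 0.1      # assumed overall weights
total_loss = w_mse * loss_mse + w_per * loss_per + w_ker * loss_ker
print(float(total_loss))
```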
By setting up multiple processing layers such as the backbone structure layer and the branch structure layers, image features from different views can be fused, and by using the multi-constrained multi-level joint loss function, the quality of CCTA images can be improved.
Step S150: train the convolutional neural network with convergence of the set loss function as the goal.

For example, the entire network is optimized with the Adam optimizer. The training optimization process takes the artifact-affected data in the CCTA image data as input and the coronary motion-artifact-free data of the same patient as reference data, until the network reaches a convergence state, e.g., the loss function value falls below a set threshold.
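A bare-bones training loop sketch with the Adam optimizer; the stand-in model, the random tensors replacing paired CCTA data, the learning rate and the convergence threshold are all assumptions used only to make the loop runnable.

```python
import torch
import torch.nn as nn

# Stand-ins: a tiny network in place of the multi-constraint model, and random
# patches in place of paired (artifact-affected, motion-free) CCTA slices.
model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 1, 3, padding=1))
criterion = nn.MSELoss()                       # stands in for the joint multi-level loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # learning rate assumed
threshold = 1e-3                                             # assumed convergence threshold

artifact = torch.randn(4, 1, 64, 64)           # dummy artifact-affected input patches
reference = torch.randn(4, 1, 64, 64)          # dummy motion-free reference patches

for step in range(100):                        # iterate until the loss converges
    output = model(artifact)
    loss = criterion(output, reference)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if loss.item() < threshold:                # convergence criterion from the text
        break
```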
Step S160: perform image processing with the trained convolutional neural network.

With the trained convolutional neural network, the acquired images can be de-artifacted and denoised in real time to obtain reconstructed images: the image to be processed is down-sampled multiple times to obtain images of different resolution levels, and the images of different resolution levels and the image to be processed are input into the trained convolutional neural network described above to obtain the output image.
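As a usage illustration, inference might look like the following; the identity stand-in for the trained network and the assumed input signature (full-resolution image plus its pixel-unshuffled versions) are placeholders.

```python
import torch
import torch.nn.functional as F

def correct_image(model, image):
    """Build the multi-resolution inputs and run the trained network once."""
    branches = [F.pixel_unshuffle(image, f) for f in (2, 4, 8)]
    with torch.no_grad():                       # inference only, no gradients
        return model(image, *branches)

# Stand-in "model" that simply returns the full-resolution input unchanged.
identity_model = lambda full, *branches: full
corrected = correct_image(identity_model, torch.randn(1, 1, 512, 512))
```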
To further verify the effect of the present invention, experiments were carried out; the results are shown in FIG. 7, where the rows from top to bottom show the input image, the output image and the target image, respectively. It can be seen that the method of the present invention can effectively correct the motion artifacts of the coronary arteries in the CCTA image, while image detail information is restored to a certain extent.
In summary, compared with the prior art, the beneficial effects of the present invention are as follows: 1) the multi-constraint, multi-level joint loss function strengthens the convergence and optimization of the network's multiple processing layers, enhances local coronary artery details and improves the artifact correction effect; 2) DCR and DDCR modules replace ordinary convolutional layer stacking, reducing network parameters and improving network convergence; 3) motion artifact correction is performed directly on CCTA images by the convolutional neural network: an image of a coronary artery containing motion artifacts is input, and the motion-artifact-corrected image is obtained directly.
本发明可以是系统、方法和/或计算机程序产品。计算机程序产品可以包括计算机可读存储介质,其上载有用于使处理器实现本发明的各个方面的计算机可读程序指令。The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for causing a processor to implement various aspects of the present invention.
计算机可读存储介质可以是可以保持和存储由指令执行设备使用的指令的有形设备。计算机可读存储介质例如可以是――但不限于――电存储设备、磁存储设备、光存储设备、电磁存储设备、半导体存储设备或者上述的任意合适的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、静态随机存取存储器(SRAM)、便携式压缩盘只读存储器(CD-ROM)、数字多功能盘(DVD)、记忆棒、软盘、机械编码设备、例如其上存储有指令的打孔卡或凹槽内凸起结构、以及上述的任意合适的组合。这里所使用的计算机可读存储介质不被解释为瞬时信号本身,诸如无线电波或者其他自由传播的电磁波、通过波导或其他传输媒介传播的电磁波(例如,通过光纤电缆的光脉冲)、或者通过电线传输的电信号。A computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device. The computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (non-exhaustive list) of computer readable storage media include: portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM) or flash memory), static random access memory (SRAM), portable compact disk read only memory (CD-ROM), digital versatile disk (DVD), memory sticks, floppy disks, mechanically coded devices, such as printers with instructions stored thereon Hole cards or raised structures in grooves, and any suitable combination of the above. Computer-readable storage media, as used herein, are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (eg, light pulses through fiber optic cables), or through electrical wires transmitted electrical signals.
这里所描述的计算机可读程序指令可以从计算机可读存储介质下载到各个计算/处理设备,或者通过网络、例如因特网、局域网、广域网和/或无线网下载到外部计算机或外部存储设备。网络可以包括铜传输电缆、光纤传输、无线传输、路由器、防火墙、交换机、网关计算机和/或边缘服务器。每个计算/处理设备中的网络适配卡或者网络接口从网络接收计算机可读程序指令,并转发该计算机可读程序指令,以供存储在各个计算/处理设备中的计算机可读存储介质中。The computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from a network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
用于执行本发明操作的计算机程序指令可以是汇编指令、指令集架构(ISA)指令、机器指令、机器相关指令、微代码、固件指令、状态设置数据、或者以一种或多种编程语言的任意组合编写的源代码或目标代码,所述编程语言包括面向对象的编程语言—诸如Smalltalk、C++、Python等,以及常规的过程式编程语言—诸如“C”语言或类似的编程语言。计算机可读 程序指令可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络—包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。在一些实施例中,通过利用计算机可读程序指令的状态信息来个性化定制电子电路,例如可编程逻辑电路、现场可编程门阵列(FPGA)或可编程逻辑阵列(PLA),该电子电路可以执行计算机可读程序指令,从而实现本发明的各个方面。The computer program instructions for carrying out the operations of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state setting data, or instructions in one or more programming languages. Source or object code written in any combination, including object-oriented programming languages, such as Smalltalk, C++, Python, etc., and conventional procedural programming languages, such as the "C" language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server implement. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (eg, using an Internet service provider through the Internet connect). In some embodiments, custom electronic circuits, such as programmable logic circuits, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), can be personalized by utilizing state information of computer readable program instructions. Computer readable program instructions are executed to implement various aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer or other programmable data-processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data-processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; the instructions cause a computer, a programmable data-processing apparatus and/or other devices to operate in a particular manner, so that the computer-readable medium storing the instructions constitutes an article of manufacture that includes instructions implementing aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, other programmable data-processing apparatus or other devices, so that a series of operational steps are performed on the computer, other programmable data-processing apparatus or other devices to produce a computer-implemented process, such that the instructions executed on the computer, other programmable data-processing apparatus or other devices implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, a program segment or a portion of instructions that contains one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two successive blocks may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending on the functionality involved. It is also noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or acts, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation in hardware, implementation in software, and implementation in a combination of software and hardware are all equivalent.
The embodiments of the present invention have been described above. The foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the present invention is defined by the appended claims.

Claims (10)

  1. A method for correcting image artifacts based on a multi-constraint convolutional neural network, comprising the following steps:
    constructing a convolutional neural network having a backbone structure layer and a plurality of branch structure layers, wherein the backbone structure layer takes an original image as input, the plurality of branch structure layers respectively take, as inputs, images of different resolution levels obtained by downsampling the original image, and the feature image matrix of the last convolutional layer of each branch structure layer is restored, by upsampling, to a matrix image of the same size as the target image;
    training the convolutional neural network with convergence of a set overall loss function as the objective, so as to learn the mapping from the low-dose original image to the standard-dose target image, wherein during training a loss value is set for the backbone structure layer and for each of the plurality of branch structure layers.
  2. The method according to claim 1, wherein the overall loss function is expressed as:
    Loss = w_mse × Loss_MSE + w_per × Loss_per + w_ker × Loss_ker
    where Loss_MSE is the mean square error loss, Loss_per is the perceptual loss, Loss_ker is the kernel loss, and w_mse, w_per and w_ker are the weights of the corresponding terms.
  3. The method according to claim 2, wherein the mean square error loss is expressed as:
    Loss_MSE = loss_mse_1 + w_mse_2 × loss_mse_2 + w_mse_3 × loss_mse_3 + w_mse_4 × loss_mse_4 + … + w_mse_k × loss_mse_k
    where w_mse_2, w_mse_3, w_mse_4 and w_mse_k are the weights of the mean square error losses of the corresponding branch structure layers, loss_mse_2, loss_mse_3, loss_mse_4 and loss_mse_k are the mean square error losses of the corresponding branch structure layers, loss_mse_1 is the loss of the backbone structure layer, and k is the index of the branch structure layer.
  4. The method according to claim 2, wherein the perceptual loss is expressed as:
    Loss_per = loss_per_1 + w_per_2 × loss_per_2 + w_per_3 × loss_per_3 + w_per_4 × loss_per_4 + … + w_per_k × loss_per_k
    where w_per_2, w_per_3, w_per_4 and w_per_k are the weights of the perceptual losses of the corresponding branch structure layers, loss_per_2, loss_per_3, loss_per_4 and loss_per_k are the perceptual losses of the corresponding branch structure layers, loss_per_1 is the perceptual loss of the backbone structure layer, and k is the index of the branch structure layer.
  5. The method according to claim 2, wherein the kernel loss is expressed as:
    Loss_ker = loss_ker_1 + w_ker_2 × loss_ker_2 + w_ker_3 × loss_ker_3 + w_ker_4 × loss_ker_4 + … + w_ker_k × loss_ker_k
    where w_ker_2, w_ker_3, w_ker_4 and w_ker_k are the weights of the kernel losses of the corresponding branch structure layers, loss_ker_2, loss_ker_3, loss_ker_4 and loss_ker_k are the kernel losses of the corresponding branch structure layers, loss_ker_1 is the kernel loss of the backbone structure layer, and k is the index of the branch structure layer.
  6. The method according to claim 1, wherein the backbone structure layer comprises multiple convolutional layers and a stacked densely-connected residual block structure, and uses an attention mechanism to fuse the features extracted from the branch structure layers to obtain the output image.
  7. The method according to claim 6, wherein, during training, the images of different resolution levels are respectively input into the corresponding branch structure layers, each branch structure layer comprises multiple convolutional layers and a stacked densely-connected residual block structure, and the output features of the branch structure layer corresponding to a lower-resolution image are cascaded in turn with the output of the first convolutional layer of the branch structure layer corresponding to the next higher-resolution image, before finally being fused into the backbone structure layer.
  8. A medical image processing method, comprising:
    downsampling an image to be processed multiple times to obtain images of different resolution levels;
    inputting the images of different resolution levels and the image to be processed into a trained convolutional neural network obtained by the method according to any one of claims 1 to 7, to obtain an output image.
  9. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
  10. A computer device comprising a memory and a processor, the memory storing a computer program that can be run on the processor, wherein the processor, when executing the program, implements the steps of the method according to any one of claims 1 to 8.
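
For readability, the following sketches are placed after the claims rather than interleaved with them. This first one illustrates the network structure defined in claims 1, 6 and 7: a backbone structure layer fed by the original image, branch structure layers fed by downsampled copies, lower-to-higher resolution cascading of branch features, and attention-based fusion in the backbone. It is a minimal sketch assuming PyTorch; the channel width, number of levels, the plain residual block standing in for the stacked densely-connected residual blocks, and the squeeze-and-excitation style channel attention are illustrative assumptions, not the patented implementation.

```python
# Minimal sketch of the multi-branch network of claims 1, 6 and 7.
# Assumptions not fixed by the claims: PyTorch, single-channel (CT) images,
# 64 feature channels, 3 resolution levels, a plain residual block in place
# of the stacked densely-connected residual blocks, and channel attention
# for the fusion step.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    """Simplified stand-in for a stacked densely-connected residual block."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)


class Branch(nn.Module):
    """One branch structure layer: convolutional layers plus residual blocks."""
    def __init__(self, ch=64, cascade=False):
        super().__init__()
        self.first_conv = nn.Conv2d(1, ch, 3, padding=1)
        self.fuse = nn.Conv2d(2 * ch, ch, 1) if cascade else None
        self.blocks = nn.Sequential(ResidualBlock(ch), ResidualBlock(ch))
        self.last_conv = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, x, lower_feat=None):
        feat = self.first_conv(x)
        if lower_feat is not None:
            # Claim 7: features of the lower-resolution branch are cascaded
            # with the output of this branch's first convolutional layer.
            lower_feat = F.interpolate(lower_feat, size=feat.shape[-2:],
                                       mode='bilinear', align_corners=False)
            feat = self.fuse(torch.cat([feat, lower_feat], dim=1))
        feat = self.blocks(feat)
        return feat, self.last_conv(feat)


class ChannelAttention(nn.Module):
    """Illustrative attention used to fuse branch features into the backbone."""
    def __init__(self, ch):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(ch, ch // 4), nn.ReLU(inplace=True),
                                nn.Linear(ch // 4, ch), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(-2, -1)))
        return x * w[:, :, None, None]


class MultiBranchNet(nn.Module):
    def __init__(self, ch=64, levels=3):
        super().__init__()
        # branches[0] handles the lowest resolution, branches[-1] the highest.
        self.branches = nn.ModuleList(
            [Branch(ch, cascade=(i > 0)) for i in range(levels)])
        self.backbone_in = nn.Conv2d(1 + levels * ch, ch, 3, padding=1)
        self.backbone = nn.Sequential(ResidualBlock(ch), ResidualBlock(ch))
        self.attention = ChannelAttention(ch)
        self.backbone_out = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, original, pyramid):
        """original: full-resolution input; pyramid: downsampled images, lowest first."""
        feats, branch_outputs = [], []
        lower = None
        for branch, img in zip(self.branches, pyramid):
            lower, out = branch(img, lower)
            feats.append(F.interpolate(lower, size=original.shape[-2:],
                                       mode='bilinear', align_corners=False))
            # Claim 1: each branch's last-conv output is upsampled to the
            # target size so a per-branch loss value can be attached to it.
            branch_outputs.append(F.interpolate(out, size=original.shape[-2:],
                                                mode='bilinear',
                                                align_corners=False))
        x = self.backbone_in(torch.cat([original] + feats, dim=1))
        x = self.attention(self.backbone(x))  # claim 6: attention-based fusion
        return self.backbone_out(x), branch_outputs
```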
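
Claims 2 to 5 share the same weighted-sum structure: each loss term is summed over the backbone and branch outputs, and the three terms are then combined with their own weights. A compact sketch of that combination follows; it again assumes PyTorch, and the perceptual and kernel terms, which the claims do not define, are passed in as placeholder callables rather than implemented.

```python
# Sketch of the multi-constraint loss of claims 2-5. Only the weighted
# multi-scale combination stated in the claims is implemented; the
# perceptual and kernel terms are supplied as placeholder callables.
import torch.nn.functional as F


def multiscale_loss(outputs, target, branch_weights, term_fn):
    """Loss_X = loss_X_1 + sum_k w_X_k * loss_X_k (claims 3, 4 and 5).

    outputs[0] is the backbone output (weight fixed to 1), outputs[1:] are
    the branch outputs already upsampled to the target size, and
    branch_weights[i] is the weight w_X_k of the corresponding branch.
    """
    total = term_fn(outputs[0], target)
    for w_k, out_k in zip(branch_weights, outputs[1:]):
        total = total + w_k * term_fn(out_k, target)
    return total


def overall_loss(outputs, target, w, perceptual_fn, kernel_fn):
    """Loss = w_mse*Loss_MSE + w_per*Loss_per + w_ker*Loss_ker (claim 2)."""
    loss_mse = multiscale_loss(outputs, target, w['mse_k'], F.mse_loss)
    loss_per = multiscale_loss(outputs, target, w['per_k'], perceptual_fn)
    loss_ker = multiscale_loss(outputs, target, w['ker_k'], kernel_fn)
    return w['mse'] * loss_mse + w['per'] * loss_per + w['ker'] * loss_ker
```

During training, `outputs` would be the list `[main_output] + branch_outputs` returned by the network sketch above, and all weight values are hyperparameters that the claims leave unspecified.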
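
At inference time, the processing method of claim 8 then reduces to building the resolution pyramid by repeated downsampling and running the trained network. A short sketch under the same assumptions (PyTorch, 2× average pooling per level, three levels, both chosen for illustration only):

```python
# Sketch of the processing method of claim 8: downsample the image to be
# processed several times, then feed the pyramid and the original image to
# the trained network. The 2x factor and level count are assumptions.
import torch
import torch.nn.functional as F


def correct_artifacts(model, image, levels=3):
    """image: (N, 1, H, W) tensor; returns the artifact-corrected image."""
    model.eval()
    pyramid, x = [], image
    for _ in range(levels):
        x = F.avg_pool2d(x, kernel_size=2)   # one downsampling step
        pyramid.append(x)
    pyramid.reverse()                         # lowest resolution first
    with torch.no_grad():
        corrected, _ = model(image, pyramid)
    return corrected


# Hypothetical usage with the network sketched earlier:
# model = MultiBranchNet(levels=3)
# model.load_state_dict(torch.load("weights.pth"))  # placeholder checkpoint
# output = correct_artifacts(model, low_dose_image)
```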
PCT/CN2021/083561 2021-03-29 2021-03-29 Method for correcting image artifacts on basis of multi-constraint convolutional neural network WO2022204868A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/083561 WO2022204868A1 (en) 2021-03-29 2021-03-29 Method for correcting image artifacts on basis of multi-constraint convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/083561 WO2022204868A1 (en) 2021-03-29 2021-03-29 Method for correcting image artifacts on basis of multi-constraint convolutional neural network

Publications (1)

Publication Number Publication Date
WO2022204868A1 true WO2022204868A1 (en) 2022-10-06

Family

ID=83456935

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/083561 WO2022204868A1 (en) 2021-03-29 2021-03-29 Method for correcting image artifacts on basis of multi-constraint convolutional neural network

Country Status (1)

Country Link
WO (1) WO2022204868A1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180075581A1 (en) * 2016-09-15 2018-03-15 Twitter, Inc. Super resolution using a generative adversarial network
CN108171649A (en) * 2017-12-08 2018-06-15 广东工业大学 A kind of image stylizing method for keeping focus information
CN108305236A (en) * 2018-01-16 2018-07-20 腾讯科技(深圳)有限公司 Image enhancement processing method and device
US20210000438A1 (en) * 2018-03-07 2021-01-07 Rensselaer Polytechnic Institute Deep neural network for ct metal artifact reduction
WO2020041882A1 (en) * 2018-08-29 2020-03-05 Uti Limited Partnership Neural network trained system for producing low dynamic range images from wide dynamic range images
CN111931624A (en) * 2020-08-03 2020-11-13 重庆邮电大学 Attention mechanism-based lightweight multi-branch pedestrian heavy identification method and system
CN111899315A (en) * 2020-08-07 2020-11-06 深圳先进技术研究院 Method for reconstructing low-dose image by using multi-scale feature perception depth network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHEN QIHANG; YUAN ZHIDONG; ZHOU CHAO; ZHANG WEIGUANG; ZHANG MENGXI; YANG YONGFENG; LIANG DONG; LIU XIN; ZHENG HAIRONG; CHENG GUANX: "Low-dose dental CT image enhancement using a multiscale feature sensing network", NUCLEAR INSTRUMENTS & METHODS IN PHYSICS RESEARCH. SECTION A, ELSEVIER BV * NORTH-HOLLAND, NL, vol. 981, 11 August 2020 (2020-08-11), NL , XP086285823, ISSN: 0168-9002, DOI: 10.1016/j.nima.2020.164530 *
PARK BUMJUN; YU SONGHYUN; JEONG JECHANG: "Densely Connected Hierarchical Network for Image Denoising", 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), IEEE, 16 June 2019 (2019-06-16), pages 2104 - 2113, XP033747262, DOI: 10.1109/CVPRW.2019.00263 *
XUE HENGZHI; TENG YUEYANG; TIE CHANGJUN; WAN QIAN; WU JUN; LI MING; LIANG GUODONG; LIANG DONG; LIU XIN; ZHENG HAIRONG; YANG YONGFE: "A 3D attention residual encoder–decoder least-square GAN for low-count PET denoising", NUCLEAR INSTRUMENTS & METHODS IN PHYSICS RESEARCH. SECTION A, ELSEVIER BV * NORTH-HOLLAND, NL, vol. 983, 8 September 2020 (2020-09-08), NL , XP086297240, ISSN: 0168-9002, DOI: 10.1016/j.nima.2020.164638 *

Similar Documents

Publication Publication Date Title
Kulathilake et al. A review on deep learning approaches for low-dose computed tomography restoration
CN111325686B (en) Low-dose PET three-dimensional reconstruction method based on deep learning
Xia et al. Super-resolution of cardiac MR cine imaging using conditional GANs and unsupervised transfer learning
CN115953494B (en) Multi-task high-quality CT image reconstruction method based on low dose and super resolution
CN112419173A (en) Deep learning framework and method for generating CT image from PET image
CN113554665A (en) Blood vessel segmentation method and device
WO2022226886A1 (en) Image processing method based on transform domain denoising autoencoder as a priori
Hou et al. CT image quality enhancement via a dual-channel neural network with jointing denoising and super-resolution
WO2022094911A1 (en) Weight-sharing double-region generative adversarial network and image generation method therefor
Liu et al. MRCON-Net: Multiscale reweighted convolutional coding neural network for low-dose CT imaging
Yin et al. Unpaired low-dose CT denoising via an improved cycle-consistent adversarial network with attention ensemble
CN112419175A (en) Weight-sharing dual-region generation countermeasure network and image generation method thereof
Xia et al. Deep residual neural network based image enhancement algorithm for low dose CT images
Chen et al. DuSFE: Dual-Channel Squeeze-Fusion-Excitation co-attention for cross-modality registration of cardiac SPECT and CT
WO2022204868A1 (en) Method for correcting image artifacts on basis of multi-constraint convolutional neural network
Tran et al. Deep learning-based inpainting for chest X-ray image
CN116563554A (en) Low-dose CT image denoising method based on hybrid characterization learning
WO2022094779A1 (en) Deep learning framework and method for generating ct image from pet image
CN112991220B (en) Method for correcting image artifact by convolutional neural network based on multiple constraints
Li et al. Dual-domain fusion deep convolutional neural network for low-dose CT denoising
CN112991220A (en) Method for correcting image artifacts by convolutional neural network based on multiple constraints
Ren et al. Low dose CT image denoising using multi-level feature fusion network and edge constraints
KR20220071554A (en) Medical Image Fusion System
Deng et al. TT U-Net: Temporal Transformer U-Net for Motion Artifact Reduction Using PAD (Pseudo All-Phase Clinical-Dataset) in Cardiac CT
WO2022193276A1 (en) Deep learning method for low dose estimation of medical image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21933554

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21933554

Country of ref document: EP

Kind code of ref document: A1