WO2023000244A1 - Image processing method, system and application thereof - Google Patents

Image processing method, system and application thereof

Info

Publication number
WO2023000244A1
WO2023000244A1 PCT/CN2021/107789 CN2021107789W
Authority
WO
WIPO (PCT)
Prior art keywords
image
energy
low
image processing
frequency information
Prior art date
Application number
PCT/CN2021/107789
Other languages
English (en)
French (fr)
Inventor
郑海荣
李彦明
万丽雯
周豪杰
胡战利
庞志峰
Original Assignee
深圳高性能医疗器械国家研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳高性能医疗器械国家研究院有限公司 filed Critical 深圳高性能医疗器械国家研究院有限公司
Priority to PCT/CN2021/107789 priority Critical patent/WO2023000244A1/zh
Publication of WO2023000244A1 publication Critical patent/WO2023000244A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation

Definitions

  • the present application belongs to the technical field of image synthesis, and in particular relates to an image processing method, system and application thereof.
  • Dual-energy computed tomography has become a more effective non-invasive diagnostic method than traditional CT scanning.
  • the data sets obtained with X-rays of two different energies carry richer scan information and support more clinical applications, such as urinary-tract stone detection, tophus detection, and removal of bone and metal artifacts.
  • dual-energy CT scanning reduces the radiation dose by replacing half of the original high-energy CT scans with low-energy CT scans.
  • existing dual-energy CT implementations still have various disadvantages, including signal cross-interference during high- and low-energy CT scans, and short time intervals between the high- and low-energy CT scans.
  • in the prior art, the high- and low-energy CT signals interfere with each other, the time interval between scans introduces interference, and the long duration of high-energy CT scanning increases the dose.
  • this application provides an image processing method, system and application thereof.
  • the present application provides an image processing method, which extracts high-frequency information and low-frequency information in low-energy CT images and reconstructs the high-frequency and low-frequency information to obtain high-energy CT images; a multi-scale feature map constrains the high-energy CT image and improves its quality to obtain a synthesized high-energy CT image.
  • the high-frequency information includes high-dimensional features and local features
  • the low-frequency information includes low-dimensional features and non-local features
  • Another implementation manner provided by the present application is: extracting high-frequency information and low-frequency information in the low-energy CT image, and establishing a mapping relationship between the low-energy CT image and the high-energy CT image.
  • the multi-scale feature map maintains texture details of the high-energy CT image, and removes artifacts and noise.
  • the synthesized high-energy CT image is a high-energy CT image with rich texture and low noise.
  • the present application also provides an image processing system, including a generator module, a discriminator module, and a loss function module; the generator module is used to extract feature information of a low-energy CT image and reconstruct the low-energy CT image to obtain a high-energy CT image; the discriminator module is used to judge whether the output image is real; the loss function module is used to improve the image quality of the high-energy CT image.
  • the generator module includes a feature extraction submodule and a reconstruction submodule; the feature extraction submodule uses a U-shaped network as its framework, with a residual network as the backbone network.
  • the U-shaped network includes 4 layers of encoding and decoding, a residual network is used between the encoding and decoding, and the residual network is the backbone network.
  • the loss function module uses multi-scale feature map constraints as the loss function.
  • the present application also provides an application of an image processing method, which is applied to image reconstruction, image super-resolution or image noise reduction.
  • the image processing method provided by this application adopts a generative adversarial network based on multi-scale feature map constraints to directly synthesize high-energy CT images from a single low-energy CT scan. While preserving structural similarity and texture details, it reduces the bone and metal artifacts in the low-energy CT image, greatly reduces noise, and improves the signal-to-noise ratio, thereby obtaining CT images that better meet diagnostic requirements.
  • the image processing method provided in this application uses a U-shaped network architecture to extract high-frequency features and a residual network as the backbone to extract the low-frequency features of the image; multi-scale feature maps constrain the images produced by the generator, and a corresponding loss function is constructed. While preserving structural similarity and texture details, this reduces the bone and metal artifacts in the low-energy CT image, greatly reduces noise, and improves the signal-to-noise ratio, thereby obtaining CT images that better meet diagnostic requirements.
  • the image processing method provided in this application can effectively improve the image quality.
  • Fig. 1 is the schematic flow chart of the image processing of the present application
  • Fig. 2 is a schematic structural diagram of the generator module of the present application.
  • Fig. 3 is a schematic structural diagram of the discriminator module of the present application.
  • Figure 4 is a schematic diagram of the multi-scale feature map constraint structure of the present application.
  • Fig. 5 is a schematic diagram of the experimental results of the present application.
  • High-energy CT images are reconstructed images obtained from high-energy X-ray scanning. High and low energy refer to the difference in energy of X-rays.
  • the present application provides an image processing method that extracts high-frequency information and low-frequency information in low-energy CT images, and reconstructs the high-frequency information and low-frequency information to obtain high-energy CT images; during the training process A multi-scale feature map is used to constrain the synthesis process of the high-energy CT image, so as to improve the quality of the synthesized high-energy CT image.
  • the high-frequency information includes high-dimensional features and local features; the low-frequency information includes low-dimensional features and non-local features.
  • the extraction of high-frequency information and low-frequency information in the low-energy CT image establishes a mapping relationship between the low-energy CT image and the high-energy CT image.
  • the multi-scale feature map maintains texture details of the high-energy CT image, and removes artifacts and noise.
  • the second high-energy CT image is a high-energy CT image with rich texture and low noise.
  • the present application also provides an image processing system, including a generator module, a discriminator module, and a loss function module; the generator module is used to extract feature information of a low-energy CT image, and reconstruct the low-energy CT image to obtain a high-energy CT image ; The discriminator module is used to judge whether the output image is real; the loss function module is used to improve the image quality of the high-energy CT image.
  • an end-to-end generator module is set up, with the U-shaped network as the framework and the residual network as the backbone.
  • the generator module has two main functions: the first is to extract the high-frequency and low-frequency information in low-energy CT images and realize the mapping between low-energy and high-energy CT images, using the long connections of the U-shaped network to map the high-frequency biological information and the backbone network to extract the low-frequency biological information; the second is to reconstruct the extracted image features to obtain a synthesized high-energy CT image.
  • This module mainly consists of two parts.
  • the main body is composed of a 4-layer encoding-decoding U-shaped network.
  • a 9-layer residual network is used as the backbone between the encoding and decoding, and skip connections between the encoding and decoding of each layer address the vanishing- and exploding-gradient problems during training.
  • the feature extraction network takes a 256x256 image as input; each layer consists of two 3x3 convolutions with ReLU activation functions, i.e. the light-yellow modules in Figure 2.
  • each transition to the next layer begins with a pooling operation, i.e. the red module in Figure 2.
  • the number of channels in the network doubles three times, from 64 in the first layer to 512.
  • the feature map is then fed into the backbone network, i.e. the purple module in Figure 2, reaching the residual module, which consists of nine 3x3 convolutions with ReLU activation functions.
  • the decoding process remains symmetrical to the encoding.
  • the second part is the image reconstruction sub-module.
  • a final convolutional layer with a 3x3 kernel compresses the feature map to 1 channel, which is the output CT image.
  • the discriminator module consists of 8 groups of convolutional layers, each a convolution with a 3x3 kernel and stride 1.
  • the activation function is LReLU, which is indicated by the light yellow and dark yellow in Figure 3.
  • the number of channels of the feature map is gradually doubled from 32 to 256, and the input is passed to the pooling module indicated in red for parameter compression.
  • the final compressed feature map passes through two fully connected layers, i.e. the purple modules in Figure 3, to judge whether the output image is real.
  • the generator module includes a feature extraction sub-module and a reconstruction sub-module, the feature extraction sub-module is based on a U-shaped network, and the residual network is a backbone network.
  • the U-shaped network includes 4 layers of encoding and decoding, a residual network is used between the encoding and decoding, and the residual network is the backbone network.
  • the loss function module uses multi-scale feature map constraints as a loss function.
  • P_r and P_z represent the probability distributions of the high-energy CT image and the generated high-energy CT image.
  • λ represents the penalty coefficient, which is used to prevent the mode-collapse and vanishing-gradient problems during training of the generative adversarial network.
  • D(x) is the result of discriminating the low-energy CT image.
  • D(G(z)) is the discriminator's result on the high-energy CT image.
  • the gradient-penalty term is the expectation, over randomly sampled interpolates, of the squared deviation of the discriminator's input-gradient norm from 1.
  • L_MSE(G(x), y) is the optimization objective, w is the width, h is the height, G(x) is the synthesized high-energy image, and y_(x,y) is the target high-energy image.
  • L_msf(G(x), y) is the multi-scale feature constraint objective, defined as the expectation of the Euclidean distance between multi-scale features.
  • Conv and m represent the multi-scale convolution kernels from 1 to 3.
  • H, W and C are the height, width and number of channels of the sampled image.
  • β_m is the weight of each scale, set to 0.3, 0.2 and 0.3 respectively.
  • L(G(x), y) is the mixed loss; λ_adv, λ_mse and λ_msf represent the weights of the respective loss functions and are set as hyperparameters.
  • the entire network is optimized using the Adam optimizer; the network is trained until the loss curves converge to the same order of magnitude.
  • the present application also provides an application of an image processing method, which is applied to image reconstruction, image super-resolution or image noise reduction.
  • this method can be applied to other types of medical image reconstruction; besides reconstruction, with appropriate changes it can also be applied to image super-resolution or noise reduction; the multi-scale feature constraint can be regarded as a plug-and-play module that can be added to any traditional convolutional neural network workflow to improve the network's performance.
  • this application combines a U-Net feature extraction network with residuals to extract high- and low-dimensional features as well as local and non-local features, which greatly improves the detail expressiveness of high- and low-energy CT images; secondly, a multi-scale feature loss function is introduced to effectively improve image quality along the target directions, such as contrast information, texture information and structural information; and, after mixing with the original low-energy CT image, the result can match the quality of the mixed images provided by Siemens.
  • each row is a group of test samples: the left column (LECT) is the low-energy CT image, the input to the network; the middle columns (Unet-HL, Unet-MSF and WGAN-HL) are the high-energy CT images obtained by the comparison algorithms; and the right column (HECT) is the reference high-energy CT image.
  • the pixel range is from -1000 to 1000.
  • the reconstructed images obtained by the present application are closer to the contrast and texture details of the real high-energy CT image, and comparing Unet-MSF with Unet-HL shows that, after the multi-scale feature constraint is introduced, the Unet network also gains texture quality and contrast.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present application belongs to the technical field of image synthesis, and in particular relates to an image processing method, system and application thereof. In the prior art, the high- and low-energy CT signals interfere with each other, the time interval between scans introduces interference, and the long duration of high-energy CT scanning increases the radiation dose. The present application provides an image processing method that extracts the high-frequency and low-frequency information in a low-energy CT image and reconstructs the high-frequency and low-frequency information to obtain a high-energy CT image; a multi-scale feature map is used to constrain the high-energy CT image and improve its quality, yielding a synthesized high-energy CT image. This realizes a technique for directly synthesizing a high-energy CT image from a single low-energy CT scan; while preserving structural similarity and texture details, it reduces the bone and metal artifacts in the low-energy CT image, greatly reduces noise, and improves the signal-to-noise ratio, thereby producing CT images that better meet diagnostic requirements.

Description

Image processing method, system and application thereof
Technical Field
The present application belongs to the technical field of image synthesis, and in particular relates to an image processing method, system and application thereof.
Background Art
Dual-energy computed tomography (dual-energy CT) has become a more effective non-invasive diagnostic method than traditional CT scanning. The data sets obtained with X-rays of two different energies carry richer scan information and support more clinical applications, such as urinary-tract stone detection, tophus detection, and removal of bone and metal artifacts. Moreover, compared with traditional CT, dual-energy CT reduces the radiation dose by replacing half of the original high-energy CT scans with low-energy CT scans. However, existing dual-energy CT implementations still have various drawbacks, including cross-interference between signals during high- and low-energy CT scans and a short time interval between the high- and low-energy scans. In addition, the energy accumulated during high-energy CT scanning can still raise the likelihood of various diseases and thus harm human health. Therefore, the research and development of interference-free, unbiased, high-quality CT image reconstruction methods at lower doses is of great scientific significance and has broad application prospects in the current field of medical diagnosis.
Existing work uses deep learning to inject the prior knowledge of high-energy CT images into low-energy CT images to obtain synthesized pseudo-high-energy CT images. This approach demonstrates that deep learning can effectively learn the differences between high- and low-energy CT images, and that high-quality pseudo-high-energy CT images can be synthesized from low-energy CT. Other work adds a residual structure on top of the deep learning network to first denoise the low-energy CT image, and then extracts features with a 4-layer encoding-decoding U-net architecture, giving the image more detail and better visual quality.
However, in the prior art the high- and low-energy CT signals interfere with each other, the time interval between scans introduces interference, and the long duration of high-energy CT scanning increases the dose.
Summary of the Invention
1. Technical problem to be solved
Based on the prior-art problems that the high- and low-energy CT signals interfere with each other, that the time interval between scans introduces interference, and that the long duration of high-energy CT scanning increases the dose, the present application provides an image processing method, system and application thereof.
2. Technical solution
To achieve the above purpose, the present application provides an image processing method that extracts the high-frequency and low-frequency information in a low-energy CT image and reconstructs the high-frequency and low-frequency information to obtain a high-energy CT image; a multi-scale feature map is used to constrain the high-energy CT image and improve its quality, yielding a synthesized high-energy CT image.
In another implementation provided by the present application, the high-frequency information includes high-dimensional features and local features, and the low-frequency information includes low-dimensional features and non-local features.
In another implementation provided by the present application, extracting the high-frequency and low-frequency information in the low-energy CT image establishes a mapping relationship between the low-energy CT image and the high-energy CT image.
In another implementation provided by the present application, the multi-scale feature map preserves the texture details of the high-energy CT image and removes artifacts and noise.
In another implementation provided by the present application, the synthesized high-energy CT image is a high-energy CT image with rich texture and low noise.
The present application also provides an image processing system comprising a generator module, a discriminator module and a loss function module. The generator module extracts the feature information of a low-energy CT image and reconstructs the low-energy CT image to obtain a high-energy CT image; the discriminator module judges whether the output image is real; the loss function module improves the image quality of the high-energy CT image.
In another implementation provided by the present application, the generator module includes a feature extraction sub-module and a reconstruction sub-module; the feature extraction sub-module uses a U-shaped network as its framework, with a residual network as the backbone network.
In another implementation provided by the present application, the U-shaped network includes 4 layers of encoding and decoding, a residual network is used between the encoding and decoding, and this residual network is the backbone network.
In another implementation provided by the present application, the loss function module uses the multi-scale feature map constraint as its loss function.
The present application also provides an application of the image processing method, applying it to image reconstruction, image super-resolution or image denoising.
3. Beneficial effects
Compared with the prior art, the beneficial effects of the image processing method, system and application thereof provided by the present application are:
The image processing method provided by the present application adopts a generative adversarial network based on multi-scale feature map constraints to directly synthesize a high-energy CT image from a single low-energy CT scan. While preserving structural similarity and texture details, it reduces the bone and metal artifacts in the low-energy CT image, greatly reduces noise, and improves the signal-to-noise ratio, thereby producing CT images that better meet diagnostic requirements.
The image processing method provided by the present application uses a U-shaped network architecture to extract high-frequency features and a residual network as the backbone to extract the low-frequency features of the image; multi-scale feature maps constrain the images produced by the generator, and a corresponding loss function is constructed. While preserving structural similarity and texture details, this reduces the bone and metal artifacts in the low-energy CT image, greatly reduces noise, and improves the signal-to-noise ratio, thereby producing CT images that better meet diagnostic requirements.
The image processing method provided by the present application can effectively improve image quality.
Description of the Drawings
Fig. 1 is a schematic flow chart of the image processing of the present application;
Fig. 2 is a schematic structural diagram of the generator module of the present application;
Fig. 3 is a schematic structural diagram of the discriminator module of the present application;
Fig. 4 is a schematic diagram of the multi-scale feature map constraint structure of the present application;
Fig. 5 is a schematic diagram of the experimental results of the present application.
Detailed Description of the Embodiments
Hereinafter, specific embodiments of the present application will be described in detail with reference to the accompanying drawings, from which those skilled in the art can clearly understand and implement the present application. Without departing from the principles of the present application, features of different embodiments may be combined to obtain new embodiments, or certain features of certain embodiments may be replaced to obtain other preferred embodiments.
Using low-energy X-rays causes the reconstructed image to contain substantial noise and metal artifacts; physical structural limitations introduce a time offset between the high- and low-energy CT images; and the long scan time increases the patient's high-energy CT radiation dose. A high-energy CT image is an image reconstructed from a high-energy X-ray scan; "high energy" and "low energy" refer to the different energies of the X-rays.
Referring to Figs. 1-5, the present application provides an image processing method that extracts the high-frequency and low-frequency information in a low-energy CT image and reconstructs the high-frequency and low-frequency information to obtain a high-energy CT image; during training, a multi-scale feature map constrains the synthesis of the high-energy CT image, improving the quality of the synthesized high-energy CT image.
Further, the high-frequency information includes high-dimensional features and local features; the low-frequency information includes low-dimensional features and non-local features.
Further, extracting the high-frequency and low-frequency information in the low-energy CT image establishes a mapping relationship between the low-energy CT image and the high-energy CT image.
Further, the multi-scale feature map preserves the texture details of the high-energy CT image and removes artifacts and noise.
Further, the second high-energy CT image is a high-energy CT image with rich texture and low noise.
The present application also provides an image processing system comprising a generator module, a discriminator module and a loss function module. The generator module extracts the feature information of a low-energy CT image and reconstructs the low-energy CT image to obtain a high-energy CT image; the discriminator module judges whether the output image is real; the loss function module improves the image quality of the high-energy CT image.
An end-to-end generator module is set up, with the U-shaped network as the framework and the residual network as the backbone.
The generator module has two main functions. The first is to extract the high-frequency and low-frequency information in the low-energy CT image and realize the mapping from the low-energy CT image to the high-energy CT image; specifically, the long (skip) connections of the U-shaped network map the high-frequency biological information, while the backbone network extracts the low-frequency biological information. The second is to reconstruct the extracted image features into a synthesized high-energy CT image.
This module consists of two parts. The main body is a 4-layer encoding-decoding U-shaped network; a 9-layer residual network serves as the backbone between the encoder and decoder, and skip connections between the encoding and decoding of each layer address the vanishing- and exploding-gradient problems during training. The feature extraction network takes a 256x256 image as input; each layer consists of two 3x3 convolutions with ReLU activation functions, i.e. the light-yellow modules in Fig. 2. Each transition to the next layer begins with a pooling operation, i.e. the red modules in Fig. 2. The number of channels thus doubles three times, from 64 in the first layer to 512. The feature map is then fed into the backbone network, i.e. the purple module in Fig. 2, reaching the residual module, which consists of nine 3x3 convolutions with ReLU activation functions. The decoding process is symmetrical to the encoding. The second part is the image reconstruction sub-module: a final convolutional layer with a 3x3 kernel compresses the feature map to 1 channel, which is the output CT image.
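The encoder dimensions described above can be traced in code. The following pure-Python sketch is only an illustration: it assumes 'same'-padded 3x3 convolutions (which preserve spatial size) and 2x2 pooling between levels, details the text does not specify.

```python
def trace_generator_shapes(size=256, base_ch=64, depth=4):
    """Trace (channels, height, width) through the 4-level U-shaped encoder.

    Assumes 'same'-padded 3x3 convolutions and 2x2 pooling between levels,
    which the patent text does not state explicitly.
    """
    shapes = []
    ch, s = base_ch, size
    for level in range(depth):
        shapes.append((ch, s, s))   # after the two 3x3 conv + ReLU blocks
        if level < depth - 1:
            s //= 2                 # pooling on entering the next level
            ch *= 2                 # channels double three times: 64 -> 512
    return shapes

encoder = trace_generator_shapes()
# encoder == [(64, 256, 256), (128, 128, 128), (256, 64, 64), (512, 32, 32)]
```

The trace confirms the channel progression stated in the text: 64 in the first layer, doubled three times to 512 at the bottleneck, where the 9-layer residual backbone sits.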
The discriminator module consists of 8 groups of convolutional layers, each a convolution with a 3x3 kernel and stride 1, with LReLU activation functions, shown in light and dark yellow in Fig. 3. The number of feature-map channels doubles stepwise from 32 to 256, and the input is passed to the pooling modules shown in red for parameter compression. The final compressed feature map passes through two fully connected layers, i.e. the purple modules in Fig. 3, to judge whether the output image is real.
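The stepwise channel doubling of the discriminator can be sketched similarly. The text only states that channels double from 32 to 256 across the 8 layers, so the two-layers-per-width grouping below is an assumption, not something the patent specifies:

```python
def discriminator_channels(n_layers=8, start=32, top=256):
    """Per-layer channel counts for the 8-conv-layer discriminator.

    Grouping two layers per channel width is an assumption; the patent
    only says channels double stepwise from 32 to 256.
    """
    chans, c = [], start
    for i in range(n_layers):
        chans.append(c)
        if i % 2 == 1 and c < top:  # double after every pair of layers
            c *= 2
    return chans

schedule = discriminator_channels()
# schedule == [32, 32, 64, 64, 128, 128, 256, 256]
```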
Further, the generator module includes a feature extraction sub-module and a reconstruction sub-module; the feature extraction sub-module uses a U-shaped network as its framework, with a residual network as the backbone network.
Further, the U-shaped network includes 4 layers of encoding and decoding, a residual network is used between the encoding and decoding, and this residual network is the backbone network.
Further, the loss function module uses the multi-scale feature map constraint as its loss function.
The generative adversarial network takes the multi-scale feature map constraint as its loss function. This constraint can effectively improve image quality, for example the preservation of texture details and the ability to remove artifacts and noise.
First, the objective function of the basic generative adversarial network is as follows:

L_adv = E_{z~P_z}[D(G(z))] - E_{x~P_r}[D(x)] + λ · E_{x̂~P_x̂}[(||∇_{x̂} D(x̂)||_2 - 1)^2]

where P_r and P_z represent the probability distributions of the high-energy CT image and the generated high-energy CT image; P_x̂ represents the probability distribution sampled randomly between the target-image and generated-image distributions; λ represents the penalty coefficient, whose role is to prevent the mode-collapse and vanishing-gradient problems that arise when training a generative adversarial network; L_adv is the optimization objective; E_{x~P_r}[D(x)] is the expectation of the result of discriminating the low-energy CT image, and D(x) is the result of discriminating the low-energy CT image; E_{z~P_z}[D(G(z))] is the expectation of the discriminator's result on the high-energy CT image, and D(G(z)) is the discriminator's result on the high-energy CT image; E_{x̂~P_x̂}[·] is the expectation over the gradient-penalty samples, and (||∇_{x̂} D(x̂)||_2 - 1)^2 is the gradient-penalty result.
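The adversarial objective can be sketched numerically. Since the patent provides no code, the toy below uses a linear critic D(x) = w·x, an assumption made so that the input gradient, and hence the gradient penalty, is analytic and needs no automatic differentiation; the random arrays merely stand in for batches of CT patches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear critic D(x) = w . x, whose gradient w.r.t. its input is exactly w.
w = rng.normal(size=16)
def D(x):
    return x @ w

real = rng.normal(size=(8, 16))   # stands in for real high-energy CT patches
fake = rng.normal(size=(8, 16))   # stands in for generated patches
lam = 10.0                        # penalty coefficient lambda (illustrative value)

# Random interpolation between the real and generated distributions (P_xhat).
eps = rng.uniform(size=(8, 1))
x_hat = eps * real + (1 - eps) * fake

# For a linear critic, ||grad_x D(x_hat)||_2 = ||w||_2 at every sample.
grad_norm = np.linalg.norm(w)
penalty = lam * np.mean((grad_norm - 1.0) ** 2)

# Critic loss: E[D(G(z))] - E[D(x)] + gradient penalty.
loss_adv = D(fake).mean() - D(real).mean() + penalty
```

With a real network, `grad_norm` would be computed per interpolate via automatic differentiation; the linear critic only serves to make each term of the formula concrete.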
When training a pixel-level generative network, offsets between paired pixels often occur and cause errors in detail, so an L2 loss function is introduced to calibrate the difference between the generated and real pixels, as follows:

L_MSE(G(x), y) = (1 / (w·h)) · Σ_{i=1}^{w} Σ_{j=1}^{h} (G(x)_{(i,j)} - y_{(i,j)})^2

L_MSE(G(x), y) is the optimization objective, w is the width, h is the height, G(x) is the synthesized high-energy image, and y_{(i,j)} is the target high-energy image at pixel (i, j). To generate the edge information of the image, a multi-scale feature-map loss function is introduced; it effectively extracts the high-frequency information of the target image, ensuring that local patterns and texture information are extracted without being constrained by specific pixels, where x is the input low-energy CT image and y is the target high-energy CT image.
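The L2 calibration loss above reduces to a mean of squared pixel differences over the w x h image; a minimal NumPy sketch:

```python
import numpy as np

def mse_loss(gen, target):
    """Pixel-wise L2 calibration loss: mean squared difference over w x h."""
    w, h = gen.shape
    return np.sum((gen - target) ** 2) / (w * h)

# Toy 4x4 images: all-zero synthesis vs. all-one target.
g = np.zeros((4, 4))
y = np.ones((4, 4))
# mse_loss(g, y) == 1.0
```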
L_msf(G(x), y) = E[ Σ_{m=1}^{3} β_m · (1 / (H_m·W_m·C_m)) · ||Conv_m(G(x)) - Conv_m(y)||_2^2 ]

where L_msf(G(x), y) is the multi-scale feature constraint objective; E[·] is the constraint expectation; ||Conv_m(G(x)) - Conv_m(y)||_2^2 is the Euclidean distance of the multi-scale features; Conv and m represent the multi-scale convolution kernels from 1 to 3; H, W and C are the height, width and number of channels of the sampled image; and β_m is the weight of each scale, set to 0.3, 0.2 and 0.3 respectively.
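The multi-scale feature loss can be illustrated with simple box (mean) filters standing in for the convolution kernels Conv_m at scales 1 to 3; the actual kernels of the network are not specified in the text, so the filters below are an assumption, while the weights β_m = 0.3, 0.2, 0.3 follow the text.

```python
import numpy as np

def box_filter(img, k):
    """Valid-mode k x k mean filter -- a stand-in for the unspecified Conv_m."""
    h, w = img.shape
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = img[i:i + k, j:j + k].mean()
    return out

def msf_loss(gen, target, betas=(0.3, 0.2, 0.3)):
    """Weighted per-scale mean squared Euclidean distance, scales m = 1..3."""
    total = 0.0
    for m, beta in zip((1, 2, 3), betas):
        fg, ft = box_filter(gen, m), box_filter(target, m)
        total += beta * np.mean((fg - ft) ** 2)  # normalized by H*W (C = 1 here)
    return total
```

For identical images the loss is exactly 0; for single-channel images differing by a constant offset of 1, each scale contributes its weight, so the loss is 0.3 + 0.2 + 0.3 = 0.8.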
Finally, the mixed loss function is as follows:

L(G(x), y) = λ_adv·L_adv + λ_mse·L_MSE(G(x), y) + λ_msf·L_msf(G(x), y)

where L(G(x), y) is the mixed loss; λ_adv, λ_mse and λ_msf represent the weights of the respective loss functions and are set as hyperparameters.
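The mixed loss is a plain weighted sum of the three terms; a one-line sketch, with arbitrary illustrative weights (the patent leaves the actual values as hyperparameters):

```python
def mixed_loss(l_adv, l_mse, l_msf, lam_adv, lam_mse, lam_msf):
    """Weighted sum of the adversarial, MSE and multi-scale feature losses."""
    return lam_adv * l_adv + lam_mse * l_mse + lam_msf * l_msf

# Illustrative values only: 0.5*1.0 + 1.0*2.0 + 0.1*3.0 == 2.8
total = mixed_loss(1.0, 2.0, 3.0, 0.5, 1.0, 0.1)
```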
The whole network is optimized using the Adam optimizer; the network is trained until the loss curves converge to the same order of magnitude.
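The text specifies only that Adam is used. As a hedged illustration of what that update rule does, here is a minimal NumPy Adam step applied to a toy quadratic loss; the learning rate and β values are common defaults, not taken from the patent:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: moment estimates, bias correction, scaled step."""
    m = b1 * m + (1 - b1) * grad        # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2   # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)           # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy problem: minimize f(theta) = (theta - 3)^2, whose gradient is 2*(theta - 3).
theta, m, v = 0.0, 0.0, 0.0
for t in range(1, 5001):
    grad = 2 * (theta - 3.0)
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.01)
# theta converges toward the minimizer 3.0
```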
The present application also provides an application of the image processing method, applying it to image reconstruction, image super-resolution or image denoising. Besides DECT image reconstruction, the method can be applied to other types of medical image reconstruction; beyond reconstruction, with appropriate changes it can also be applied to image super-resolution or denoising. The multi-scale feature constraint can be regarded as a plug-and-play module that can be added to any traditional convolutional neural network workflow to improve the network's performance.
A feature extraction network with a residual structure is used to exploit the image feature information of the WGAN, and the approach is applied to the field of dual-energy CT imaging.
Based on the WGAN model, the present application incorporates a U-Net feature extraction network with residuals, extracting high- and low-dimensional features as well as local and non-local features, which greatly improves the detail expressiveness of the high- and low-energy CT images. Second, a multi-scale feature loss function is introduced, which effectively improves image quality along the target directions, such as contrast information, texture information and structural information; and after being mixed with the original low-energy CT image, the result can match the quality of the mixed images provided by Siemens.
Experimental results:
The technical solution of the present application achieved very good results in the contrast-recovery and metal-artifact experiments. In the experimental figure, each row is a group of test samples: the left column (LECT) is the low-energy CT image, the input to the network; the middle columns (Unet-HL, Unet-MSF and WGAN-HL) are the high-energy CT images obtained by the comparison algorithms; the column highlighted in gold is the high-energy CT image obtained by the present application; and the right column (HECT) is the reference high-energy CT image. The pixel range is -1000 to 1000 throughout.
Comparing the input image with the other algorithms shows that the reconstructed images obtained by the present application are closer to the contrast and texture details of the real high-energy CT images; and comparing Unet-MSF with Unet-HL shows that the Unet network also gains texture quality and contrast after the multi-scale feature constraint is introduced.
Although the present application has been described above with reference to specific embodiments, those skilled in the art should understand that many modifications can be made to the disclosed configurations and details within the principles and scope of the present disclosure. The scope of protection of the present application is determined by the appended claims, which are intended to cover all modifications within the literal meaning or the scope of equivalents of the technical features in the claims.

Claims (10)

  1. An image processing method, characterized in that: high-frequency information and low-frequency information are extracted from a low-energy CT image, and the high-frequency information and the low-frequency information are reconstructed to obtain a high-energy CT image; a multi-scale feature map is used to constrain the high-energy CT image, and the quality of the high-energy CT image is improved to obtain a synthesized high-energy CT image.
  2. The image processing method according to claim 1, characterized in that: the high-frequency information includes high-dimensional features and local features; the low-frequency information includes low-dimensional features and non-local features.
  3. The image processing method according to claim 1, characterized in that: extracting the high-frequency and low-frequency information in the low-energy CT image establishes a mapping relationship between the low-energy CT image and the high-energy CT image.
  4. The image processing method according to claim 1, characterized in that: the multi-scale feature map preserves the texture details of the high-energy CT image and removes artifacts and noise.
  5. The image processing method according to any one of claims 1 to 4, characterized in that: the synthesized high-energy CT image is a high-energy CT image with rich texture and low noise.
  6. An image processing system, characterized by comprising a generator module, a discriminator module and a loss function module;
    the generator module is configured to extract feature information of a low-energy CT image and reconstruct the low-energy CT image to obtain a high-energy CT image;
    the discriminator module is configured to judge whether the output image is real;
    the loss function module is configured to improve the image quality of the high-energy CT image.
  7. The image processing system according to claim 6, characterized in that: the generator module includes a feature extraction sub-module and a reconstruction sub-module; the feature extraction sub-module uses a U-shaped network as its framework, with a residual network as the backbone network.
  8. The image processing system according to claim 7, characterized in that: the U-shaped network includes 4 layers of encoding and decoding, a residual network is used between the encoding and decoding, and this residual network is the backbone network.
  9. The image processing system according to claim 8, characterized in that: the loss function module uses the multi-scale feature map constraint as its loss function.
  10. An application of an image processing method, characterized in that: the image processing method according to any one of claims 1 to 5 is applied to image reconstruction, image super-resolution or image denoising.
PCT/CN2021/107789 2021-07-22 2021-07-22 Image processing method, system and application thereof WO2023000244A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/107789 WO2023000244A1 (zh) 2021-07-22 2021-07-22 Image processing method, system and application thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/107789 WO2023000244A1 (zh) 2021-07-22 2021-07-22 Image processing method, system and application thereof

Publications (1)

Publication Number Publication Date
WO2023000244A1 true WO2023000244A1 (zh) 2023-01-26

Family

ID=84980307

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/107789 WO2023000244A1 (zh) 2021-07-22 2021-07-22 一种图像处理方法、系统及其应用

Country Status (1)

Country Link
WO (1) WO2023000244A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116363485A (zh) * 2023-05-22 2023-06-30 齐鲁工业大学(山东省科学院) 一种基于改进YOLOv5的高分辨率目标检测方法
CN117876279A (zh) * 2024-03-11 2024-04-12 浙江荷湖科技有限公司 基于扫描光场序列图像的去除运动伪影方法及系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080240525A1 (en) * 2007-03-29 2008-10-02 Martti Kalke Method and system for reconstructing a medical image of an object
CN104156917A (zh) * 2014-07-30 2014-11-19 天津大学 基于双能谱的x射线ct图像增强方法
CN110070516A (zh) * 2019-03-14 2019-07-30 天津大学 一种面向医学能谱ct的图像融合方法
CN112634390A (zh) * 2020-12-17 2021-04-09 深圳先进技术研究院 基于Wasserstein生成对抗网络模型的高能图像合成方法、装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080240525A1 (en) * 2007-03-29 2008-10-02 Martti Kalke Method and system for reconstructing a medical image of an object
CN104156917A (zh) * 2014-07-30 2014-11-19 天津大学 基于双能谱的x射线ct图像增强方法
CN110070516A (zh) * 2019-03-14 2019-07-30 天津大学 一种面向医学能谱ct的图像融合方法
CN112634390A (zh) * 2020-12-17 2021-04-09 深圳先进技术研究院 基于Wasserstein生成对抗网络模型的高能图像合成方法、装置

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116363485A (zh) * 2023-05-22 2023-06-30 齐鲁工业大学(山东省科学院) 一种基于改进YOLOv5的高分辨率目标检测方法
CN116363485B (zh) * 2023-05-22 2024-03-12 齐鲁工业大学(山东省科学院) 一种基于改进YOLOv5的高分辨率目标检测方法
CN117876279A (zh) * 2024-03-11 2024-04-12 浙江荷湖科技有限公司 基于扫描光场序列图像的去除运动伪影方法及系统
CN117876279B (zh) * 2024-03-11 2024-05-28 浙江荷湖科技有限公司 基于扫描光场序列图像的去除运动伪影方法及系统

Similar Documents

Publication Publication Date Title
Kang et al. Deep convolutional framelet denosing for low-dose CT via wavelet residual network
CN112396672B (zh) 一种基于深度学习的稀疏角度锥束ct图像重建方法
US20210012463A1 (en) System and method for processing data acquired utilizing multi-energy computed tomography imaging
CN110223255B (zh) 一种基于残差编解码网络的低剂量ct图像去噪递归方法
CN112598759B (zh) 抑制低剂量ct图像中伪影噪声的多尺度特征生成对抗网络
WO2023000244A1 (zh) 一种图像处理方法、系统及其应用
CN110517198B (zh) 用于ldct图像去噪的高频敏感gan网络
CN107292858B (zh) 一种基于低秩分解和稀疏表示的多模态医学图像融合方法
Kwon et al. Cycle-free CycleGAN using invertible generator for unsupervised low-dose CT denoising
CN112837244B (zh) 一种基于渐进式生成对抗网络的低剂量ct图像降噪及去伪影方法
Gajera et al. CT-scan denoising using a charbonnier loss generative adversarial network
Xue et al. LCPR-Net: low-count PET image reconstruction using the domain transform and cycle-consistent generative adversarial networks
Hou et al. CT image quality enhancement via a dual-channel neural network with jointing denoising and super-resolution
Feng et al. A preliminary study on projection denoising for low-dose CT imaging using modified dual-domain U-net
Panda et al. A 3D wide residual network with perceptual loss for brain MRI image denoising
CN116645283A (zh) 基于自监督感知损失多尺度卷积神经网络的低剂量ct图像去噪方法
Chan et al. An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction
Liu et al. MRCON-Net: Multiscale reweighted convolutional coding neural network for low-dose CT imaging
CN118154451A (zh) 基于结构非对齐配对数据集的深度学习ct图像去噪方法
CN114187181A (zh) 基于残差信息精炼的双路径肺部ct图像超分辨率方法
CN112258438B (zh) 一种基于非配对数据的ldct图像恢复方法
Malczewski PET image reconstruction using compressed sensing
CN113506353A (zh) 一种图像处理方法、系统及其应用
CN113902912B (zh) Cbct影像的处理方法、神经网络系统的创建方法、以及装置
CN114202464B (zh) 基于深度学习的x射线ct局部高分辨率成像方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21950499

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE