WO2021168920A1 - Low-dose image enhancement method, system, computer device and storage medium based on multiple dose levels - Google Patents

Low-dose image enhancement method, system, computer device and storage medium based on multiple dose levels Download PDF

Info

Publication number
WO2021168920A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
current
dose
image
feature
Prior art date
Application number
PCT/CN2020/079412
Other languages
English (en)
French (fr)
Inventor
胡战利
梁栋
黄振兴
杨永峰
刘新
郑海荣
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院
Publication of WO2021168920A1 publication Critical patent/WO2021168920A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03Computed tomography [CT]
    • A61B6/032Transmission computed tomography [CT]
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52Devices using data or image processing specially adapted for radiation diagnosis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/003Reconstruction from projections, e.g. tomography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10104Positron emission tomography [PET]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10108Single photon emission computed tomography [SPECT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Definitions

  • The present invention relates to the technical field of image enhancement, and in particular to a low-dose image enhancement method, system, computer device and storage medium based on multiple dose levels.
  • Computed tomography (CT) is an important imaging technique for obtaining information about the internal structure of objects non-destructively. It offers high resolution, high sensitivity and multi-level imaging, is one of the most widely installed medical imaging diagnostic devices in China, and is used across many clinical examination fields. However, because CT scanning requires X-rays, and as the potential hazards of radiation have become better understood, the issue of CT radiation dose has attracted increasing attention.
  • The As Low As Reasonably Achievable (ALARA) principle requires that the radiation dose delivered to the patient be reduced as much as possible while still meeting clinical diagnostic requirements. Research and development of new low-dose CT imaging methods that preserve CT image quality while reducing harmful radiation dose therefore has important scientific significance and application prospects in medical diagnosis.
  • Application number 201910499262.X discloses "a shallow residual encoder-decoder recursive network for low-dose CT image denoising". The network complexity is reduced by decreasing the number of layers and convolution kernels in the residual encoder-decoder network, while a recursive process is used to improve network performance. The algorithm learns an end-to-end mapping through network training to obtain high-quality images, and at each recursion the original low-dose CT image is concatenated to the next input, which effectively avoids image distortion after multiple recursions, extracts image features better, and preserves the detail information of the image. That invention can reduce network complexity while improving performance, so that the denoised image retains image details and has a clearer structure.
  • Application number CN110559009A discloses a "GAN-based method, system and medium for converting multi-modal low-dose CT into high-dose CT". A low-dose CT of arbitrary modality is taken as input; a two-dimensional discrete wavelet transform is applied to the low-dose CT to obtain multiple decomposition results; the low-dose CT and its decomposition results are fed into the encoder of a trained GAN for encoding, and the encoding result is then decoded by the GAN decoder to obtain the corresponding high-dose modal image. By inputting the low-dose CT together with its wavelet transform results into the trained GAN encoder and decoding with the GAN decoder, that invention conveniently converts a low-dose CT image of any modality into a high-dose CT image.
  • The low-dose image reconstruction methods in the above technical solutions consider only image information for reconstruction. However, the image information obtained at different doses differs: image information acquired at a higher dose level is generally more complex than that acquired at a lower dose level, and the lower the dose, the harder the image is to reconstruct. In actual clinical scanning, the scanning dose differs from patient to patient, and these differing doses significantly affect the later reconstruction. Because the above schemes consider only the single dimension of image information and reconstruct the acquired low-dose image information directly with the algorithms described, the resulting high-dose images still fail to meet requirements, so there is room for improvement.
  • The first object of the present invention is to provide a low-dose image enhancement method based on multiple dose levels, which can further improve the clarity of low-dose images after reconstruction.
  • A low-dose image enhancement method based on multiple dose levels includes: acquiring current input image information, the input image information including low-dose image information; feeding the current input image information into a constructed dose level evaluation model to evaluate its dose level and form current dose level information corresponding to the current input image information; performing feature transformation on the current dose level information with a constructed feature transformation module to form current transformation information; performing feature extraction on the current input image information with a constructed cascade fusion model to obtain current image feature information; and fusing the current image feature information with the current transformation information to form current reconstructed image information.
  • In this way, the low-dose image reconstruction does not consider only the single dimension of input image information but also takes the dose level of the input image into account, reconstructing the image from data of multiple dimensions to improve the clarity of the reconstructed image. That is, the current input image information is first evaluated by the constructed dose level evaluation model to obtain the corresponding current dose level information; after feature transformation, the dose level information is fused with the current image feature information, and reconstruction finally yields reconstructed image information of higher clarity.
  • In a preferred example, the present invention can be further configured so that the dose level evaluation model includes multiple sequentially connected convolutional layers followed by two fully connected layers; each convolutional layer except the last is followed in turn by a ReLU activation function, a batch normalization layer and a max pooling layer, and the convolutional layers use 3x3 kernels.
  • The present invention can be further configured so that the loss function for evaluating the dose level of the current input image information is the cross-entropy loss.
  • Because the scanning dose differs from patient to patient in actual clinical scanning, the dose level of the current input image information needs to be evaluated. The dose level evaluation model makes it possible to assess the dose level of a low-dose image even when the dose is unknown, providing one of the parameters required for image reconstruction.
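  • As a minimal sketch, the dose level evaluation model described above could be implemented as follows in PyTorch, assuming seven 3x3 convolutional layers (each except the last followed by ReLU, batch normalization and max pooling) and two fully connected layers; the channel widths, input handling and five-class output are assumptions, since the published parameter table is not reproduced here.

```python
import torch
import torch.nn as nn

class DoseLevelClassifier(nn.Module):
    """Dose level evaluation model: 7 conv layers (3x3) + 2 fully connected layers.
    Channel widths and the 5 dose-level classes are illustrative assumptions."""
    def __init__(self, in_channels=1, num_levels=5,
                 widths=(32, 32, 64, 64, 128, 128, 256)):
        super().__init__()
        layers = []
        c_in = in_channels
        for i, c_out in enumerate(widths):
            layers.append(nn.Conv2d(c_in, c_out, kernel_size=3, padding=1))
            if i < len(widths) - 1:                         # every conv layer except the last
                layers.append(nn.ReLU(inplace=True))        # ReLU activation
                layers.append(nn.BatchNorm2d(c_out))        # batch normalization layer
                layers.append(nn.MaxPool2d(kernel_size=2))  # max pooling layer
            c_in = c_out
        self.features = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)                 # makes the FC input size-independent
        self.fc = nn.Sequential(nn.Linear(c_in, 128), nn.ReLU(inplace=True),
                                nn.Linear(128, num_levels))

    def forward(self, x):
        x = self.features(x)
        x = self.pool(x).flatten(1)
        return self.fc(x)                                   # logits for the cross-entropy loss
```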
  • The present invention can be further configured so that the feature transformation module performs feature transformation on the current dose level information and forms the current transformation information as follows: perform feature extraction on the current input image information to obtain current image feature information; preprocess the current dose level information to form current dose level preprocessing information; perform element-wise (dot) multiplication of the current dose level preprocessing information with the current image feature information to scale the matrix and form current scaling information; and add the current dose level preprocessing information to the current scaling information to form the current transformation information.
  • The feature transformation uses a feature transformation function G, defined as:
  • (A, B) = G(P)
  • where A is a scaling operation and B is an offset operation.
  • the dose level information can be transformed into data that can be fused with the image feature information, so as to facilitate subsequent data processing.
  • The present invention can be further configured so that the cascade fusion model includes a plurality of cascade fusion modules; each cascade fusion module corresponds to one feature transformation module, and the cascade fusion module provides image feature information to its feature transformation module. The cascade fusion modules in turn perform feature extraction on the input image, obtain the corresponding image feature information, and fuse that image feature information with the corresponding transformation information to form the fused image information.
  • The feature fusion process can be expressed as:
  • F_out = F_in + f(F_b, A*F_b + B)
  • where F_in and F_out denote the input and output feature maps, (A, B) denotes the feature transformation of the module (A being a scaling operation and B an offset operation), and f is the fusion operation.
  • the present invention can be further configured as: the cascade fusion module includes a down-sampling layer, an up-sampling layer, and a feature fusion layer in sequence.
  • The number of cascade fusion modules can be adjusted dynamically according to the scale of the data set, which gives the network a degree of scalability and further improves its performance, so that the denoised image retains image details and has a clearer structure.
  • The present invention can be further configured so that the loss function for fusing the current image feature information with the current transformation information is the mean squared error.
  • The loss measures how far the prediction deviates from the actual data, and the mean squared error expresses this error effectively.
  • The second object of the present invention is to provide a low-dose image enhancement system based on multiple dose levels, which can further improve the clarity of low-dose images after reconstruction.
  • A low-dose image enhancement system based on multiple dose levels includes:
  • an image input module, used to obtain current input image information, the input image information including low-dose image information;
  • an image dose level evaluation module, used to feed the current input image information into the constructed dose level evaluation model to evaluate its dose level and form current dose level information corresponding to the current input image information; and
  • an image fusion module, used to perform feature transformation on the current dose level information with the constructed feature transformation module to form current transformation information, perform feature extraction on the current input image information with the constructed cascade fusion model to obtain current image feature information, and fuse the current image feature information with the current transformation information to form current reconstructed image information.
  • The third object of the present invention is to provide a computer-readable storage medium capable of storing the corresponding program, so as to further improve the clarity of low-dose images after reconstruction.
  • A computer-readable storage medium includes a program which, when loaded and executed by a processor, implements the above low-dose image enhancement method based on multiple dose levels.
  • The fourth object of the present invention is to provide a computer device that can further improve the clarity of low-dose images after reconstruction.
  • A computer device includes a memory, a processor, and a program stored in the memory and executable on the processor; the program can be loaded and executed by the processor to implement the above low-dose image enhancement method based on multiple dose levels.
  • In summary, the present invention provides the following beneficial technical effect: the dose level of the input image can be evaluated, and the image can be reconstructed on the basis of both the assigned level and the input image, further improving the clarity of the reconstructed image.
  • Figure 1 is a flowchart of the low-dose image enhancement method based on multiple dose levels.
  • Figure 2 is a flowchart of the method of performing feature transformation on the current dose level information with the feature transformation module to form the current transformation information.
  • Figure 3 is a schematic diagram of the multi-dose-level low-dose image enhancement method.
  • Figure 4 is a schematic diagram of the reference standard image.
  • Figure 5 is a schematic diagram of an image reconstructed by a CNN network.
  • Figure 6 is a schematic diagram of the RED-CNN restoration result.
  • Figure 7 is a schematic diagram of the reconstructed image result of this embodiment.
  • Figure 8 is a structural diagram of the low-dose image enhancement system based on multiple dose levels.
  • An embodiment of the present invention provides a low-dose image enhancement method based on multiple dose levels, including: acquiring current input image information, the input image information including low-dose image information; feeding the current input image information into a constructed dose level evaluation model to evaluate the dose level of the current input image information and form current dose level information corresponding to it; performing feature transformation on the current dose level information with a constructed feature transformation module to form current transformation information; performing feature extraction on the current input image information with a constructed cascade fusion model to obtain current image feature information; and fusing the current image feature information with the current transformation information to form current reconstructed image information.
  • During low-dose image reconstruction, not only the single dimension of the input image information but also its dose level is taken into account, and the image is reconstructed from data of multiple dimensions to improve the clarity of the reconstructed image: the current input image information is first evaluated by the constructed dose level evaluation model to obtain the corresponding current dose level information; after feature transformation, the dose level information is fused with the current image feature information, and reconstruction finally yields reconstructed image information of higher clarity.
  • the embodiment of the present invention provides a low-dose image enhancement method based on multiple dose levels, and the main flow of the method is described as follows.
  • Step 1000: Obtain current input image information; the input image information includes low-dose image information.
  • The current input image information may be an image brought by the patient or a scan produced after an on-site scanning device (for example a CT scanner) completes its scan.
  • Step 2000: Feed the current input image information into the constructed dose level evaluation model to evaluate the dose level of the current input image information and form current dose level information corresponding to it.
  • The current dose level information corresponding to the current input image information is determined by the dose level evaluation model shown in Figure 3.
  • The dose level evaluation model includes multiple sequentially connected convolutional layers and two fully connected layers. In this embodiment seven convolutional layers are preferably used, with 3x3 convolution kernels; each convolutional layer is followed in turn by a ReLU activation function, a batch normalization layer and a max pooling layer.
  • The ReLU activation function, preferably f(x) = max(0, x), overcomes the vanishing-gradient problem and thereby accelerates training. Vanishing gradients are one of the biggest problems in deep learning and are particularly serious with saturating activation functions such as tanh and sigmoid: during backpropagation each layer multiplies the gradient by the first derivative of the activation function, so the gradient is attenuated at every layer, and in a deep network the gradient G keeps decaying until it vanishes, making training converge more and more slowly. Because of its linear, non-saturating form, ReLU trains much faster.
  • The batch normalization layer standardizes not only the input layer but also the input of every intermediate layer of the network (before the activation function), so that the output follows a normal distribution with mean 0 and variance 1, avoiding shifts in the variable distribution. During training, the input of each layer is standardized using only the mean and variance of the current mini-batch, which effectively pulls the distribution of the input value of any neuron in each layer back to a standard normal distribution with mean 0 and variance 1.
  • Batch normalization avoids vanishing and exploding gradients: by forcing increasingly skewed distributions back toward a standard distribution, the activation inputs fall in the region where the nonlinear function is sensitive to its input, so small input changes produce larger changes in the loss. This makes the gradients larger, avoids the vanishing-gradient problem, and, because larger gradients mean faster convergence, greatly accelerates training. In addition, because batch normalization is applied to mini-batches rather than the whole data set, it introduces some noise, which improves the generalization ability of the model.
  • The max pooling layer splits its input into regions, and each element of the output is the largest element of its corresponding region. The effect of max pooling is that once a feature has been extracted anywhere within a region, it is retained in the pooled output: if a feature is detected by the filter, its maximum value is kept, and if the feature is not present in a region, the maximum there remains small.
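  • As a small worked example of this, 2x2 max pooling over a 4x4 feature map keeps only the largest value of each 2x2 region:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[1., 3., 2., 0.],
                  [5., 4., 1., 1.],
                  [0., 2., 7., 6.],
                  [1., 1., 3., 8.]]).reshape(1, 1, 4, 4)   # (batch, channel, H, W)

print(F.max_pool2d(x, kernel_size=2))
# tensor([[[[5., 2.],
#           [2., 8.]]]])
```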
  • the parameter settings of the dose level evaluation model are as follows:
  • the loss function preferably adopts cross entropy loss.
  • During training, dose levels are divided according to the actual scanning dose parameters of existing CT equipment; in this embodiment five dose levels are preferably defined, for example by assigning ranges of scanning current to levels at a tube voltage of 70 keV.
  • According to this division, the same object is scanned under different dose conditions, and image data sets of the same object at multiple dose levels are obtained as training data.
  • During training, the Adam optimizer is used with a learning rate of 0.0001 for 200 epochs.
  • The trained dose level evaluation model establishes a correspondence between the input image and its dose level.
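  • A minimal training sketch for the dose level evaluation model, reusing the DoseLevelClassifier sketch above and the settings stated in this embodiment (cross-entropy loss, Adam optimizer, learning rate 0.0001, 200 epochs); the synthetic data and batch size are placeholders:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in for the multi-dose-level training set (images + dose-level labels).
images = torch.rand(32, 1, 64, 64)
levels = torch.randint(0, 5, (32,))
loader = DataLoader(TensorDataset(images, levels), batch_size=8, shuffle=True)

model = DoseLevelClassifier(num_levels=5)                   # sketch defined above
criterion = nn.CrossEntropyLoss()                           # cross-entropy loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # Adam, learning rate 0.0001

for epoch in range(200):                                    # 200 epochs
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```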
  • Step 3000: Perform feature transformation on the current dose level information with the constructed feature transformation module to form current transformation information.
  • The feature transformation uses a feature transformation function G, specifically:
  • (A, B) = G(P)
  • where A is a scaling operation and B is an offset operation.
  • After the feature transformation, the dose level information is converted into data that can be fused with the image feature information, which facilitates subsequent data processing.
  • Specifically, as shown in Figure 2, the feature transformation module performs feature transformation on the current dose level information and forms the current transformation information as follows:
  • Step 3100: Perform feature extraction on the current input image information and obtain current image feature information.
  • The image feature extraction method may be any of the following, chosen according to the actual situation: HOG (Histogram of Oriented Gradients), SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features, an improvement on SIFT), DoG (Difference of Gaussians), LBP (Local Binary Patterns), or Haar-like features (named after the Haar wavelet, which was proposed as a filter and later applied to images). The extracted image feature map has size h × w × 64.
  • Step 3200: Preprocess the current dose level information to form current dose level preprocessing information.
  • The preprocessing maps the current dose level information to a 1 × 1 × 64 feature map (64 channels) corresponding to the current dose level preprocessing information, using a convolutional layer with a 1x1 kernel, and applies a softmax activation so that the values lie between 0 and 1. The current dose level preprocessing information includes first preprocessing information and second preprocessing information, each mapped to a 1 × 1 × 64 feature map with 64 channels by a convolutional layer with a 1x1 kernel; that is, two 1x1 convolutional layers are used.
  • Step 3300: Perform element-wise (dot) multiplication of the current dose level preprocessing information with the current image feature information to scale the matrix and form current scaling information.
  • The current dose level preprocessing information in this step may be the first or the second preprocessing information; since the two are the same, either may be chosen, and in this embodiment the first preprocessing information is preferred.
  • Step 3400: Add the current dose level preprocessing information to the current scaling information to form the current transformation information.
  • The current dose level preprocessing information in this step is the remaining preprocessing information, i.e. the second preprocessing information in this embodiment.
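  • Putting steps 3100 to 3400 together, the following sketch shows one possible form of the feature transformation module: two 1x1 convolutions with a softmax produce the scaling term A and the offset term B from the dose level, A scales the h × w × 64 feature map element-wise, and B is then added. The one-hot encoding of the dose level and the module interface are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureTransform(nn.Module):
    """(A, B) = G(P): map the dose level P to a per-channel scale A and offset B."""
    def __init__(self, num_levels=5, channels=64):
        super().__init__()
        # Two 1x1 convolutions map the dose level to 1x1x64 feature maps.
        self.to_scale = nn.Conv2d(num_levels, channels, kernel_size=1)
        self.to_offset = nn.Conv2d(num_levels, channels, kernel_size=1)

    def forward(self, feat, dose_level):
        # feat: (N, 64, h, w); dose_level: (N,) integer labels (one-hot encoding assumed).
        p = F.one_hot(dose_level, self.to_scale.in_channels).float()
        p = p.view(p.size(0), -1, 1, 1)                       # (N, num_levels, 1, 1)
        A = torch.softmax(self.to_scale(p), dim=1)            # first preprocessing info, in (0, 1)
        B = torch.softmax(self.to_offset(p), dim=1)           # second preprocessing info, in (0, 1)
        scaled = A * feat                                     # step 3300: element-wise scaling
        return B + scaled                                     # step 3400: add offset, giving A*F_b + B
```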
  • Step 4000: Perform feature extraction on the current input image information with the constructed cascade fusion model to obtain current image feature information, and fuse the current image feature information with the current transformation information to form current reconstructed image information.
  • The feature extraction of the current input image information in step 3100 may independently use one of the specific methods disclosed above, or it may be performed by the constructed cascade fusion model; this embodiment preferably extracts features through the cascade fusion model, which further simplifies the network. The current input image information is processed by a convolutional layer to produce data corresponding to an image feature map of size h × w × 64.
  • The cascade fusion model includes multiple cascade fusion modules. Each cascade fusion module uses two convolutions to extract the basic image features F_b, and each corresponds to one feature transformation module, to which the cascade fusion module provides image feature information. The cascade fusion modules in turn perform feature extraction on the input image, obtain the corresponding image feature information, and fuse that image feature information with the corresponding transformation information to form the fused image information.
  • The feature fusion process can be expressed as:
  • F_out = F_in + f(F_b, A*F_b + B)
  • where F_in and F_out denote the input and output feature maps, (A, B) denotes the feature transformation of the module (A being a scaling operation and B an offset operation), and f is the fusion operation.
  • The cascade fusion module includes, in turn, a down-sampling layer (3x3x64 convolution), an up-sampling layer (3x3x64 deconvolution) and a feature fusion layer (1x1x64 convolution).
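  • Under these assumptions, one cascade fusion module could look like the following sketch, reusing the FeatureTransform sketch above: two 3x3 convolutions extract F_b, a strided 3x3 convolution and a 3x3 transposed convolution serve as the down-sampling and up-sampling layers, a 1x1 convolution implements the fusion f, and the output follows F_out = F_in + f(F_b, A*F_b + B). The stride choices and activation placement are assumptions.

```python
import torch
import torch.nn as nn

class CascadeFusionBlock(nn.Module):
    """One cascade fusion module: F_out = F_in + f(F_b, A*F_b + B)."""
    def __init__(self, channels=64, num_levels=5):
        super().__init__()
        self.extract = nn.Sequential(                       # two convolutions extract F_b
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.down = nn.Conv2d(channels, channels, 3, stride=2, padding=1)      # down-sampling layer
        self.up = nn.ConvTranspose2d(channels, channels, 3, stride=2,
                                     padding=1, output_padding=1)              # up-sampling layer
        self.fuse = nn.Conv2d(2 * channels, channels, 1)    # feature fusion layer (1x1 conv) = f
        self.transform = FeatureTransform(num_levels, channels)  # paired feature transformation module

    def forward(self, f_in, dose_level):
        f_b = self.extract(f_in)                            # basic image features F_b
        f_b = self.up(self.down(f_b))                       # down-sample then up-sample
        t = self.transform(f_b, dose_level)                 # A*F_b + B from the dose level
        fused = self.fuse(torch.cat([f_b, t], dim=1))       # f(F_b, A*F_b + B)
        return f_in + fused                                 # residual output F_out
```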
  • During training of the complete network, the dose level evaluation model disclosed in step 2000 is imported, and network training is completed on the training data.
  • The loss function for fusing the current image feature information with the current transformation information adopts the mean squared error, although other loss functions can also be used for training.
  • As the optimizer, the Adam optimization algorithm can be used, with a learning rate of 0.0001, training for 1000 epochs.
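  • A corresponding sketch of one end-to-end training step for the full reconstruction network, assuming the dose level evaluation model from step 2000 is kept frozen and its predicted level conditions a stack of cascade fusion blocks; the number of blocks and the random stand-in for a paired low-dose/standard-dose batch are placeholders.

```python
import torch
import torch.nn as nn

class LowDoseEnhancer(nn.Module):
    def __init__(self, num_blocks=4, channels=64, num_levels=5):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)             # h x w x 64 feature map
        self.blocks = nn.ModuleList([CascadeFusionBlock(channels, num_levels)
                                     for _ in range(num_blocks)])
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, x, dose_level):
        feat = self.head(x)
        for block in self.blocks:
            feat = block(feat, dose_level)
        return self.tail(feat)

classifier = DoseLevelClassifier(num_levels=5).eval()                # model from step 2000, assumed trained and frozen
enhancer = LowDoseEnhancer()
criterion = nn.MSELoss()                                             # mean squared error loss
optimizer = torch.optim.Adam(enhancer.parameters(), lr=1e-4)         # Adam, lr 0.0001, 1000 epochs overall

low_dose, standard_dose = torch.rand(2, 1, 1, 64, 64)                # placeholder paired batch
with torch.no_grad():
    level = classifier(low_dose).argmax(dim=1)                       # current dose level information
optimizer.zero_grad()
loss = criterion(enhancer(low_dose, level), standard_dose)
loss.backward()
optimizer.step()
```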
  • An embodiment of the present invention provides a computer-readable storage medium including a program which, when loaded and executed by a processor, implements the steps described in the processes shown in Figures 1 and 2.
  • The computer-readable storage medium includes, for example, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium that can store program code.
  • An embodiment of the present invention provides a computer device including a memory, a processor, and a program stored in the memory and executable on the processor; the program can be loaded and executed by the processor to implement the low-dose image enhancement method based on multiple dose levels described in the processes shown in Figures 1 and 2.
  • An embodiment of the present invention provides a low-dose image enhancement system based on multiple dose levels, including:
  • an image input module, used to obtain current input image information, the input image information including low-dose image information;
  • an image dose level evaluation module, used to feed the current input image information into the constructed dose level evaluation model to evaluate its dose level and form current dose level information corresponding to the current input image information; and
  • an image fusion module, used to perform feature transformation on the current dose level information with the constructed feature transformation module to form current transformation information, perform feature extraction on the current input image information with the constructed cascade fusion model to obtain current image feature information, and fuse the current image feature information with the current transformation information to form current reconstructed image information.
  • Besides CT image reconstruction, the present invention, after appropriate modification, can also be applied to PET (positron emission tomography) and SPECT (single photon emission computed tomography) image reconstruction, or to other image reconstruction based on sparse projection sampling.
  • It has been verified that image reconstruction using the present invention yields clearer images containing more detail, as shown in Figures 4 to 7: Figure 4 is the reference standard image, Figure 5 is the image reconstructed by a CNN network, Figure 6 is the RED-CNN restoration result, and Figure 7 is the reconstructed image result of the solution of this embodiment.
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • The division of modules or units is only a logical functional division; in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, and some features may be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • If the integrated unit is implemented as a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Pulmonology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A low-dose image enhancement method, system, computer device and storage medium based on multiple dose levels, which solve the problem that the clarity achievable when reconstructing an image from a single input image cannot meet requirements. The method includes: acquiring current input image information and feeding it into a constructed dose level evaluation model to form current dose level information corresponding to the current input image information; performing feature transformation on the current dose level information with a constructed feature transformation module to form current transformation information (3000); performing feature extraction on the current input image information with a constructed cascade fusion model to obtain current image feature information; and fusing the current image feature information with the current transformation information to form current reconstructed image information (4000). The method can evaluate the dose level of the input image and reconstruct on the basis of both the assigned level and the input image, further improving the clarity of the reconstructed image.

Description

Low-dose image enhancement method, system, computer device and storage medium based on multiple dose levels

Technical Field

The present invention relates to the technical field of image enhancement, and in particular to a low-dose image enhancement method, system, computer device and storage medium based on multiple dose levels.

Background Art

Computed tomography (CT) is an important imaging technique for obtaining information about the internal structure of objects non-destructively. It offers high resolution, high sensitivity and multi-level imaging, is one of the most widely installed medical imaging diagnostic devices in China, and is widely used across clinical examination fields. However, because CT scanning requires X-rays, and as the potential hazards of radiation have become better understood, the issue of CT radiation dose has attracted increasing attention. The As Low As Reasonably Achievable (ALARA) principle requires that the radiation dose delivered to the patient be reduced as much as possible while still meeting clinical diagnostic requirements. Research and development of new low-dose CT imaging methods that preserve CT image quality while reducing harmful radiation dose therefore has important scientific significance and application prospects in medical diagnosis.

Application number 201910499262.X discloses "a shallow residual encoder-decoder recursive network for low-dose CT image denoising". The network complexity is reduced by decreasing the number of layers and convolution kernels in the residual encoder-decoder network, while a recursive process is used to improve network performance. The algorithm learns an end-to-end mapping through network training to obtain high-quality images, and at each recursion the original low-dose CT image is concatenated to the next input, which effectively avoids image distortion after multiple recursions, extracts image features better, and preserves the detail information of the image. That invention can reduce network complexity while improving performance, so that the denoised image retains image details and has a clearer structure.

Application number CN110559009A discloses a "GAN-based method, system and medium for converting multi-modal low-dose CT into high-dose CT". A low-dose CT of arbitrary modality is taken as input; a two-dimensional discrete wavelet transform is applied to the low-dose CT to obtain multiple decomposition results; the low-dose CT and its decomposition results are fed into the encoder of a trained GAN for encoding, and the encoding result is then decoded by the GAN decoder to obtain the corresponding high-dose modal image. Building on the broad development of GANs for multi-domain conversion and the decomposition power of the traditional wavelet transform, that invention inputs the low-dose CT together with its wavelet transform results into the trained GAN encoder and decodes with the GAN decoder to obtain the corresponding high-dose modal image, conveniently converting a low-dose CT image of any modality into a high-dose CT image.

The low-dose image reconstruction methods in the above technical solutions consider only image information for reconstruction. However, the image information obtained at different doses differs: image information acquired at a higher dose level is generally more complex than that acquired at a lower dose level, and the lower the dose, the harder the image is to reconstruct. In actual clinical scanning, the scanning dose differs from patient to patient, and these differing doses significantly affect the later reconstruction. Because the above schemes consider only the single dimension of image information and reconstruct the acquired low-dose image information directly with the algorithms described, the resulting high-dose images still fail to meet requirements, so there is room for improvement.
Summary of the Invention

In view of the shortcomings of the prior art, the first object of the present invention is a low-dose image enhancement method based on multiple dose levels, which can further improve the clarity of low-dose images after reconstruction.

The above first object of the present invention is achieved by the following technical solution:

A low-dose image enhancement method based on multiple dose levels includes:

acquiring current input image information, the input image information including low-dose image information;

feeding the current input image information into a constructed dose level evaluation model to evaluate the dose level of the current input image information and form current dose level information corresponding to the current input image information;

performing feature transformation on the current dose level information with a constructed feature transformation module to form current transformation information;

performing feature extraction on the current input image information with a constructed cascade fusion model to obtain current image feature information; and fusing the current image feature information with the current transformation information to form current reconstructed image information.

With the above technical solution, low-dose image reconstruction considers not only the single dimension of the input image information but also its dose level, reconstructing the image from data of multiple dimensions to improve the clarity of the reconstructed image: the current input image information is first evaluated by the constructed dose level evaluation model to obtain the corresponding current dose level information; after feature transformation, the dose level information is fused with the current image feature information, and reconstruction finally yields reconstructed image information of higher clarity.

In a preferred example, the present invention can be further configured so that the dose level evaluation model includes multiple sequentially connected convolutional layers and two fully connected layers; each convolutional layer except the last is followed in turn by a ReLU activation function, a batch normalization layer and a max pooling layer, and the convolutional layers use 3x3 kernels.

In a preferred example, the present invention can be further configured so that the loss function for evaluating the dose level of the current input image information adopts the cross-entropy loss.

With the above technical solution, because the scanning dose differs from patient to patient in actual clinical scanning, the dose level of the current input image information needs to be evaluated; the dose level evaluation model makes it possible to assess the dose level of a low-dose image even when the dose is unknown, providing one of the parameters required for image reconstruction.

In a preferred example, the present invention can be further configured so that the feature transformation module performs feature transformation on the current dose level information and forms the current transformation information as follows:

perform feature extraction on the current input image information and obtain current image feature information;

preprocess the current dose level information to form current dose level preprocessing information;

perform element-wise (dot) multiplication of the current dose level preprocessing information with the current image feature information to scale the matrix and form current scaling information;

add the current dose level preprocessing information to the current scaling information to form the current transformation information;

the feature transformation uses a feature transformation function G, specifically:

(A, B) = G(P)

where A is a scaling operation and B is an offset operation.

With the above technical solution, after the feature transformation, the dose level information is converted into data that can be fused with the image feature information, facilitating subsequent data processing.

In a preferred example, the present invention can be further configured so that the cascade fusion model includes a plurality of cascade fusion modules; each cascade fusion module corresponds to one feature transformation module and provides image feature information to it; the cascade fusion modules in turn perform feature extraction on the input image, obtain the corresponding image feature information, and fuse that image feature information with the corresponding transformation information to form the fused image information;

the feature fusion process can be expressed as:

F_out = F_in + f(F_b, A*F_b + B)

where F_in and F_out denote the input and output feature maps, (A, B) denotes the feature transformation of the module (A being a scaling operation and B an offset operation), and f is the fusion operation.

In a preferred example, the present invention can be further configured so that the cascade fusion module includes, in turn, a down-sampling layer, an up-sampling layer and a feature fusion layer.

With the above technical solution, the number of cascade fusion modules can be adjusted dynamically according to the scale of the data set, giving the network a degree of scalability and further improving its performance, so that the denoised image retains image details and has a clearer structure.

In a preferred example, the present invention can be further configured so that the loss function for fusing the current image feature information with the current transformation information adopts the mean squared error.

With the above technical solution, how far the prediction deviates from the actual data can be measured, and the mean squared error expresses this error effectively.

The second object of the present invention is to provide a low-dose image enhancement system based on multiple dose levels, which can further improve the clarity of low-dose images after reconstruction.

The above second object of the present invention is achieved by the following technical solution:

A low-dose image enhancement system based on multiple dose levels includes:

an image input module: used to obtain current input image information, the input image information including low-dose image information;

an image dose level evaluation module: used to feed the current input image information into the constructed dose level evaluation model to evaluate its dose level and form current dose level information corresponding to the current input image information;

an image fusion module: used to perform feature transformation on the current dose level information with the constructed feature transformation module to form current transformation information; to perform feature extraction on the current input image information with the constructed cascade fusion model to obtain current image feature information; and to fuse the current image feature information with the current transformation information to form current reconstructed image information.

The third object of the present invention is to provide a computer-readable storage medium capable of storing the corresponding program, so as to further improve the clarity of low-dose images after reconstruction.

The above third object of the present invention is achieved by the following technical solution:

A computer-readable storage medium includes a program which, when loaded and executed by a processor, implements the above low-dose image enhancement method based on multiple dose levels.

The fourth object of the present invention is to provide a computer device that can further improve the clarity of low-dose images after reconstruction.

The above fourth object of the present invention is achieved by the following technical solution:

A computer device includes a memory, a processor, and a program stored in the memory and executable on the processor; the program can be loaded and executed by the processor to implement the above low-dose image enhancement method based on multiple dose levels.

In summary, the present invention provides the following beneficial technical effect: the dose level of the input image can be evaluated, and the image can be reconstructed on the basis of both the assigned level and the input image, further improving the clarity of the reconstructed image.
Brief Description of the Drawings

Figure 1 is a flowchart of the low-dose image enhancement method based on multiple dose levels.

Figure 2 is a flowchart of the method of performing feature transformation on the current dose level information with the feature transformation module to form the current transformation information.

Figure 3 is a schematic diagram of the multi-dose-level low-dose image enhancement method.

Figure 4 is a schematic diagram of the reference standard image.

Figure 5 is a schematic diagram of an image reconstructed by a CNN network.

Figure 6 is a schematic diagram of the RED-CNN restoration result.

Figure 7 is a schematic diagram of the reconstructed image result of this embodiment.

Figure 8 is a structural diagram of the low-dose image enhancement system based on multiple dose levels.

Detailed Description

The present invention is described in further detail below with reference to the accompanying drawings.

This specific embodiment is only an explanation of the present invention and does not limit it; after reading this specification, those skilled in the art may make modifications to this embodiment that involve no inventive contribution as needed, and such modifications are protected by patent law as long as they fall within the scope of the claims of the present invention.
An embodiment of the present invention provides a low-dose image enhancement method based on multiple dose levels, including: acquiring current input image information, the input image information including low-dose image information; feeding the current input image information into a constructed dose level evaluation model to evaluate the dose level of the current input image information and form current dose level information corresponding to it; performing feature transformation on the current dose level information with a constructed feature transformation module to form current transformation information; performing feature extraction on the current input image information with a constructed cascade fusion model to obtain current image feature information; and fusing the current image feature information with the current transformation information to form current reconstructed image information.

In this embodiment of the present invention, low-dose image reconstruction considers not only the single dimension of the input image information but also its dose level, reconstructing the image from data of multiple dimensions to improve the clarity of the reconstructed image: the current input image information is first evaluated by the constructed dose level evaluation model to obtain the corresponding current dose level information; after feature transformation, the dose level information is fused with the current image feature information, and reconstruction finally yields reconstructed image information of higher clarity.

To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

In addition, the term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. Unless otherwise specified, the character "/" herein generally indicates an "or" relationship between the associated objects.

The embodiments of the present invention are described in further detail below with reference to the drawings.

An embodiment of the present invention provides a low-dose image enhancement method based on multiple dose levels; the main flow of the method is described as follows.

As shown in Figure 1:

Step 1000: Obtain current input image information; the input image information includes low-dose image information.

The current input image information may be an image brought by the patient or a scan produced after an on-site scanning device (for example a CT scanner) completes its scan.

Step 2000: Feed the current input image information into the constructed dose level evaluation model to evaluate the dose level of the current input image information and form current dose level information corresponding to it.

The current dose level information corresponding to the current input image information is determined by the dose level evaluation model shown in Figure 3. The dose level evaluation model includes multiple sequentially connected convolutional layers and two fully connected layers; in this embodiment seven convolutional layers are preferably used, with 3x3 convolution kernels, and each convolutional layer is followed in turn by a ReLU activation function, a batch normalization layer and a max pooling layer.

The ReLU activation function preferably uses the following formula:

f(x) = max(0, x)

Compared with the sigmoid and tanh functions, the ReLU activation function overcomes the vanishing-gradient problem and thereby accelerates training. Vanishing gradients are the biggest problem in deep learning and are particularly serious with saturating activation functions such as tanh and sigmoid (during backpropagation each layer multiplies the gradient by the first derivative of the activation function, so the gradient is attenuated at every layer; when the network is deep, the gradient G keeps decaying until it vanishes), making training converge more and more slowly, whereas ReLU, with its linear, non-saturating form, trains much faster.

The batch normalization layer standardizes not only the input layer but also the input of every intermediate layer of the network (before the activation function), so that the output follows a normal distribution with mean 0 and variance 1, avoiding shifts in the variable distribution. During training, the input of each layer is standardized using only the mean and variance of the current mini-batch, which effectively pulls the distribution of the input value of any neuron in each layer back to a standard normal distribution with mean 0 and variance 1. Batch normalization avoids vanishing and exploding gradients: by forcing increasingly skewed distributions back toward a standard distribution, the activation inputs fall in the region where the nonlinear function is sensitive to its input, so small input changes produce larger changes in the loss. This makes the gradients larger, avoids the vanishing-gradient problem, and, because larger gradients mean faster convergence, greatly accelerates training. Moreover, because batch normalization is applied to mini-batches rather than the whole data set, it introduces some noise, which improves the generalization ability of the model.

The max pooling layer splits its input into regions, and each element of the output is the largest element of its corresponding region. The effect of max pooling is that once a feature has been extracted anywhere within a region, it is retained in the pooled output: if a feature is detected by the filter, its maximum value is kept, and if the feature is not present in a region, the maximum there remains small.

As shown in Figure 3, the parameter settings of the dose level evaluation model are listed in the following table:

[Table: parameter settings of the dose level evaluation model - images PCTCN2020079412-appb-000001 and PCTCN2020079412-appb-000002]

The loss function preferably adopts the cross-entropy loss.

It should be noted that, for the dose level evaluation model shown in Figure 3, those skilled in the art may make appropriate modifications according to the actual application scenario, for example using more or fewer convolutional layers or other types of activation functions.
During training, dose level division is completed according to the actual scanning dose parameters of existing CT equipment. In this embodiment, we preferably define five dose levels; for example, at a tube voltage of 70 keV, dose levels are defined for different scanning currents as follows:

Scanning current (mA)    Dose level
0-30                     Level 1
30-130                   Level 2
130-230                  Level 3
230-330                  Level 4
>=330                    Level 5
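As a small illustration of this mapping (the current ranges and the 70 keV condition are those of the embodiment above; the function itself is only a sketch):

```python
def dose_level_from_current(scan_current_ma: float) -> int:
    """Map a scanning current (mA) at 70 keV to the dose level defined in the embodiment."""
    thresholds = [30, 130, 230, 330]        # upper bounds of levels 1-4; >= 330 mA is level 5
    for level, upper in enumerate(thresholds, start=1):
        if scan_current_ma < upper:
            return level
    return 5

print(dose_level_from_current(25))   # 1
print(dose_level_from_current(150))  # 3
```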
According to the above dose level division, the same object is scanned under different dose conditions, and image data sets of the same object at multiple dose levels are obtained as training data. During training, the Adam optimization algorithm is used as the optimizer, with a learning rate of 0.0001, training for 200 epochs. The trained dose level evaluation model establishes a correspondence between the input image and its dose level.

Step 3000: Perform feature transformation on the current dose level information with the constructed feature transformation module to form current transformation information.

The feature transformation uses a feature transformation function G, specifically:

(A, B) = G(P)

where A is a scaling operation and B is an offset operation. After the feature transformation, the dose level information is converted into data that can be fused with the image feature information, facilitating subsequent data processing.

Specifically, as shown in Figure 2, the feature transformation module performs feature transformation on the current dose level information and forms the current transformation information as follows:

Step 3100: Perform feature extraction on the current input image information and obtain current image feature information.

The image feature extraction method may be any of the following, chosen according to the actual situation: HOG (Histogram of Oriented Gradients), SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features, an improvement on SIFT), DoG (Difference of Gaussians), LBP (Local Binary Patterns), or Haar-like features (named after the Haar wavelet, which was proposed as a filter and later applied to images). The extracted image feature map has size h×w×64.

Step 3200: Preprocess the current dose level information to form current dose level preprocessing information.

The preprocessing maps the current dose level information to a 1×1×64 feature map (64 channels) corresponding to the current dose level preprocessing information, using a convolutional layer with a 1x1 kernel, and applies a softmax activation so that the values lie between 0 and 1. The current dose level preprocessing information includes first preprocessing information and second preprocessing information, each mapped to a 1×1×64 feature map with 64 channels by a convolutional layer with a 1x1 kernel; that is, two 1x1 convolutional layers are used.

Step 3300: Perform element-wise (dot) multiplication of the current dose level preprocessing information with the current image feature information to scale the matrix and form current scaling information.

The current dose level preprocessing information in this step may be the first or the second preprocessing information; since the two are the same, either may be chosen, and in this embodiment the first preprocessing information is preferred.

Step 3400: Add the current dose level preprocessing information to the current scaling information to form the current transformation information.

The current dose level preprocessing information in this step is the remaining preprocessing information, i.e. the second preprocessing information in this embodiment.
Step 4000: Perform feature extraction on the current input image information with the constructed cascade fusion model to obtain current image feature information, and fuse the current image feature information with the current transformation information to form current reconstructed image information.

The feature extraction of the current input image information in step 3100 may independently use one of the specific methods disclosed above, or it may be performed by the constructed cascade fusion model; this embodiment preferably extracts features through the cascade fusion model, which further simplifies the network. The current input image information is processed by a convolutional layer to produce data corresponding to an image feature map of size h×w×64.

The cascade fusion model includes multiple cascade fusion modules. Each cascade fusion module uses two convolutions to extract the basic image features F_b, and each corresponds to one feature transformation module, to which the cascade fusion module provides image feature information; the cascade fusion modules in turn perform feature extraction on the input image, obtain the corresponding image feature information, and fuse that image feature information with the corresponding transformation information to form the fused image information;

the feature fusion process can be expressed as:

F_out = F_in + f(F_b, A*F_b + B)

where F_in and F_out denote the input and output feature maps, (A, B) denotes the feature transformation of the module (A being a scaling operation and B an offset operation), and f is the fusion operation.

The cascade fusion module includes, in turn, a down-sampling layer, an up-sampling layer and a feature fusion layer:

Unit                   Operation        Parameters
Down-sampling layer    Convolution      3x3x64
Up-sampling layer      Deconvolution    3x3x64
Feature fusion layer   Convolution      1x1x64
During training of the complete network, the dose level evaluation model disclosed in step 2000 is imported, and network training is completed on the training data. The loss function for fusing the current image feature information with the current transformation information adopts the mean squared error, although other loss functions can also be used for training. As the optimizer, the Adam optimization algorithm can be used, with a learning rate of 0.0001, training for 1000 epochs.

An embodiment of the present invention provides a computer-readable storage medium including a program which, when loaded and executed by a processor, implements the steps described in the processes shown in Figures 1 and 2.

The computer-readable storage medium includes, for example, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium that can store program code.

Based on the same inventive concept, an embodiment of the present invention provides a computer device including a memory, a processor, and a program stored in the memory and executable on the processor; the program can be loaded and executed by the processor to implement the low-dose image enhancement method based on multiple dose levels described in the processes shown in Figures 1 and 2.

Based on the same inventive concept, as shown in Figure 8, an embodiment of the present invention provides a low-dose image enhancement system based on multiple dose levels, including:

an image input module: used to obtain current input image information, the input image information including low-dose image information;

an image dose level evaluation module: used to feed the current input image information into the constructed dose level evaluation model to evaluate its dose level and form current dose level information corresponding to the current input image information;

an image fusion module: used to perform feature transformation on the current dose level information with the constructed feature transformation module to form current transformation information; to perform feature extraction on the current input image information with the constructed cascade fusion model to obtain current image feature information; and to fuse the current image feature information with the current transformation information to form current reconstructed image information.

It should be noted that, besides CT image reconstruction, the present invention, after appropriate modification, can also be applied to PET (positron emission tomography) and SPECT (single photon emission computed tomography) image reconstruction, or to other image reconstruction based on sparse projection sampling.

It has been verified that image reconstruction using the present invention yields clearer images containing more detail. See Figures 4 to 7, where Figure 4 is the reference standard image, Figure 5 is the image reconstructed by a CNN network, Figure 6 is the RED-CNN restoration result, and Figure 7 is the reconstructed image result of the solution of this embodiment.
It should be noted that although the steps are described above in a specific order, this does not mean that they must be executed in that order; in fact, some of these steps may be executed concurrently or even in a different order, as long as the required functions can be achieved.

Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division into the above functional modules is used as an example; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the systems, devices and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.

In the several embodiments provided in this application, it should be understood that the disclosed systems, devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division of modules or units is only a logical functional division, and in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.

The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.

In addition, the functional units in the various embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

If the integrated unit is implemented as a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application. The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk.

The above embodiments are only intended to describe the technical solution of the present application in detail; however, the description of the above embodiments is only intended to help understand the method of the present invention and its core idea and should not be construed as limiting the present invention. Variations or substitutions readily conceivable by those skilled in the art within the technical scope disclosed by the present invention shall all fall within the protection scope of the present invention.

Claims (10)

  1. A low-dose image enhancement method based on multiple dose levels, characterized by comprising:
    acquiring current input image information, the input image information including low-dose image information;
    feeding the current input image information into a constructed dose level evaluation model to evaluate the dose level of the current input image information and form current dose level information corresponding to the current input image information;
    performing feature transformation on the current dose level information with a constructed feature transformation module to form current transformation information;
    performing feature extraction on the current input image information with a constructed cascade fusion model to obtain current image feature information; and fusing the current image feature information with the current transformation information to form current reconstructed image information.
  2. The low-dose image enhancement method based on multiple dose levels according to claim 1, characterized in that the dose level evaluation model comprises multiple sequentially connected convolutional layers and two fully connected layers, each convolutional layer except the last being followed in turn by a ReLU activation function, a batch normalization layer and a max pooling layer, wherein the convolutional layers use 3x3 convolution kernels.
  3. The low-dose image enhancement method based on multiple dose levels according to claim 1, characterized in that the loss function for evaluating the dose level of the current input image information adopts the cross-entropy loss.
  4. The low-dose image enhancement method based on multiple dose levels according to claim 1, characterized in that the feature transformation module performs feature transformation on the current dose level information and forms the current transformation information as follows:
    performing feature extraction on the current input image information and obtaining current image feature information;
    preprocessing the current dose level information to form current dose level preprocessing information;
    performing dot multiplication of the current dose level preprocessing information with the current image feature information to scale the matrix and form current scaling information;
    adding the current dose level preprocessing information to the current scaling information to form the current transformation information;
    the feature transformation adopting a feature transformation function G, specifically:
    (A, B) = G(P)
    where A is a scaling operation and B is an offset operation.
  5. The low-dose image enhancement method based on multiple dose levels according to claim 4, characterized in that the cascade fusion model comprises a plurality of cascade fusion modules, each cascade fusion module corresponding to one feature transformation module, the cascade fusion module providing image feature information to the feature transformation module; the cascade fusion modules in turn perform feature extraction on the input image, obtain the corresponding image feature information, and fuse the image feature information with the corresponding transformation information to form the fused image information;
    wherein the feature fusion process can be expressed as:
    F_out = F_in + f(F_b, A*F_b + B)
    where F_in and F_out denote the input and output feature maps, (A, B) denotes the feature transformation of the module, that is, A is a scaling operation and B is an offset operation, and f is the fusion operation.
  6. The low-dose image enhancement method based on multiple dose levels according to claim 4, characterized in that the cascade fusion module comprises, in turn, a down-sampling layer, an up-sampling layer and a feature fusion layer.
  7. The low-dose image enhancement method based on multiple dose levels according to claim 1, characterized in that the loss function for fusing the current image feature information with the current transformation information adopts the mean squared error.
  8. A low-dose image enhancement system based on multiple dose levels, characterized by comprising:
    an image input module: used to obtain current input image information, the input image information including low-dose image information;
    an image dose level evaluation module: used to feed the current input image information into the constructed dose level evaluation model to evaluate its dose level and form current dose level information corresponding to the current input image information;
    an image fusion module: used to perform feature transformation on the current dose level information with the constructed feature transformation module to form current transformation information; to perform feature extraction on the current input image information with the constructed cascade fusion model to obtain current image feature information; and to fuse the current image feature information with the current transformation information to form current reconstructed image information.
  9. A computer-readable storage medium, characterized in that it stores a program which, when loaded and executed by a processor, implements the low-dose image enhancement method based on multiple dose levels according to any one of claims 1 to 7.
  10. A computer device, characterized by comprising a memory, a processor, and a program stored in the memory and executable on the processor, the program being capable of being loaded and executed by the processor to implement the low-dose image enhancement method based on multiple dose levels according to any one of claims 1 to 7.
PCT/CN2020/079412 2020-02-29 2020-03-14 Low-dose image enhancement method, system, computer device and storage medium based on multiple dose levels WO2021168920A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010132540.0A CN111325695B (zh) 2020-02-29 2020-02-29 Low-dose image enhancement method and system based on multiple dose levels, and storage medium
CN202010132540.0 2020-02-29

Publications (1)

Publication Number Publication Date
WO2021168920A1 true WO2021168920A1 (zh) 2021-09-02

Family

ID=71171462

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/079412 WO2021168920A1 (zh) 2020-02-29 2020-03-14 Low-dose image enhancement method, system, computer device and storage medium based on multiple dose levels

Country Status (2)

Country Link
CN (1) CN111325695B (zh)
WO (1) WO2021168920A1 (zh)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022027595A1 (zh) * 2020-08-07 2022-02-10 深圳先进技术研究院 Method for reconstructing low-dose images using a multi-scale feature-aware deep network
CN114757847B (zh) * 2022-04-24 2024-07-09 汕头市超声仪器研究所股份有限公司 Multi-information-extraction extended U-Net and its application method in low-dose X-ray imaging
CN117272941B (zh) * 2023-09-21 2024-10-11 北京百度网讯科技有限公司 Data processing method, apparatus, device, computer-readable storage medium and product


Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019019199A1 (en) * 2017-07-28 2019-01-31 Shenzhen United Imaging Healthcare Co., Ltd. SYSTEM AND METHOD FOR IMAGE CONVERSION
CN107481297B (zh) * 2017-08-31 2021-06-15 南方医科大学 CT image reconstruction method based on convolutional neural network
BR112020007105A2 (pt) * 2017-10-09 2020-09-24 The Board Of Trustees Of The Leland Stanford Junior University Method for training a diagnostic imaging device to perform a medical diagnostic image with a reduced dose of contrast agent
CN107958471B (zh) * 2017-10-30 2020-12-18 深圳先进技术研究院 CT imaging method and apparatus based on undersampled data, CT device and storage medium
CN108122265A (zh) * 2017-11-13 2018-06-05 深圳先进技术研究院 CT reconstructed image optimization method and system
CN108053456A (zh) * 2017-11-13 2018-05-18 深圳先进技术研究院 PET reconstructed image optimization method and system
CN108961237B (zh) * 2018-06-28 2020-08-21 安徽工程大学 Low-dose CT image decomposition method based on convolutional neural network
CN109166161B (zh) * 2018-07-04 2023-06-30 东南大学 Low-dose CT image processing system based on a noise-artifact-suppressing convolutional neural network
CN110210524B (zh) * 2019-05-13 2023-05-02 东软医疗系统股份有限公司 Training method for an image enhancement model, image enhancement method and apparatus

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106388843A (zh) * 2016-10-25 2017-02-15 上海联影医疗科技有限公司 Medical imaging device and scanning method thereof
WO2018200493A1 (en) * 2017-04-25 2018-11-01 The Board Of Trustees Of The Leland Stanford Junior University Dose reduction for medical imaging using deep convolutional neural networks
CN109741254A (zh) * 2018-12-12 2019-05-10 深圳先进技术研究院 Dictionary training and image super-resolution reconstruction method, system, device and storage medium
CN110223255A (zh) * 2019-06-11 2019-09-10 太原科技大学 Shallow residual encoder-decoder recursive network for low-dose CT image denoising
CN110559009A (zh) * 2019-09-04 2019-12-13 中山大学 GAN-based method, system and medium for converting multi-modal low-dose CT into high-dose CT

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117058555A (zh) * 2023-06-29 2023-11-14 北京空间飞行器总体设计部 Method and device for hierarchical management of remote sensing satellite images

Also Published As

Publication number Publication date
CN111325695B (zh) 2023-04-07
CN111325695A (zh) 2020-06-23

Similar Documents

Publication Publication Date Title
WO2021168920A1 (zh) Low-dose image enhancement method, system, computer device and storage medium based on multiple dose levels
CN110827216B (zh) Multi-generator generative adversarial network learning method for image denoising
CN111325686B (zh) Low-dose PET three-dimensional reconstruction method based on deep learning
US11158069B2 (en) Unsupervised deformable registration for multi-modal images
CN107481297B (zh) CT image reconstruction method based on convolutional neural network
Gao et al. A deep convolutional network for medical image super-resolution
WO2021017006A1 (zh) Image processing method and apparatus, neural network and training method, and storage medium
CN111709897B (zh) Domain-transform-based reconstruction method for positron emission tomography images
CN109741254B (zh) Dictionary training and image super-resolution reconstruction method, system, device and storage medium
WO2022226886A1 (zh) Image processing method using a transform-domain denoising autoencoder as a prior
US20240185484A1 (en) System and method for image reconstruction
CN112419173A (zh) Deep learning framework and method for generating CT images from PET images
Yang et al. Super-resolution of medical image using representation learning
Ikuta et al. A deep convolutional gated recurrent unit for CT image reconstruction
Huang et al. Super-resolution and inpainting with degraded and upgraded generative adversarial networks
CN116681888A (zh) Intelligent image segmentation method and system
Liu et al. MRCON-Net: Multiscale reweighted convolutional coding neural network for low-dose CT imaging
Li et al. A comprehensive survey on deep learning techniques in CT image quality improvement
Yang et al. Low‐dose CT denoising with a high‐level feature refinement and dynamic convolution network
WO2022094779A1 (zh) Deep learning framework and method for generating CT images from PET images
US11455755B2 (en) Methods and apparatus for neural network based image reconstruction
Zhu et al. Teacher-student network for CT image reconstruction via meta-learning strategy
WO2022193276A1 (zh) Deep learning method for low-dose estimation in medical images
WO2021031069A1 (zh) Image reconstruction method and apparatus
Wang et al. Optimization algorithm of CT image edge segmentation using improved convolution neural network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20921256

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20921256

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 20921256

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10.07.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20921256

Country of ref document: EP

Kind code of ref document: A1