WO2021168920A1 - Low-dose image enhancement method, system, computer device and storage medium based on multiple dose levels - Google Patents
Low-dose image enhancement method, system, computer device and storage medium based on multiple dose levels
- Publication number
- WO2021168920A1 (PCT/CN2020/079412)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
- A61B6/032—Transmission computed tomography [CT]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10104—Positron emission tomography [PET]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10108—Single photon emission computed tomography [SPECT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Definitions
- The present invention relates to the technical field of image enhancement, and in particular to a low-dose image enhancement method, system, computer device and storage medium based on multiple dose levels.
- Computed tomography (CT) is an important imaging method for obtaining information about the internal structure of objects non-destructively. It offers advantages such as high resolution, high sensitivity and multi-level imaging, is among the most widely deployed medical imaging diagnostic equipment in China, and is used across many fields of medical and clinical examination. However, because CT scanning requires X-rays, and as the potential hazards of radiation have become better understood, the issue of CT radiation dose has attracted increasing attention.
- The ALARA (As Low As Reasonably Achievable) principle requires that the radiation dose to the patient be reduced as much as possible while still satisfying the needs of clinical diagnosis. Therefore, developing new low-dose CT imaging methods that preserve CT image quality while reducing harmful radiation dose has important scientific significance and application prospects in the field of medical diagnosis.
- Application number 201910499262.X discloses "a shallow residual codec recursive network for low-dose CT image denoising". It reduces network complexity by reducing the number of layers and convolution kernels in the residual codec network, and uses a recursive process to improve network performance.
- The algorithm learns an end-to-end mapping through network training to obtain high-quality images, and the original low-dose CT image is cascaded into the next input.
- This effectively avoids image distortion after multiple recursions, extracts image features better, and preserves detailed image information. That invention can both reduce the complexity of the network and improve its performance, so that image details are well preserved in the denoised image and the image structure is clearer.
- Application number CN110559009A discloses "a GAN-based method, system and medium for converting multi-modal low-dose CT to high-dose CT". A low-dose CT of arbitrary modality is input; a two-dimensional discrete wavelet transform is applied to the low-dose CT to obtain multiple decomposition results; the low-dose CT and its decomposition results are fed into the encoder of a trained GAN network for encoding, and the encoding result is then decoded by the decoder of the GAN network to obtain the corresponding high-dose modal image.
- That invention feeds the low-dose CT and its wavelet transform results into the encoder of the trained GAN network, decodes the encoding result through the decoder of the GAN network, and obtains the corresponding high-dose modal image, conveniently converting low-dose CT images of any modality into high-dose CT images.
- The low-dose image reconstruction methods in the above technical solutions consider only image information for reconstruction. However, the image information obtained at different doses differs: image information at a higher dose level is generally more complex than image information at a lower dose level, and the lower the dose, the more difficult the image is to reconstruct. In actual clinical scanning, the scanning dose is not the same for each patient, and this variation in scanning dose has a significant impact on the later reconstructed image.
- The above solutions consider only the single dimension of image information and reconstruct the acquired low-dose image information directly through the above algorithms; the resulting high-dose image still cannot meet the requirements, so there is room for improvement.
- The first object of the present invention is to provide a low-dose image enhancement method based on multiple dose levels, which can further improve the definition of the reconstructed low-dose image.
- A low-dose image enhancement method based on multiple dose levels, including:
- acquiring current input image information, the input image information including low-dose image information; feeding the current input image information into a constructed dose level evaluation model to evaluate its dose level and form current dose level information corresponding to the current input image information; performing feature transformation processing on the current dose level information through a constructed feature transformation module to form current transformation information; performing feature extraction on the current input image information through a constructed cascade fusion model to obtain current image feature information; and fusing the current image feature information with the current transformation information to form current reconstructed image information.
- By adopting the above technical solution, the image is reconstructed from data of multiple dimensions, improving the definition of the reconstructed image: the current input image information is first evaluated by the constructed dose level evaluation model to obtain the corresponding current dose level information; after the dose level information is transformed, it is fused with the current image feature information; and the result is finally reconstructed into higher-definition image information.
- The present invention can be further configured such that the dose level evaluation model includes multiple sequentially connected convolutional layers and two fully connected layers; each convolutional layer except the last is followed in sequence by a ReLU activation function, a batch normalization layer and a max pooling layer, and each convolutional layer uses a 3x3 convolution kernel.
- The present invention can be further configured such that cross-entropy loss is adopted as the loss function for evaluating the dose level of the current input image information.
- By adopting the above technical solution, the dose level evaluation model can perform dose level assessment on low-dose images of unknown dose, forming one of the parameters required for image reconstruction.
- the present invention can be further configured as follows: the method for performing feature transformation processing on the current dose level information through the feature transformation module and forming the current transformation information is as follows:
- The feature transformation process adopts the feature transformation function G, of the form G(F_b) = A*F_b + B, where:
- A is the scaling operation
- B is the offset operation
- the dose level information can be transformed into data that can be fused with the image feature information, so as to facilitate subsequent data processing.
- The present invention can be further configured such that the cascade fusion model includes a plurality of cascade fusion modules, each corresponding to a feature transformation module, and each cascade fusion module provides image feature information to its feature transformation module; the cascade fusion modules sequentially perform feature extraction on the input image, obtain the corresponding image feature information, and fuse it with the corresponding transformation information to form fitted image information;
- the feature fusion process can be expressed as:
- F_out = F_in + f(F_b, A*F_b + B)
- F_in and F_out represent the input and output feature maps, and F_b is the basic image feature extracted within the module
- (A, B) represents the feature transformation operation of the module, that is, A is a scaling operation and B is an offset operation; f is a fusion operation.
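- As an illustrative sketch (not part of the original disclosure), the fusion step F_out = F_in + f(F_b, A*F_b + B) can be written in NumPy. The text leaves the fusion operation f abstract, so element-wise addition is used here as a placeholder:

```python
import numpy as np

def feature_fusion(f_in, f_b, A, B, f=None):
    """Sketch of F_out = F_in + f(F_b, A*F_b + B).

    f_in : input feature map, shape (C, H, W)
    f_b  : basic feature extracted inside the module, same shape
    A, B : per-channel scale and offset from the feature transformation
           module, shape (C, 1, 1)
    f    : fusion operation; left abstract in the patent text, so it
           defaults to element-wise addition of its two arguments.
    """
    if f is None:
        f = lambda x, y: x + y  # placeholder fusion; a real module would use convolutions
    transformed = A * f_b + B   # feature transformation: scale then offset
    return f_in + f(f_b, transformed)  # residual connection around the fused features

rng = np.random.default_rng(0)
f_in = rng.standard_normal((64, 8, 8))
f_b = rng.standard_normal((64, 8, 8))
A = rng.standard_normal((64, 1, 1))
B = rng.standard_normal((64, 1, 1))
f_out = feature_fusion(f_in, f_b, A, B)
print(f_out.shape)  # (64, 8, 8)
```

- The residual form (adding F_in back) means each module only has to learn a correction to its input, which is consistent with the cascaded design described above.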
- the present invention can be further configured as: the cascade fusion module includes a down-sampling layer, an up-sampling layer, and a feature fusion layer in sequence.
- The settings of the multiple cascaded fusion modules can be dynamically adjusted according to the scale of the data set, giving a degree of scalability and further improving network performance, so that the denoised image retains image details and has a clearer structure.
- The present invention can be further configured such that the loss function for fusing the current image feature information and the current transformation information adopts a mean square error function.
- By adopting the above technical solution, the gap between the prediction and the actual data can be measured, and the mean square error function expresses this error effectively.
- the second object of the present invention is to provide a low-dose image enhancement system based on multiple dose levels, which can further improve the definition of the low-dose image after reconstruction.
- a low-dose image enhancement system based on multiple dose levels including:
- an image input module, used to obtain current input image information, where the input image information includes low-dose image information;
- an image dose level evaluation module, used to feed the current input image information into the constructed dose level evaluation model to evaluate its dose level and form current dose level information corresponding to the current input image information;
- an image fusion module, used to perform feature transformation processing on the current dose level information through the constructed feature transformation module to form current transformation information, to perform feature extraction on the current input image information through the constructed cascade fusion model to obtain current image feature information, and to fuse the current image feature information with the current transformation information to form current reconstructed image information.
- the third object of the present invention is to provide a computer-readable storage medium capable of storing corresponding programs, facilitating further improvement of the definition of low-dose images after reconstruction.
- a computer-readable storage medium includes a program that can be loaded and executed by a processor to realize the above-mentioned low-dose image enhancement method based on multiple dose levels.
- the fourth object of the present invention is to provide a computer device that can further improve the clarity of the low-dose image after reconstruction.
- A computer device including a memory, a processor, and a program stored in the memory and executable on the processor, where the program can be loaded and executed by the processor to realize the above low-dose image enhancement method based on multiple dose levels.
- The present invention has the following beneficial technical effects: the input image can be graded by dose level, and reconstruction can be performed based on both the assigned grade and the input image, further improving the clarity of the reconstructed image.
- Figure 1 is a flowchart of a low-dose image enhancement method based on multiple dose levels.
- Fig. 2 is a flowchart of a method for performing feature transformation processing on current dose level information through a feature transformation module and forming current transformation information.
- Figure 3 is a schematic diagram of a multi-dose level low-dose image enhancement method.
- Fig. 4 is a schematic diagram of a reference standard image.
- Figure 5 is a schematic diagram of an image reconstructed by a CNN network.
- Figure 6 is a schematic diagram of the RED-CNN restoration result.
- FIG. 7 is a schematic diagram of the reconstructed image result of this embodiment.
- Fig. 8 is a structural diagram of a low-dose image enhancement system based on multiple dose levels.
- The embodiment of the present invention provides a low-dose image enhancement method based on multiple dose levels, including: acquiring current input image information, the input image information including low-dose image information; feeding the current input image information into the constructed dose level evaluation model to evaluate the dose level of the current input image information and form current dose level information corresponding to it; performing feature transformation processing on the current dose level information through the constructed feature transformation module to form current transformation information; performing feature extraction on the current input image information through the constructed cascade fusion model to obtain current image feature information; and fusing the current image feature information with the current transformation information to form current reconstructed image information.
- By adopting the above technical solution, the image is reconstructed from data of multiple dimensions, improving the definition of the reconstructed image: the current input image information is first evaluated by the constructed dose level evaluation model to obtain the corresponding current dose level information; after the dose level information is transformed, it is fused with the current image feature information; and the result is finally reconstructed into higher-definition image information.
- the embodiment of the present invention provides a low-dose image enhancement method based on multiple dose levels, and the main flow of the method is described as follows.
- Step 1000 Obtain current input image information; the input image information includes low-dose image information.
- The current input image information may be a picture brought by the patient, or a scanned image formed after an on-site scanning device (such as a CT device) completes the scan.
- Step 2000 Feedback the current input image information to the constructed dose level evaluation model to evaluate the dose level of the current input image information and form current dose level information corresponding to the current input image information.
- the current dose level information corresponding to the current input image information is determined according to the dose level evaluation model shown in FIG. 3.
- The dose level evaluation model includes multiple sequentially connected convolutional layers and two fully connected layers. In this embodiment, seven convolutional layers are preferably used, each with a 3x3 convolution kernel; each convolutional layer is followed in sequence by a ReLU activation function, a batch normalization layer and a max pooling layer.
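- The architecture just described could be sketched in PyTorch as follows. This is an illustrative sketch only: the channel widths, the 128x128 input size and the number of dose levels (4) are assumptions not stated in the text.

```python
import torch
import torch.nn as nn

class DoseLevelNet(nn.Module):
    """Sketch of the dose level evaluation model: seven 3x3 convolutional
    layers, each followed by ReLU, batch normalization and 2x2 max pooling,
    then two fully connected layers producing dose-level logits suitable
    for a cross-entropy loss."""

    def __init__(self, num_levels: int = 4, in_ch: int = 1):
        super().__init__()
        chans = [in_ch, 32, 32, 64, 64, 128, 128, 128]  # assumed widths
        layers = []
        for i in range(7):
            layers += [
                nn.Conv2d(chans[i], chans[i + 1], kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.BatchNorm2d(chans[i + 1]),
                nn.MaxPool2d(2),  # halves the spatial size; 128 -> 1 after 7 pools
            ]
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, num_levels),  # logits, one per dose level
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = DoseLevelNet()
logits = model(torch.randn(2, 1, 128, 128))
print(logits.shape)  # torch.Size([2, 4])
```

- Applying cross-entropy loss to these logits matches the loss function the text prescribes for dose level evaluation.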
- The ReLU activation function overcomes the vanishing gradient problem and thereby accelerates training. Vanishing gradients are among the biggest problems in deep learning, and are especially serious when saturating activation functions such as tanh and sigmoid are used: during backpropagation, each layer's gradient is multiplied by the first derivative of the activation function, so the gradient is attenuated layer by layer and continues to attenuate until it disappears.
- Batch normalization standardizes not only the input layer but also the input of each intermediate layer of the network (before the activation function), so that the output follows a normal distribution with mean 0 and variance 1, avoiding shifts in the variable distribution.
- The input of each layer is standardized using only the mean and variance of the current mini-batch, which effectively forces the distribution of each neuron's input in every layer back to a standard normal distribution with mean 0 and variance 1.
- Batch normalization can avoid both vanishing and exploding gradients: it pulls an increasingly skewed distribution back toward a standard distribution, so that activation inputs fall in the region where the nonlinear function is sensitive to its input. Small changes in the input then cause larger changes in the loss, making gradients larger and avoiding vanishing gradients; larger gradients mean faster learning convergence, which greatly accelerates training. Moreover, because batch normalization is computed over mini-batches rather than the entire data set, it introduces some noise, which improves the generalization ability of the model.
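- The per-mini-batch standardization described above amounts to the following NumPy computation; gamma, beta and eps are the customary learnable scale, shift and numerical-stability constant, not named explicitly in the text:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Standardize each feature over the mini-batch to mean 0 and
    variance 1, then apply the learnable scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)                   # per-feature mean of the mini-batch
    var = x.var(axis=0)                     # per-feature variance of the mini-batch
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# A mini-batch of 32 samples with 64 features, far from standardized:
batch = np.random.default_rng(1).normal(loc=5.0, scale=3.0, size=(32, 64))
out = batch_norm(batch)
print(out.mean(), out.var())  # approximately 0 and 1
```

- Note that only the statistics of the current mini-batch are used, which is exactly the source of the mild noise that helps generalization.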
- Max pooling splits the input into different regions, and each element of the output is the largest element of its corresponding region. The effect of the max pooling operation is that, as long as a feature is extracted anywhere in a region, it is retained in the pooled output; if the feature is not present in a region, the corresponding maximum value remains small.
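- A minimal NumPy illustration of max pooling over 2x2 regions:

```python
import numpy as np

def max_pool2d(x, k=2):
    """k x k max pooling: split the input into k x k regions and keep
    the largest element of each region."""
    h, w = x.shape
    # Trim to a multiple of k, then group into k x k tiles and take the max.
    return x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).max(axis=(1, 3))

x = np.array([[1, 3, 2, 1],
              [4, 2, 0, 1],
              [5, 1, 9, 2],
              [0, 2, 3, 4]])
print(max_pool2d(x))
# [[4 2]
#  [5 9]]
```

- Each output element is the maximum of one 2x2 region, halving the spatial size while preserving the strongest response in each region.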
- the parameter settings of the dose level evaluation model are as follows:
- the loss function preferably adopts cross entropy loss.
- To complete the dose level division, the same object is scanned under different dose conditions, and image data sets of the same object at multiple dose levels are obtained as training data.
- The optimizer adopts the Adam optimization algorithm with a learning rate of 0.0001, training for 200 epochs.
- the trained dose level evaluation model can form a corresponding relationship between the input image and the dose level.
- Step 3000 Perform feature transformation processing on the current dose level information according to the constructed feature transformation module and form current transformation information.
- The feature transformation process adopts the feature transformation function G, of the form G(F_b) = A*F_b + B, where:
- A is the scaling operation
- B is the offset operation.
- the dose level information can be transformed into data that can be fused with the image feature information to facilitate subsequent data processing.
- the method for performing feature transformation processing on current dose level information through the feature transformation module and forming current transformation information is as follows:
- Step 3100 Perform feature extraction on current input image information and obtain current image feature information.
- The image feature extraction method may be any of the following, selected according to the actual situation: HOG (Histogram of Oriented Gradients), SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features, an accelerated improvement on SIFT), DoG (Difference of Gaussians), LBP (Local Binary Patterns), or Haar-like features (Haar is a person's name; Haar proposed a wavelet used as a filter, which was later applied to images to give the Haar features of an image). The size of the extracted image feature map is h × w × 64.
- Step 3200 Preprocess the current dose level information to form current dose level preprocessing information.
- The preprocessing maps the current dose level information to a 64-channel feature map, corresponding to the current dose level preprocessing information, through a convolutional layer with a 1x1 convolution kernel; the preprocessing uses the softmax activation function so that the data values are distributed between 0 and 1.
- The current dose level preprocessing information includes first preprocessing information and second preprocessing information. Each is mapped to a 1 × 1 × 64 feature map (64 channels) through a convolutional layer with a 1x1 convolution kernel; that is, two 1x1 convolutional layers are used.
- Step 3300 Perform dot multiplication on the current dose level preprocessing information and the current image feature information to form a scaling matrix, yielding current scaling information.
- The current dose level preprocessing information in this step may be either the first or the second preprocessing information; since the two are generated in the same way, either may be chosen. In this embodiment, the first preprocessing information is preferred.
- Step 3400 Add the current dose level preprocessing information to the current scaling information to form current transformation information.
- The current dose level preprocessing information used in this step is the remaining preprocessing information, that is, the second preprocessing information in this embodiment.
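- Steps 3200-3400 can be sketched in NumPy. The 1x1 convolutions that map the dose level to 64-channel maps are approximated here by per-channel weight vectors, and all weight names are illustrative assumptions:

```python
import numpy as np

def softmax(v):
    """Softmax over a vector; outputs lie in (0, 1) and sum to 1."""
    e = np.exp(v - v.max())
    return e / e.sum()

def feature_transform(features, dose_level, w1, w2):
    """Sketch of steps 3200-3400. `features` is the h x w x 64 image
    feature map; `dose_level` is a scalar level index. w1 and w2 stand in
    for the two 1x1 convolution kernels that map the dose level to the
    first and second 64-channel preprocessing maps."""
    p1 = softmax(w1 * dose_level)   # step 3200: first preprocessing info, shape (64,)
    p2 = softmax(w2 * dose_level)   # step 3200: second preprocessing info, shape (64,)
    scaled = features * p1          # step 3300: dot multiplication -> scaling info
    return scaled + p2              # step 3400: add the remaining preprocessing info

rng = np.random.default_rng(2)
features = rng.standard_normal((8, 8, 64))
w1, w2 = rng.standard_normal(64), rng.standard_normal(64)
t = feature_transform(features, dose_level=3, w1=w1, w2=w2)
print(t.shape)  # (8, 8, 64)
```

- The softmax keeps the preprocessing values between 0 and 1, as the text specifies, so the dose level acts as a soft per-channel scale and offset on the image features.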
- Step 4000 According to the constructed cascade fusion model, feature extraction is performed on current input image information and current image feature information is obtained; the current image feature information and current transformation information are fused to form current reconstructed image information.
- The feature extraction of the current input image information in step 3100 can independently adopt one of the specific disclosed methods, or it can be performed through the constructed cascade fusion model.
- This embodiment preferably performs feature extraction through the cascade fusion model, which further simplifies the network.
- The current input image information is processed through a convolutional layer to form data corresponding to an image feature map of size h × w × 64.
- The cascade fusion model includes multiple cascade fusion modules. Each cascade fusion module uses two convolutions to extract the basic image feature F_b, and each cascade fusion module corresponds to a feature transformation module.
- The cascade fusion module provides image feature information to its feature transformation module; the cascade fusion modules sequentially perform feature extraction on the input image, obtain the corresponding image feature information, and fuse it with the corresponding transformation information to form the fitted image information;
- the feature fusion process can be expressed as:
- F_out = F_in + f(F_b, A*F_b + B)
- F_in and F_out represent the input and output feature maps, and F_b is the basic image feature extracted within the module
- (A, B) represents the feature transformation operation of the module, that is, A is a scaling operation and B is an offset operation; f is a fusion operation.
- the cascade fusion module includes a down-sampling layer, an up-sampling layer and a feature fusion layer in turn.
- the dose level evaluation model disclosed in step 2000 is imported, and the network training is completed according to the training data.
- The loss function for fusing the current image feature information and the current transformation information adopts the mean square error (MSE) function; other forms of loss function can also be used for training.
- The Adam optimization algorithm can be used, with a learning rate of 0.0001, training for 1000 epochs.
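- The mean square error used as the fusion loss is simply the average of squared differences between prediction and target; a minimal NumPy version:

```python
import numpy as np

def mse_loss(pred, target):
    """Mean square error: the average of the squared differences."""
    return np.mean((pred - target) ** 2)

pred = np.array([1.0, 2.0, 3.0])
target = np.array([1.0, 2.5, 2.0])
print(mse_loss(pred, target))  # (0 + 0.25 + 1.0) / 3 ≈ 0.41667
```

- In training, `pred` would be the reconstructed image and `target` the corresponding reference high-dose image.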
- The embodiment of the present invention provides a computer-readable storage medium which, when loaded and executed by a processor, implements the steps described in the processes shown in Fig. 1 and Fig. 2.
- The computer-readable storage medium includes, for example, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, any of which can store program code.
- The embodiments of the present invention provide a computer device, including a memory, a processor, and a program stored in the memory and executable on the processor, where the program can be loaded and executed by the processor to realize the low-dose image enhancement method based on multiple dose levels described in the processes of Fig. 1 and Fig. 2.
- an embodiment of the present invention provides a low-dose image enhancement system based on multiple dose levels, including:
- an image input module, used to obtain current input image information, where the input image information includes low-dose image information;
- an image dose level evaluation module, used to feed the current input image information into the constructed dose level evaluation model to evaluate its dose level and form current dose level information corresponding to the current input image information;
- an image fusion module, used to perform feature transformation processing on the current dose level information through the constructed feature transformation module to form current transformation information, to perform feature extraction on the current input image information through the constructed cascade fusion model to obtain current image feature information, and to fuse the current image feature information with the current transformation information to form current reconstructed image information.
- with appropriate adaptation, the present invention can also be applied to PET (positron emission tomography) and SPECT (single photon emission computed tomography) image reconstruction, or to other image reconstruction from sparsely sampled projections.
- Figure 4 is the reference standard image
- Figure 5 is the image reconstructed by the CNN network
- Figure 6 is the RED-CNN restoration result
- Figure 7 is the reconstructed image result of the solution of this embodiment.
- the disclosed system, device, and method can be implemented in other ways.
- the device embodiments described above are merely illustrative.
- the division of the modules or units is only a logical function division; in actual implementation, there may be other division methods, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
- if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
- the technical solution of the present application, in essence, or the part contributing to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
- the aforementioned storage media include: a USB flash drive, a mobile hard disk, a read-only memory, a random access memory, a magnetic disk, an optical disk, or other media that can store program code.
Landscapes
- Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Medical Informatics (AREA)
- General Physics & Mathematics (AREA)
- Molecular Biology (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Public Health (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Surgery (AREA)
- Data Mining & Analysis (AREA)
- Biophysics (AREA)
- High Energy & Nuclear Physics (AREA)
- Heart & Thoracic Surgery (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Optics & Photonics (AREA)
- Pathology (AREA)
- Radiology & Medical Imaging (AREA)
- Biomedical Technology (AREA)
- Artificial Intelligence (AREA)
- Pulmonology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Biology (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
Description
Scan current (mA) | Dose level |
---|---|
0~30 | Level 1 |
30~130 | Level 2 |
130~230 | Level 3 |
230~330 | Level 4 |
≥330 | Level 5 |
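The table above can be transcribed directly into code; a minimal Python sketch is shown below (treating each range as half-open is an assumption, since the table leaves the endpoint assignment ambiguous):

```python
def dose_level(scan_current_ma: float) -> int:
    """Map CT scan current (mA) to a dose level per the table above.

    Boundaries are treated as half-open intervals [low, high); the
    table itself leaves the exact endpoint assignment ambiguous.
    """
    if scan_current_ma < 30:
        return 1  # Level 1: 0~30 mA
    elif scan_current_ma < 130:
        return 2  # Level 2: 30~130 mA
    elif scan_current_ma < 230:
        return 3  # Level 3: 130~230 mA
    elif scan_current_ma < 330:
        return 4  # Level 4: 230~330 mA
    else:
        return 5  # Level 5: >= 330 mA

print([dose_level(c) for c in (10, 100, 200, 300, 400)])  # [1, 2, 3, 4, 5]
```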
Unit | Operation | Parameters |
---|---|---|
Downsampling layer | Convolution | 3x3x64 |
Upsampling layer | Deconvolution | 3x3x64 |
Feature fusion layer | Convolution | 1x1x64 |
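To make the downsampling entry of the table above concrete, a strided 3x3 convolution over a single channel can be sketched in NumPy as follows (a minimal single-channel sketch; the patent's layers use 64 channels and learned kernels, while the averaging kernel here is purely illustrative):

```python
import numpy as np

def conv2d(x, kernel, stride=1):
    """Valid 2D convolution of a single-channel image with a given stride."""
    kh, kw = kernel.shape
    out_h = (x.shape[0] - kh) // stride + 1
    out_w = (x.shape[1] - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

x = np.arange(36, dtype=float).reshape(6, 6)  # toy 6x6 feature map
k = np.ones((3, 3)) / 9.0                     # illustrative 3x3 averaging kernel
down = conv2d(x, k, stride=2)                 # stride-2 conv halves the resolution
print(down.shape)  # (2, 2)
```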
Claims (10)
- A low-dose image enhancement method based on multiple dose levels, characterized by comprising: obtaining current input image information, the input image information including low-dose image information; feeding the current input image information into a constructed dose level evaluation model to evaluate the dose level of the current input image information and form current dose level information corresponding to the current input image information; performing feature transformation processing on the current dose level information according to a constructed feature transformation module to form current transformation information; performing feature extraction on the current input image information according to a constructed cascade fusion model to obtain current image feature information; and fusing the current image feature information with the current transformation information to form current reconstructed image information.
- The low-dose image enhancement method based on multiple dose levels according to claim 1, characterized in that the dose level evaluation model includes multiple sequentially connected convolutional layers and two fully connected layers, and each convolutional layer except the last one is followed in turn by a ReLU activation function, a batch normalization layer and a max pooling layer, the convolutional layers using 3x3 convolution kernels.
- The low-dose image enhancement method based on multiple dose levels according to claim 1, characterized in that the loss function for evaluating the dose level of the current input image information adopts the cross-entropy loss.
- The low-dose image enhancement method based on multiple dose levels according to claim 1, characterized in that the method of performing feature transformation processing on the current dose level information through the feature transformation module to form current transformation information is as follows: performing feature extraction on the current input image information to obtain current image feature information; preprocessing the current dose level information to form current dose level preprocessing information; performing element-wise multiplication of the current dose level preprocessing information with the current image feature information to scale the matrix and form current scaling information; and adding the current dose level preprocessing information to the current scaling information to form the current transformation information; the feature transformation processing adopts a feature transformation function G, specifically: (A, B) = G(P), where A is the scaling operation and B is the offset operation.
- The low-dose image enhancement method based on multiple dose levels according to claim 4, characterized in that the cascade fusion model includes multiple cascade fusion modules, each cascade fusion module corresponds to one feature transformation module, and the cascade fusion module provides image feature information for the feature transformation module; the multiple cascade fusion modules in turn perform feature extraction on the input image, obtain the corresponding image feature information, and fuse the image feature information with the corresponding transformation information to form the corresponding fused image information; the feature fusion process can be expressed as: F_out = F_in + f(F_b, A*F_b + B), where F_in and F_out denote the input and output feature maps, (A, B) denotes the feature transformation operation of the module, that is, A is the scaling operation and B is the offset operation, and f is the fusion operation.
- The low-dose image enhancement method based on multiple dose levels according to claim 4, characterized in that the cascade fusion module includes, in order, a downsampling layer, an upsampling layer and a feature fusion layer.
- The low-dose image enhancement method based on multiple dose levels according to claim 1, characterized in that the loss function for fusing the current image feature information with the current transformation information adopts the mean square error function.
- A low-dose image enhancement system based on multiple dose levels, characterized by comprising: an image input module, used to obtain current input image information, the input image information including low-dose image information; an image dose level evaluation module, used to feed the current input image information into a constructed dose level evaluation model to evaluate the dose level of the current input image information and form current dose level information corresponding to the current input image information; and an image fusion module, used to perform feature transformation processing on the current dose level information according to a constructed feature transformation module to form current transformation information, to perform feature extraction on the current input image information according to a constructed cascade fusion model to obtain current image feature information, and to fuse the current image feature information with the current transformation information to form current reconstructed image information.
- A computer-readable storage medium, characterized by storing a program that, when loaded and executed by a processor, implements the low-dose image enhancement method based on multiple dose levels according to any one of claims 1 to 7.
- A computer device, characterized by comprising a memory, a processor and a program stored in the memory and executable on the processor, the program, when loaded and executed by the processor, implementing the low-dose image enhancement method based on multiple dose levels according to any one of claims 1 to 7.
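The feature transformation of claim 4 and the fusion rule of claim 5 can be sketched numerically as follows (a minimal NumPy sketch; the transformation function G and the fusion operation f are learned components in the patent, and the simple affine G and averaging f below are stand-ins chosen only to make the shapes and the formula F_out = F_in + f(F_b, A*F_b + B) concrete):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature map F_b produced by a cascade fusion module: (C, H, W).
F_b = rng.standard_normal((4, 8, 8))
F_in = F_b.copy()  # input feature map of the module

def G(P, channels):
    """Stand-in for the feature transformation function (A, B) = G(P):
    derives a per-channel scale A and offset B from dose level info P."""
    A = 1.0 + 0.1 * P * np.ones((channels, 1, 1))  # scaling operation
    B = 0.01 * P * np.ones((channels, 1, 1))       # offset operation
    return A, B

def f(x, y):
    """Stand-in fusion operation: element-wise average of the two branches."""
    return 0.5 * (x + y)

P = 3  # dose level information after preprocessing (assumed scalar here)
A, B = G(P, F_b.shape[0])

# Claim 5's residual fusion: F_out = F_in + f(F_b, A*F_b + B)
F_out = F_in + f(F_b, A * F_b + B)
print(F_out.shape)  # (4, 8, 8)
```

The residual form (adding F_in back) matches the claim's formula and is the usual way such cascades avoid gradient disappearance.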
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010132540.0A CN111325695B (zh) | 2020-02-29 | 2020-02-29 | Low-dose image enhancement method, system and storage medium based on multiple dose levels |
CN202010132540.0 | 2020-02-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021168920A1 true WO2021168920A1 (zh) | 2021-09-02 |
Family
ID=71171462
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/079412 WO2021168920A1 (zh) | 2020-02-29 | 2020-03-14 | Low-dose image enhancement method and system based on multiple dose levels, computer device and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111325695B (zh) |
WO (1) | WO2021168920A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117058555A (zh) * | 2023-06-29 | 2023-11-14 | 北京空间飞行器总体设计部 | Method and device for hierarchical management of remote sensing satellite images |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022027595A1 (zh) * | 2020-08-07 | 2022-02-10 | 深圳先进技术研究院 | Method for reconstructing low-dose images using a multi-scale feature-aware deep network |
CN114757847B (zh) * | 2022-04-24 | 2024-07-09 | 汕头市超声仪器研究所股份有限公司 | Multi-information-extraction extended U-Net and its application to low-dose X-ray imaging |
CN117272941B (zh) * | 2023-09-21 | 2024-10-11 | 北京百度网讯科技有限公司 | Data processing method, apparatus, device, computer-readable storage medium and product |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106388843A (zh) * | 2016-10-25 | 2017-02-15 | 上海联影医疗科技有限公司 | Medical imaging device and scanning method thereof |
WO2018200493A1 (en) * | 2017-04-25 | 2018-11-01 | The Board Of Trustees Of The Leland Stanford Junior University | Dose reduction for medical imaging using deep convolutional neural networks |
CN109741254A (zh) * | 2018-12-12 | 2019-05-10 | 深圳先进技术研究院 | Dictionary training and image super-resolution reconstruction method, system, device and storage medium |
CN110223255A (zh) * | 2019-06-11 | 2019-09-10 | 太原科技大学 | Shallow residual encoder-decoder recursive network for low-dose CT image denoising |
CN110559009A (zh) * | 2019-09-04 | 2019-12-13 | 中山大学 | GAN-based method, system and medium for converting multi-modal low-dose CT to high-dose CT |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019019199A1 (en) * | 2017-07-28 | 2019-01-31 | Shenzhen United Imaging Healthcare Co., Ltd. | SYSTEM AND METHOD FOR IMAGE CONVERSION |
CN107481297B (zh) * | 2017-08-31 | 2021-06-15 | 南方医科大学 | CT image reconstruction method based on a convolutional neural network |
BR112020007105A2 (pt) * | 2017-10-09 | 2020-09-24 | The Board Of Trustees Of The Leland Stanford Junior University | Method for training a diagnostic imaging device to perform medical diagnostic imaging with a reduced dose of contrast agent |
CN107958471B (zh) * | 2017-10-30 | 2020-12-18 | 深圳先进技术研究院 | CT imaging method and apparatus based on undersampled data, CT device and storage medium |
CN108122265A (zh) * | 2017-11-13 | 2018-06-05 | 深圳先进技术研究院 | CT reconstructed image optimization method and system |
CN108053456A (zh) * | 2017-11-13 | 2018-05-18 | 深圳先进技术研究院 | PET reconstructed image optimization method and system |
CN108961237B (zh) * | 2018-06-28 | 2020-08-21 | 安徽工程大学 | Low-dose CT image decomposition method based on a convolutional neural network |
CN109166161B (zh) * | 2018-07-04 | 2023-06-30 | 东南大学 | Low-dose CT image processing system based on a noise-artifact-suppression convolutional neural network |
CN110210524B (zh) * | 2019-05-13 | 2023-05-02 | 东软医疗系统股份有限公司 | Training method for an image enhancement model, image enhancement method and apparatus |
-
2020
- 2020-02-29 CN CN202010132540.0A patent/CN111325695B/zh active Active
- 2020-03-14 WO PCT/CN2020/079412 patent/WO2021168920A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN111325695B (zh) | 2023-04-07 |
CN111325695A (zh) | 2020-06-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021168920A1 (zh) | Low-dose image enhancement method and system based on multiple dose levels, computer device and storage medium | |
CN110827216B (zh) | Multi-generator generative adversarial network learning method for image denoising | |
CN111325686B (zh) | Deep-learning-based low-dose PET three-dimensional reconstruction method | |
US11158069B2 (en) | Unsupervised deformable registration for multi-modal images | |
CN107481297B (zh) | CT image reconstruction method based on a convolutional neural network | |
Gao et al. | A deep convolutional network for medical image super-resolution | |
WO2021017006A1 (zh) | Image processing method and apparatus, neural network and training method, and storage medium | |
CN111709897B (zh) | Domain-transform-based reconstruction method for positron emission tomography images | |
CN109741254B (zh) | Dictionary training and image super-resolution reconstruction method, system, device and storage medium | |
WO2022226886A1 (zh) | Image processing method based on a transform-domain denoising autoencoder as a prior | |
US20240185484A1 (en) | System and method for image reconstruction | |
CN112419173A (zh) | Deep learning framework and method for generating CT images from PET images | |
Yang et al. | Super-resolution of medical image using representation learning | |
Ikuta et al. | A deep convolutional gated recurrent unit for CT image reconstruction | |
Huang et al. | Super-resolution and inpainting with degraded and upgraded generative adversarial networks | |
CN116681888A (zh) | Intelligent image segmentation method and system | |
Liu et al. | MRCON-Net: Multiscale reweighted convolutional coding neural network for low-dose CT imaging | |
Li et al. | A comprehensive survey on deep learning techniques in CT image quality improvement | |
Yang et al. | Low‐dose CT denoising with a high‐level feature refinement and dynamic convolution network | |
WO2022094779A1 (zh) | Deep learning framework and method for generating CT images from PET images | |
US11455755B2 (en) | Methods and apparatus for neural network based image reconstruction | |
Zhu et al. | Teacher-student network for CT image reconstruction via meta-learning strategy | |
WO2022193276A1 (zh) | Deep learning method for low-dose estimation in medical images | |
WO2021031069A1 (zh) | Image reconstruction method and apparatus | |
Wang et al. | Optimization algorithm of CT image edge segmentation using improved convolution neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20921256 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20921256 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10.07.2023) |
|