CN111325695A - Low-dose image enhancement method and system based on multi-dose grade and storage medium - Google Patents
- Publication number: CN111325695A
- Application number: CN202010132540.0A
- Authority: CN (China)
- Legal status: Granted
Classifications
- G06T5/00—Image enhancement or restoration; G06T5/70—Denoising; Smoothing
- A61B6/03—Computed tomography [CT]; A61B6/032—Transmission computed tomography [CT]; A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- G06F18/25—Fusion techniques; G06F18/253—Fusion techniques of extracted features
- G06T11/003—Reconstruction from projections, e.g. tomography
- G06T2207/10081—Computed x-ray tomography [CT]; G06T2207/10104—Positron emission tomography [PET]; G06T2207/10108—Single photon emission computed tomography [SPECT]
- G06T2207/20081—Training; Learning; G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention relates to a low-dose image enhancement method, system and storage medium based on multiple dose levels, which solve the problem that reconstructing from the input image alone cannot reach the required definition. The method comprises the following steps: acquiring current input image information, and feeding it into a constructed dose-level evaluation model to form the current dose-level information corresponding to the current input image information; performing feature transformation on the current dose-level information with a constructed feature transformation module to form current transformation information; performing feature extraction on the current input image information with a constructed cascade fusion model to obtain current image feature information; and fusing the current image feature information with the current transformation information to form current reconstructed-image information. The invention can evaluate the dose level of the input image and reconstruct from both that level and the input image, thereby further improving the definition of the reconstructed image.
Description
Technical Field
The invention relates to the technical field of image enhancement, and in particular to a low-dose image enhancement method and system based on multiple dose levels, and a storage medium.
Background
Computed tomography (CT) is an important imaging means for obtaining the internal structural information of an object nondestructively. It offers many advantages such as high resolution, high sensitivity and multi-level imaging, is among the medical diagnostic imaging devices with the largest installed base in China, and is widely applied in many fields of clinical examination. However, because CT scanning requires the use of X-rays, the problem of CT radiation dose has drawn increasing attention as people become more aware of the potential hazards of radiation. The low-dose principle of As Low As Reasonably Achievable (ALARA) requires that the radiation dose to the patient be minimized while still meeting the needs of clinical diagnosis. Therefore, developing new low-dose CT imaging methods that preserve CT imaging quality while reducing the harmful radiation dose has important scientific significance and application prospects in the field of medical diagnosis.
Application No. 201910499262.X discloses "a shallow residual encoder-decoder recursive network for denoising low-dose CT images"; that invention reduces network complexity by reducing the number of layers and convolution kernels in the residual encoder-decoder network, and improves network performance by exploiting a recursion process.
Application No. CN 110559009A discloses methods, systems and media for converting multi-modal low-dose CT to high-dose CT based on a GAN: a low-dose CT of any modality is input; a two-dimensional discrete wavelet transform is applied to the low-dose CT to obtain several decomposition results; the low-dose CT and its decomposition results are input to the trained encoder of the GAN network for encoding, and the encoding result is decoded by the decoder of the GAN network to obtain the corresponding high-dose modal image. Building on the wide use of GANs for multi-domain conversion and the decomposition capability of the traditional wavelet transform, that invention feeds the low-dose CT together with its wavelet-transform result into the trained GAN encoder, then decodes the encoding result through the GAN decoder to obtain the corresponding high-dose modal image, thereby conveniently converting a low-dose CT image of any modality into a high-dose CT image.
The low-dose image reconstruction methods in the above technical schemes consider only image information for reconstruction. However, the image information obtained at different doses differs; in general, image information at a high dose level is more complex than at a low dose level, and the lower the dose, the more difficult the image is to reconstruct. In actual clinical scanning, the scanning dose differs from patient to patient, and these different scanning doses greatly influence the later reconstructed image. Since the above schemes consider only the single dimension of the image information and directly reconstruct the acquired low-dose image information through an algorithm, the resulting high-dose image still cannot meet the requirement, leaving a certain room for improvement.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a low-dose image enhancement method based on multi-dose grade, which can further improve the definition of a low-dose image after reconstruction.
The above object of the present invention is achieved by the following technical solutions:
a method of low-dose image enhancement based on multiple dose levels, comprising:
acquiring current input image information, wherein the input image information comprises low-dose image information;
feeding back the current input image information to the constructed dose grade evaluation model to evaluate the dose grade of the current input image information and form current dose grade information corresponding to the current input image information;
performing characteristic transformation processing on the current dose grade information according to the constructed characteristic transformation module to form current transformation information;
according to the constructed cascade fusion model, feature extraction is carried out on the current input image information and the current image feature information is obtained; and fusing the current image characteristic information with the current transformation information to form current reconstructed image information.
By adopting this technical scheme, the low-dose image reconstruction process considers not only the dimension of the input-image information but also its dose level, reconstructing the image from data of multiple dimensions to improve the definition of the reconstructed image. That is, the current input image information is evaluated by the constructed dose-level evaluation model to obtain the corresponding current dose-level information; after feature transformation, this information is fused with the current image feature information, finally producing reconstructed-image information of higher definition.
The present invention in a preferred example may be further configured to: the dose level evaluation model comprises a plurality of convolution layers and two full-connection layers which are connected in sequence, and a ReLU activation function, a batch regularization layer and a maximum pooling layer are connected in sequence after each convolution layer except the last convolution layer, wherein the convolution layers adopt a 3x3 convolution kernel.
The present invention in a preferred example may be further configured to: the loss function for dose level assessment of the current input image information employs cross entropy loss.
By adopting this technical scheme, since the scanning dose of each patient differs in actual clinical scanning, the dose level of the current input image information needs to be evaluated; through the dose-level evaluation model, the dose level of a low-dose image can be evaluated even when the dose is unknown, forming one of the parameters required for the reconstructed image.
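As a toy illustration of the cross-entropy loss preferred above, the sketch below scores a hypothetical 5-class dose-level prediction; the class count matches the embodiment's 5 dose levels, while the probability values are invented for illustration.

```python
# Hypothetical sketch: cross-entropy loss for a 5-class dose-level
# classifier. The softmax probabilities below are made up for the example.
import math

def cross_entropy(probs, true_class):
    """Cross-entropy between a predicted distribution and a one-hot label."""
    return -math.log(probs[true_class])

# Assumed softmax output of the evaluation model for one image (5 dose levels).
pred = [0.05, 0.10, 0.70, 0.10, 0.05]
loss_confident = cross_entropy(pred, 2)   # true class has p = 0.70, small loss
loss_wrong = cross_entropy(pred, 0)       # true class has p = 0.05, large loss
```

A confident correct prediction yields a much smaller loss than a confident wrong one, which is what drives the model toward the correct dose level during training.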
The present invention in a preferred example may be further configured to: the method for performing characteristic transformation processing on the current dose grade information through the characteristic transformation module and forming the current transformation information comprises the following steps:
extracting the characteristics of the current input image information and acquiring the characteristic information of the current image;
preprocessing the current dose grade information to form current dose grade preprocessing information;
performing dot multiplication between the current dose-level preprocessing information and the current image feature information to obtain a scaling matrix and form current scaling information;
adding the current dose-level preprocessing information to the current scaling information to form the current transformation information;
the feature transformation processing adopts a feature transformation function G, specifically:
G(F) = A ⊙ F + B
where F is the image feature map, A is the scaling operation and B is the offset operation.
By adopting the technical scheme, after the characteristic transformation processing, the dose grade information can be converted into the data which can be fused with the image characteristic information, so that the subsequent data processing is facilitated.
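The transformation described above can be sketched as follows. This is a minimal numpy illustration assuming G applies an element-wise scaling A and offset B to a feature map; the shapes and the particular A/B values are invented for the example, and in the patent A and B would be derived from the preprocessed dose-level information.

```python
# Minimal sketch of the feature transformation G(F) = A * F + B,
# where A (scaling) and B (offset) stand in for the dose-level-derived
# quantities. All shapes/values here are assumptions for illustration.
import numpy as np

def feature_transform(feature_map, scale, offset):
    """Element-wise scale (dot multiplication) followed by an offset."""
    return scale * feature_map + offset

rng = np.random.default_rng(0)
F = rng.standard_normal((4, 4))   # current image feature information
A = np.full((4, 4), 1.5)          # assumed scaling from the dose level
B = np.full((4, 4), 0.1)          # assumed offset from the dose level
T = feature_transform(F, A, B)    # current transformation information
```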
The present invention in a preferred example may be further configured to: the cascade fusion model comprises a plurality of cascade fusion modules, each corresponding to one feature transformation module, and each cascade fusion module provides image feature information to its feature transformation module. The cascade fusion modules sequentially extract features from the input image, acquire the corresponding image feature information, and fuse it with the corresponding transformation information to form the corresponding image information;
wherein the feature fusion process can be expressed as:
F_out = f(A ⊙ F_in + B)
where F_in and F_out denote the input and output feature maps, (A, B) denotes the feature transformation operation of the module (A the scaling operation, B the offset operation), and f is the fusion operation.
The present invention in a preferred example may be further configured to: the cascade fusion module sequentially comprises a down-sampling layer, an up-sampling layer and a feature fusion layer.
By adopting this technical scheme, the number of cascade fusion modules can be dynamically adjusted according to the scale of the data set, giving the design a certain extensibility and further improving network performance; the denoised image retains image details well and has a clearer structure.
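One cascade fusion module (down-sampling layer, up-sampling layer, feature fusion layer) might be sketched roughly as below. The factor-2 average pooling, nearest-neighbour up-sampling, and additive fusion are assumptions; the patent names the three layers but does not fix these operations.

```python
# Rough numpy sketch of one cascade fusion module: down-sample,
# up-sample, then fuse with the transformation information.
# Pooling type, up-sampling mode, and additive fusion are assumed.
import numpy as np

def downsample(x):
    """2x2 average pooling with stride 2 (assumed down-sampling layer)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour up-sampling by a factor of 2 (assumed)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def cascade_fusion_module(features, transform_info):
    """Down-sample, up-sample, then fuse with the transformation information."""
    restored = upsample(downsample(features))
    return restored + transform_info   # assumed additive fusion layer

x = np.arange(16, dtype=float).reshape(4, 4)   # toy feature map
t = np.ones((4, 4))                            # toy transformation information
out = cascade_fusion_module(x, t)
```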
The present invention in a preferred example may be further configured to: a mean squared error (MSE) function is adopted as the loss function for fusing the current image feature information with the current transformation information.
By adopting this technical scheme, the degree of difference between the actual data and the predicted data can be measured, and the mean squared error function expresses this error more effectively.
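A minimal sketch of the mean squared error computation, with toy stand-ins for the reconstructed and reference images:

```python
# MSE between a reconstruction and its reference; the 2x2 arrays are
# invented stand-ins for reconstructed vs. reference image patches.
import numpy as np

def mse_loss(predicted, target):
    """Average of the squared differences between prediction and target."""
    return float(np.mean((predicted - target) ** 2))

recon = np.array([[0.9, 0.2], [0.4, 0.8]])
ref = np.array([[1.0, 0.0], [0.5, 1.0]])
loss = mse_loss(recon, ref)
```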
The invention also aims to provide a low-dose image enhancement system based on multi-dose grade, which can further improve the definition of the low-dose image after reconstruction.
The second aim of the invention is realized by the following technical scheme:
a multi-dose level based low-dose image enhancement system comprising:
an image input module: the system comprises a processor, a display and a display, wherein the processor is used for acquiring current input image information, and the input image information comprises low-dose image information;
an image dose level assessment module: the dose grading evaluation model is used for feeding back the current input image information to the constructed dose grading evaluation model so as to evaluate the dose grade of the current input image information and form current dose grade information corresponding to the current input image information;
an image fusion module: performing characteristic transformation processing on the current dose grade information according to the constructed characteristic transformation module to form current transformation information; according to the constructed cascade fusion model, feature extraction is carried out on the current input image information and the current image feature information is obtained; and fusing the current image characteristic information with the current transformation information to form current reconstructed image information.
The third purpose of the present invention is to provide a computer readable storage medium, which can store corresponding programs, and is convenient for further improving the definition of the low-dose image after reconstruction.
The third object of the invention is realized by the following technical scheme:
a computer readable storage medium comprising a program which when executed by a processor implements a multi-dose level based low-dose image enhancement method as described above.
The fourth purpose of the invention is to provide a computer device which can further improve the definition of the low-dose image after reconstruction.
The fourth object of the invention is realized by the following technical scheme:
a computer device comprising a memory, a processor and a program stored on said memory and executable on said processor, the program being capable of being loaded for execution by the processor to implement a multi-dose level based low-dose image enhancement method as described above.
In summary, the invention has the following beneficial technical effects: the grade evaluation can be carried out on the input image, and the image is reconstructed based on the grade and the input image, so that the definition of the reconstructed image is further improved.
Drawings
Fig. 1 is a block flow diagram of a low-dose image enhancement method based on multiple dose levels.
FIG. 2 is a block flow diagram of a method for feature transformation processing of current dose level information by a feature transformation module and forming current transformation information.
Fig. 3 is a schematic diagram of a multi-dose level low dose image enhancement method.
Fig. 4 is a schematic diagram of a reference standard image.
Fig. 5 is a schematic diagram of an image reconstructed by a CNN network.
FIG. 6 is a diagram of the result of RED-CNN restoration.
Fig. 7 is a schematic diagram of a result of reconstructing an image according to the present embodiment.
Fig. 8 is a schematic diagram of the structure of a multi-dose level based low-dose image enhancement system.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
This embodiment merely explains the present invention and does not limit it; those skilled in the art may, after reading this specification, modify the embodiment as needed without making an inventive contribution, and all such modifications are protected by patent law within the scope of the claims of the present invention.
The embodiment of the invention provides a low-dose image enhancement method based on multiple dose levels, which comprises the following steps: acquiring current input image information, wherein the input image information comprises low-dose image information; feeding back the current input image information to the constructed dose grade evaluation model to evaluate the dose grade of the current input image information and form current dose grade information corresponding to the current input image information; performing characteristic transformation processing on the current dose grade information according to the constructed characteristic transformation module to form current transformation information; according to the constructed cascade fusion model, feature extraction is carried out on the current input image information and the current image feature information is obtained; and fusing the current image characteristic information with the current transformation information to form current reconstructed image information.
In the embodiment of the invention, the low-dose image reconstruction process considers not only the dimension of the input-image information but also its dose level, reconstructing the image from data of multiple dimensions to improve the definition of the reconstructed image. That is, the current input image information is evaluated by the constructed dose-level evaluation model to obtain the corresponding current dose-level information; after feature transformation, this information is fused with the current image feature information, finally producing reconstructed-image information of higher definition.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In addition, the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects, unless otherwise specified.
The embodiments of the present invention will be described in further detail with reference to the drawings attached hereto.
The embodiment of the invention provides a low-dose image enhancement method based on multi-dose level, and the main flow of the method is described as follows.
As shown in fig. 1:
step 1000: acquiring current input image information; the input image information includes low dose image information.
The current input image information may be a picture previously taken of a patient, or a scan image formed after an on-site scanning device, such as a CT device, completes its scan.
Step 2000: feeding back the current input image information to the constructed dose grade evaluation model to evaluate the dose grade of the current input image information and form current dose grade information corresponding to the current input image information.
The current dose-level information corresponding to the current input image information is determined by the dose-level evaluation model shown in fig. 3. The dose-level evaluation model comprises a plurality of convolution layers and two fully connected layers connected in sequence; in this embodiment the number of convolution layers is preferably 7, and the convolution layers adopt 3×3 convolution kernels. A ReLU activation function, a batch normalization layer and a max pooling layer are connected in sequence after each convolution layer.
The ReLU activation function preferably uses the following formula: ReLU(x) = max(0, x).
Compared with the sigmoid and tanh functions, the ReLU activation function alleviates the gradient-vanishing problem and thus accelerates training. Gradient vanishing is one of the biggest problems in deep learning, and is especially serious with saturating activation functions such as tanh and sigmoid: when a neural network back-propagates errors, each layer multiplies the gradient by the first derivative of its activation function, so the gradient is attenuated layer by layer; when the network has many layers, the gradient G decays continuously until it vanishes, and the training network converges more and more slowly. The ReLU function, being linear and non-saturating in its positive range, trains much faster.
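The gradient-attenuation argument can be checked numerically: the sigmoid derivative never exceeds 0.25, so even in the best case a 20-layer chain of sigmoid derivatives shrinks the gradient below 1e-12, while ReLU passes a gradient of 1 for active units. The layer count is chosen just for illustration.

```python
# Numeric illustration of gradient vanishing: multiply per-layer
# activation derivatives across 20 layers for sigmoid vs. ReLU.
import math

def sigmoid_grad(x):
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)           # maximal at x = 0, where it equals 0.25

def relu_grad(x):
    return 1.0 if x > 0 else 0.0   # 1 for active units, 0 otherwise

layers = 20
sig_chain = 1.0
for _ in range(layers):
    sig_chain *= sigmoid_grad(0.0)   # best case for sigmoid: 0.25 per layer
relu_chain = relu_grad(1.0) ** layers  # stays 1.0 for active units
```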
Regarding the batch normalization layer: not only the input layer but also the input of each intermediate layer of the network (before the activation function) is normalized, so that the output follows a normal distribution with mean 0 and variance 1, avoiding the problem of shifting variable distributions. During training, the input of each layer is normalized using the mean and variance computed over the current mini-batch, which effectively pulls the input distribution of every neuron in each layer back to a standard normal distribution with mean 0 and variance 1. Batch normalization avoids both gradient vanishing and gradient explosion: by forcing increasingly biased distributions back toward a standard distribution, the activation inputs fall in the region where the nonlinear function is sensitive to its input, so a small change of the input causes a larger change of the loss function. The gradient is thus enlarged, the gradient-vanishing problem is avoided, and larger gradients mean faster learning convergence, greatly accelerating training. Moreover, because batch normalization is computed over mini-batches rather than the whole data set, it introduces some noise, which can improve the generalization ability of the model.
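A minimal numpy sketch of the normalization step described above (per-feature normalization using the mini-batch mean and variance); the learnable scale/shift parameters gamma and beta and the epsilon value are the conventional ones, assumed here:

```python
# Batch normalization over a mini-batch: rows are samples, columns are
# features. gamma/beta defaults and eps=1e-5 are conventional assumptions.
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature column to roughly zero mean, unit variance."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

batch = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]])
normed = batch_norm(batch)   # both columns now on the same scale
```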
With respect to the max pooling layer: during execution, the input is split into different regions, and each element of the output is the largest element of its corresponding region. The effect of the max pooling operation is that, as long as a feature is extracted anywhere within a region, it is retained in the pooled output: if a feature is present in the filter's region, its maximum value is preserved; if the feature is absent, the maximum value of that region remains small.
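A small numpy example of 2×2 max pooling as described: each output element is the maximum of its region, so a strong response anywhere in a quadrant survives pooling.

```python
# 2x2 max pooling: split the input into 2x2 regions and keep each
# region's maximum. The input values are invented for the example.
import numpy as np

def max_pool_2x2(x):
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 3, 2, 1],
              [4, 6, 5, 2],
              [7, 2, 9, 1],
              [3, 1, 4, 8]])
pooled = max_pool_2x2(x)   # each quadrant keeps its strongest response
```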
According to fig. 3, the parameter settings of the dose rating evaluation model are as follows:
the loss function preferably employs cross-entropy loss.
It should be noted that, for the dose level estimation model shown in fig. 3, those skilled in the art can make appropriate modifications according to the actual application scenario, for example, more or less convolution layers are used, and other types of activation functions are used.
In the training process, dose levels are divided according to the actual scanning dose parameters of existing CT equipment; in this embodiment, 5 dose levels are preferably set. For example, at a tube voltage of 70 keV, dose levels are defined for different scan currents, specifically as follows:
According to the dose level division, the same object is scanned under the different dose conditions, and image data sets of the same object at multiple dose levels are acquired as training data. During training, the optimizer adopts the Adam optimization algorithm with a learning rate of 0.0001, training for 200 epochs. The trained dose level evaluation model establishes a correspondence between the input image and its dose level.
Step 3000: and performing characteristic transformation processing on the current dose grade information according to the constructed characteristic transformation module to form current transformation information.
The feature transformation processing adopts a feature transformation function G, which is specifically as follows:
G(F) = A ⊙ F + B
where A is the scaling operation and B is the offset operation. After the feature transformation processing, the dose level information is converted into data that can be fused with the image feature information, which facilitates the subsequent data processing.
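The scaling-and-offset transform can be sketched as follows (a minimal numpy sketch; the per-channel (1, 1, c) shapes for A and B are assumptions chosen so they broadcast over the spatial dimensions):

```python
import numpy as np

def feature_transform(feat, a, b):
    """G(F) = A * F + B: element-wise scaling by A followed by an
    offset B. feat: image feature map of shape (h, w, c); a, b:
    per-channel (1, 1, c) vectors derived from the dose level,
    broadcast over the spatial dimensions."""
    return a * feat + b

feat = np.ones((4, 4, 3))                        # toy h x w x c feature map
a = np.array([0.5, 1.0, 2.0]).reshape(1, 1, 3)   # scaling per channel
b = np.array([0.1, 0.0, -0.1]).reshape(1, 1, 3)  # offset per channel
out = feature_transform(feat, a, b)
print(out[0, 0])  # [0.6 1.  1.9]
```

Because A and B are computed from the dose level, the same image features are modulated differently depending on how noisy (how low-dose) the input is.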
Specifically, as shown in fig. 2, the method for performing feature transformation processing on the current dose level information by the feature transformation module and forming current transformation information is as follows:
step 3100: and performing feature extraction on the current input image information and acquiring the current image feature information.
The image feature extraction may use any of the following methods, selected according to the actual situation: HOG (Histogram of Oriented Gradients), SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features, an improvement on SIFT), DoG (Difference of Gaussians), LBP (Local Binary Patterns), or Haar-like features (named after the mathematician Alfréd Haar, who proposed the wavelet used as the filter; applying this filter to an image yields the image's Haar features). The extracted image feature map has a size of h × w × c, where c is the number of channels.
Step 3200: the current dose level information is pre-processed to form current dose level pre-processed information.
In the preprocessing, a convolutional layer with a 1x1 convolution kernel maps the current dose level information to a 1 × 1 × 64 feature map with 64 channels, corresponding to the current dose level preprocessing information. The preprocessing adopts a softmax activation function, which distributes the data values between 0 and 1. The current dose level preprocessing information comprises first preprocessing information and second preprocessing information, each obtained by mapping to a 1 × 1 × 64 feature map through its own convolutional layer with a 1x1 convolution kernel; that is, the preprocessing is realized by two convolutional layers with 1x1 convolution kernels.
Step 3300: and performing dot multiplication processing according to the current dose grade preprocessing information and the current image characteristic information to obtain a scaling matrix and form current scaling processing information.
The current dose level preprocessing information used in this step may be either the first preprocessing information or the second preprocessing information; since the two are obtained in the same way, either may be chosen, and the first preprocessing information is preferred in this embodiment.
Step 3400: the pre-processing information is added to the current scaling information according to the current dose level to form current transform information.
The current dose level preprocessing information in this step is the remaining preprocessing information, i.e., the second preprocessing information in this embodiment.
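Steps 3200 to 3400 can be sketched end to end as follows (a minimal numpy sketch; the random `w1`/`w2` matrices stand in for the two trained 1x1 convolution kernels, and the one-hot dose encoding and 8x8 feature-map size are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def transform_module(feat, dose_onehot, w1, w2):
    """Sketch of steps 3200-3400 of the feature transformation module.
    Each 1x1 convolution (here a matrix product, since the input is
    1 x 1 x n_levels) maps the dose level to a 1 x 1 x 64 embedding,
    squashed to (0, 1) by softmax; the first embedding scales the
    image features (dot multiplication), the second offsets them."""
    a = softmax(dose_onehot @ w1).reshape(1, 1, -1)  # first preprocessing info
    b = softmax(dose_onehot @ w2).reshape(1, 1, -1)  # second preprocessing info
    scaled = feat * a          # step 3300: scaling
    return scaled + b          # step 3400: offset -> current transform info

n_levels, channels = 5, 64
w1 = rng.normal(size=(n_levels, channels))
w2 = rng.normal(size=(n_levels, channels))
feat = rng.normal(size=(8, 8, channels))   # current image feature information
dose = np.eye(n_levels)[2]                 # dose level 2, one-hot encoded
out = transform_module(feat, dose, w1, w2)
print(out.shape)  # (8, 8, 64)
```

The transformation information keeps the spatial shape of the image features, so it can be fused with them directly in the cascade fusion model of step 4000.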
Step 4000: according to the constructed cascade fusion model, feature extraction is carried out on the current input image information and the current image feature information is obtained; and fusing the current image characteristic information with the current transformation information to form current reconstructed image information.
In step 3100, the feature extraction of the current input image information may independently adopt one of the methods disclosed above, or it may be performed through the constructed cascade fusion model. In this embodiment, extraction through the cascade fusion model is preferred, which further simplifies the network.
The cascade fusion model comprises a plurality of cascade fusion modules, each of which uses two convolutions to complete basic image feature extraction; each cascade fusion module corresponds to one feature transformation module and provides image feature information for it. The cascade fusion modules sequentially extract the features of the input image, acquire the corresponding image feature information, and fuse that image feature information with the corresponding transformation information to form the corresponding image information.
wherein the feature fusion process can be expressed as:
X_out = F(A ⊙ X_in + B)
X_in and X_out represent the input and output feature maps, and (A, B) represents the feature conversion operation of the module, i.e., A is the scaling operation and B is the offset operation; F is the fusion operation.
The cascade fusion module sequentially comprises a down-sampling layer, an up-sampling layer and a feature fusion layer.
Unit | Operation | Parameters
---|---|---
Downsampling layer | Convolution | 3x3x64
Upsampling layer | Deconvolution | 3x3x64
Feature fusion layer | Convolution | 1x1x64
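The table above specifies only kernel size and channel count, so the spatial behaviour of the module can be checked under the common assumption of stride-2, padding-1 settings for the 3x3 layers (these strides and paddings are assumptions, not stated in the embodiment):

```python
def conv2d_shape(h, w, k=3, stride=2, pad=1):
    """Output size of a k x k strided convolution (downsampling layer)."""
    return ((h + 2 * pad - k) // stride + 1,
            (w + 2 * pad - k) // stride + 1)

def deconv2d_shape(h, w, k=3, stride=2, pad=1, out_pad=1):
    """Output size of a k x k strided transposed convolution
    (deconvolution, i.e. the upsampling layer)."""
    return ((h - 1) * stride - 2 * pad + k + out_pad,
            (w - 1) * stride - 2 * pad + k + out_pad)

h, w = 512, 512                       # input CT slice (illustrative size)
dh, dw = conv2d_shape(h, w)           # after the downsampling layer
uh, uw = deconv2d_shape(dh, dw)       # after the upsampling layer
# the 1x1 feature fusion convolution leaves the spatial size unchanged
print((dh, dw), (uh, uw))  # (256, 256) (512, 512)
```

Under these settings the deconvolution exactly undoes the downsampling, so the fused output matches the input resolution, as required for image reconstruction.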
In the complete network training, the dose level evaluation model disclosed in step 2000 is imported, and the network training is completed with the training data. The loss function for fusing the current image feature information and the current transformation information adopts the mean square error (MSE) function; other forms of loss function may also be used. The optimizer may use the Adam optimization algorithm with a learning rate of 0.0001, training for 1000 epochs.
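The two training ingredients named above, MSE loss and a single Adam update, can be sketched as follows (a minimal numpy sketch of one parameter update; only the learning rate 0.0001 comes from the embodiment, the other Adam constants are the usual defaults):

```python
import numpy as np

def mse(pred, target):
    """Mean square error between the fused reconstruction and the
    full-dose reference image."""
    return np.mean((pred - target) ** 2)

def adam_step(theta, grad, m, v, t, lr=1e-4,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and its square (v), bias-corrected, then a scaled step."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

theta = np.array([1.0])
m = v = np.zeros(1)
theta, m, v = adam_step(theta, grad=np.array([2.0]), m=m, v=v, t=1)
print(theta)  # the first step moves by about the learning rate
```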
Embodiments of the present invention provide a computer-readable storage medium including instructions that, when loaded and executed by a processor, implement the individual steps of the method described in the flows of figs. 1-2.
The computer-readable storage medium includes, for example: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Based on the same inventive concept, an embodiment of the present invention provides a computer device, which includes a memory, a processor, and a program stored in the memory and executable on the processor; the program can be loaded and executed by the processor to implement the low-dose image enhancement method based on multiple dose levels described in the flows of figs. 1-2.
Based on the same inventive concept, as shown in fig. 8, an embodiment of the present invention provides a low-dose image enhancement system based on multiple dose levels, including:
an image input module: configured to acquire current input image information, wherein the input image information comprises low-dose image information;
an image dose level assessment module: configured to feed back the current input image information to the constructed dose level evaluation model to evaluate the dose level of the current input image information and form current dose level information corresponding to the current input image information;
an image fusion module: configured to perform feature transformation processing on the current dose level information according to the constructed feature transformation module to form current transformation information; to perform feature extraction on the current input image information according to the constructed cascade fusion model and acquire the current image feature information; and to fuse the current image feature information with the current transformation information to form current reconstructed image information.
It should be noted that, in addition to being applied to CT image reconstruction, the present invention, with appropriate modifications, may also be applied to PET (positron emission tomography) or SPECT (single-photon emission computed tomography) image reconstruction, or to other image reconstruction based on sparse projection sampling.
Verification shows that images reconstructed by this method are clearer and contain more details. Referring to figs. 4 to 7: fig. 4 is the reference standard image, fig. 5 is the image reconstructed by a CNN network, fig. 6 is the RED-CNN restoration result, and fig. 7 is the reconstructed image result of this embodiment.
It should be noted that, although the steps are described in a specific order, the steps are not necessarily performed in the specific order, and in fact, some of the steps may be performed concurrently or even in a changed order as long as the required functions are achieved.
It will be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: u disk, removable hard disk, read only memory, random access memory, magnetic or optical disk, etc. for storing program codes.
The above embodiments are only intended to describe the technical solutions of the present application in detail and to help in understanding the method and core idea of the present invention; they should not be construed as limiting the present invention. Those skilled in the art will appreciate that various changes and substitutions can easily be conceived within the technical scope of the present disclosure.
Claims (10)
1. A low-dose image enhancement method based on multiple dose levels is characterized in that: the method comprises the following steps:
acquiring current input image information, wherein the input image information comprises low-dose image information;
feeding back the current input image information to the constructed dose grade evaluation model to evaluate the dose grade of the current input image information and form current dose grade information corresponding to the current input image information;
performing characteristic transformation processing on the current dose grade information according to the constructed characteristic transformation module to form current transformation information;
according to the constructed cascade fusion model, feature extraction is carried out on the current input image information and the current image feature information is obtained; and fusing the current image characteristic information with the current transformation information to form current reconstructed image information.
2. A multi-dose level based low dose image enhancement method as claimed in claim 1, wherein: the dose level evaluation model comprises a plurality of convolution layers and two fully-connected layers which are connected in sequence, and a ReLU activation function, a batch normalization layer and a maximum pooling layer are connected in sequence after each convolution layer except the last convolution layer, wherein the convolution layers adopt a 3x3 convolution kernel.
3. A multi-dose level based low dose image enhancement method as claimed in claim 1, wherein: the loss function for dose level assessment of the current input image information employs cross entropy loss.
4. A multi-dose level based low dose image enhancement method as claimed in claim 1, wherein: the method for performing characteristic transformation processing on the current dose grade information through the characteristic transformation module and forming the current transformation information comprises the following steps:
extracting the characteristics of the current input image information and acquiring the characteristic information of the current image;
preprocessing the current dose grade information to form current dose grade preprocessing information;
performing dot multiplication processing according to the current dose grade preprocessing information and the current image characteristic information to obtain a scaling matrix and form current scaling processing information;
adding the current dose level preprocessing information to the current scaling processing information to form the current transformation information;
the feature transformation processing adopts a feature transformation function G, which is specifically as follows:
G(F) = A ⊙ F + B
where A is the scaling operation and B is the offset operation.
5. A multi-dose level based low dose image enhancement method as claimed in claim 4, wherein: the cascade fusion model comprises a plurality of cascade fusion modules, each cascade fusion module corresponds to one feature transformation module, and the cascade fusion modules provide image feature information for the feature transformation modules; the cascade fusion modules sequentially extract the features of the input image, acquire corresponding image feature information and fuse the image feature information and the corresponding transformation information to form quasi-corresponding image information;
wherein the feature fusion process can be expressed as:
X_out = F(A ⊙ X_in + B)
where X_in and X_out represent the input and output feature maps and F is the fusion operation.
6. A multi-dose level based low dose image enhancement method as claimed in claim 4, wherein: the cascade fusion module sequentially comprises a down-sampling layer, an up-sampling layer and a feature fusion layer.
7. A multi-dose level based low dose image enhancement method as claimed in claim 1, wherein: a mean square error function is adopted as the loss function for fusing the current image characteristic information and the current transformation information.
8. A multi-dose level based low dose image enhancement system, comprising:
an image input module: configured to acquire current input image information, wherein the input image information comprises low-dose image information;
an image dose level assessment module: configured to feed back the current input image information to the constructed dose level evaluation model to evaluate the dose level of the current input image information and form current dose level information corresponding to the current input image information;
an image fusion module: configured to perform feature transformation processing on the current dose level information according to the constructed feature transformation module to form current transformation information; to perform feature extraction on the current input image information according to the constructed cascade fusion model and acquire the current image feature information; and to fuse the current image feature information with the current transformation information to form current reconstructed image information.
9. A computer-readable storage medium storing a program which, when loaded and executed by a processor, implements a method of low-dose image enhancement based on multiple dose levels according to any one of claims 1 to 7.
10. A computer device, characterized by: comprising a memory, a processor and a program stored on said memory and executable on said processor, which program is capable of being loaded and executed by the processor to implement the multi-dose level based low-dose image enhancement method according to any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010132540.0A CN111325695B (en) | 2020-02-29 | 2020-02-29 | Low-dose image enhancement method and system based on multi-dose grade and storage medium |
PCT/CN2020/079412 WO2021168920A1 (en) | 2020-02-29 | 2020-03-14 | Low-dose image enhancement method and system based on multiple dose levels, and computer device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010132540.0A CN111325695B (en) | 2020-02-29 | 2020-02-29 | Low-dose image enhancement method and system based on multi-dose grade and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111325695A true CN111325695A (en) | 2020-06-23 |
CN111325695B CN111325695B (en) | 2023-04-07 |
Family
ID=71171462
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010132540.0A Active CN111325695B (en) | 2020-02-29 | 2020-02-29 | Low-dose image enhancement method and system based on multi-dose grade and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111325695B (en) |
WO (1) | WO2021168920A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022027595A1 (en) * | 2020-08-07 | 2022-02-10 | 深圳先进技术研究院 | Method for reconstructing low-dose image by using multiscale feature sensing deep network |
WO2023206426A1 (en) * | 2022-04-24 | 2023-11-02 | 汕头市超声仪器研究所股份有限公司 | Multi-information extraction extended u-net and application method therefor in low-dose x-ray imaging |
CN117272941A (en) * | 2023-09-21 | 2023-12-22 | 北京百度网讯科技有限公司 | Data processing method, apparatus, device, computer readable storage medium and product |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117058555B (en) * | 2023-06-29 | 2024-07-30 | 北京空间飞行器总体设计部 | Method and device for hierarchical management of remote sensing satellite images |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107481297A (en) * | 2017-08-31 | 2017-12-15 | 南方医科大学 | A kind of CT image rebuilding methods based on convolutional neural networks |
CN107610195A (en) * | 2017-07-28 | 2018-01-19 | 上海联影医疗科技有限公司 | The system and method for image conversion |
CN107958471A (en) * | 2017-10-30 | 2018-04-24 | 深圳先进技术研究院 | CT imaging methods, device, CT equipment and storage medium based on lack sampling data |
CN108053456A (en) * | 2017-11-13 | 2018-05-18 | 深圳先进技术研究院 | A kind of PET reconstruction images optimization method and system |
CN108122265A (en) * | 2017-11-13 | 2018-06-05 | 深圳先进技术研究院 | A kind of CT reconstruction images optimization method and system |
CN108961237A (en) * | 2018-06-28 | 2018-12-07 | 安徽工程大学 | A kind of low-dose CT picture breakdown method based on convolutional neural networks |
CN109166161A (en) * | 2018-07-04 | 2019-01-08 | 东南大学 | A kind of low-dose CT image processing system inhibiting convolutional neural networks based on noise artifacts |
US20190108634A1 (en) * | 2017-10-09 | 2019-04-11 | The Board Of Trustees Of The Leland Stanford Junior University | Contrast Dose Reduction for Medical Imaging Using Deep Learning |
CN110210524A (en) * | 2019-05-13 | 2019-09-06 | 东软医疗系统股份有限公司 | A kind of training method, image enchancing method and the device of image enhancement model |
CN110223255A (en) * | 2019-06-11 | 2019-09-10 | 太原科技大学 | A kind of shallow-layer residual error encoding and decoding Recursive Networks for low-dose CT image denoising |
CN110559009A (en) * | 2019-09-04 | 2019-12-13 | 中山大学 | Method, system and medium for converting multi-modal low-dose CT into high-dose CT based on GAN |
CN110753935A (en) * | 2017-04-25 | 2020-02-04 | 小利兰·斯坦福大学托管委员会 | Dose reduction using deep convolutional neural networks for medical imaging |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106388843A (en) * | 2016-10-25 | 2017-02-15 | 上海联影医疗科技有限公司 | Medical imaging equipment and scanning method thereof |
CN109741254B (en) * | 2018-12-12 | 2022-09-27 | 深圳先进技术研究院 | Dictionary training and image super-resolution reconstruction method, system, equipment and storage medium |
2020
- 2020-02-29 CN CN202010132540.0A patent/CN111325695B/en active Active
- 2020-03-14 WO PCT/CN2020/079412 patent/WO2021168920A1/en active Application Filing
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110753935A (en) * | 2017-04-25 | 2020-02-04 | 小利兰·斯坦福大学托管委员会 | Dose reduction using deep convolutional neural networks for medical imaging |
WO2019019199A1 (en) * | 2017-07-28 | 2019-01-31 | Shenzhen United Imaging Healthcare Co., Ltd. | System and method for image conversion |
CN107610195A (en) * | 2017-07-28 | 2018-01-19 | 上海联影医疗科技有限公司 | The system and method for image conversion |
CN107633540A (en) * | 2017-07-28 | 2018-01-26 | 上海联影医疗科技有限公司 | The system and method for image conversion |
CN107481297A (en) * | 2017-08-31 | 2017-12-15 | 南方医科大学 | A kind of CT image rebuilding methods based on convolutional neural networks |
US20190108634A1 (en) * | 2017-10-09 | 2019-04-11 | The Board Of Trustees Of The Leland Stanford Junior University | Contrast Dose Reduction for Medical Imaging Using Deep Learning |
CN107958471A (en) * | 2017-10-30 | 2018-04-24 | 深圳先进技术研究院 | CT imaging methods, device, CT equipment and storage medium based on lack sampling data |
CN108122265A (en) * | 2017-11-13 | 2018-06-05 | 深圳先进技术研究院 | A kind of CT reconstruction images optimization method and system |
CN108053456A (en) * | 2017-11-13 | 2018-05-18 | 深圳先进技术研究院 | A kind of PET reconstruction images optimization method and system |
CN108961237A (en) * | 2018-06-28 | 2018-12-07 | 安徽工程大学 | A kind of low-dose CT picture breakdown method based on convolutional neural networks |
CN109166161A (en) * | 2018-07-04 | 2019-01-08 | 东南大学 | A kind of low-dose CT image processing system inhibiting convolutional neural networks based on noise artifacts |
CN110210524A (en) * | 2019-05-13 | 2019-09-06 | 东软医疗系统股份有限公司 | A kind of training method, image enchancing method and the device of image enhancement model |
CN110223255A (en) * | 2019-06-11 | 2019-09-10 | 太原科技大学 | A kind of shallow-layer residual error encoding and decoding Recursive Networks for low-dose CT image denoising |
CN110559009A (en) * | 2019-09-04 | 2019-12-13 | 中山大学 | Method, system and medium for converting multi-modal low-dose CT into high-dose CT based on GAN |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022027595A1 (en) * | 2020-08-07 | 2022-02-10 | 深圳先进技术研究院 | Method for reconstructing low-dose image by using multiscale feature sensing deep network |
WO2023206426A1 (en) * | 2022-04-24 | 2023-11-02 | 汕头市超声仪器研究所股份有限公司 | Multi-information extraction extended u-net and application method therefor in low-dose x-ray imaging |
CN117272941A (en) * | 2023-09-21 | 2023-12-22 | 北京百度网讯科技有限公司 | Data processing method, apparatus, device, computer readable storage medium and product |
Also Published As
Publication number | Publication date |
---|---|
WO2021168920A1 (en) | 2021-09-02 |
CN111325695B (en) | 2023-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7179757B2 (en) | Dose Reduction for Medical Imaging Using Deep Convolutional Neural Networks | |
CN111325686B (en) | Low-dose PET three-dimensional reconstruction method based on deep learning | |
Ghodrati et al. | MR image reconstruction using deep learning: evaluation of network structure and loss functions | |
CN111325695B (en) | Low-dose image enhancement method and system based on multi-dose grade and storage medium | |
Tang et al. | Unpaired low‐dose CT denoising network based on cycle‐consistent generative adversarial network with prior image information | |
Wang et al. | Machine learning for tomographic imaging | |
Li et al. | Low‐dose CT image denoising with improving WGAN and hybrid loss function | |
CN109741254B (en) | Dictionary training and image super-resolution reconstruction method, system, equipment and storage medium | |
US20240185484A1 (en) | System and method for image reconstruction | |
Chan et al. | An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction | |
Li et al. | Learning non-local perfusion textures for high-quality computed tomography perfusion imaging | |
Karimi et al. | Reducing streak artifacts in computed tomography via sparse representation in coupled dictionaries | |
Thomas | Bio-medical Image Denoising using Autoencoders | |
CN110874855A (en) | Collaborative imaging method and device, storage medium and collaborative imaging equipment | |
US11455755B2 (en) | Methods and apparatus for neural network based image reconstruction | |
WO2021051049A1 (en) | Few-view ct image reconstruction system | |
Du et al. | DRGAN: a deep residual generative adversarial network for PET image reconstruction | |
CN116245969A (en) | Low-dose PET image reconstruction method based on deep neural network | |
Garehdaghi et al. | Positron emission tomography image enhancement using magnetic resonance images and U-net structure | |
US10347014B2 (en) | System and method for image reconstruction | |
CN111445406B (en) | Low-dose CT picture quality improvement method, system and equipment | |
Xie et al. | 3D few-view CT image reconstruction with deep learning | |
CN110580689A (en) | image reconstruction method and device | |
WO2022193276A1 (en) | Deep learning method for low dose estimation of medical image | |
Liu et al. | Low dose ct noise artifact reduction based on multi-scale weighted convolutional coding network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||