CN112017136A: Lung CT image parameter reconstruction method, system, terminal and storage medium based on deep learning

Info

Publication number
CN112017136A
Authority
CN
China
Prior art keywords
image
pulmonary
lung
branch
intra
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010783409.0A
Other languages
Chinese (zh)
Inventor
刘峰
周振
刘秋月
俞益洲
王亦洲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shenrui Bolian Technology Co Ltd
Shenzhen Deepwise Bolian Technology Co Ltd
Original Assignee
Beijing Shenrui Bolian Technology Co Ltd
Shenzhen Deepwise Bolian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shenrui Bolian Technology Co Ltd, Shenzhen Deepwise Bolian Technology Co Ltd filed Critical Beijing Shenrui Bolian Technology Co Ltd
Priority to CN202010783409.0A
Publication of CN112017136A
Legal status: Pending

Classifications

    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06T 7/0012: Biomedical image inspection
    • G06T 7/11: Region-based segmentation
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20212: Image combination
    • G06T 2207/30061: Lung

Abstract

The application provides a deep-learning-based lung CT image parameter reconstruction method, system, terminal and storage medium. The method comprises the following steps: inputting an acquired lung CT image into a preset progressive upsampling skeleton network model and outputting a feature image; inputting the feature image into a 3D convolutional neural network model composed of an intra-pulmonary branch, an extra-pulmonary branch and a lung mask branch; classifying each pixel of the feature image through the lung mask branch to determine the intra-pulmonary and extra-pulmonary parts and labeling them accordingly; inputting the intra-pulmonary and extra-pulmonary parts of the feature image into the intra-pulmonary and extra-pulmonary branches respectively for feature learning, generating an intra-pulmonary image and an extra-pulmonary image; and combining the images generated by the two branches to form a complete generated CT image. The method and the device thus use deep learning to convert a CT image between different scanning parameters.

Description

Lung CT image parameter reconstruction method, system, terminal and storage medium based on deep learning
Technical Field
The present application relates to the field of medical imaging and computer-aided technologies, and in particular, to a method, a system, a terminal, and a storage medium for reconstructing lung CT image parameters based on deep learning.
Background
Lung CT is currently the most important means of detecting lung diseases. In actual clinical application, doctors set the CT scanning parameters according to the situation at hand, and these parameters, such as the slice thickness and slice spacing, the reconstruction algorithm, and the scanning dose, directly affect the imaging quality and imaging time of the final CT.
Generally, a CT with a slice thickness of 2 mm or less is called thin-layer CT, and a CT with a slice thickness greater than 2 mm is called thick-layer CT. Thin-layer CT offers higher image resolution and clear observation of lesions in the coronal and sagittal planes, which improves doctors' diagnosis rate and reduces the number of false positives, so its use is recommended. In actual clinical practice, however, thick-layer CT is still widely used: on the one hand, because its data occupy less space, thick-layer CT is what is usually stored in a hospital's storage system; on the other hand, in some emergency situations the short time needed to generate thick-layer CT makes it more suitable for rapid diagnosis.
Different reconstruction algorithms affect the sharpness of the generated image and the image definition of different tissues and organs. Sharper convolution kernels (such as Siemens B70, I70 and the like) generate images with higher sharpness, which are suitable for observing the tissues and organs of the lung but introduce more noise. Images generated by relatively smooth convolution kernels (such as Siemens I30, B30 and the like) are smoother and more suitable for observing the mediastinal region.
At present, most work studies the CT reconstruction problem for a single parameter; the thick-layer CT reconstruction problem in particular has seen little related work, because it requires increasing the resolution of the D dimension of a 3D image. Thick-layer CT reconstruction is generally approached from the perspective of image super-resolution, and existing methods can be divided into traditional direct interpolation methods and deep learning algorithms. Traditional interpolation methods, such as bicubic and bilinear interpolation, have no requirement on the amount of data and are fast to compute, but the interpolated result differs substantially from a real thin-layer image and is blurry, so they are essentially inapplicable in practical scenarios. Deep learning algorithms can come very close to the real target image when a large amount of training data is available, but the existing methods are super-resolution algorithms directly transferred from natural images, and they do not make full use of the context information of the 3D image during upsampling and resolution enhancement. As a result, fine details of the lung CT image are generated poorly; for example, interlobar fissures and small blood vessels are essentially invisible in the generated thin-layer image, which seriously harms the doctor's reading experience.
Deep learning algorithms fall into two categories. The first is based on 2D models: the original 3D CT volume is fed into a convolutional neural network as 2D images viewed from the coronal or sagittal plane, and a super-resolution reconstruction method designed for natural images is directly transferred to improve the image resolution. This approach does not make full use of the 3D context information during upsampling and resolution enhancement, so fine details of the lung CT image are generated poorly; for example, interlobar fissures and small blood vessels are essentially invisible in the generated thin-layer image, seriously harming the doctor's reading experience. In addition, for the convolution kernel reconstruction problem of CT images, current work is likewise limited to 2D network models: a network structure suited to natural images is generally migrated directly to the 3D medical image, and the convolution kernel conversion is performed on each CT slice separately. This ignores the 3D context of the data, and no further optimization is made for the different data distributions inside and outside the lung in a lung CT. The second category is based on 3D models: either a 3D volume of the same size as the thin-layer CT is first obtained by interpolation and then sent to a convolutional neural network for further learning, or the thick-layer CT is sent directly to the network to obtain a 3D volume of the same size as the input, after which sub-pixel convolution or transposed convolution performs the upsampling to obtain a thin-layer CT of the target output size.
Due to the limitations of storage space and generation time, in an actual clinical scene a doctor selects only one set of parameters suited to the current requirements to generate a CT image. A fixed-parameter CT image, however, may not meet various subsequent requirements and may also hinder some retrospective studies.
Based on the above background, there is a need for a deep-learning-based lung CT image parameter reconstruction method, system, terminal and storage medium that can convert a CT image between different parameters as required, such as converting thick-layer CT into thin-layer CT, or converting a CT reconstructed with one convolution kernel into a CT reconstructed with another, thereby saving CT generation time and storage space, meeting doctors' different requirements, improving the diagnosis rate of the corresponding lesions, and avoiding missed diagnoses and misdiagnoses.
Disclosure of Invention
Aiming at the defects of the prior art, the present application provides a lung CT image parameter reconstruction method, system, terminal and storage medium based on deep learning, solving problems in the prior art such as single-parameter CT image reconstruction failing to meet requirements and CT images with different parameters being unable to be converted into one another.
In order to solve the above technical problem, in a first aspect, the present application provides a lung CT image parameter reconstruction method based on deep learning, including:
inputting the acquired lung CT image into a preset progressive upsampling skeleton network model, and outputting a feature image;
inputting the feature image into a 3D convolutional neural network model, wherein the 3D convolutional neural network model is a convolutional neural network model consisting of an intra-pulmonary branch, an extra-pulmonary branch and a lung mask branch;
classifying each pixel of the feature image through the lung mask branch, determining the intra-pulmonary part and the extra-pulmonary part, and labeling them accordingly;
inputting the intra-pulmonary part and the extra-pulmonary part of the feature image into the intra-pulmonary branch and the extra-pulmonary branch respectively for feature learning, generating an intra-pulmonary image and an extra-pulmonary image;
and combining the images generated by the intra-pulmonary branch and the extra-pulmonary branch to form a complete generated CT image.
Optionally, the inputting of the lung CT image into a preset progressive upsampling skeleton network model and outputting of a feature image includes:
inputting the lung CT image into an encoding module for feature extraction to obtain a feature image, and performing multiple convolutions and pooling operations on the feature image to realize downsampling, obtaining feature maps of different resolutions;
inputting the feature maps of different resolutions into a decoding module for upsampling to recover the position information of image pixels;
adding or concatenating the feature maps of the same scale from the corresponding encoding and decoding modules to obtain feature images that fuse features of different levels;
wherein the progressive upsampling skeleton network model comprises the encoding module, the decoding module and a skip connection module.
Optionally, the feature image is input into a 3D convolutional neural network model, where the 3D convolutional neural network model is a convolutional neural network model composed of an intra-pulmonary branch, an extra-pulmonary branch and a lung mask branch, and wherein:
the lung mask branch completes the lung segmentation task, and the intra-pulmonary branch, the extra-pulmonary branch and the lung mask branch are all composed of 3D convolutional layers with the same structure.
Optionally, the classifying of each pixel of the feature image through the lung mask branch, determining the intra-pulmonary and extra-pulmonary parts, and labeling them accordingly includes:
inputting the feature image into the lung mask branch of the convolutional neural network model, outputting the segmentation result of the intra-pulmonary and extra-pulmonary parts of the feature image, and labeling the classes;
calculating a loss value between the segmentation result of the feature image and the real lung segmentation gold standard using a cross-entropy loss function;
updating the model parameters of the progressive upsampling skeleton network and of the lung mask branch of the convolutional neural network model according to the obtained loss value;
and iteratively training the progressive upsampling skeleton network and the lung mask branch of the convolutional neural network model.
Optionally, the inputting of the intra-pulmonary part and the extra-pulmonary part of the feature image into the intra-pulmonary branch and the extra-pulmonary branch respectively for feature learning, generating an intra-pulmonary image and an extra-pulmonary image, includes:
inputting the extra-pulmonary part of the feature image into the extra-pulmonary branch of the convolutional neural network model to generate an extra-pulmonary image;
calculating a loss value between the extra-pulmonary image and the real target image using an L1 loss function;
back-propagating gradients and updating the model parameters of the progressive upsampling skeleton network and of the extra-pulmonary branch of the convolutional neural network model according to the obtained loss value;
performing a pixel-wise negation on the extra-pulmonary mask to select the intra-pulmonary part and generating an intra-pulmonary image;
calculating a loss value between the intra-pulmonary image and the real target image using a difference perception loss function;
and updating the model parameters of the progressive upsampling skeleton network and of the intra-pulmonary branch of the convolutional neural network model according to the obtained loss value.
Optionally, the calculating of a loss value between the intra-pulmonary image and the real target image using the difference perception loss function includes:
adding a weight W_{i,j,k} to each pixel of the intra-pulmonary image on the basis of the original L1 loss function;
calculating the loss value between the intra-pulmonary image and the real target image using the difference perception loss function;
wherein the weight is computed by a formula, rendered as an image in the original, of the form W_{i,j,k} = f(X̂, X; α, β, γ), X̂ representing the generated image, X the real target image, i, j, k the pixel index positions, and α, β and γ three hyper-parameters.
Optionally, the method further includes:
inputting the generated CT image into a preset CNN model pre-trained on the natural-image dataset ImageNet;
extracting the feature maps of the real target image and of the generated image from the last three layers of the CNN model;
calculating a loss value between the feature maps of the real target image and of the generated image using a feature matching loss function;
and back-propagating gradients and updating the model parameters of the progressive upsampling skeleton network model and the convolutional neural network model according to the obtained loss value.
In a second aspect, the present application further provides a system for reconstructing lung CT image parameters based on deep learning, including:
a sampling processing unit, configured to input the acquired lung CT image into a preset progressive upsampling skeleton network model and output a feature image;
a model input unit, configured to input the feature image into a 3D convolutional neural network model, the 3D convolutional neural network model being a convolutional neural network model consisting of an intra-pulmonary branch, an extra-pulmonary branch and a lung mask branch;
an image segmentation unit, configured to classify each pixel of the feature image through the lung mask branch, determine the intra-pulmonary and extra-pulmonary parts, and label them accordingly;
a branch learning unit, configured to input the intra-pulmonary part and the extra-pulmonary part of the feature image into the intra-pulmonary branch and the extra-pulmonary branch respectively for feature learning, generating an intra-pulmonary image and an extra-pulmonary image;
and an image combination unit, configured to combine the images generated by the intra-pulmonary branch and the extra-pulmonary branch to form a complete generated CT image.
Optionally, the sampling processing unit is specifically configured to:
input the lung CT image into an encoding module for feature extraction to obtain a feature image, and perform multiple convolutions and pooling operations on the feature image to realize downsampling, obtaining feature maps of different resolutions;
input the feature maps of different resolutions into a decoding module for upsampling to recover the position information of image pixels;
add or concatenate the feature maps of the same scale from the corresponding encoding and decoding modules to obtain feature images that fuse features of different levels;
wherein the progressive upsampling skeleton network model comprises the encoding module, the decoding module and a skip connection module.
Optionally, the model input unit is specifically configured such that:
the lung mask branch completes the lung segmentation task, and the intra-pulmonary branch, the extra-pulmonary branch and the lung mask branch are all composed of 3D convolutional layers with the same structure.
Optionally, the image segmentation unit is specifically configured to:
input the feature image into the lung mask branch of the convolutional neural network model, output the segmentation result of the intra-pulmonary and extra-pulmonary parts of the feature image, and label the classes;
calculate a loss value between the segmentation result of the feature image and the real lung segmentation gold standard using a cross-entropy loss function;
update the model parameters of the progressive upsampling skeleton network and of the lung mask branch of the convolutional neural network model according to the obtained loss value;
and iteratively train the progressive upsampling skeleton network and the lung mask branch of the convolutional neural network model.
Optionally, the branch learning unit is specifically configured to:
input the extra-pulmonary part of the feature image into the extra-pulmonary branch of the convolutional neural network model to generate an extra-pulmonary image;
calculate a loss value between the extra-pulmonary image and the real target image using an L1 loss function;
back-propagate gradients and update the model parameters of the progressive upsampling skeleton network and of the extra-pulmonary branch of the convolutional neural network model according to the obtained loss value;
perform a pixel-wise negation on the extra-pulmonary mask to select the intra-pulmonary part and generate an intra-pulmonary image;
calculate a loss value between the intra-pulmonary image and the real target image using a difference perception loss function;
and update the model parameters of the progressive upsampling skeleton network and of the intra-pulmonary branch of the convolutional neural network model according to the obtained loss value.
Optionally, the branch learning unit is further specifically configured to:
add a weight W_{i,j,k} to each pixel of the intra-pulmonary image on the basis of the original L1 loss function;
calculate the loss value between the intra-pulmonary image and the real target image using the difference perception loss function;
wherein the weight is computed by a formula, rendered as an image in the original, of the form W_{i,j,k} = f(X̂, X; α, β, γ), X̂ representing the generated image, X the real target image, i, j, k the pixel index positions, and α, β and γ three hyper-parameters.
Optionally, the system further includes a model parameter updating unit, specifically configured to:
input the generated CT image into a preset CNN model pre-trained on the natural-image dataset ImageNet;
extract the feature maps of the real target image and of the generated image from the last three layers of the CNN model;
calculate a loss value between the feature maps of the real target image and of the generated image using a feature matching loss function;
and back-propagate gradients and update the model parameters of the progressive upsampling skeleton network model and the convolutional neural network model according to the obtained loss value.
In a third aspect, the present application provides a terminal, comprising:
a processor and a memory, wherein
the memory is used for storing a computer program, and
the processor is used for calling and running the computer program from the memory, so as to cause the terminal to perform the method described above.
In a fourth aspect, the present application provides a computer storage medium having instructions stored thereon, which when executed on a computer, cause the computer to perform the method of the above aspects.
Compared with the prior art, the method has the following beneficial effects:
1. The deep-learning-based lung CT image parameter reconstruction method, system, terminal and storage medium of the present application can be applied to the thick-layer CT reconstruction problem, CT reconstruction problems for different convolution kernels, CT reconstruction problems for different doses, and the like, and are not limited to the applications given herein; only corresponding training data are needed to train the model for reconstruction tasks with different parameters.
2. The present application designs a 3D skeleton network supporting progressive upsampling on the basis of the original 2D U-Net network. For the convolution kernel reconstruction task, the skeleton network makes full use of the 3D context information and the multi-scale features of the input data, using the encoding and decoding processes together with a skip connection structure to perform feature extraction and feature fusion at different levels. For the thick-layer CT reconstruction problem, instead of the pre- or post-upsampling modes of the prior art, the upsampling process is integrated into the overall skeleton network, which effectively handles the differing D-dimension resolutions of the input and output images in the thick-layer CT reconstruction task, improves the resolution of the generated images, saves GPU memory consumption and computation time, and improves the efficiency of applying the network model in actual clinical scenarios.
3. The present application proposes a multi-branch framework structure on the basis of the skeleton network, introduces a lung segmentation task, learns the intra-pulmonary and extra-pulmonary regions separately, and finally combines the images generated by the two branches into a complete CT image. This further reduces the difficulty of network learning and lets the network fit the real image more easily according to the respective distribution characteristics inside and outside the lung; meanwhile, the introduction of the lung mask branch solves not only the lung CT image parameter reconstruction problem but also the lung segmentation problem for the correspondingly generated CT image.
4. In terms of generation effect, existing methods rarely focus on generating the fine details of the lung, such as interlobar fissures and small blood vessels, which are essentially invisible in images produced by models trained with existing methods. To solve this problem, a difference perception loss function and a feature matching loss function are proposed. During training of the multi-branch framework, the difference perception loss function exploits the differing HU values of different parts to raise the weight of the parts with small original HU values, effectively improving the generation of fine lung-texture details; the feature matching loss function pulls the generated image and the real image together at the feature level, reducing the interference of the heavy noise present in CT images with network learning, visually improving the quality of the generated image, and enabling doctors to use the generated CT image for reading and diagnosis. Compared with previous methods, models trained with the network framework proposed in this application are significantly better both visually and in quantitative metrics.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only embodiments of the present application, and that those skilled in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a flowchart of a lung CT image parameter reconstruction method based on deep learning according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a deep learning-based lung CT image parameter reconstruction network model according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a deep learning-based lung CT image parameter reconstruction system according to another embodiment of the present application;
fig. 4 is a schematic structural diagram of a terminal system according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are some, but not all, embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for reconstructing parameters of a lung CT image based on deep learning according to an embodiment of the present application, where the method 100 includes:
S101: inputting the acquired lung CT image into a preset progressive upsampling skeleton network model, and outputting a feature image;
S102: inputting the feature image into a 3D convolutional neural network model, wherein the 3D convolutional neural network model is a convolutional neural network model consisting of an intra-pulmonary branch, an extra-pulmonary branch and a lung mask branch;
S103: classifying each pixel of the feature image through the lung mask branch, determining the intra-pulmonary part and the extra-pulmonary part, and labeling them accordingly;
S104: inputting the intra-pulmonary part and the extra-pulmonary part of the feature image into the intra-pulmonary branch and the extra-pulmonary branch respectively for feature learning, generating an intra-pulmonary image and an extra-pulmonary image;
S105: combining the images generated by the intra-pulmonary branch and the extra-pulmonary branch to form a complete generated CT image.
Specifically, as shown in fig. 2, the present application is a multi-branch lung CT parameter reconstruction algorithm built on a progressive upsampling skeleton network. The progressive upsampling skeleton network serves as a shared-weight front end, and its rear part splits into three different branches corresponding to the intra-pulmonary branch, the extra-pulmonary branch and the lung mask branch; each branch is composed of 3D convolutional layers with the same structure, and the CT image finally generated by the model is the combination of the results of the three branches. Different branches learn the different regions inside and outside the lung, which reduces the learning difficulty of the network; this can be understood as the network learning the general features of the whole image before the branch point, and the detailed features specific to each region after the features enter the branches.
The applicant found that the HU value distributions inside and outside the human lung differ markedly, but previous methods exploit this characteristic neither in the thick-layer CT reconstruction task nor in the convolution kernel reconstruction task: learning inside and outside the lung is not separated, which hurts the fitting of each region. In terms of generation effect, previous methods rarely focus on generating the fine details of the lung, such as interlobar fissures and small blood vessels, which are essentially invisible in images produced by models trained with existing methods. Owing to the above design, the model proposed in this application is significantly better than previous methods both visually and in quantitative metrics.
The deep learning technique is thus used to convert a CT image between different parameters: in the thick-layer CT reconstruction task, the network model can increase the number of slices in the D dimension of a D × W × H 3D volume, while in the convolution kernel reconstruction task it can convert the sharpness of the image while keeping the resolution unchanged.
Based on the foregoing embodiment, as an optional embodiment, the step S101 of inputting the lung CT image into a preset progressive upsampling skeleton network model and outputting a feature image includes:
inputting the lung CT image into an encoding module for feature extraction to obtain a feature image, and performing multiple convolutions and pooling operations on the feature image to realize downsampling, obtaining feature maps of different resolutions;
inputting the feature maps of different resolutions into a decoding module for upsampling to recover the position information of image pixels;
adding or concatenating the feature maps of the same scale from the corresponding encoding and decoding modules to obtain feature images that fuse features of different levels;
wherein the progressive upsampling skeleton network model comprises the encoding module, the decoding module and a skip connection module.
Specifically, the three dimensions of the input and output images of the 3D skeleton network supporting progressive upsampling are labeled D, W and H in order. The overall structure is a symmetrical U shape divided into an encoding part and a decoding part, where the encoding part performs downsampling and the decoding part performs upsampling. The decoding module comprises several sequentially connected upsampling units, each consisting of a convolutional layer and a nonlinear activation layer connected in sequence, the convolutional layer adjusting the feature dimension of the input; the encoding module comprises several downsampling units, each consisting of several sequentially connected convolutional layers and nonlinear activation layers. In the encoding part, the input image is downsampled 5 times in total to fully capture features of different levels and 3D context information; we denote the resulting encoder feature maps f'1 to f'5 (the subscripted notation is rendered as images in the original). In the decoding part, upsampling is likewise performed 5 times, symmetrically to the downsampling, and the resulting decoder feature maps are denoted f1 to f5. The 3D skeleton network supporting progressive upsampling is an improvement on the original 2D U-Net structure; the improved network better fuses features of different levels with 3D context information and can meet the need of increasing image resolution in the thick-layer CT reconstruction task. In addition, unlike the skip connections of the original U-Net, the skip connection at each layer first applies a transposed convolution to the corresponding encoder feature map f'i and then concatenates the result with the decoder feature map fi+1; since the concatenation operation requires feature maps of the same size, this ensures that feature maps from different layers match in size.
It should be noted that for the thick-layer CT reconstruction task, the downsampling rates differ across the three dimensions: W and H are downsampled 32 times overall, while the D dimension is downsampled only 2 times, in order to keep as much spatial information as possible; 2-times downsampling already guarantees a sufficient receptive field. During upsampling, the upsampling rate of the D dimension is larger than its downsampling rate, a design that ultimately produces the difference between the input and output D dimensions. For the convolution kernel reconstruction task, the network keeps the same upsampling and downsampling rates so that the resolutions of the input and output images are unchanged. The skeleton network obtains different output sizes by modifying the convolution kernel size or stride in the network, so it can be adapted to reconstruction tasks with different requirements.
The progressive upsampling skeleton network provided in this application can use different numbers of downsampling and upsampling steps and different upsampling and downsampling rates. For the thick-layer CT reconstruction task, the protected strategy is to fuse the resolution increase into the original U-Net network, i.e., to progressively enlarge the D dimension of the 3D volume inside the U-shaped structure of the network by means of skip connections; for the convolution kernel reconstruction task, the structure effectively exploits the 3D context information and improves the model's effect.
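The following PyTorch sketch illustrates this design under stated assumptions: a two-stage encoder-decoder instead of the five stages described above, illustrative channel widths, and a thick-to-thin-layer setting in which W and H return to their input size while the output D dimension is twice that of the input. The patent provides no reference implementation, so all module and variable names are hypothetical.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3D convolution + ReLU pairs, as in one U-Net stage.
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv3d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class ProgressiveUpsamplingUNet3D(nn.Module):
    """Toy 2-stage version: W/H are downsampled 4x overall but D only 2x;
    the decoder upsamples D by 4x overall, so the output has twice as many
    slices as the input (thick-layer -> thin-layer)."""
    def __init__(self, c=16):
        super().__init__()
        self.enc1 = conv_block(1, c)
        self.down1 = nn.Conv3d(c, 2 * c, 3, stride=(1, 2, 2), padding=1)      # halve W/H only
        self.enc2 = conv_block(2 * c, 2 * c)
        self.down2 = nn.Conv3d(2 * c, 4 * c, 3, stride=(2, 2, 2), padding=1)  # halve D, W, H
        self.bottleneck = conv_block(4 * c, 4 * c)
        self.up2 = nn.ConvTranspose3d(4 * c, 2 * c, 2, stride=(2, 2, 2))      # back to enc2 size
        self.dec2 = conv_block(4 * c, 2 * c)
        self.up1 = nn.ConvTranspose3d(2 * c, c, 2, stride=(2, 2, 2))          # D grows past input
        # Skip connection: a transposed convolution doubles the D dimension of
        # the encoder features so they can be concatenated with the (now
        # larger) decoder features, as described above.
        self.skip1 = nn.ConvTranspose3d(c, c, (2, 1, 1), stride=(2, 1, 1))
        self.dec1 = conv_block(2 * c, c)
        self.head = nn.Conv3d(c, 1, 1)

    def forward(self, x):                        # x: (B, 1, D, W, H)
        e1 = self.enc1(x)                        # (B, c, D, W, H)
        e2 = self.enc2(self.down1(e1))           # (B, 2c, D, W/2, H/2)
        b = self.bottleneck(self.down2(e2))      # (B, 4c, D/2, W/4, H/4)
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))                # (B, 2c, D, W/2, H/2)
        d1 = self.dec1(torch.cat([self.up1(d2), self.skip1(e1)], dim=1))   # (B, c, 2D, W, H)
        return self.head(d1)                     # (B, 1, 2D, W, H): twice the slices

model = ProgressiveUpsamplingUNet3D()
x = torch.randn(1, 1, 8, 64, 64)
print(model(x).shape)  # torch.Size([1, 1, 16, 64, 64])
```

For the convolution kernel reconstruction task, the same structure would simply use symmetric strides so that input and output sizes coincide.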
Based on the foregoing embodiment, as an optional embodiment, the step S102 of inputting the feature image into the 3D convolutional neural network model, where the 3D convolutional neural network model is a convolutional neural network model composed of an intra-pulmonary branch, an extra-pulmonary branch and a lung mask branch, includes:
the lung mask branch completing the lung segmentation task, with the intra-pulmonary branch, the extra-pulmonary branch and the lung mask branch all composed of 3D convolutional layers with the same structure.
Specifically, the lung mask branch completes the lung segmentation task by classifying each pixel: intra-pulmonary pixels are labeled 1 and extra-pulmonary pixels 0, and the result is recorded as M. The intra-pulmonary branch learns only the intra-pulmonary part, recorded as Xi; the extra-pulmonary branch learns only the extra-pulmonary part, recorded as Xo. The CT image finally generated by the model is the combination of the three branch results, which can be written as M ⊙ Xi + (1 - M) ⊙ Xo, where ⊙ denotes element-wise (pixel-by-pixel) multiplication. In addition, the three branches are trained with different loss functions.
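A minimal sketch of this combination, assuming the mask branch outputs 2-channel logits and the hypothetical tensors x_in and x_out are the intra- and extra-pulmonary branch outputs:

```python
import torch

def combine_branches(mask_logits, x_in, x_out):
    # M: 1 inside the lung, 0 outside (argmax over the 2 mask channels).
    m = mask_logits.argmax(dim=1, keepdim=True).float()
    # Element-wise combination M*Xi + (1-M)*Xo.
    return m * x_in + (1.0 - m) * x_out
```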
Based on the foregoing embodiment, as an optional embodiment, the step S103 of classifying each pixel of the feature image through the lung mask branch, determining the intra-pulmonary and extra-pulmonary parts, and labeling them accordingly includes:
inputting the feature image into the lung mask branch of the convolutional neural network model, outputting the segmentation result of the intra-pulmonary and extra-pulmonary parts of the feature image, and labeling the classes;
calculating a loss value between the segmentation result of the feature image and the real lung segmentation gold standard using a cross-entropy loss function;
updating the model parameters of the progressive upsampling skeleton network and of the lung mask branch of the convolutional neural network model according to the obtained loss value;
and iteratively training the progressive upsampling skeleton network and the lung mask branch of the convolutional neural network model.
Specifically, since the lung mask branch can be regarded as a segmentation task, the cross-entropy loss function commonly used for segmentation is adopted directly: a softmax operation is applied to the 2 channels output by the branch, and the loss value is then computed against the real lung segmentation gold standard.
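A minimal sketch of this mask-branch loss, assuming mask_logits of shape (B, 2, D, W, H) and a gold-standard label volume lung_gt of shape (B, D, W, H) with values 0 (extra-pulmonary) and 1 (intra-pulmonary):

```python
import torch.nn.functional as F

def mask_branch_loss(mask_logits, lung_gt):
    # F.cross_entropy applies log-softmax over the 2 channels internally,
    # matching the softmax + cross-entropy computation described above.
    return F.cross_entropy(mask_logits, lung_gt.long())
```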
Based on the foregoing embodiment, as an optional embodiment, the step S104 of inputting the intra-pulmonary part and the extra-pulmonary part of the feature image into the intra-pulmonary branch and the extra-pulmonary branch respectively for feature learning, generating an intra-pulmonary image and an extra-pulmonary image, includes:
inputting the extra-pulmonary part of the feature image into the extra-pulmonary branch of the convolutional neural network model to generate an extra-pulmonary image;
calculating a loss value between the extra-pulmonary image and the real target image using an L1 loss function;
back-propagating gradients and updating the model parameters of the progressive upsampling skeleton network and of the extra-pulmonary branch of the convolutional neural network model according to the obtained loss value;
performing a pixel-wise negation on the extra-pulmonary mask to select the intra-pulmonary part and generating an intra-pulmonary image;
calculating a loss value between the intra-pulmonary image and the real target image using the difference perception loss function;
and updating the model parameters of the progressive upsampling skeleton network and of the intra-pulmonary branch of the convolutional neural network model according to the obtained loss value.
It should be noted that the real target image here is a CT image acquired with the CT apparatus set to the target parameters.
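A sketch of the extra-pulmonary L1 loss under the same assumed tensor names: x_out is the extra-pulmonary branch output, target the real target image, and m the lung mask (1 inside the lung), so (1 - m) restricts the loss to extra-pulmonary pixels:

```python
def extrapulmonary_loss(x_out, target, m, eps=1e-8):
    # L1 error on extra-pulmonary pixels only; intra-pulmonary pixels are masked out.
    diff = (1.0 - m) * (x_out - target).abs()
    return diff.sum() / ((1.0 - m).sum() + eps)  # mean over extra-pulmonary pixels
```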
Based on the foregoing embodiment, as an optional embodiment, the calculating in S104 of a loss value between the intra-pulmonary image and the real target image using the difference perception loss function includes:
adding a weight W_{i,j,k} to each pixel of the intra-pulmonary image on the basis of the original L1 loss function;
calculating the loss value between the intra-pulmonary image and the real target image using the difference perception loss function;
wherein the weight is computed by a formula, rendered as an image in the original, of the form W_{i,j,k} = f(X̂, X; α, β, γ), X̂ representing the generated image, X the real target image, i, j, k the pixel index positions, and α, β and γ three hyper-parameters.
Specifically, the application finds that the HU value distributions inside and outside the lung are not the same; the loss values are therefore computed separately for the extra-pulmonary and intra-pulmonary branches:
For the extra-pulmonary branch, the intra-pulmonary part is masked out using the lung mask information, the L1 loss value is computed on the extra-pulmonary part, and gradients are then back-propagated.
For the intra-pulmonary branch, details inside the lung such as interlobar fissures and small blood vessels are generated poorly. Statistics show that these parts have small HU values and account for a small proportion of the pixels inside the lung. A difference perception loss function is therefore proposed for the intra-pulmonary part, adding a weight to each pixel on top of the original L1 loss value. The purpose of this weighting is to increase attention to locations in the lung with smaller HU values, raising the proportion of the loss contributed by these parts under the same L1 error. The per-pixel weight is computed by a formula (rendered as an image in the original) of the form W_{i,j,k} = f(X̂, X; α, β, γ), wherein X̂ represents the generated image, X the real target image, i, j, k the pixel index positions, and α, β and γ three hyper-parameters controlling how strongly HU value differences influence the weights; the values of α, β and γ are set to 0.5, 5 and 0.5 respectively in this application.
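The exact weight formula is rendered as an image in the source text and cannot be recovered here, so the sketch below substitutes an assumed weighting that merely illustrates the stated intent (pixels with small target intensities, such as fissures and small vessels, receive larger weights), with alpha, beta and gamma playing the roles described above:

```python
import torch

def difference_perception_loss(x_in_pred, target, m, alpha=0.5, beta=5.0, gamma=0.5, eps=1e-8):
    # Hypothetical stand-in weight, NOT the patent's formula: assuming target
    # intensities normalized to roughly [0, 1], low-intensity pixels get
    # weights up to about (1 + alpha * beta); others stay near 1.
    w = 1.0 + alpha * beta * torch.sigmoid(-(target - gamma))
    diff = m * w * (x_in_pred - target).abs()  # weighted L1, intra-pulmonary only
    return diff.sum() / (m.sum() + eps)
```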
Based on the foregoing embodiment, as an optional embodiment, the method 100 further includes:
inputting the generated CT image into a preset CNN model pre-trained on the natural-image dataset ImageNet;
extracting the feature maps of the real target image and of the generated image from the last three layers of the CNN model;
calculating a loss value between the feature maps of the real target image and of the generated image using a feature matching loss function;
and back-propagating gradients and updating the model parameters of the progressive upsampling skeleton network model and the convolutional neural network model according to the obtained loss value.
Specifically, in addition to the loss function of each branch, a feature matching loss function is applied to the finally combined generated image to reduce the influence of noise on network learning. Real target images generally contain substantial noise, and this unwanted noise should not be over-attended to during network learning. The feature matching loss function uses a CNN model pre-trained on the natural-image dataset ImageNet: the feature maps of the real target image and of the generated image are extracted from the last three layers of the CNN model, and the sum of the squared errors of the three pairs of feature maps is used as the loss value for gradient back-propagation. This loss function pulls the generated image and the real image together at the feature level, ignores the influence of noise in the images, and makes the network focus more on learning the image content.
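A hedged sketch of this feature matching loss: the patent does not name the pre-trained CNN, so an ImageNet-pretrained VGG16 from torchvision is used as a stand-in, and each axial slice of the 3D volume is replicated to three channels so a 2D ImageNet model can consume it (both choices are assumptions):

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen loss network (newer torchvision: vgg16(weights="IMAGENET1K_V1")).
_vgg = vgg16(pretrained=True).features.eval()
for p in _vgg.parameters():
    p.requires_grad_(False)  # the loss network itself is not trained

def feature_matching_loss(generated, target, taps=(16, 23, 30)):
    # generated/target: (B, 1, D, W, H); fold D into the batch dimension and
    # repeat the single channel to 3 so the 2D ImageNet model accepts it.
    def as_2d(v):
        b, _, d, w, h = v.shape
        return v.permute(0, 2, 1, 3, 4).reshape(b * d, 1, w, h).repeat(1, 3, 1, 1)
    g, t = as_2d(generated), as_2d(target)
    loss = generated.new_zeros(())
    for i, layer in enumerate(_vgg):
        g, t = layer(g), layer(t)
        if i in taps:  # stand-ins for the "last three layers" described above
            loss = loss + F.mse_loss(g, t)  # squared error between feature maps
    return loss
```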
Referring to fig. 3, fig. 3 is a schematic structural diagram of a deep learning-based lung CT image parameter reconstruction system according to an embodiment of the present application, where the system 300 includes:
a sampling processing unit 301, configured to input the acquired lung CT image into a preset progressive upsampling skeleton network model and output a feature image;
a model input unit 302, configured to input the feature image into a 3D convolutional neural network model, the 3D convolutional neural network model being a convolutional neural network model composed of an intra-pulmonary branch, an extra-pulmonary branch and a lung mask branch;
an image segmentation unit 303, configured to classify each pixel of the feature image through the lung mask branch, determine the intra-pulmonary and extra-pulmonary parts, and label them accordingly;
a branch learning unit 304, configured to input the intra-pulmonary part and the extra-pulmonary part of the feature image into the intra-pulmonary branch and the extra-pulmonary branch respectively for feature learning, generating an intra-pulmonary image and an extra-pulmonary image;
and an image combining unit 305, configured to combine the images generated by the intra-pulmonary branch and the extra-pulmonary branch to form a complete generated CT image.
Based on the foregoing embodiment, as an optional embodiment, the sampling processing unit 301 is specifically configured to:
input the lung CT image into an encoding module for feature extraction to obtain a feature image, and perform multiple convolutions and pooling operations on the feature image to realize downsampling, obtaining feature maps of different resolutions;
input the feature maps of different resolutions into a decoding module for upsampling to recover the position information of image pixels;
add or concatenate the feature maps of the same scale from the corresponding encoding and decoding modules to obtain feature images that fuse features of different levels;
wherein the progressive upsampling skeleton network model comprises the encoding module, the decoding module and a skip connection module.
Based on the foregoing embodiment, as an optional embodiment, the model input unit 302 is specifically configured such that:
the lung mask branch completes the lung segmentation task, and the intra-pulmonary branch, the extra-pulmonary branch and the lung mask branch are all composed of 3D convolutional layers with the same structure.
Based on the foregoing embodiment, as an optional embodiment, the image segmentation unit 303 is specifically configured to:
input the feature image into the lung mask branch of the convolutional neural network model, output the segmentation result of the intra-pulmonary and extra-pulmonary parts of the feature image, and label the classes;
calculate a loss value between the segmentation result of the feature image and the real lung segmentation gold standard using a cross-entropy loss function;
update the model parameters of the progressive upsampling skeleton network and of the lung mask branch of the convolutional neural network model according to the obtained loss value;
and iteratively train the progressive upsampling skeleton network and the lung mask branch of the convolutional neural network model.
Based on the foregoing embodiment, as an optional embodiment, the branch learning unit 304 is specifically configured to:
input the extra-pulmonary part of the feature image into the extra-pulmonary branch of the convolutional neural network model to generate an extra-pulmonary image;
calculate a loss value between the extra-pulmonary image and the real target image using an L1 loss function;
back-propagate gradients and update the model parameters of the progressive upsampling skeleton network and of the extra-pulmonary branch of the convolutional neural network model according to the obtained loss value;
perform a pixel-wise negation on the extra-pulmonary mask to select the intra-pulmonary part and generate an intra-pulmonary image;
calculate a loss value between the intra-pulmonary image and the real target image using the difference perception loss function;
and update the model parameters of the progressive upsampling skeleton network and of the intra-pulmonary branch of the convolutional neural network model according to the obtained loss value.
Based on the foregoing embodiment, as an optional embodiment, the branch learning unit 304 is further specifically configured to:
add a weight W_{i,j,k} to each pixel of the intra-pulmonary image on the basis of the original L1 loss function;
calculate the loss value between the intra-pulmonary image and the real target image using the difference perception loss function;
wherein the weight is computed by a formula, rendered as an image in the original, of the form W_{i,j,k} = f(X̂, X; α, β, γ), X̂ representing the generated image, X the real target image, i, j, k the pixel index positions, and α, β and γ three hyper-parameters.
Based on the foregoing embodiment, as an optional embodiment, the system 300 further includes a model parameter updating unit, specifically configured to:
input the generated CT image into a preset CNN model pre-trained on the natural-image dataset ImageNet;
extract the feature maps of the real target image and of the generated image from the last three layers of the CNN model;
calculate a loss value between the feature maps of the real target image and of the generated image using a feature matching loss function;
and back-propagate gradients and update the model parameters of the progressive upsampling skeleton network model and the convolutional neural network model according to the obtained loss value.
It should be noted that the progressive upsampling skeleton network and the convolutional neural network form the front and rear parts of the model, and the whole model is trained end to end through the feature matching loss function.
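To make the end-to-end training concrete, the sketch below combines the four losses from the earlier sketches in one training step; the model interface, the use of the gold-standard mask during training, and the equal loss weights are all illustrative assumptions:

```python
import torch

def train_step(model, optimizer, thick_ct, target_ct, lung_gt, weights=(1.0, 1.0, 1.0, 1.0)):
    # Assumed model interface: skeleton network plus three branches in one module.
    mask_logits, x_in, x_out = model(thick_ct)
    m = lung_gt.unsqueeze(1).float()          # gold-standard mask used during training
    generated = m * x_in + (1.0 - m) * x_out  # M*Xi + (1-M)*Xo
    loss = (weights[0] * mask_branch_loss(mask_logits, lung_gt)
            + weights[1] * extrapulmonary_loss(x_out, target_ct, m)
            + weights[2] * difference_perception_loss(x_in, target_ct, m)
            + weights[3] * feature_matching_loss(generated, target_ct))
    optimizer.zero_grad()
    loss.backward()   # gradient back-propagation through both network parts
    optimizer.step()  # update skeleton-network and branch parameters together
    return loss.item()
```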
Referring to fig. 4, fig. 4 is a schematic structural diagram of a terminal system 400 according to an embodiment of the present disclosure, where the terminal system 400 may be used to execute a method for reconstructing lung CT image parameters based on deep learning according to an embodiment of the present disclosure.
The terminal system 400 may include a processor 401, a memory 402 and a communication unit 403. The components communicate via one or more buses; those skilled in the art will appreciate that the server structure shown in the figure does not limit the present application: it may be a bus architecture or a star architecture, and may include more or fewer components than shown, combine certain components, or arrange components differently.
The memory 402 may be used to store instructions executed by the processor 401. The memory 402 may be implemented by any type of volatile or non-volatile storage terminal or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk. When the execution instructions in the memory 402 are executed by the processor 401, the terminal system 400 is enabled to perform some or all of the steps in the method embodiments described above.
The processor 401 is the control center of the storage terminal; it connects the various parts of the entire electronic terminal using various interfaces and lines, and performs the various functions of the electronic terminal and/or processes data by running or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory. The processor may be composed of integrated circuits (ICs), for example a single packaged IC, or multiple packaged ICs with the same or different functions connected together. For example, the processor 401 may include only a central processing unit (CPU). In the embodiments of the present invention, the CPU may have a single computing core or include multiple computing cores.
The communication unit 403 is configured to establish a communication channel so that the storage terminal can communicate with other terminals, receiving user data sent by other terminals or sending user data to other terminals.
The present application also provides a computer storage medium, which may store a program that, when executed, may include some or all of the steps of the embodiments provided by the present invention. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM) or a random access memory (RAM).
1. The lung CT image parameter reconstruction method, the system, the terminal and the storage medium based on deep learning can be applied to the thick-layer CT reconstruction problem, the CT reconstruction problems of different convolution kernels and the CT reconstruction problems of different doses, and the like, are not limited to the application given herein, and only corresponding training data are needed to perform model training when reconstruction tasks of different parameters are realized.
2. The method designs a 3D framework network supporting progressive upsampling on the basis of the original 2D U-Net network. Aiming at the convolution kernel reconstruction task, the framework network fully utilizes 3D context information and features of different scales of input data to carry out feature extraction and feature fusion of different levels by utilizing the encoding and decoding processes through a jump connection structure. Aiming at the problem of thick-layer CT reconstruction, a pre-or post-upsampling mode different from the prior art is adopted, and an upsampling process is integrated into an integral skeleton network, so that the problem of different D dimension resolutions of input and output images in a thick-layer CT reconstruction task is effectively solved, the resolution of the generated images is improved, the GPU video memory consumption and the calculation time are saved, and the application efficiency of a network model in an actual clinical scene is improved.
3. The method provides a multi-branch frame structure on the basis of a skeleton network, introduces a lung segmentation task, respectively learns the inside and outside of the lung, finally combines generated images of the two branches to form a complete CT image, further reduces the difficulty of network learning, and enables the network to more easily fit a real image according to respective distribution characteristics inside and outside the lung; meanwhile, due to the introduction of the lung mask branch, the problem of lung CT image parameter reconstruction is solved, and the problem of lung segmentation of a correspondingly generated target CT image is also solved.
4. In terms of generation quality, existing methods rarely attend to fine lung details such as lobar fissures and small vessels, which models trained with those methods can barely render; to address this, a difference-aware loss function and a feature-matching loss function are provided. During training of the multi-branch structure, the difference-aware loss exploits the different HU values of different regions to raise the weight of regions with small original HU values, effectively improving the rendering of fine lung-texture details. The feature-matching loss pulls the generated image toward the real image at the feature level, reducing the interference of the heavy noise in CT images on network learning and visibly improving the quality of the generated image, so that physicians can read and diagnose with the generated CT images. Compared with prior methods, models trained with the proposed network framework improve markedly in both visual and quantitative terms.
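Taken together, training combines the extra-pulmonary L1 loss, the intra-pulmonary difference-aware loss, the feature-matching loss and the mask cross-entropy. The sketch below only illustrates such a combination; the weighting coefficients are assumptions, since the disclosure does not state them:

    # Assumed combined objective; the lambda weights are illustrative only.
    def total_loss(l1_extra, diff_aware_intra, feat_match, ce_mask,
                   lam_fm=1.0, lam_mask=1.0):
        return l1_extra + diff_aware_intra + lam_fm * feat_match + lam_mask * ce_mask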
The embodiments in this specification are described progressively; each embodiment focuses on its differences from the others, and the same or similar parts can be cross-referenced between embodiments. The description of the system embodiment is relatively brief because it corresponds to the method embodiment; for relevant details, refer to the description of the method.
The principles and embodiments of the present application are explained herein using specific examples, which are provided only to help understand the method and its core idea. It should be noted that those skilled in the art can make several improvements and modifications to the present application without departing from its principles, and such improvements and modifications also fall within the scope of the claims of the present application.
It should also be noted that, in this specification, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between those entities or actions. Moreover, the terms "comprises", "comprising" and any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or apparatus that comprises a list of elements includes not only those elements but possibly also other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.

Claims (10)

1. A lung CT image parameter reconstruction method based on deep learning, characterized by comprising the following steps:
inputting the acquired lung CT image into a preset progressive-upsampling skeleton network model and outputting a feature image;
inputting the feature image into a 3D convolutional neural network model, the 3D convolutional neural network model being composed of an intra-pulmonary branch, an extra-pulmonary branch and a lung mask branch;
classifying each pixel of the feature image through the lung mask branch, determining the intra-pulmonary portion and the extra-pulmonary portion, and labeling them by class;
inputting the intra-pulmonary portion and the extra-pulmonary portion of the feature image into the intra-pulmonary branch and the extra-pulmonary branch respectively for feature learning, to generate an intra-pulmonary image and an extra-pulmonary image; and
combining the images generated by the intra-pulmonary branch and the extra-pulmonary branch to form a complete generated CT image.
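Read as a forward pass, the steps of claim 1 could be sketched as follows; this is an illustrative reading only, and every module name, shape and threshold below is an assumption rather than the patent's implementation:

    # Illustrative forward pass for the steps of claim 1 (all names assumed).
    def reconstruct(ct_image, skeleton, mask_branch, intra_branch, extra_branch):
        feats = skeleton(ct_image)                     # progressive-upsampling skeleton
        # per-pixel in/out-of-lung classification (hard 0.5 threshold assumed)
        lung_mask = (mask_branch(feats).sigmoid() > 0.5).float()
        intra = intra_branch(feats * lung_mask)        # intra-pulmonary appearance
        extra = extra_branch(feats * (1 - lung_mask))  # extra-pulmonary appearance
        return lung_mask * intra + (1 - lung_mask) * extra  # complete generated CT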
2. The lung CT image parameter reconstruction method based on deep learning according to claim 1, wherein inputting the lung CT image into a preset progressive-upsampling skeleton network model and outputting a feature image comprises:
inputting the lung CT image into an encoding module for feature extraction to obtain a feature image, and downsampling the feature image through multiple convolution and pooling operations to obtain feature maps of different resolutions;
inputting the feature maps of different resolutions into a decoding module for upsampling to recover the position information of the image pixels; and
adding or concatenating the same-scale feature maps of the encoding module and the decoding module to obtain feature images that fuse features of different levels;
wherein the progressive-upsampling skeleton network model comprises the encoding module, the decoding module and a skip-connection module.
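The "adding or concatenating" fusion of same-scale features in this claim admits a one-line sketch per variant (tensor shapes assumed; the claim mandates neither variant over the other):

    # Skip-connection fusion of same-scale encoder/decoder features (a sketch).
    import torch

    def fuse(enc_feat, dec_feat, mode='add'):
        if mode == 'add':                                  # element-wise addition
            return enc_feat + dec_feat
        return torch.cat([enc_feat, dec_feat], dim=1)      # channel concatenation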
3. The lung CT image parameter reconstruction method based on deep learning according to claim 1, wherein the 3D convolutional neural network model composed of an intra-pulmonary branch, an extra-pulmonary branch and a lung mask branch is such that:
the lung mask branch performs the lung segmentation task, and the intra-pulmonary branch, the extra-pulmonary branch and the lung mask branch are all composed of 3D convolutional layers with the same structure.
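Three branches "composed of 3D convolutional layers with the same structure" could be instantiated as repeated copies of one small head; the layer sizes here are assumptions for illustration:

    # Three structurally identical 3D convolutional branch heads (sizes assumed).
    import torch.nn as nn

    def make_branch(cin=16, cout=1):
        return nn.Sequential(
            nn.Conv3d(cin, cin, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(cin, cout, kernel_size=1),
        )

    intra_branch, extra_branch, mask_branch = (make_branch() for _ in range(3))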
4. The lung CT image parameter reconstruction method based on deep learning according to claim 1, wherein classifying each pixel of the feature image through the lung mask branch, determining the intra-pulmonary portion and the extra-pulmonary portion and labeling them by class comprises:
inputting the feature image into the lung mask branch of the convolutional neural network model, outputting segmentation results for the intra-pulmonary and extra-pulmonary portions of the feature image, and labeling them by class;
calculating a loss value between the segmentation result of the feature image and a real lung segmentation gold standard using a cross-entropy loss function;
updating the model parameters of the progressive-upsampling skeleton network and the lung mask branch of the convolutional neural network model according to the obtained loss value; and
iteratively training the progressive-upsampling skeleton network and the lung mask branch of the convolutional neural network model.
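One training step of this claim might look as follows, with binary cross-entropy standing in for the two-class (in/out of lung) cross-entropy; optimizer construction and data loading are omitted, and all names are assumptions:

    # One mask-branch training step (binary CE assumed for the 2-class case).
    import torch.nn.functional as F

    def mask_step(skeleton, mask_branch, ct, gold_mask, optimizer):
        logits = mask_branch(skeleton(ct))            # per-pixel class scores
        loss = F.binary_cross_entropy_with_logits(logits, gold_mask)
        optimizer.zero_grad()
        loss.backward()                               # updates skeleton + mask branch
        optimizer.step()
        return loss.item()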
5. The lung CT image parameter reconstruction method based on deep learning according to claim 1, wherein inputting the intra-pulmonary portion and the extra-pulmonary portion of the feature image into the intra-pulmonary branch and the extra-pulmonary branch respectively for feature learning to generate an intra-pulmonary image and an extra-pulmonary image comprises:
inputting the extra-pulmonary portion of the feature image into the extra-pulmonary branch of the convolutional neural network model to generate an extra-pulmonary image;
calculating a loss value between the extra-pulmonary image and a real target image using an L1 loss function;
back-propagating the gradient and updating the model parameters of the progressive-upsampling skeleton network and the extra-pulmonary branch of the convolutional neural network model according to the obtained loss value;
performing a per-pixel inversion on the extra-pulmonary image to generate an intra-pulmonary image;
calculating a loss value between the intra-pulmonary image and the real target image using the difference-aware loss function; and
updating the model parameters of the progressive-upsampling skeleton network and the intra-pulmonary branch of the convolutional neural network model according to the obtained loss value.
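The two region losses of this claim can be written compactly; the masked-L1 form below is one plausible realization (the difference-aware loss is left as a callable here, with a hypothetical version sketched after claim 6):

    # Region-wise losses for claim 5 (masking convention assumed).
    import torch.nn.functional as F

    def branch_losses(extra_img, intra_img, target, lung_mask, diff_aware_l1):
        out = 1.0 - lung_mask                          # extra-pulmonary region
        l_extra = F.l1_loss(extra_img * out, target * out)
        l_intra = diff_aware_l1(intra_img * lung_mask, target * lung_mask)
        return l_extra, l_intra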
6. The lung CT image parameter reconstruction method based on deep learning according to claim 5, wherein calculating the loss value between the intra-pulmonary image and the real target image using the difference-aware loss function comprises:
adding a weight W_{i,j,k} to each pixel of the intra-pulmonary image on the basis of the original L1 loss function; and
calculating the loss value between the intra-pulmonary image and the real target image using the difference-aware loss function;
the weight calculation formula is as follows:

[weight formula for W_{i,j,k}, reproduced only as an image (FDA0002621017540000021) in the original publication]

wherein X̂ (also rendered as an image in the original) represents the generated image, X represents the real target image, i, j, k are the pixel index positions, and α, β and γ are three hyper-parameters.
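Since the formula itself survives only as an image, the sketch below is a hypothetical difference-aware weighting that matches the stated intent of raising the weight of pixels with small original HU values, such as fissures and small vessels; the exponential form and the roles given to α, β and γ are inventions for illustration, not the patent's formula:

    # HYPOTHETICAL difference-aware weighted L1; not the patent's exact formula.
    import torch

    def diff_aware_l1(pred, target, alpha=1.0, beta=0.01, gamma=1.0):
        # W_{i,j,k}: larger where the target's |HU| is small (fine detail regions)
        w = alpha * torch.exp(-beta * target.abs()) + gamma
        return (w * (pred - target).abs()).mean()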
7. The lung CT image parameter reconstruction method based on deep learning according to claim 1, further comprising:
inputting the generated CT image into a preset CNN model pre-trained on the natural-image dataset ImageNet;
extracting feature maps of the real target image and of the generated image from the last three layers of the CNN model;
calculating a loss value between the feature maps of the real target image and of the generated image using a feature-matching loss function; and
back-propagating the gradient and updating the model parameters of the progressive-upsampling skeleton network model and the convolutional neural network model according to the obtained loss value.
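A feature-matching loss against an ImageNet-pretrained CNN might be sketched as below; the patent does not name the backbone, so VGG16 and the three chosen layer indices are assumptions, and CT slices are assumed to be replicated to three channels before being fed in:

    # Feature matching in the space of an ImageNet-pretrained CNN (VGG16 assumed).
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

    def feature_match_loss(generated, real, layers=(16, 23, 30)):
        # inputs: (B, 3, H, W); CT slices replicated to 3 channels (assumption)
        loss, g, r = 0.0, generated, real.detach()
        for i, layer in enumerate(vgg):
            g, r = layer(g), layer(r)
            if i in layers:                     # the three deepest pooling stages
                loss = loss + F.l1_loss(g, r)
        return loss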
8. A lung CT image parameter reconstruction system based on deep learning, characterized by comprising:
a sampling processing unit configured to input the acquired lung CT image into a preset progressive-upsampling skeleton network model and output a feature image;
a model input unit configured to input the feature image into a 3D convolutional neural network model, the 3D convolutional neural network model being composed of an intra-pulmonary branch, an extra-pulmonary branch and a lung mask branch;
an image segmentation unit configured to classify each pixel of the feature image through the lung mask branch, determine the intra-pulmonary portion and the extra-pulmonary portion, and label them by class;
a branch learning unit configured to input the intra-pulmonary portion and the extra-pulmonary portion of the feature image into the intra-pulmonary branch and the extra-pulmonary branch respectively for feature learning and generate an intra-pulmonary image and an extra-pulmonary image; and
an image combination unit configured to combine the images generated by the intra-pulmonary branch and the extra-pulmonary branch to form a complete generated CT image.
9. A terminal, comprising:
a processor;
a memory for storing instructions for execution by the processor;
wherein the processor is configured to perform the method of any one of claims 1-7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202010783409.0A 2020-08-06 2020-08-06 Lung CT image parameter reconstruction method, system, terminal and storage medium based on deep learning Pending CN112017136A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010783409.0A CN112017136A (en) 2020-08-06 2020-08-06 Lung CT image parameter reconstruction method, system, terminal and storage medium based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010783409.0A CN112017136A (en) 2020-08-06 2020-08-06 Lung CT image parameter reconstruction method, system, terminal and storage medium based on deep learning

Publications (1)

Publication Number Publication Date
CN112017136A true CN112017136A (en) 2020-12-01

Family

ID=73500150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010783409.0A Pending CN112017136A (en) 2020-08-06 2020-08-06 Lung CT image parameter reconstruction method, system, terminal and storage medium based on deep learning

Country Status (1)

Country Link
CN (1) CN112017136A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200184647A1 (en) * 2017-06-08 2020-06-11 The United States Of America, As Represented By The Secretary Department Of Health And Human Service Progressive and multi-path holistically nested networks for segmentation
CN107909581A (en) * 2017-11-03 2018-04-13 杭州依图医疗技术有限公司 Lobe of the lung section dividing method, device, system, storage medium and the equipment of CT images
US20190220701A1 (en) * 2018-01-16 2019-07-18 Siemens Healthcare Gmbh Trained generative network for lung segmentation in medical imaging
CN108428229A (en) * 2018-03-14 2018-08-21 大连理工大学 It is a kind of that apparent and geometric properties lung's Texture Recognitions are extracted based on deep neural network
CN110956635A (en) * 2019-11-15 2020-04-03 上海联影智能医疗科技有限公司 Lung segment segmentation method, device, equipment and storage medium
CN110916708A (en) * 2019-12-26 2020-03-27 南京安科医疗科技有限公司 CT scanning projection data artifact correction method and CT image reconstruction method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FABIAN ISENSEE et al.: "An attempt at beating the 3D U-Net", arXiv:1908.02182v2, 4 October 2019 (2019-10-04) *
QIUYUE LIU et al.: "Multi-stream Progressive Up-Sampling Network for Dense CT Image Reconstruction", Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, 29 September 2020 (2020-09-29) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116843825A (en) * 2023-06-01 2023-10-03 中国机械总院集团沈阳铸造研究所有限公司 Progressive CBCT sparse view reconstruction method
CN116843825B (en) * 2023-06-01 2024-04-05 中国机械总院集团沈阳铸造研究所有限公司 Progressive CBCT sparse view reconstruction method

Similar Documents

Publication Publication Date Title
CN109166130B (en) Image processing method and image processing device
US20210406591A1 (en) Medical image processing method and apparatus, and medical image recognition method and apparatus
CN110930416B (en) MRI image prostate segmentation method based on U-shaped network
CN111368849B (en) Image processing method, image processing device, electronic equipment and storage medium
CN110348515A (en) Image classification method, image classification model training method and device
Vu et al. Perception-enhanced image super-resolution via relativistic generative adversarial networks
CN116309648A (en) Medical image segmentation model construction method based on multi-attention fusion
CN111951281A (en) Image segmentation method, device, equipment and storage medium
CN113658040A (en) Face super-resolution method based on prior information and attention fusion mechanism
Wu et al. Vessel-GAN: Angiographic reconstructions from myocardial CT perfusion with explainable generative adversarial networks
CN109754357B (en) Image processing method, processing device and processing equipment
CN115375711A (en) Image segmentation method of global context attention network based on multi-scale fusion
CN113902945A (en) Multi-modal breast magnetic resonance image classification method and system
CN115375548A (en) Super-resolution remote sensing image generation method, system, equipment and medium
CN116612174A (en) Three-dimensional reconstruction method and system for soft tissue and computer storage medium
CN116258933A (en) Medical image segmentation device based on global information perception
CN113643297B (en) Computer-aided age analysis method based on neural network
Zhang et al. MinimalGAN: diverse medical image synthesis for data augmentation using minimal training data
CN112017136A (en) Lung CT image parameter reconstruction method, system, terminal and storage medium based on deep learning
CN113269774A (en) Parkinson disease classification and lesion region labeling method of MRI (magnetic resonance imaging) image
WO2021184195A1 (en) Medical image reconstruction method, and medical image reconstruction network training method and apparatus
CN115439470B (en) Polyp image segmentation method, computer readable storage medium and computer device
CN115965785A (en) Image segmentation method, device, equipment, program product and medium
CN115761358A (en) Method for classifying myocardial fibrosis based on residual capsule network
Liu et al. Dual UNet low-light image enhancement network based on attention mechanism

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination