CN117173543B - Mixed image reconstruction method and system for lung adenocarcinoma and pulmonary tuberculosis - Google Patents

Mixed image reconstruction method and system for lung adenocarcinoma and pulmonary tuberculosis

Info

Publication number
CN117173543B
CN117173543B (granted publication of application CN202311445228.7A)
Authority
CN
China
Prior art keywords
target
feature
vector
encoder
lung adenocarcinoma
Prior art date
Legal status
Active
Application number
CN202311445228.7A
Other languages
Chinese (zh)
Other versions
CN117173543A (en)
Inventor
孙昕
莫玺文
李永徽
刘彦迪
张晓东
邢志珩
丁文龙
唐琼
Current Assignee
Tianjin University
Original Assignee
Tianjin University
Priority date
Filing date
Publication date
Application filed by Tianjin University
Priority to CN202311445228.7A
Publication of CN117173543A
Application granted
Publication of CN117173543B
Legal status: Active


Abstract

The invention discloses a method and a system for reconstructing a mixed image of lung adenocarcinoma and pulmonary tuberculosis, relating to the technical field of medical image reconstruction and comprising the following steps: training a preset variational autoencoder based on a first training set and a second training set to obtain a target variational autoencoder; performing feature extraction on the first training set and the second training set respectively based on the target encoder to obtain a first feature mapping set and a second feature mapping set that are Gaussian distributed in a potential space; sampling from the overlapping part of the Gaussian distributions of the first feature mapping set and the second feature mapping set in the potential space to obtain a sampled feature vector; performing re-parameterization on the sampled feature vector based on the target decoder to generate a target potential variable; and converting, based on the target decoder, the target potential variable into a target reconstructed image in which lung adenocarcinoma and tuberculosis are mixed. The invention alleviates the technical problem that conventional deep learning algorithms find it difficult to collect a sufficient number of confusable samples.

Description

Mixed image reconstruction method and system for lung adenocarcinoma and pulmonary tuberculosis
Technical Field
The invention relates to the technical field of medical image reconstruction, in particular to a method and a system for reconstructing a mixed image of lung adenocarcinoma and pulmonary tuberculosis.
Background
Lung adenocarcinoma and pulmonary tuberculosis are two diseases with distinct clinical manifestations. In clinical practice, however, lung adenocarcinoma and tuberculosis may exhibit similar features on CT images, such as infiltration, nodules and masses. When such images are analyzed, the two diseases are therefore easily misjudged for one another, leading to delayed or incorrect treatment and serious consequences for the patient.
When processing image recognition tasks, conventional deep learning algorithms typically require large data sets to train the model and achieve good performance. However, it is often difficult to collect a sufficient number of confusable CT samples with corresponding identification labels, so the large-scale data requirement of deep learning algorithms cannot be met.
Disclosure of Invention
The invention aims to solve at least one of the above technical problems by providing a method and a system for reconstructing a mixed image of lung adenocarcinoma and pulmonary tuberculosis.
In a first aspect, an embodiment of the present invention provides a method for reconstructing a mixed image of lung adenocarcinoma and tuberculosis, including: training a preset variational autoencoder based on a first training set and a second training set to obtain a target variational autoencoder; the target variational autoencoder comprises a target encoder and a target decoder; the first training set comprises a plurality of preset lung adenocarcinoma CT image data, and the second training set comprises a plurality of preset tuberculosis CT image data; performing feature extraction on the first training set and the second training set respectively based on the target encoder to obtain a first feature mapping set and a second feature mapping set that are Gaussian distributed in a potential space; sampling from the overlapping part of the Gaussian distributions of the first feature mapping set and the second feature mapping set in the potential space to obtain a sampled feature vector; the sampled feature vector comprises a mean vector and a variance vector of the Gaussian distribution of potential variables in the potential space; performing re-parameterization on the sampled feature vector based on the target decoder to generate a target potential variable; and converting, based on the target decoder, the target potential variable into a target reconstructed image in which lung adenocarcinoma and tuberculosis are mixed.
Further, based on the target encoder, respectively extracting features of the first training set and the second training set to respectively obtain a first feature mapping set and a second feature mapping set which are gaussian distributed in a potential space, including: performing feature extraction on the first training set based on the target encoder to obtain a first feature mapping set which is Gaussian distributed in a potential space; the first feature map set includes a plurality of first feature maps; the first feature mapping is the mapping of the preset lung adenocarcinoma CT image data to a potential space, and comprises a mean vector and a variance vector; performing feature extraction on the second training set based on the target encoder to obtain a second feature mapping set which is Gaussian distributed in a potential space; the second feature map set includes a plurality of second feature maps; the second feature mapping is a mapping of the preset tuberculosis CT image data to a potential space, and comprises a mean vector and a variance vector.
Further, the target encoder includes a plurality of sets of convolutional layer-active layer-max-pooling layer structures; feature extraction is performed on the first training set based on the target encoder to obtain a first feature mapping set which is Gaussian distributed in a potential space, and the feature extraction method comprises the following steps: inputting target lung adenocarcinoma CT image data into the target encoder, performing feature extraction on the target lung adenocarcinoma CT image data through a convolution layer, performing activation function processing in an activation layer, and performing downsampling through a maximum pooling layer to obtain a first feature mapping of the target lung adenocarcinoma CT image data in a potential space; the target lung adenocarcinoma CT image data is preset lung adenocarcinoma CT image data in the first training set.
Further, the target decoder includes a plurality of transposed convolution layer-activation layer structures; converting the target potential variable into a mixed target reconstructed image of lung adenocarcinoma and tuberculosis based on the target decoder comprises: up-sampling the target potential variable through the deconvolution operation of the transposed convolution layers, and limiting the pixel values through an activation function to generate the target reconstructed image.
Further, the method further comprises the following steps: performing cluster analysis on the first feature mapping set and the second feature mapping set in the potential space to obtain a first center vector and a second center vector respectively; and determining a discrimination of the target reconstructed image based on the distances of the sampled feature vector from the first center vector and the second center vector in the potential space.
Further, the calculation formula of the discrimination of the target reconstructed image comprises: $d = \dfrac{\left\| x_i - X_1 \right\|}{\left\| x_i - X_0 \right\| + \left\| x_i - X_1 \right\|}$; wherein d is the discrimination, $x_i$ is the sampled feature vector, $X_1$ is the first center vector, and $X_0$ is the second center vector.
In a second aspect, an embodiment of the present invention further provides a hybrid image reconstruction system for lung adenocarcinoma and tuberculosis, including: a training module, an extraction module, a sampling module, a generation module and a reconstruction module; the training module is configured to train a preset variational autoencoder based on the first training set and the second training set to obtain a target variational autoencoder; the target variational autoencoder comprises a target encoder and a target decoder; the first training set comprises a plurality of preset lung adenocarcinoma CT image data, and the second training set comprises a plurality of preset tuberculosis CT image data; the extraction module is configured to perform feature extraction on the first training set and the second training set respectively based on the target encoder to obtain a first feature mapping set and a second feature mapping set that are Gaussian distributed in a potential space; the sampling module is configured to sample from the overlapping part of the Gaussian distributions of the first feature mapping set and the second feature mapping set in the potential space to obtain a sampled feature vector; the sampled feature vector comprises a mean vector and a variance vector of the Gaussian distribution of potential variables in the potential space; the generation module is configured to perform re-parameterization on the sampled feature vector based on the target decoder to generate a target potential variable; the reconstruction module is configured to convert, based on the target decoder, the target potential variable into a target reconstructed image in which lung adenocarcinoma and tuberculosis are mixed.
Further, the system also comprises an identification module configured to: perform cluster analysis on the first feature mapping set and the second feature mapping set in the potential space to obtain a first center vector and a second center vector respectively; and determine a discrimination of the target reconstructed image based on the distances of the sampled feature vector from the first center vector and the second center vector in the potential space.
In a third aspect, an embodiment of the present invention further provides an electronic device, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method according to the first aspect when executing the computer program.
In a fourth aspect, embodiments of the present invention also provide a computer readable storage medium storing computer instructions which, when executed by a processor, implement a method as described in the first aspect.
The invention provides a method and a system for reconstructing a mixed image of lung adenocarcinoma and pulmonary tuberculosis. A trained variational autoencoder generates image data containing mixed features of pulmonary tuberculosis and lung adenocarcinoma, together with the related mixing parameters. The generated reconstructed images can be used for discrimination training of medical staff and can also serve as a training set for deep learning algorithms, requiring no additional image acquisition and preparation work and thereby reducing the cost of additional data collection.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the detailed description or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a method for reconstructing a mixed image of lung adenocarcinoma and pulmonary tuberculosis according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a network structure of a variational autoencoder according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of feature extraction according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a hybrid image reconstruction system for lung adenocarcinoma and pulmonary tuberculosis according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
Fig. 1 is a flowchart of a method for reconstructing a mixed image of lung adenocarcinoma and tuberculosis, according to an embodiment of the present invention. As shown in fig. 1, the method specifically includes the following steps:
step S102, training a preset variation encoder based on a first training set and a second training set to obtain a target variation encoder; the target variable encoder comprises a target encoder and a target decoder; the first training set comprises a plurality of preset lung adenocarcinoma CT image data, and the second training set comprises a plurality of preset tuberculosis CT image data.
Step S104, respectively carrying out feature extraction on the first training set and the second training set based on the target encoder to respectively obtain a first feature mapping set and a second feature mapping set which are distributed in a Gaussian manner in a potential space.
Step S106, sampling is carried out from the Gaussian distribution overlapped part of the first feature mapping set and the second feature mapping set in the potential space, and a sampling feature vector is obtained; the sampled feature vectors include mean and variance vectors of a gaussian distribution of the underlying variable in the underlying space.
Step S108, the sampling feature vector is re-parameterized based on the target decoder to generate target potential variables.
Step S110, converting the target latent variable into a target reconstructed image of the lung adenocarcinoma and tuberculosis mixture based on the target decoder.
The invention provides a method for reconstructing a mixed image of lung adenocarcinoma and pulmonary tuberculosis. A trained variational autoencoder generates image data containing mixed features of pulmonary tuberculosis and lung adenocarcinoma, together with the related mixing parameters. The generated reconstructed images can be used for discrimination training of medical staff and can also serve as a training set for deep learning algorithms, requiring no additional image acquisition and preparation work and thereby reducing the cost of additional data collection.
In an embodiment of the invention, the network model used to reconstruct images is built on a variational autoencoder. The encoder and decoder networks of the variational autoencoder are composed of convolutional neural networks and are implemented in the Python language using the numpy and pytorch packages; the model mainly involves a two-dimensional convolution formula, a transposed convolution formula, a max-pooling formula and a ReLU activation formula. In a variational autoencoder, a potential space is defined, represented by a mean vector and a variance vector. Assuming that the reconstructed image is generated by potential variables, the original data is first mapped into the potential space, and the image is then regenerated from the potential space. The loss function of the variational autoencoder consists of two parts: a reconstruction loss and a regularization loss. The reconstruction loss measures the difference between the original data and the data regenerated from the potential space; the regularization loss constrains the distribution of the potential variables to approximate a prior distribution, for which a standard normal distribution is typically chosen. The specific algorithm formulas are as follows:
two-dimensional convolution formula:
wherein I is an input image, K is a convolution kernel, m and n are convolution kernel sizes, and C is a convolution result.
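As a minimal illustration of the formula above, a direct (unvectorized) valid-mode implementation in numpy — a sketch, not the patent's actual code — could look like this:

```python
import numpy as np

def conv2d(I, K):
    """Valid cross-correlation C(i,j) = sum_{m,n} I(i+m, j+n) * K(m,n),
    matching the two-dimensional convolution formula above."""
    kh, kw = K.shape
    oh, ow = I.shape[0] - kh + 1, I.shape[1] - kw + 1
    C = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # window of I aligned at (i, j), multiplied element-wise with K
            C[i, j] = np.sum(I[i:i + kh, j:j + kw] * K)
    return C

I = np.arange(9, dtype=float).reshape(3, 3)   # toy 3x3 "image"
K = np.ones((2, 2))                           # toy 2x2 kernel
C = conv2d(I, K)                              # 2x2 result
```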
Two-dimensional transposed convolution formula:
$C(i,j) = \sum_{m}\sum_{n} I\!\left(\tfrac{i - m + p}{s},\, \tfrac{j - n + q}{s}\right) K(m,n)$, summed only over positions where the indices are integers;
where I is the input image, K is the convolution kernel, m and n range over the convolution kernel size, s is the stride, p is the vertical fill amount, q is the horizontal fill amount, and C is the result after the transposed convolution.
Pooling formula:
$P(i,j) = \max_{0 \le a < m,\; 0 \le b < n} I(k i + a,\, k j + b)$
where I is the input image, m and n are the pooling window sizes, and k is the adjustable stride coefficient.
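A numpy sketch of this pooling rule, treating the adjustable coefficient k as the window stride (an assumption, since the patent does not spell this out):

```python
import numpy as np

def max_pool2d(I, m, n, k):
    """P(i,j) = max over an m x n window placed at stride k."""
    oh = (I.shape[0] - m) // k + 1
    ow = (I.shape[1] - n) // k + 1
    P = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # maximum over the m x n window starting at (i*k, j*k)
            P[i, j] = I[i * k:i * k + m, j * k:j * k + n].max()
    return P

I = np.arange(16, dtype=float).reshape(4, 4)
P = max_pool2d(I, 2, 2, 2)   # 2x2 windows, stride 2 -> halves each side
```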
ReLU activation formula:
$x = \max(0,\, I)$, applied element-wise;
where I is the input image and x is the activated image.
The variational autoencoder loss function formula:
$\mathcal{L} = \frac{1}{m}\sum_{i=1}^{m} \left\| x_i - \hat{x}_i \right\|^2 - \frac{1}{2}\sum_{j}\left( 1 + \log \sigma_j^2 - \mu_j^2 - \sigma_j^2 \right)$
where M is the total number of samples and m is the number of samples used for training; the mean square error is used as a measure of the difference between the original data $x_i$ and the data $\hat{x}_i$ reconstructed from the potential space; $\mu_j$ represents the mean of the j-th dimension of the potential space and $\sigma_j^2$ represents the variance of the j-th dimension; by calculating the KL divergence for each potential-variable dimension j, the distribution of the potential space is constrained to be close to a standard normal distribution.
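The two loss terms can be sketched in numpy as follows; `vae_loss` and the `log_var` parameterization are illustrative names, not taken from the patent:

```python
import numpy as np

def vae_loss(x, x_hat, mu, log_var):
    """Reconstruction MSE plus KL divergence to a standard normal,
    following the loss formula above (log_var = log sigma^2)."""
    # reconstruction loss: squared error summed per sample, averaged over samples
    recon = np.mean(np.sum((x - x_hat) ** 2, axis=1))
    # KL term per sample: -1/2 * sum_j (1 + log sigma_j^2 - mu_j^2 - sigma_j^2)
    kl = -0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var), axis=1)
    return recon + np.mean(kl)

x = np.zeros((2, 3))
loss_at_prior = vae_loss(x, x, np.zeros((2, 3)), np.zeros((2, 3)))  # mu=0, sigma=1
```

When the reconstruction is perfect and the posterior matches the standard normal prior, both terms vanish and the loss is zero.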
Specifically, step S104 includes the steps of:
step S1041, extracting features of the first training set based on a target encoder to obtain a first feature mapping set which is Gaussian distributed in a potential space; the first feature map set includes a plurality of first feature maps; the first feature map is a map of preset lung adenocarcinoma CT image data to a potential space, and comprises a mean vector and a variance vector.
Optionally, in an embodiment of the present invention, the target encoder includes multiple sets of convolutional layer-active layer-max-pooling layer structures; wherein the activation layer adopts a ReLU activation function. Specifically, inputting target lung adenocarcinoma CT image data into a target encoder, performing feature extraction on the target lung adenocarcinoma CT image data through a convolution layer, performing activation function processing in an activation layer, and performing downsampling through a maximum pooling layer to obtain a first feature map of the target lung adenocarcinoma CT image data in a potential space; the target lung adenocarcinoma CT image data is a preset lung adenocarcinoma CT image data in the first training set.
Step S1042, extracting features of the second training set based on the target encoder to obtain a second feature mapping set which is Gaussian distributed in the potential space; the second feature map set includes a plurality of second feature maps; the second feature map is a map of preset tuberculosis CT image data to a potential space, and comprises a mean vector and a variance vector.
Optionally, in an embodiment of the present invention, the target decoder includes a plurality of transpose convolution layer-activation layer structures; wherein, the activation layer of the target decoder adopts a ReLU activation function and a Sigmoid activation function.
Specifically, step S108 further includes: up-sampling the target potential variable through the deconvolution operation of the transposed convolution layers, and limiting the pixel values through the activation function to generate the target reconstructed image.
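A toy numpy sketch of this up-sampling step — the stride-2 "kernel stamping" view of transposed convolution, followed by a Sigmoid that limits the pixel values; layer sizes and names are illustrative only:

```python
import numpy as np

def transpose_conv2d(I, K, s=2):
    """Stride-s transposed convolution: each input pixel 'stamps' the kernel
    onto the output at stride s (a simple, padding-free sketch)."""
    kh, kw = K.shape
    H, W = I.shape
    out = np.zeros(((H - 1) * s + kh, (W - 1) * s + kw))
    for i in range(H):
        for j in range(W):
            out[i * s:i * s + kh, j * s:j * s + kw] += I[i, j] * K
    return out

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

z = np.ones((2, 2))                               # toy low-dimensional feature map
up = transpose_conv2d(z, np.ones((2, 2)), s=2)    # 2x2 -> 4x4 up-sampling
pixels = sigmoid(up)                              # values limited to (0, 1)
```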
The method provided by the embodiment of the invention further comprises the step of carrying out cluster analysis on vectors in a potential space, and calculating the distance between different samples, and specifically comprises the following steps:
respectively carrying out cluster analysis on the first feature mapping set and the second feature mapping set in the potential space to respectively obtain a first center vector and a second center vector;
the identity of the target reconstructed image is determined based on the distance of the sampled feature vector from the first and second center vectors in the potential space.
Specifically, in the embodiment of the present invention, the calculation formula of the discrimination of the target reconstructed image comprises:
$d = \dfrac{\left\| x_i - X_1 \right\|}{\left\| x_i - X_0 \right\| + \left\| x_i - X_1 \right\|}$
wherein d is the discrimination, $x_i$ is the sampled feature vector, $X_1$ is the first center vector, and $X_0$ is the second center vector.
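The discrimination formula can be computed directly; the centre vectors below are hypothetical stand-ins for the cluster centres obtained in the potential space:

```python
import numpy as np

def discrimination(x, X1, X0):
    """d = ||x - X1|| / (||x - X0|| + ||x - X1||): distance to the first
    centre over the summed distances to both centres."""
    d1 = np.linalg.norm(x - X1)   # distance to first (lung adenocarcinoma) centre
    d0 = np.linalg.norm(x - X0)   # distance to second (tuberculosis) centre
    return d1 / (d0 + d1)

X1 = np.array([0.0, 0.0])   # hypothetical first centre vector
X0 = np.array([2.0, 0.0])   # hypothetical second centre vector
x = np.array([1.0, 0.0])    # a sample midway between the two centres
d = discrimination(x, X1, X0)
```

A sample equidistant from both centres gives d = 0.5; a sample at the first centre gives d = 0.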
Example two
The embodiment of the invention provides a specific application embodiment of a mixed image reconstruction method for lung adenocarcinoma and pulmonary tuberculosis, which comprises the following steps:
(1) Training set:
the data base in the embodiment of the invention is based on the CT data of the typical tuberculosis and lung adenocarcinoma of the sea and river hospital, and the same patient can be regarded as a plurality of image data for input as the CT belongs to scanning. Specifically, the training dataset comprises 2500 representative lung adenocarcinoma CT image data, 2500 representative tuberculosis CT image data.
(2) Generating an algorithm:
First, a data set is read, comprising tuberculosis CT images and lung adenocarcinoma CT images that can be clearly distinguished. The CT images, of size 256 x 256, are input into the image generation network model. The input image enters the encoder, as shown in fig. 2, where fig. 2 is a schematic diagram of the network structure of the variational autoencoder provided by the embodiment of the invention. The encoder is composed of five groups of convolution-activation-max-pooling: the input image undergoes feature extraction in each convolution layer, is then processed by a ReLU activation function, and is down-sampled by the max-pooling layer, so that the image is compressed into a feature vector of 4 x 32. From conv1 to conv5, 512 lung adenocarcinoma feature maps and 512 tuberculosis feature maps are obtained, forming two groups of Gaussian distributions, and a set of means and variances is extracted from the overlapping part of the two groups of Gaussian distributions, as shown in fig. 3, where fig. 3 is a feature extraction schematic diagram provided according to an embodiment of the present invention; this enables the fused Gaussian distribution to contain as many common features of the lung adenocarcinoma and tuberculosis images as possible.
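The patent does not specify how the mean and variance of the overlapping part are extracted; one plausible realization, shown here purely as an assumption, is the normalized product of the two Gaussians, whose fused mean and variance have a closed form:

```python
import numpy as np

def fuse_gaussians(mu1, var1, mu2, var2):
    """Mean/variance of the (normalized) product of two Gaussians — one
    possible way to realize 'extracting a set of means and variances from
    the overlapping part'; this rule is an assumption, not the patent's."""
    var = var1 * var2 / (var1 + var2)                 # fused variance
    mu = (mu1 * var2 + mu2 * var1) / (var1 + var2)    # precision-weighted mean
    return mu, var

mu, var = fuse_gaussians(np.array([0.0]), np.array([1.0]),
                         np.array([2.0]), np.array([1.0]))
```

With equal variances the fused mean falls halfway between the two class distributions, which matches the goal of capturing their common features.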
To sample from the potential space, the variational autoencoder uses a re-parameterization technique. The re-parameterization step receives the mean vector and the variance vector output by the encoder and generates random potential variables through normal-distribution sampling. The decoder is responsible for converting the re-parameterized potential variables into reconstructed images. The decoder architecture includes five sets of "transposed convolution-activation" combinations, up-sampling the low-dimensional features by deconvolution operations; the last layer uses a Sigmoid activation function to limit the pixel values to between 0 and 1 and generate the reconstructed image.
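The re-parameterization step described above admits a short numpy sketch (the 32-dimensional latent size is illustrative):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """z = mu + sigma * eps with eps ~ N(0, I), so the sampling step stays
    differentiable with respect to mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
mu, log_var = np.zeros(32), np.zeros(32)          # illustrative 32-dim latent
z = reparameterize(mu, log_var, rng)              # one random potential variable
z_tight = reparameterize(np.ones(4), np.full(4, -100.0), rng)  # near-zero variance
```

As the variance shrinks toward zero, the sampled latent collapses onto the mean vector.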
(3) Training process (i.e., the process of determining the layer hyperparameters and trainable parameters of the variational autoencoder model):
The training set is submitted for model training. During training, the reconstruction loss and the KL divergence are calculated, the gradient of the total loss with respect to the model parameters is computed using the back-propagation algorithm, and the model parameters are updated with an optimizer, until a specified number of training rounds is reached and images containing confusable mixed features of lung adenocarcinoma and tuberculosis can be generated. After the loss over the image features is minimized, cluster analysis is performed on the vectors in the potential space and the distances between different samples can be calculated; by comparing the distances between samples of the two diseases in the potential space and resampling, an image generation model is obtained.
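The compute-loss / back-propagate / optimizer-update cycle can be shown schematically; a one-parameter toy loss stands in for the actual reconstruction-plus-KL objective, so only the structure of the loop reflects the patent:

```python
# Schematic of the training loop: compute the total loss, compute its
# gradient, let the optimizer step the parameters, repeat for a fixed
# number of rounds. A toy quadratic replaces the real VAE loss.
theta = 5.0          # stand-in for the model parameters
lr = 0.1             # optimizer learning rate
losses = []
for epoch in range(50):                 # "specified number of training rounds"
    loss = (theta - 2.0) ** 2           # stand-in for reconstruction + KL loss
    grad = 2.0 * (theta - 2.0)          # back-propagated gradient of the loss
    theta -= lr * grad                  # optimizer update of the parameters
    losses.append(loss)
```

The loss decreases monotonically and the parameter converges toward the minimizer, mirroring how the real optimizer drives down the total VAE loss.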
(4) Manual threshold definition and evaluation:
Resampling requires a set of parameters that represent, on the model, the distance in the potential space from the two disease samples; in practice this can be expressed as the discrimination:
$d = \dfrac{\left\| x_i - X_1 \right\|}{\left\| x_i - X_0 \right\| + \left\| x_i - X_1 \right\|}$
wherein the numerator is the distance from the sampling point to the lung adenocarcinoma center in the subspace, and the denominator is the sum of the distances from the sampling point to the tuberculosis center and to the lung adenocarcinoma center. The discrimination d was examined against sample performance by doctors at Haihe Hospital of Tianjin University, and generated images with a discrimination of 0.65 proved the most meaningful in actual work. The artificial threshold in the present embodiment is therefore defined as 0.65: above 0.65, the doctor should prefer that the image belongs to lung adenocarcinoma, and vice versa.
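Applying the manually defined 0.65 threshold is then a one-line rule (the function name is illustrative):

```python
def prefer_label(d, threshold=0.65):
    """Map a discrimination value to the preferred reading, using the
    manually chosen 0.65 threshold from this embodiment: strictly above
    the threshold, prefer lung adenocarcinoma; otherwise tuberculosis."""
    return "lung adenocarcinoma" if d > threshold else "tuberculosis"
```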
Use process: the CT images generated by the method, together with a small number of real CT images, are submitted to doctors for training, and the doctors' understanding of the CT images is assessed from their judgments on the images. The error rate on the generated data should be approximately equal to the error rate on the real data. Doctors can train on these images, and training is complete once accuracy on the generated data and on the real data improves in step.
In some optional implementations provided by the embodiments of the present invention, the CT image generated by the method provided by the embodiments of the present invention and a small amount of CT images in practice may be submitted to a conventional deep learning algorithm, and trained, so as to improve the accuracy of recognition of the deep learning algorithm, so that the technical problem that it is difficult for the conventional deep learning algorithm to collect a sufficient number of confusing samples may be alleviated.
Example III
Fig. 4 is a schematic diagram of a hybrid image reconstruction system for lung adenocarcinoma and tuberculosis according to an embodiment of the present invention. As shown in fig. 4, the system includes: training module 10, extraction module 20, sampling module 30, generation module 40, and reconstruction module 50.
Specifically, the training module 10 is configured to train a preset variational autoencoder based on the first training set and the second training set to obtain a target variational autoencoder; the target variational autoencoder comprises a target encoder and a target decoder; the first training set comprises a plurality of preset lung adenocarcinoma CT image data, and the second training set comprises a plurality of preset tuberculosis CT image data.
The extracting module 20 is configured to perform feature extraction on the first training set and the second training set based on the target encoder, so as to obtain a first feature mapping set and a second feature mapping set that are gaussian distributed in the potential space.
A sampling module 30, configured to sample from the overlapping part of gaussian distributions of the first feature mapping set and the second feature mapping set in the potential space, to obtain a sampled feature vector; the sampled feature vectors include mean and variance vectors of a gaussian distribution of the underlying variable in the underlying space.
A generating module 40, configured to generate a target latent variable based on the target decoder performing a re-parameterization on the sampling feature vector.
A reconstruction module 50 for converting the target latent variable into a target reconstructed image of the lung adenocarcinoma and tuberculosis mixture based on the target decoder.
The invention provides a system for reconstructing a mixed image of lung adenocarcinoma and pulmonary tuberculosis. A trained variational autoencoder generates image data containing mixed features of pulmonary tuberculosis and lung adenocarcinoma, together with the related mixing parameters. The generated reconstructed images can be used for discrimination training of medical staff and can also serve as a training set for deep learning algorithms, requiring no additional image acquisition and preparation work and thereby reducing the cost of additional data collection.
In an embodiment of the present invention, the target encoder includes a plurality of sets of convolutional layer-active layer-max-pooling layer structures and the target decoder includes a plurality of transposed convolutional layer-active layer structures.
Specifically, the extraction module 20 is further configured to:
performing feature extraction on the first training set based on the target encoder to obtain a first feature mapping set which is Gaussian distributed in a potential space; the first feature map set includes a plurality of first feature maps; the first feature map is a map of preset lung adenocarcinoma CT image data to a potential space, and comprises a mean vector and a variance vector.
Specifically, inputting target lung adenocarcinoma CT image data into a target encoder, performing feature extraction on the target lung adenocarcinoma CT image data through a convolution layer, performing activation function processing in an activation layer, and performing downsampling through a maximum pooling layer to obtain a first feature map of the target lung adenocarcinoma CT image data in a potential space; the target lung adenocarcinoma CT image data is a preset lung adenocarcinoma CT image data in the first training set.
Performing feature extraction on the second training set based on the target encoder to obtain a second feature mapping set which is Gaussian distributed in a potential space; the second feature map set includes a plurality of second feature maps; the second feature map is a map of preset tuberculosis CT image data to a potential space, and comprises a mean vector and a variance vector.
Specifically, the generating module 40 is further configured to: and up-sampling the target potential variable through the deconvolution operation of the transposed convolution layer, and limiting the pixel value through the activation function to generate a target reconstruction image.
Specifically, as shown in fig. 4, the device further includes an identification module 60 for:
respectively carrying out cluster analysis on the first feature mapping set and the second feature mapping set in the potential space to respectively obtain a first center vector and a second center vector;
the identity of the target reconstructed image is determined based on the distance of the sampled feature vector from the first and second center vectors in the potential space.
The discrimination of the target reconstructed image is calculated according to the following formula:
wherein d is the discrimination, x_i is the sampling feature vector, X_1 is the first center vector, and X_0 is the second center vector.
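Since the discrimination formula itself appears only as an image in the source, the sketch below assumes one plausible form: each cluster center is taken as the mean of that class's latent vectors, and the score is a Euclidean distance ratio. Both choices are assumptions, not the patent's stated formula.

```python
import numpy as np

def center(vectors):
    """Cluster center as the mean of a class's latent vectors
    (simple stand-in for the cluster analysis in the text)."""
    return np.mean(vectors, axis=0)

def discrimination(x_i, X1, X0):
    """Hypothetical discrimination score: relative closeness of the
    sampling feature vector to the two class centers."""
    d1 = np.linalg.norm(x_i - X1)    # distance to first (lung adenocarcinoma) center
    d0 = np.linalg.norm(x_i - X0)    # distance to second (tuberculosis) center
    return d1 / (d1 + d0)            # 0.5 means equidistant from both classes

X1 = center(np.array([[0.0, 0.0], [2.0, 0.0]]))     # toy first cluster -> [1, 0]
X0 = center(np.array([[0.0, 4.0], [2.0, 4.0]]))     # toy second cluster -> [1, 4]
d = discrimination(np.array([1.0, 2.0]), X1, X0)    # point midway between centers
```

Under this assumed form, a score near 0.5 indicates the reconstructed sample lies between the two classes, which is the desired property for a confusing mixed sample.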
An embodiment of the invention further provides an electronic device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the methods of the first and second embodiments when executing the computer program.
An embodiment of the invention further provides a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the methods of the first and second embodiments.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description; all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. No reference sign in a claim should be construed as limiting the claim concerned.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution. This manner of description is adopted merely for clarity; the specification should be taken as a whole, and the technical solutions of the respective embodiments may be suitably combined to form other embodiments understandable to those skilled in the art.

Claims (8)

1. A method for reconstructing a mixed image of lung adenocarcinoma and pulmonary tuberculosis, comprising:
training a preset variational autoencoder based on a first training set and a second training set to obtain a target variational autoencoder; the target variational autoencoder comprises a target encoder and a target decoder; the first training set comprises a plurality of preset lung adenocarcinoma CT image data, and the second training set comprises a plurality of preset pulmonary tuberculosis CT image data;
performing feature extraction on the first training set and the second training set respectively based on the target encoder to obtain a first feature mapping set and a second feature mapping set, each of which is Gaussian-distributed in a latent space;
sampling from the overlapping portion of the Gaussian distributions of the first feature mapping set and the second feature mapping set in the latent space to obtain a sampling feature vector; the sampling feature vector comprises a mean vector and a variance vector of the Gaussian distribution of a latent variable in the latent space;
performing reparameterization on the sampling feature vector based on the target decoder to generate a target latent variable;
converting the target latent variable into a target reconstructed image of mixed lung adenocarcinoma and pulmonary tuberculosis based on the target decoder;
the method further comprising:
performing cluster analysis on the first feature mapping set and the second feature mapping set in the latent space to obtain a first center vector and a second center vector, respectively; and
determining a discrimination of the target reconstructed image based on distances of the sampling feature vector from the first center vector and the second center vector in the latent space.
2. The method according to claim 1, characterized in that: performing feature extraction on the first training set and the second training set respectively based on the target encoder to obtain a first feature mapping set and a second feature mapping set, each of which is Gaussian-distributed in a latent space, comprises:
performing feature extraction on the first training set based on the target encoder to obtain the first feature mapping set, which is Gaussian-distributed in the latent space; the first feature mapping set includes a plurality of first feature mappings; each first feature mapping is a mapping of the preset lung adenocarcinoma CT image data into the latent space and comprises a mean vector and a variance vector;
performing feature extraction on the second training set based on the target encoder to obtain the second feature mapping set, which is Gaussian-distributed in the latent space; the second feature mapping set includes a plurality of second feature mappings; each second feature mapping is a mapping of the preset pulmonary tuberculosis CT image data into the latent space and comprises a mean vector and a variance vector.
3. The method according to claim 2, characterized in that: the target encoder includes a plurality of convolution layer-activation layer-max-pooling layer structures; performing feature extraction on the first training set based on the target encoder to obtain the first feature mapping set, which is Gaussian-distributed in the latent space, comprises:
inputting target lung adenocarcinoma CT image data into the target encoder, performing feature extraction on the target lung adenocarcinoma CT image data through a convolution layer, applying an activation function in an activation layer, and downsampling through a max-pooling layer to obtain the first feature mapping of the target lung adenocarcinoma CT image data in the latent space; the target lung adenocarcinoma CT image data is preset lung adenocarcinoma CT image data in the first training set.
4. The method according to claim 1, characterized in that: the target decoder includes a plurality of transposed convolution layer-activation layer structures; converting the target latent variable into the target reconstructed image of mixed lung adenocarcinoma and pulmonary tuberculosis based on the target decoder comprises:
upsampling the target latent variable through the deconvolution operation of the transposed convolution layer, and limiting the pixel values through an activation function to generate the target reconstructed image.
5. The method according to claim 1, characterized in that: the discrimination of the target reconstructed image is calculated according to the following formula:
wherein d is the discrimination, x_i is the sampling feature vector, X_1 is the first center vector, and X_0 is the second center vector.
6. A mixed image reconstruction system for lung adenocarcinoma and pulmonary tuberculosis, comprising: a training module, an extraction module, a sampling module, a generation module and a reconstruction module; wherein
the training module is configured to train a preset variational autoencoder based on a first training set and a second training set to obtain a target variational autoencoder; the target variational autoencoder comprises a target encoder and a target decoder; the first training set comprises a plurality of preset lung adenocarcinoma CT image data, and the second training set comprises a plurality of preset pulmonary tuberculosis CT image data;
the extraction module is configured to perform feature extraction on the first training set and the second training set respectively based on the target encoder to obtain a first feature mapping set and a second feature mapping set, each of which is Gaussian-distributed in a latent space;
the sampling module is configured to sample from the overlapping portion of the Gaussian distributions of the first feature mapping set and the second feature mapping set in the latent space to obtain a sampling feature vector; the sampling feature vector comprises a mean vector and a variance vector of the Gaussian distribution of a latent variable in the latent space;
the generation module is configured to perform reparameterization on the sampling feature vector based on the target decoder to generate a target latent variable;
the reconstruction module is configured to convert the target latent variable into a target reconstructed image of mixed lung adenocarcinoma and pulmonary tuberculosis based on the target decoder;
the system further comprises an identification module configured to:
perform cluster analysis on the first feature mapping set and the second feature mapping set in the latent space to obtain a first center vector and a second center vector, respectively; and
determine a discrimination of the target reconstructed image based on distances of the sampling feature vector from the first center vector and the second center vector in the latent space.
7. An electronic device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method according to any one of claims 1-5 when executing the computer program.
8. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the method of any one of claims 1-5.
CN202311445228.7A 2023-11-02 2023-11-02 Mixed image reconstruction method and system for lung adenocarcinoma and pulmonary tuberculosis Active CN117173543B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311445228.7A CN117173543B (en) 2023-11-02 2023-11-02 Mixed image reconstruction method and system for lung adenocarcinoma and pulmonary tuberculosis

Publications (2)

Publication Number Publication Date
CN117173543A CN117173543A (en) 2023-12-05
CN117173543B true CN117173543B (en) 2024-02-02

Family

ID=88930118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311445228.7A Active CN117173543B (en) 2023-11-02 2023-11-02 Mixed image reconstruction method and system for lung adenocarcinoma and pulmonary tuberculosis

Country Status (1)

Country Link
CN (1) CN117173543B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416336A (en) * 2018-04-18 2018-08-17 特斯联(北京)科技有限公司 Method and system for face recognition in an intelligent community
CN113421250A (en) * 2021-07-05 2021-09-21 北京理工大学 Intelligent fundus disease diagnosis method based on lesion-free image training
CN113435488A (en) * 2021-06-17 2021-09-24 深圳大学 Image sampling probability improving method and application thereof
CN114067168A (en) * 2021-10-14 2022-02-18 河南大学 Cloth defect image generation system and method based on improved variational self-encoder network
CN114548281A (en) * 2022-02-23 2022-05-27 重庆邮电大学 Unsupervised self-adaptive weight-based heart data anomaly detection method
CN114862811A (en) * 2022-05-19 2022-08-05 湖南大学 Defect detection method based on variational automatic encoder
CN116052724A (en) * 2023-01-28 2023-05-02 深圳大学 Lung sound enhancement method, system, device and storage medium
CN116597285A (en) * 2023-07-17 2023-08-15 吉林大学 Pulmonary tissue pathology image processing model, construction method and image processing method
CN116631043A (en) * 2023-07-25 2023-08-22 南京信息工程大学 Natural countermeasure patch generation method, training method and device of target detection model
CN116910752A (en) * 2023-07-17 2023-10-20 重庆邮电大学 Malicious code detection method based on big data
WO2023202231A1 (en) * 2022-04-20 2023-10-26 北京华睿博视医学影像技术有限公司 Image reconstruction method and apparatus, and electronic device and storage medium
CN116958712A (en) * 2023-09-20 2023-10-27 山东建筑大学 Image generation method, system, medium and device based on prior probability distribution

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019051359A1 (en) * 2017-09-08 2019-03-14 The General Hospital Corporation A system and method for automated labeling and annotating unstructured medical datasets
US20220076829A1 (en) * 2020-09-10 2022-03-10 Delineo Diagnostics, Inc. Method and apparatus for analyzing medical image data in a latent space representation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ye Chen; Guan Wei. A survey of applications of generative adversarial networks. Journal of Tongji University (Natural Science Edition), 2020, (No. 04). *


Similar Documents

Publication Publication Date Title
CN109146988B (en) Incomplete projection CT image reconstruction method based on VAEGAN
CN107610194B (en) Magnetic resonance image super-resolution reconstruction method based on multi-scale fusion CNN
CN110348330B (en) Face pose virtual view generation method based on VAE-ACGAN
CN111127316B (en) Single face image super-resolution method and system based on SNGAN network
CN109949276A (en) A kind of lymph node detection method in improvement SegNet segmentation network
CN105488759B (en) A kind of image super-resolution rebuilding method based on local regression model
CN112634265B (en) Method and system for constructing and segmenting fully-automatic pancreas segmentation model based on DNN (deep neural network)
CN110720915A (en) Brain electrical impedance tomography method based on GAN
CN109741254A (en) Dictionary training and Image Super-resolution Reconstruction method, system, equipment and storage medium
CN116452618A (en) Three-input spine CT image segmentation method
WO2021120069A1 (en) Low-dose image reconstruction method and system on basis of a priori differences between anatomical structures
CN112215878B (en) X-ray image registration method based on SURF feature points
CN107358625B (en) SAR image change detection method based on SPP Net and region-of-interest detection
CN117173543B (en) Mixed image reconstruction method and system for lung adenocarcinoma and pulmonary tuberculosis
CN116503505A (en) Artifact removal method, device, equipment and medium for CBCT image
CN111696167A (en) Single image super-resolution reconstruction method guided by self-example learning
CN111325756A (en) Three-dimensional image artery and vein segmentation method and system based on deep learning network
CN116071270A (en) Electronic data generation method and system for generating countermeasure network based on deformable convolution
CN115601535A (en) Chest radiograph abnormal recognition domain self-adaption method and system combining Wasserstein distance and difference measurement
CN115100306A (en) Four-dimensional cone-beam CT imaging method and device for pancreatic region
CN114332278A (en) OCTA image motion correction method based on deep learning
CN112581513B (en) Cone beam computed tomography image feature extraction and corresponding method
CN112967295A (en) Image processing method and system based on residual error network and attention mechanism
Song et al. Super resolution reconstruction of medical image based on adaptive quad-tree decomposition
CN116977473B (en) Sparse angle CT reconstruction method and device based on projection domain and image domain

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant