CN112465118B - Low-rank generation type countermeasure network construction method for medical image generation - Google Patents

Low-rank generation type countermeasure network construction method for medical image generation Download PDF

Info

Publication number
CN112465118B
CN112465118B CN202011343299.2A CN202011343299A CN112465118B CN 112465118 B CN112465118 B CN 112465118B CN 202011343299 A CN202011343299 A CN 202011343299A CN 112465118 B CN112465118 B CN 112465118B
Authority
CN
China
Prior art keywords
rank
low
convolution
model
generation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011343299.2A
Other languages
Chinese (zh)
Other versions
CN112465118A (en)
Inventor
高静
陈志奎
赵文瀚
姚晨辉
李朋
张佳宁
Current Assignee
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN202011343299.2A priority Critical patent/CN112465118B/en
Publication of CN112465118A publication Critical patent/CN112465118A/en
Application granted granted Critical
Publication of CN112465118B publication Critical patent/CN112465118B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

A method for constructing a low-rank generative adversarial network (GAN) for medical image generation, belonging to the field of deep learning, comprises the following steps: 1) approximate the full-rank convolution operations in a GAN model by their principal components, and construct a low-rank convolution operation based on the calculation rules of tensor CP decomposition; 2) using the low-rank convolution operation of step 1, build low-rank dimension convolutional layers and low-rank channel convolutional layers to replace the full-rank convolutional layers, add ReLU activation functions and batch normalization terms between the low-rank convolutional layers to adjust their data distribution, and design a low-rank generative model; 3) fuse the low-rank generative model with a full-rank discriminative model to construct a complete low-rank GAN for medical images. The method has the following effect: the low-rank adversarial generation approach markedly reduces the computational resources, such as FLOPs, memory, video memory, and storage, occupied in medical image generation tasks while retaining good generation quality.

Description

Low-rank generation type countermeasure network construction method for medical image generation
Technical Field
The invention belongs to the technical field of artificial intelligence for medical images, and discloses a method for constructing a low-rank generative adversarial network oriented to medical image generation.
Background
Medical images are an important scientific research tool: they strongly promote research in the life sciences and have become an indispensable means of assisting medical workers in disease diagnosis and treatment. High-quality, high-precision medical image data can help doctors accurately assess a patient's condition and pathology. With the continuous development of the basic sciences, medical imaging methods such as X-ray imaging, magnetic resonance imaging, ultrasonic imaging, and computed tomography (CT) have also advanced. As medical image information grows richer, the number of medical images produced during diagnosis and treatment has multiplied, steadily increasing the workload of diagnosing physicians and radiologists and posing great challenges to the application of existing medical images.
In recent years, with the rapid development of artificial intelligence, computer-aided diagnosis (CAD) technology has become an important approach to solving the above problems. The core idea of CAD is to combine medical imaging, artificial intelligence, digital image processing, and related fields by means of the strong computational and analytical capability of computers; to establish an efficient analysis system for medical images and diagnostic information; to provide doctors with efficient and reliable diagnostic opinions; and to greatly reduce their workload. At present, artificial-intelligence-based CAD is widely applied in the medical imaging field and has achieved good results in tasks such as medical image segmentation, medical image fusion, and medical image super-resolution reconstruction.
As an important research direction within artificial intelligence, deep learning has extremely high application value in the medical imaging field. A large body of research shows that a trained supervised deep learning model can effectively extract the hidden high-dimensional feature information in a medical image and, compared with traditional methods, markedly improves performance in tasks such as medical image segmentation and reconstruction; examples include the deep image segmentation network U-Net, widely applied to medical image segmentation, and the fast region-based lesion detection network Fast R-CNN, applied to lung nodule recognition, glaucoma diagnosis, and similar tasks.
However, due to the limitations of ultrasonic imaging, magnetic resonance imaging, and X-ray imaging, medical image data are inevitably disturbed by noise, and the imaging resolution often falls short of expectations, so existing supervised deep learning methods perform poorly at extracting the detailed features of medical images. Moreover, supervised deep learning requires a large amount of labeled medical image data; although medical institutions accumulate a large volume of image data during diagnosis, labeling that data still requires physicians with professional backgrounds. To address these challenges, related research has proposed medical image processing methods based on unsupervised deep learning.
The generative adversarial network (GAN) is a novel unsupervised deep learning model comprising a generative model and a discriminative model; it learns the data distribution of real images through the adversarial game between the two models and thereby generates realistic target images. Compared with traditional deep learning methods, a GAN does not depend on large amounts of labeled medical image data and can effectively model the detailed features of medical images. GANs have been successfully applied to medical image fusion, medical image segmentation, high-resolution reconstruction of medical images, and other tasks. In recent years, researchers have proposed a large number of GAN derivatives, such as the deep convolutional GAN (DCGAN), Wasserstein GAN (WGAN), Wasserstein GAN with gradient penalty (WGAN-GP), SeqGAN, and the conditional GAN (cGAN). These methods improve the performance of GANs in the medical imaging field in terms of computing architecture, network structure, training method, and other aspects.
A GAN can effectively learn medical image patterns, model the distribution of image data, and generate realistic images, greatly improving the accuracy of tasks such as medical image classification, recognition, and segmentation. However, existing GAN methods based on a full-rank generator and discriminator contain a large number of redundant learnable parameters, causing the model to consume substantial computational resources during training and inference, including floating-point operations (FLOPs), memory, and video memory. Moreover, as the number of model layers increases, the resources the model occupies grow sharply. The high resource occupancy of GANs severely limits their application to medical image generation tasks.
The invention therefore provides a method for constructing a low-rank generative adversarial network for medical image generation. The method combines tensor CP low-rank decomposition with the generative adversarial network architecture to compress the redundant parameters of the image generation model, and on that basis designs a low-rank medical image generation method. While preserving the quality of the generated medical images, the low-rank network structure effectively reduces the computational resources the model occupies during training and inference, thereby greatly improving the performance of medical image generation and extending its usability in real diagnosis and treatment environments.
Disclosure of Invention
The invention provides a method for constructing a low-rank generative adversarial network oriented to medical image generation. The method adopts the calculation rules of tensor CP decomposition to design low-rank convolutional layers that approximate the principal components of full-rank convolutional layers, and uses an adversarial framework to model the detailed features of medical image data from its real distribution and rapidly generate high-quality medical images. In addition, the method constructs a network structure based on tensor CP decomposition, and adds batch normalization and activation functions to the network to adjust the distribution of its outputs and introduce non-linearity, so that model parameters are compressed, computational resource consumption is reduced, and computation is accelerated while the medical image generation quality is preserved.
In order to ensure that the generative confrontation network model obtained by the method achieves the above effects, the technical scheme adopted by the invention is as follows:
step 1, approximating full-rank convolution operation in a GAN model by using a principal component mode, and constructing low-rank convolution operation based on a calculation rule of tensor CP decomposition.
Step 2, constructing a calculation layer of a low-rank generation model by using the low-rank convolution operation of the step 1 and replacing the low-rank dimension convolution layer and the low-rank channel convolution layer with a full-rank convolution layer; adding a ReLU activation function and a batch regularization term between calculation layers, adjusting the data distribution of the low-rank convolutional layer, and designing a low-rank generation model.
And 3, fusing the low-rank generation model and the full-rank discrimination model to construct a complete medical image low-rank generation type countermeasure network.
The method provided by the invention has the following effects: addressing the excessive computational resource consumption of generative adversarial networks in medical image generation tasks, the invention provides a construction method for a low-rank generative adversarial network oriented to medical image generation, which greatly reduces the computational resource consumption of the model and improves its performance while preserving the quality of the generated medical images. Experiments show that the method effectively reduces the computational resources consumed in medical image generation tasks, including FLOPs, memory, and video memory.
Drawings
Fig. 1 is a framework diagram of a construction method of a low-rank generative countermeasure network.
FIG. 2 is a schematic diagram of a deep convolutional neural network model.
Fig. 3 is a schematic diagram of a low rank generative model based on tensor CP decomposition.
Fig. 4 ISIC-2017 skin lesion dataset.
Figure 5 ISIC-2017 skin lesion data after preprocessing.
FIG. 6 results of generation of the original DCGAN model.
FIG. 7 shows the result of the low rank GAN model proposed by the present invention.
Detailed Description
The specific implementation steps of the method for constructing a low-rank generative adversarial network oriented to medical images are described below with reference to the accompanying drawings:
step 1, constructing a low-rank transposition convolution calculation layer
Convolutional layers preserve the translation invariance of data features, and convolution kernels that perceive a large number of implicit patterns in the data are applied in various classic generative adversarial network models. However, the convolutional layers of existing generative adversarial models are full-rank convolutions with considerable redundancy between convolution kernels, which greatly increases model complexity and therefore consumes large amounts of computational resources during training and inference. To reduce the complexity of the generative adversarial model and the computational cost of training and inference, the invention constructs a low-rank convolutional layer based on the calculation rules of tensor CP decomposition and approximates the full-rank convolutions in the GAN model by their mode principal components. The specific calculation process of this step is as follows:
A discretized convolution operation can be expressed in the form of equation (1): as shown in Fig. 2, an input tensor U of shape X_u × Y_u × S is convolved with the original full-rank convolution kernel K of shape d × d × S × T to obtain an output tensor V of shape X_v × Y_v × T:

$$V(x,y,t)=\sum_{i=x-\xi}^{x+\xi}\;\sum_{j=y-\xi}^{y+\xi}\;\sum_{s=1}^{S}K(i-x+\xi,\;j-y+\xi,\;s,\;t)\,U(i,j,s)\tag{1}$$
where K denotes the four-dimensional convolution kernel tensor and ξ is the half-width of the kernel tensor. Because the application scenario of the method is medical image generation, the convolutional layers (or transposed convolutional layers) in the generative adversarial network model are applied to batches of images, so the same convolution operation can be expressed as a mapping from a tensor U(N, C_in, H_in, W_in) to a tensor V(N, C_out, H_out, W_out), where H and W correspond to the spatial dimensions, C_in and C_out to the input and output channels, and N to the batch. The transposed convolution (or deconvolution) operation adopts the same representation.
The convolution kernel tensor of the convolution operation is next represented in the form of a tensor CP decomposition. As a tensor decomposition method that generalizes low-rank representation to multiple dimensions, the rank-R CP decomposition of a two-dimensional tensor of shape N × M can be expressed in the following form:
$$A(n,m)=\sum_{r=1}^{R}K_a(n,r)\,K_b(m,r)\tag{2}$$

where A denotes the two-dimensional tensor and $K_a\in\mathbb{R}^{N\times R}$, $K_b\in\mathbb{R}^{M\times R}$ are its factor matrices.
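For the two-dimensional case, the rank-R CP decomposition coincides with an ordinary rank-R matrix factorization, so it can be sketched with a truncated SVD. The following numpy snippet is an illustration under that assumption, not code from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
R = 2
# Build an exactly rank-2 matrix A of shape 6 x 5.
A = rng.standard_normal((6, R)) @ rng.standard_normal((R, 5))

# A truncated SVD yields one valid pair of CP factor matrices.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Ka = U[:, :R] * s[:R]        # shape (6, R)
Kb = Vt[:R, :].T             # shape (5, R)

# Equation (2): A(n, m) = sum_r Ka(n, r) * Kb(m, r)
A_rec = np.einsum('nr,mr->nm', Ka, Kb)
print(np.allclose(A, A_rec))  # True: rank-2 factors reconstruct A exactly
```

Because A was built to have rank 2, keeping R = 2 singular components reconstructs it exactly; for a higher-rank tensor the same truncation gives the best rank-R approximation in the least-squares sense.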
the low rank convolution parameter proposed by the present invention has four-dimensional tensors corresponding to two spatial dimensions, one output channel and one output channel, so the present invention needs to generalize the tensor CP decomposition representation to a four-dimensional form, that is:
Figure BDA0002799144950000043
therefore, the low rank tensor convolution kernel designed by the present invention is expressed in the form of:
Figure BDA0002799144950000044
then, the decomposed low-rank kernel tensor expression (4) is substituted into expression (1) of the convolution operation, and the components of the spatial dimension s and the t tensor are replaced, so that a new low-rank convolution is constructed:
Figure BDA0002799144950000045
in equation (5), the kernel tensor K s The invention is expressed by adopting a low-rank channel convolution kernel with the shape of 1 multiplied by S multiplied by R, and K is expressed by the same principle y Using a low-rank dimensional convolution kernel representation of 1 xdxr x Using a low-rank dimensional convolution kernel representation of the shape dX1 XRXR, K t A low rank channel convolution kernel of shape 1 × 1 × R × T is used for representation. Where ξ is the half-width of the kernel tensor, S and T are the number of output channels and the output channels of different batches, respectively, and R is the rank of decomposition.
The full-rank convolutional layer of equation (1) can thus be factorized into a superposition of the low-rank convolution computations of equation (5); that is, a full-rank convolutional layer can be reconstructed as a low-rank convolution computation layer, as shown in Fig. 2.
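The algebra behind this factorization can be checked numerically: build a kernel from random CP factors via equation (4) and verify that the four sequential low-rank convolutions of equation (5) reproduce the full-rank convolution of equation (1). The following numpy sketch uses a valid (unpadded) cross-correlation with arbitrary small sizes; it illustrates the mathematics rather than the patent's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
d, S, T, R = 3, 2, 4, 5           # kernel width, in/out channels, rank
H = W = 6

# CP factors: K_x (d,R), K_y (d,R), K_s (S,R), K_t (T,R)
Kx, Ky = rng.standard_normal((d, R)), rng.standard_normal((d, R))
Ks, Kt = rng.standard_normal((S, R)), rng.standard_normal((T, R))
# Equation (4): assemble the full four-dimensional kernel K(i, j, s, t)
K = np.einsum('ir,jr,sr,tr->ijst', Kx, Ky, Ks, Kt)

U = rng.standard_normal((H, W, S))
Ho, Wo = H - d + 1, W - d + 1

# Equation (1): full-rank convolution (valid cross-correlation)
V_full = np.zeros((Ho, Wo, T))
for x in range(Ho):
    for y in range(Wo):
        V_full[x, y] = np.einsum('ijst,ijs->t', K, U[x:x+d, y:y+d, :])

# Equation (5): four sequential low-rank convolutions
A = np.einsum('xys,sr->xyr', U, Ks)        # 1x1 channel conv, S -> R
B = np.zeros((Ho, W, R))                   # d x 1 dimension conv
for x in range(Ho):
    B[x] = np.einsum('ir,iyr->yr', Kx, A[x:x+d])
C = np.zeros((Ho, Wo, R))                  # 1 x d dimension conv
for y in range(Wo):
    C[:, y] = np.einsum('jr,xjr->xr', Ky, B[:, y:y+d])
V_low = np.einsum('xyr,tr->xyt', C, Kt)    # 1x1 channel conv, R -> T

print(np.allclose(V_full, V_low))  # True
```

The two paths agree to floating-point precision, confirming that a full-rank kernel admitting an exact rank-R CP decomposition can be computed as the cascade of two channel convolutions and two one-dimensional spatial convolutions.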
Step 2, constructing a low-rank generation model
The low-rank generative adversarial network model provided by the invention comprises a low-rank generative model and a discriminative model. Unlike a full-rank generative adversarial model for medical images, the method uses the low-rank convolution operation of step 1 to design low-rank dimension convolutions and low-rank feature-channel convolutions and to construct the computation layers of the low-rank generative model, compressing the data dimensions, reducing the redundancy within feature channels, and shrinking the model parameters. Meanwhile, in constructing the low-rank generative model, low-rank dimension convolutional layers and low-rank channel convolutional layers replace full-rank convolutional layers, increasing the network depth and capturing medical image patterns more effectively. The network structure of the low-rank generative model designed by the invention is shown in Table 1.
TABLE 1 network architecture of the low rank generative model constructed in the present invention
ConvTranspose2d(100,306,kernel_size=(1,1),stride=(1,1))
ConvTranspose2d(306,306,kernel_size=(1,4),stride=(1,1))
ConvTranspose2d(306,306,kernel_size=(4,1),stride=(1,1))
ConvTranspose2d(306,512,kernel_size=(1,1),stride=(1,1))
ConvTranspose2d(512,384,kernel_size=(1,1),stride=(2,2))
ConvTranspose2d(384,384,kernel_size=(1,4),stride=(1,1),padding=(0,1))
ConvTranspose2d(384,384,kernel_size=(4,1),stride=(1,1),padding=(1,0))
ConvTranspose2d(384,256,kernel_size=(1,1),stride=(1,1))
ConvTranspose2d(256,192,kernel_size=(1,1),stride=(2,2))
ConvTranspose2d(192,192,kernel_size=(1,4),stride=(1,1),padding=(0,1))
ConvTranspose2d(192,192,kernel_size=(4,1),stride=(1,1),padding=(1,0))
ConvTranspose2d(192,128,kernel_size=(1,1),stride=(1,1))
ConvTranspose2d(128,96,kernel_size=(1,1),stride=(2,2))
ConvTranspose2d(96,96,kernel_size=(1,4),stride=(1,1),padding=(0,1))
ConvTranspose2d(96,96,kernel_size=(4,1),stride=(1,1),padding=(1,0))
ConvTranspose2d(96,64,kernel_size=(1,1),stride=(1,1))
ConvTranspose2d(64,33,kernel_size=(1,1),stride=(2,2))
ConvTranspose2d(33,33,kernel_size=(1,4),stride=(1,1),padding=(0,1))
ConvTranspose2d(33,33,kernel_size=(4,1),stride=(1,1),padding=(1,0))
ConvTranspose2d(33,3,kernel_size=(1,1),stride=(1,1))
The network structure provided by the invention is composed of several groups of transposed convolutions. The first two parameters of each transposed convolution denote the numbers of channels of the input and output tensors, respectively. kernel_size is the shape of the convolution kernel: the layers in Table 1 whose kernel size is 1 × 1 are the channel convolutions introduced by the invention, while the layers with kernel shapes 1 × 4 and 4 × 1 are the dimension convolutions introduced by the invention. stride is the step size of the convolution, its two parameters giving the step length in the two spatial dimensions. padding is the padding of the convolution, its two parameters giving the pixel widths padded at the image edges in the two spatial dimensions.
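The spatial sizes produced by Table 1 can be traced with the standard transposed-convolution output formula, H_out = (H_in − 1) · stride − 2 · padding + kernel_size. The following plain-Python sketch copies the layer list from Table 1 and confirms that the generator maps a 100-channel 1 × 1 latent code to a 3 × 64 × 64 image:

```python
# (channels_in, channels_out, (kh, kw), (sh, sw), (ph, pw)) per Table 1
layers = [
    (100, 306, (1, 1), (1, 1), (0, 0)), (306, 306, (1, 4), (1, 1), (0, 0)),
    (306, 306, (4, 1), (1, 1), (0, 0)), (306, 512, (1, 1), (1, 1), (0, 0)),
    (512, 384, (1, 1), (2, 2), (0, 0)), (384, 384, (1, 4), (1, 1), (0, 1)),
    (384, 384, (4, 1), (1, 1), (1, 0)), (384, 256, (1, 1), (1, 1), (0, 0)),
    (256, 192, (1, 1), (2, 2), (0, 0)), (192, 192, (1, 4), (1, 1), (0, 1)),
    (192, 192, (4, 1), (1, 1), (1, 0)), (192, 128, (1, 1), (1, 1), (0, 0)),
    (128, 96, (1, 1), (2, 2), (0, 0)),  (96, 96, (1, 4), (1, 1), (0, 1)),
    (96, 96, (4, 1), (1, 1), (1, 0)),   (96, 64, (1, 1), (1, 1), (0, 0)),
    (64, 33, (1, 1), (2, 2), (0, 0)),   (33, 33, (1, 4), (1, 1), (0, 1)),
    (33, 33, (4, 1), (1, 1), (1, 0)),   (33, 3, (1, 1), (1, 1), (0, 0)),
]

def transposed_out(size, kernel, stride, pad):
    # transposed-convolution output-size formula (output_padding = 0)
    return (size - 1) * stride - 2 * pad + kernel

c, h, w = 100, 1, 1   # latent code: 100 channels, 1 x 1 spatial extent
for cin, cout, (kh, kw), (sh, sw), (ph, pw) in layers:
    assert c == cin, "channel counts must chain between layers"
    h = transposed_out(h, kh, sh, ph)
    w = transposed_out(w, kw, sw, pw)
    c = cout
print((c, h, w))  # (3, 64, 64)
```

Note how each stride-2 1 × 1 layer leaves an odd size (7, 15, 31, 63) that the following 1 × 4 and 4 × 1 dimension convolutions round up to the next power of two.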
Introducing the low-rank dimension convolutional layers and low-rank channel convolutional layers into the generative model increases the model depth, allowing more abstract features to be learned, but it also increases the difficulty of training and slows the model's convergence. The invention therefore makes the following adjustments.
First, to improve the training of the model, batch normalization terms are added to the dimension convolutional layers and channel convolutional layers, correcting the input distribution of each neuron in the network toward a standard normal distribution and adjusting the distribution of the data output by the low-rank convolution operations.
Second, to speed up convergence and preserve the learning capacity of the generative model, a ReLU activation function is added to every low-rank dimension convolutional layer and channel convolutional layer in the network, so that gradients propagate through these layers more quickly during gradient descent.
Finally, the low-rank computation layer is obtained by adding the normalization term and the non-linear activation function, and the low-rank generative model is constructed by stacking these computation layers; the network structure of the low-rank generative model constructed by the method is shown in Table 2.
TABLE 2 Low rank Generation model network architecture constructed in accordance with the present invention
(Table 2 is rendered as an image in the source: per the description, it is the transposed-convolution stack of Table 1 with a BatchNorm2d term and a ReLU activation following each low-rank convolutional layer, and a Tanh activation after the final output layer.)
Here BatchNorm2d denotes the batch normalization term added between convolutional layers to adjust the data distribution of the convolution outputs, and ReLU and Tanh are the added activation functions.
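The effect of the BatchNorm2d term between low-rank layers, pulling each channel's distribution toward a standard normal before the ReLU, can be sketched in numpy. This is an illustration only (training-mode statistics; the learnable scale and shift of batch normalization are omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
# A batch of feature maps (N, C, H, W) with a shifted, widened distribution,
# as might come out of a low-rank transposed-convolution layer.
x = 5.0 + 3.0 * rng.standard_normal((8, 4, 16, 16))

# Batch normalization over the (N, H, W) axes: one mean/variance per channel.
mean = x.mean(axis=(0, 2, 3), keepdims=True)
var = x.var(axis=(0, 2, 3), keepdims=True)
x_bn = (x - mean) / np.sqrt(var + 1e-5)

# ReLU supplies the non-linearity between the low-rank layers.
x_out = np.maximum(x_bn, 0.0)

print(float(x_bn.mean()), float(x_bn.std()))  # close to 0.0 and 1.0
print(float(x_out.min()))                      # 0.0: ReLU clips negatives
```

After normalization every channel has (approximately) zero mean and unit variance regardless of the scale of the preceding convolution, which is what stabilizes gradient descent through the deeper low-rank stack.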
Step 3, constructing the low-rank generative adversarial network for medical images
The low-rank generative model obtained in step 2 and the full-rank discriminative model are spliced into a low-rank generative adversarial network for generating medical images. The network structure of the full-rank discriminator is shown in Table 3, and the structure of the low-rank generative adversarial network model constructed by the invention is shown in Fig. 3.
Table 3 discriminator network structure adopted in the construction method of the present invention
Conv2d(3,64,kernel_size=(4,4),stride=(2,2),padding=(1,1))LeakyReLU
Conv2d(64,128,kernel_size=(4,4),stride=(2,2),padding=(1,1))BatchNorm2dLeakyReLU
Conv2d(128,256,kernel_size=(4,4),stride=(2,2),padding=(1,1))BatchNorm2dLeakyReLU
Conv2d(256,512,kernel_size=(4,4),stride=(2,2),padding=(1,1))BatchNorm2dLeakyReLU
Conv2d(512,1,kernel_size=(4,4),stride=(1,1))Sigmoid
Here BatchNorm2d denotes the batch normalization term added between convolutional layers to adjust the data distribution of the convolution outputs, and LeakyReLU and Sigmoid are the added activation functions.
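Symmetrically to the generator, the full-rank discriminator of Table 3 reduces a 3 × 64 × 64 image to a single score. The standard convolution size formula, H_out = ⌊(H_in + 2 · padding − kernel) / stride⌋ + 1, traced through Table 3 in plain Python:

```python
layers = [  # (cin, cout, kernel, stride, pad) per Table 3
    (3, 64, 4, 2, 1),
    (64, 128, 4, 2, 1),
    (128, 256, 4, 2, 1),
    (256, 512, 4, 2, 1),
    (512, 1, 4, 1, 0),
]

def conv_out(size, kernel, stride, pad):
    # standard convolution output-size formula
    return (size + 2 * pad - kernel) // stride + 1

c, h = 3, 64            # square input, so one spatial size suffices
trace = [(c, h)]
for cin, cout, k, s, p in layers:
    assert c == cin
    h = conv_out(h, k, s, p)
    c = cout
    trace.append((c, h))
print(trace)  # [(3, 64), (64, 32), (128, 16), (256, 8), (512, 4), (1, 1)]
```

Each 4 × 4 stride-2 layer halves the spatial size, and the final unpadded 4 × 4 layer collapses the 4 × 4 map to the single 1 × 1 logit that the Sigmoid turns into a real/fake probability.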
The method comprises the following steps:
the whole process of the invention is divided into three parts: designing a low-rank transposition convolution calculation layer, constructing a low-rank generation model and constructing a low-rank generation type countermeasure network of the medical image. Specifically, the low-rank dimension convolutional layer and the low-rank pass convolutional layer are designed based on a tensor CP decomposition rule, then the low-rank dimension convolutional layer and the low-rank pass convolutional layer are adopted, after a batch regularization term and an activation function are added, a medical image generation model is built layer by layer, finally, based on a countermeasure game idea, a low-rank generator and a full-rank discriminator are spliced into a low-rank generation type countermeasure network model, and a mode of the medical image is learned.
And (4) verification result:
according to the invention, a large number of experiments are carried out by using the ISIC-2017 skin injury data set, and the effectiveness of the model is verified. Specifically, the ISIC-2017 dataset comprised 2000 pixel anisometric dermoscopic image components. As shown in fig. 4, the image of ISIC-2017 is divided into benign and malignant skin lesion images. The present invention converts the original image of the data set into a 64 x 64 pixel image, the processed image being shown in fig. 5.
The effect of the model is verified by the generation quality of the images and the compression of the model. Specifically, because ISIC-2017 consists of lesion images rather than natural-scene images, the JS divergence (Jensen-Shannon divergence) is adopted as the evaluation index of image generation quality. For the compression effect, the number of floating-point operations (FLOPs) and the number of model parameters are adopted as the evaluation criteria.
The JS divergence is a commonly used index for comparing data probability distributions and can effectively evaluate the distance between the distribution of generated images and that of real images. It is defined as:

$$JS(P\|Q)=\frac{1}{2}\,KL\!\left(P\,\Big\|\,\frac{P+Q}{2}\right)+\frac{1}{2}\,KL\!\left(Q\,\Big\|\,\frac{P+Q}{2}\right)$$
where KL denotes the Kullback-Leibler divergence, defined as:

$$KL(P\|Q)=\sum_{x}P(x)\log\frac{P(x)}{Q(x)}$$
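For discrete distributions the two formulas can be written directly in numpy. This is an illustrative helper (not the patent's evaluation code); base-2 logarithms are assumed, which bounds the JS divergence by 1:

```python
import numpy as np

def kl(p, q):
    """KL(P || Q) = sum_x P(x) * log2(P(x) / Q(x)), with 0 * log(0/q) = 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def js(p, q):
    """JS(P || Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M), M = (P + Q) / 2."""
    m = (np.asarray(p, float) + np.asarray(q, float)) / 2
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.5, 0.5, 0.0]
q = [0.0, 0.5, 0.5]
print(js(p, p))            # 0.0  -- identical distributions
print(js(p, q))            # 0.5  -- partially overlapping distributions
print(js([1, 0], [0, 1]))  # 1.0  -- disjoint supports: the base-2 upper bound
```

Unlike the plain KL divergence, JS is symmetric and stays finite even when the two distributions have disjoint support, which is why it is usable as a distance score between generated and real image statistics.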
The invention adopts the typical medical image generation model DCGAN as the comparison model. The experimental results obtained by training each model for 200 epochs are shown in Fig. 6, Fig. 7, and Table 4: Fig. 6 shows the generation results of the DCGAN model, Fig. 7 the generation results of the low-rank adversarial generation method of the invention, and Table 4 the generator compression performance of the models.
TABLE 4 Comparison of performance results

Model | Number of parameters | FLOPs  | JS divergence
DCGAN | 27.430G              | 3.577M | 97-105
CPGAN | 20.921G              | 2.985M | 97-112
On one hand, as the comparison of model performance in Table 4 shows, the invention effectively reduces the computational resources consumed by the model, including memory, video memory, and storage space (in Table 4, M = 10^6 and G = 10^9). Specifically, the proposed low-rank generative adversarial network construction method reduces the number of parameters by 23.73% and the number of floating-point operations (FLOPs) by 19.07%. The dimension convolutions and channel convolutions adopted in the low-rank convolution operation use smaller dimensions and fewer channels in their computation, greatly reducing the number of model parameters; in addition, the low-rank generative adversarial network architecture designed by the invention effectively cascades the low-rank convolutional layers and reduces the number of floating-point operations required.
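The direction of the saving can be checked by direct counting: a full-rank d × d transposed convolution with S input and T output channels costs d · d · S · T weights, while a CP-style chain like one stage of Table 1 costs S·R + d·R·R + d·R·R + R·T. The snippet below is a hypothetical per-block comparison (biases ignored; sizes chosen to mirror the 512 → 256 stage of Table 1, not the whole-model figures reported above):

```python
def full_rank_params(d, S, T):
    # one d x d (transposed) convolution, S -> T channels, no bias
    return d * d * S * T

def low_rank_params(d, S, R, T):
    # the four-layer chain of Table 1:
    # 1x1 S->R, 1xd R->R, dx1 R->R, 1x1 R->T, no bias
    return S * R + d * R * R + d * R * R + R * T

# Mirror the 512 -> 256 stage of Table 1: d = 4, rank R = 384.
d, S, R, T = 4, 512, 384, 256
full = full_rank_params(d, S, T)    # 2,097,152
low = low_rank_params(d, S, R, T)   # 1,474,560
print(full, low, 1 - low / full)    # roughly 30% fewer weights for this block
```

The per-block saving depends strongly on the chosen rank R; the patent's 23.73% figure is for the whole generator, where 1 × 1 layers and the small final stages dilute the effect.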
On the other hand, as shown in Fig. 6, Fig. 7, and Table 4, the low-rank generative adversarial network structure constructed by the invention effectively preserves the medical image generation quality; in particular, on the ISIC skin lesion dataset it achieves a generation effect equivalent to the full-rank generative adversarial model DCGAN. The low-rank convolution operation designed by the invention can effectively learn the principal components of the medical image data patterns; the constructed low-rank generative adversarial network architecture can effectively learn the data distribution of real medical images; and the activation functions and batch normalization terms added to the model improve its stability.
In summary, the low-rank generative adversarial network model provided by the invention greatly reduces the computational resources consumed by the model, including FLOPs, memory, video memory, and storage, while effectively preserving the medical image generation quality.

Claims (1)

1. A method for constructing a low-rank generative adversarial network for medical image generation, characterized by comprising the following steps:
step 1, constructing a low-rank transposition convolution calculation layer;
a piece-quantized convolution operation is expressed in the form of equation (1), with a shape X u ×Y u The X S input tensor U is convolved by a convolution kernel K to obtain the shape of X v ×Y v An output tensor V of x T;
Figure FDA0002799144940000011
k represents a four-dimensional convolution kernel tensor, xi is the half width of the kernel tensor, H and W correspond to spatial dimensions, C corresponds to an output channel, and N corresponds to the output channels of different batches; similarly, the same representation method is adopted for the transposition convolution or deconvolution operation;
the nuclear tensor K in equation (1) is represented by CP decomposition to the following form:
Figure FDA0002799144940000012
continuously substituting the decomposed kernel tensor expression (4) into the expression (1) of the original convolution operation, and replacing component positions expressed as space dimensions s and t tensor to obtain the expression (1) approximate calculation of the original convolution operation:
Figure FDA0002799144940000013
in equation (5), the kernel tensor $K_s$ is represented by a low-rank channel convolution kernel of shape 1 × 1 × S × R; similarly, $K_y$ is represented by a low-rank dimension convolution kernel of shape 1 × d × R × R, $K_x$ by a low-rank dimension convolution kernel of shape d × 1 × R × R, and $K_t$ by a low-rank channel convolution kernel of shape 1 × 1 × R × T, where ξ is the half-width of the kernel tensor, S and T are the numbers of input and output channels respectively, and R is the decomposition rank;
step 2, constructing a low-rank generative model;
constructing the computation layers of the low-rank generative model using the low-rank spatial convolutions and low-rank channel convolutions obtained in step 1;
the constructed computation layer comprises four low-rank convolution layers whose kernels are K_s, K_y, K_x, and K_t; a batch normalization term is inserted after each convolution, correcting the input distribution of every neuron in the network towards a standard normal distribution and adjusting the distribution of the data output by each low-rank convolution operation, thereby improving the training of the model;
a ReLU activation function is added after every batch normalization term in the computation layer, so that gradients propagate more quickly through the low-rank spatial convolution layers and channel convolution layers during gradient descent, raising the convergence speed of the model while preserving the learning capability of the generative model;
finally, a low-rank computation layer augmented with batch normalization terms and ReLU activation terms is obtained, and stacking such layers yields the low-rank generative model;
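The batch normalization and ReLU terms described above can be illustrated with a minimal NumPy sketch (not the patent's implementation; the channel-last layout, parameter names, and sizes are assumptions). Normalizing each channel over the batch and spatial axes pulls the activations towards a standard normal distribution before the nonlinearity:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize each channel over the batch and spatial axes to
    # approximately zero mean and unit variance, then rescale.
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(1)
x = rng.normal(loc=3.0, scale=2.0, size=(16, 8, 8, 4))  # N, H, W, C

y = batch_norm(x)
# Per-channel statistics are standardized after normalization.
assert np.allclose(y.mean(axis=(0, 1, 2)), 0.0, atol=1e-3)
assert np.allclose(y.std(axis=(0, 1, 2)), 1.0, atol=1e-2)

z = relu(y)
assert (z >= 0).all()
```

In the model, one such batch-norm + ReLU pair would follow each of the four low-rank convolutions (K_s, K_y, K_x, K_t) in a computation layer.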
step 3, constructing a low-rank generative adversarial network for medical images;
splicing the low-rank generative model obtained in step 2 with a full-rank discriminative model to form a low-rank generative adversarial network for generating medical images.
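The resource savings come from replacing each d × d × S × T kernel in the generator with the four factor kernels of step 1. A quick parameter count (the layer sizes here are hypothetical, chosen only for illustration) shows the effect:

```python
def full_rank_params(d, S, T):
    # One full d x d x S x T (transposed-)convolution kernel.
    return d * d * S * T

def low_rank_params(d, S, T, R):
    # Four factor kernels: 1x1xSxR, 1xdxRxR, dx1xRxR, 1x1xRxT.
    return S * R + d * R * R + d * R * R + R * T

# Hypothetical layer: 5x5 kernel, 256 -> 256 channels, rank 32.
d, S, T, R = 5, 256, 256, 32
full = full_rank_params(d, S, T)
low = low_rank_params(d, S, T, R)
print(full)                      # 1638400
print(low)                       # 26624
print(f"{full / low:.1f}x fewer parameters")
```

The FLOP count of the factorized layer shrinks by a similar factor, since each of the four convolutions is applied over the same spatial grid with far smaller kernels; the discriminator is left full-rank, as in step 3.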
CN202011343299.2A 2020-11-26 2020-11-26 Low-rank generation type countermeasure network construction method for medical image generation Active CN112465118B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011343299.2A CN112465118B (en) 2020-11-26 2020-11-26 Low-rank generation type countermeasure network construction method for medical image generation

Publications (2)

Publication Number Publication Date
CN112465118A CN112465118A (en) 2021-03-09
CN112465118B true CN112465118B (en) 2022-09-16

Family

ID=74808387


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113900860A (en) * 2021-10-27 2022-01-07 重庆邮电大学 CGAN-based data recovery method for wireless sensor network fault node
CN117238458B (en) * 2023-09-14 2024-04-05 广东省第二人民医院(广东省卫生应急医院) Critical care cross-mechanism collaboration platform system based on cloud computing

Citations (2)

Publication number Priority date Publication date Assignee Title
CN110428045A (en) * 2019-08-12 2019-11-08 电子科技大学 Depth convolutional neural networks compression method based on Tucker algorithm
CN111652236A (en) * 2020-04-21 2020-09-11 东南大学 Lightweight fine-grained image identification method for cross-layer feature interaction in weak supervision scene


Non-Patent Citations (1)

Title
Low-rank image generation method based on generative adversarial networks; Zhao Shuyang et al.; Acta Automatica Sinica; 2018-03-09 (Issue 05); full text *


Similar Documents

Publication Publication Date Title
Yu et al. Tensorizing GAN with high-order pooling for Alzheimer’s disease assessment
Chen et al. Deep feature learning for medical image analysis with convolutional autoencoder neural network
Zhou et al. Deep learning of the sectional appearances of 3D CT images for anatomical structure segmentation based on an FCN voting method
Pang et al. A novel end-to-end classifier using domain transferred deep convolutional neural networks for biomedical images
Li et al. Deep learning attention mechanism in medical image analysis: Basics and beyonds
Kooi et al. Classifying symmetrical differences and temporal change for the detection of malignant masses in mammography using deep neural networks
Mall et al. A comprehensive review of deep neural networks for medical image processing: Recent developments and future opportunities
CN108197629B (en) Multi-modal medical image feature extraction method based on label correlation constraint tensor decomposition
Tataru et al. Deep Learning for abnormality detection in Chest X-Ray images
Balaji et al. Medical image analysis with deep neural networks
Chen et al. Generative adversarial U-Net for domain-free medical image augmentation
Zhao et al. Diagnose like a radiologist: Hybrid neuro-probabilistic reasoning for attribute-based medical image diagnosis
Zhang et al. CT image classification based on convolutional neural network
Du et al. Parameter-free similarity-aware attention module for medical image classification and segmentation
Chen et al. Generative adversarial u-net for domain-free few-shot medical diagnosis
Xia et al. Deep residual neural network based image enhancement algorithm for low dose CT images
Miller et al. Self-supervised deep learning to enhance breast cancer detection on screening mammography
Zheng et al. Pneumoconiosis identification in chest X-ray films with CNN-based transfer learning
Cheng et al. Multi-attention mechanism medical image segmentation combined with word embedding technology
Tao et al. Tooth CT Image Segmentation Method Based on the U-Net Network and Attention Module.
Shen et al. MLF-IOSC: multi-level fusion network with independent operation search cell for low-dose CT denoising
Zhang et al. Nucleus image segmentation method based on GAN and FCN model
Kumar et al. Medical images classification using deep learning: a survey
Wang et al. An improved CapsNet applied to recognition of 3D vertebral images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant