CN113255794A - Medical image classification method based on GoogLeNet network - Google Patents

Medical image classification method based on GoogLeNet network

Info

Publication number
CN113255794A
CN113255794A (application CN202110608436.9A)
Authority
CN
China
Prior art keywords
network
image
image classification
model
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110608436.9A
Other languages
Chinese (zh)
Inventor
杨敬民
陈静
廖健鑫
杨东海
张文杰
方金生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Minnan Normal University
Original Assignee
Minnan Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Minnan Normal University
Priority to CN202110608436.9A
Publication of CN113255794A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a medical image classification method based on a GoogLeNet network, which specifically comprises the following steps: S1, acquiring and preprocessing original CT image data, and dividing the preprocessed CT image data into a test set and a training set; S2, constructing an improved GoogLeNet network model, and inputting the training set into the improved GoogLeNet network model for training to obtain a trained CT image classification model; and S3, inputting the test set into the CT image classification model to obtain a CT image classification result. The method improves the classification accuracy and precision of CT images when training samples are limited, and effectively improves the diagnostic performance on CT images.

Description

Medical image classification method based on GoogLeNet network
Technical Field
The invention belongs to the technical field of computer-aided diagnosis of medical images, and particularly relates to a medical image classification method based on a GoogLeNet network.
Background
Early detection, early confirmation and early isolation are the main methods for controlling the spread of pneumonia. At present, nucleic acid testing remains the "gold standard" for confirmation, and COVID-19 pneumonia can only be confirmed by a positive nucleic acid test. However, nucleic acid testing is not fast enough, and its limited sensitivity can lead to false negatives. Some experts therefore propose imaging diagnosis as an auxiliary criterion for confirming COVID-19 pneumonia. Because medical resources are scarce, reading large numbers of medical images imposes a heavy diagnostic workload on radiologists and is inefficient. Using computer vision to assist the diagnosis of suspected COVID-19 CT images can therefore effectively reduce the missed-diagnosis rate of COVID-19 and relieve the shortage of medical resources, which is of great significance; computer-aided diagnosis has accordingly become one of the current research hotspots.
At present, research on medical images by scholars at home and abroad falls mainly into two categories. The first is based on traditional machine learning. For example, Zhao Keyang et al., in "Machine learning-assisted tumor diagnosis", applied machine learning to high-quality digital pathological sections to judge the nature, grade and prognosis of tumors. Sun Lei et al. applied a nonlinear support vector machine for computer-aided diagnosis in "Mathematical programming support vector machines based on medical images". In "Machine learning-based classification of Alzheimer's disease course", Fan et al. classified Alzheimer's disease using models built with support vector machines and random forests. However, traditional machine learning methods cannot effectively mine the rich information contained in medical images.
Another group of scholars studies medical images with deep learning. Liu Di et al. used a convolutional neural network to remove false-positive nodules in "Deep learning-based pulmonary nodule detection in medical images" and conducted experiments on the LUNA16 dataset; Yang et al., in "COVID-CT-Dataset: A CT Scan Dataset about COVID-19", classified pneumonia CT images with DenseNet and achieved a classification accuracy of 84.7%. However, both the traditional machine learning methods and the deep learning methods described above require a large amount of labeled image data to train a model.
Therefore, finding a medical image classification method that achieves high classification accuracy and precision without requiring a large amount of labeled image data for training has become a key concern for researchers.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a medical image classification method based on the GoogLeNet network, which improves the classification accuracy and precision of CT images when training samples are limited and effectively improves the diagnostic performance on CT images.
In order to achieve this purpose, the invention provides a medical image classification method based on a GoogLeNet network, which specifically comprises the following steps:
s1, acquiring and preprocessing original data information of the CT image, and dividing the preprocessed CT image data into a test set and a training set;
s2, constructing an improved GoogLeNet network model, and inputting the training set into the improved GoogLeNet network model for training to obtain a trained CT image classification model;
and S3, inputting the test set into the CT image classification model to obtain a CT image classification result.
Preferably, the S1 is specifically:
s1.1, scanning a lesion part of a human body through a medical radiation instrument to obtain original data information of a CT image formed by arranging pixels with different gray scales according to a matrix;
S1.2, carrying out image segmentation, feature extraction, label annotation and data enhancement on the original CT image data to obtain a labeled sample set;
and S1.3, dividing the sample set into a training set and a testing set according to a proportion.
Preferably, the classification labels include: a normal label and multiple lesion-category labels.
Preferably, the data enhancement comprises: mirroring, rotation, scaling, clipping, translation, Gaussian noise, brightness adjustment, saturation adjustment and contrast adjustment.
Preferably, the S2 is specifically:
S2.1, constructing an improved GoogLeNet network model according to the feature visualization maps and the modified structure of the GoogLeNet feature network, and pre-training the improved GoogLeNet network model;
S2.2, performing low-level feature transfer learning with the pre-trained GoogLeNet network, based on the commonality of medical images and natural images in low-level features such as texture and edges;
and S2.3, inputting the training set into the improved GoogLeNet network model for training to obtain a trained CT image classification model.
Preferably, modifying the structure of the GoogLeNet feature network specifically comprises:
replacing the Inception structure in the GoogLeNet feature network with a Fast-Inception structure, wherein the h-swish function is adopted as the activation function, the Softmax function is adopted as the classifier, and the number of neurons in the fully connected layer is 24; and then merging the 3 × 3 convolution and 5 × 5 convolution branches of the existing Inception module in the GoogLeNet model.
Preferably, S2.3 is specifically:
performing optimized deep training of the improved GoogLeNet feature network with an adaptive gradient estimation method, and updating the weight matrices and biases by gradient descent; and adjusting the frozen layers in the improved GoogLeNet feature network to obtain a trained CT image classification model.
Preferably, the S3 is specifically:
s3.1, inputting the test set into the CT image classification model to obtain normal or multiple lesion type CT images;
s3.2, respectively storing the normal CT images and the CT images of multiple lesion types in two different folders;
and S3.3, automatically assigning the corresponding labels to the normal and lesion-category CT images through the imageDatastore function in MATLAB, and then comparing the predicted and true labels to obtain the classification accuracy, precision, sensitivity and specificity.
Compared with the prior art, the invention has the beneficial effects that:
the invention firstly carries out a series of preprocessing such as data enhancement, image processing and the like on CT original image data, fully utilizes the internal relation between medical images and natural images to carry out feature migration, simultaneously adopts feature visualization, improves the feature extraction efficiency and the image classification accuracy of a network layer, and reduces the redundant calculation amount in the training process. And finally, the improved network is used for adjustment and classification, and a CT image classification model with good fitting is obtained. The method can improve the classification accuracy and classification accuracy of the lung CT image under the condition of limited training samples, and effectively improves the diagnosis effect of the lung CT image.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a method of the present invention;
fig. 2 is a structural replacement diagram of the GoogLeNet feature network of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Example 1
Referring to fig. 1, the invention provides a medical image classification method based on a GoogLeNet network, which specifically includes the following steps:
s1, acquiring and preprocessing original data information of the CT image, and dividing the preprocessed CT image data into a test set and a training set;
firstly, a machine of medical radiology department is adopted to scan a human body, after X rays emitted by the machine penetrate through the human body, the detected lesion part is captured by an X ray detector, and organs (such as brain, spinal cord, mediastinum, lung, gall bladder, pancreas, pelvic organs and the like) consisting of soft tissues can be displayed due to different transmittances of different organs of the human body to the X rays, so that a CT image which can reflect different structural tissues of the human body can be obtained, and a lesion image can be well shown on the background of an anatomical image. The CT image is composed of a certain number of pixels with different gray scales from black to white which are arranged in a matrix, the CT image represented in a matrix form is used as original data, the pixels reflect the X-ray absorption coefficient of corresponding voxels, the sizes and the numbers of the pixels of different CT scans and obtained images are different, the sizes can be different from 1.0 multiplied by 1.0mm and 0.5 multiplied by 0.5mm, the number can be 256 multiplied by 256, namely 65536 or 512 multiplied by 512, the smaller the pixels, the more the number, the finer the composed image and the higher the spatial resolution.
Then, the invention preprocesses the raw CT image data represented in matrix form, including: image segmentation, feature extraction, classification label setting and data enhancement;
the classification labels include a normal label and multiple lesion-category labels; the data enhancement includes mirroring, rotation, scaling, clipping, translation, Gaussian noise, brightness adjustment, saturation adjustment and contrast adjustment.
A large number of acquired medical CT images are first divided uniformly at random by a fixed-size window to obtain a series of sub-images. Feature vectors are then extracted from the sub-images to form a feature vector set, i.e., the sample set, which is annotated with the normal and lesion-category labels. Data augmentation (mirroring, rotation, scaling, clipping, translation, Gaussian noise, brightness adjustment, saturation adjustment and contrast adjustment) is then applied to each category of the sample set to fit the input of the GoogLeNet network and to increase the diversity of the training samples, making full use of the limited number of labeled CT images.
Finally, each augmented category of the sample set is divided into a training set and a test set, using a split of 66.7%/33.3%, 75%/25% or 90%/10%.
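A minimal sketch of this data-preparation step in MATLAB (the environment used later for imageDatastore) is given below, assuming the labeled sub-images are stored in one folder per class and that the 75%/25% split is chosen; the folder name 'ct_patches' and the augmentation ranges are illustrative and not taken from the patent:

```matlab
% Labeled CT sub-images, one sub-folder per class (e.g. normal/, lesion/)
imds = imageDatastore('ct_patches', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');

% 75% / 25% split into training and test sets (one of the ratios listed above)
[imdsTrain, imdsTest] = splitEachLabel(imds, 0.75, 'randomized');

% A subset of the augmentations: mirroring, rotation, scaling, translation
augmenter = imageDataAugmenter( ...
    'RandXReflection',  true, ...
    'RandRotation',     [-15 15], ...
    'RandScale',        [0.9 1.1], ...
    'RandXTranslation', [-10 10], ...
    'RandYTranslation', [-10 10]);

% Resize to the 224x224x3 input expected by GoogLeNet; grayscale CT slices
% are replicated to three channels
inputSize = [224 224 3];
augTrain = augmentedImageDatastore(inputSize, imdsTrain, ...
    'DataAugmentation', augmenter, 'ColorPreprocessing', 'gray2rgb');
augTest  = augmentedImageDatastore(inputSize, imdsTest, ...
    'ColorPreprocessing', 'gray2rgb');
```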
S2, constructing an improved GoogLeNet network model, inputting the divided training set into the improved GoogLeNet network model for pre-training, and adjusting model parameters to obtain a trained CT image classification model.
Firstly, constructing an improved GoogLeNet network model, and pre-training the improved GoogLeNet network model;
the conventional google lenet network is 144 layers deep. According to the research on the specific fine-grained category of the CT image of the lesion position, the main nodes of the traditional GoogLeNet network are subjected to feature visualization. Through feature visualization, when the texture contour of a diseased position cannot be normally identified due to the appearance of features in most channels, the extraction sensitivity of a network layer behind the determined layer to the low-level features of the CT image is low, and through the method, the migration network depth is finally determined, which provides a basis for the adjustment of the network hyper-parameters.
Following this principle, the total depth of the improved GoogLeNet network is determined to be 130 layers according to the feature visualization maps: the first 125 layers of the original GoogLeNet network are kept, and the modified last 5 layers comprise 2 pooling layers, 1 loss layer, 1 softmax layer and 1 output layer. The first 125 layers are kept because most feature maps after layer 125 appear as blocky patch regions, which is unfavorable for fine-grained feature recognition in medical images. Then, referring to fig. 2, the Inception structure in the GoogLeNet feature network is replaced with a Fast-Inception structure, the h-swish function is used as the activation function, the classifier adopts the Softmax function, and the number of neurons in the fully connected layer is 24. The improved GoogLeNet feature network contains 22 weight layers. The 3 × 3 convolution and 5 × 5 convolution branches of the existing Inception module in the GoogLeNet model are then merged, the merge being guided by the principle of increasing recognition accuracy while reducing the number of parameters. In the improved GoogLeNet feature network, data passes through a Dropout layer after dimensionality reduction, and the output ratio of the Dropout layer is 60%. The activation function of the GoogLeNet feature network is h-swish(x) = x · ReLU6(x + 3)/6, where ReLU6(x) = min(max(x, 0), 6), which improves the recognition accuracy of the original GoogLeNet feature network while keeping the computational cost relatively low.
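The MATLAB sketch below shows how such a replacement of the classification head could be set up with the Deep Learning Toolbox, assuming the pretrained googlenet model and a two-class (normal vs. lesion) task; the layer names 'loss3-classifier' and 'output' follow the stock googlenet network, while the patent-specific Fast-Inception blocks, h-swish activation layers and 24-neuron layer would require custom layers and are only indicated in comments:

```matlab
net = googlenet;              % pretrained GoogLeNet (ImageNet weights)
lgraph = layerGraph(net);

% h-swish as defined above: h_swish(x) = x .* ReLU6(x + 3) / 6
% (shown only as a plain function; wiring it into the network would need a custom layer)
hswish = @(x) x .* min(max(x + 3, 0), 6) / 6;

% Standard transfer-learning head swap for the CT task. The patent's full
% modification (keeping the first 125 layers, Fast-Inception blocks with
% h-swish activations, a 24-neuron fully connected layer) is not reproduced here.
numClasses = 2;               % normal vs. lesion (illustrative)
newFc = fullyConnectedLayer(numClasses, 'Name', 'fc_ct', ...
    'WeightLearnRateFactor', 10, 'BiasLearnRateFactor', 10);
lgraph = replaceLayer(lgraph, 'loss3-classifier', newFc);
lgraph = replaceLayer(lgraph, 'output', classificationLayer('Name', 'ct_output'));
```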
Based on the commonality of medical images and natural images in low-level features such as texture and edges, the invention performs low-level feature transfer learning on the sample dataset with the pre-trained network, thereby alleviating the shortage of labeled CT samples.
Then, an adaptive gradient estimation method is used for optimized deep training of the improved GoogLeNet feature network, and the weight matrices and biases are updated by gradient descent; that is, the network parameters are gradually optimized and adjusted by gradient descent so that the error between the training labels and the final predicted labels decreases and the network fits the training data better.
Finally, the frozen layers in the improved GoogLeNet feature network are adjusted: the first K layers of the GoogLeNet feature network are fixed and only the remaining N-K layers are trained, which accelerates the training process. Different network layers are frozen by adjusting the value of K, and the trained CT image classification model is finally obtained.
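A sketch of this training step in MATLAB, continuing from the previous sketches (lgraph, augTrain, augTest): the freeze loop zeroes the learning-rate factors of the first K learnable layers; createLgraphUsingConnections is the small helper distributed with the MathWorks GoogLeNet transfer-learning example rather than a built-in function, so its availability is an assumption, and K, the epoch count and the learning rate are illustrative values:

```matlab
% Freeze the first K layers: their pretrained weights are not updated
K = 110;                                   % illustrative freeze depth
layers = lgraph.Layers;
connections = lgraph.Connections;
for i = 1:min(K, numel(layers))
    if isprop(layers(i), 'WeightLearnRateFactor')
        layers(i).WeightLearnRateFactor = 0;
        layers(i).BiasLearnRateFactor   = 0;
    end
end
% Rebuild the layer graph (helper from the MathWorks transfer-learning example)
lgraph = createLgraphUsingConnections(layers, connections);

% Adaptive gradient estimation (Adam) with gradient-descent weight updates
options = trainingOptions('adam', ...
    'InitialLearnRate', 1e-4, ...
    'MiniBatchSize',    32, ...
    'MaxEpochs',        20, ...
    'Shuffle',          'every-epoch', ...
    'ValidationData',   augTest, ...
    'Verbose',          false);

ctNet = trainNetwork(augTrain, lgraph, options);   % trained CT classification model
```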
And S3, inputting the CT images to be identified in the test set into the trained CT image classification model to obtain corresponding CT image classification results.
Using the image labels and the classification results, the test images are stored in two different folders according to the physicians' diagnoses. The different categories (normal or lesion-category CT images) can be assigned their corresponding labels automatically with the imageDatastore function in MATLAB, and comparing the predicted and true labels yields the classification accuracy, precision, sensitivity and specificity.
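A sketch of this evaluation step under the same assumptions (trained network ctNet and the test datastores from the earlier sketches); the metrics are the standard confusion-matrix definitions, with class 1 treated as the positive class for illustration:

```matlab
% Ground-truth labels come from the folder names read by imageDatastore
YTrue = imdsTest.Labels;
YPred = classify(ctNet, augTest);           % predicted labels for the test set

% Confusion matrix: rows are true classes, columns are predicted classes
C = confusionmat(YTrue, YPred);
TP = C(1,1); FN = C(1,2); FP = C(2,1); TN = C(2,2);

accuracy    = (TP + TN) / sum(C(:));
precision   = TP / (TP + FP);
sensitivity = TP / (TP + FN);
specificity = TN / (TN + FP);

fprintf('Acc %.3f  Prec %.3f  Sens %.3f  Spec %.3f\n', ...
    accuracy, precision, sensitivity, specificity);
```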
In summary, the invention first performs a series of preprocessing steps such as data enhancement and image processing on the original CT image data, makes full use of the intrinsic relationship between medical images and natural images for feature transfer, and adopts feature visualization, which improves the feature extraction efficiency and image classification accuracy of the network layers and reduces redundant computation during training. Finally, the improved network is fine-tuned for classification, yielding a well-fitted CT image classification model. The method improves the classification accuracy and precision of CT images when training samples are limited, and effectively improves the diagnostic performance on CT images.
The above-described embodiments are merely illustrative of the preferred embodiments of the present invention, and do not limit the scope of the present invention, and various modifications and improvements of the technical solutions of the present invention can be made by those skilled in the art without departing from the spirit of the present invention, and the technical solutions of the present invention are within the scope of the present invention defined by the claims.

Claims (8)

1. A medical image classification method based on a GoogLeNet network, characterized by specifically comprising the following steps:
s1, acquiring and preprocessing original data information of the CT image, and dividing the preprocessed CT image data into a test set and a training set;
s2, constructing an improved GoogLeNet network model, and inputting the training set into the improved GoogLeNet network model for training to obtain a trained CT image classification model;
and S3, inputting the test set into the CT image classification model to obtain a CT image classification result.
2. The GoogLeNet network-based medical image classification method according to claim 1, wherein S1 is specifically:
s1.1, scanning a lesion part of a human body through a medical radiation instrument to obtain original data information of a CT image formed by arranging pixels with different gray scales according to a matrix;
S1.2, carrying out image segmentation, feature extraction, label annotation and data enhancement on the original CT image data to obtain a labeled sample set;
and S1.3, dividing the sample set into a training set and a testing set according to a proportion.
3. The GoogLeNet network-based medical image classification method according to claim 2, wherein the classification labels include: a normal label and multiple lesion-category labels.
4. The GoogLeNet network-based medical image classification method according to claim 2, wherein the data enhancement comprises: mirroring, rotation, scaling, clipping, translation, Gaussian noise, brightness adjustment, saturation adjustment and contrast adjustment.
5. The GoogLeNet network-based medical image classification method according to claim 1, wherein S2 is specifically:
S2.1, constructing an improved GoogLeNet network model according to the feature visualization maps and the modified structure of the GoogLeNet feature network, and pre-training the improved GoogLeNet network model;
S2.2, performing low-level feature transfer learning with the pre-trained GoogLeNet network, based on the commonality of medical images and natural images in low-level features such as texture and edges;
and S2.3, inputting the training set into the improved GoogLeNet network model for training to obtain a trained CT image classification model.
6. The GoogLeNet network-based medical image classification method according to claim 5, wherein modifying the structure of the GoogLeNet feature network is specifically:
replacing the Inception structure in the GoogLeNet feature network with a Fast-Inception structure, wherein the h-swish function is adopted as the activation function, the Softmax function is adopted as the classifier, and the number of neurons in the fully connected layer is 24; and then merging the 3 × 3 convolution and 5 × 5 convolution branches of the existing Inception module in the GoogLeNet model.
7. The GoogLeNet network-based medical image classification method according to claim 1, wherein S2.3 is specifically:
performing optimized deep training of the improved GoogLeNet feature network with an adaptive gradient estimation method, and updating the weight matrices and biases by gradient descent; and adjusting the frozen layers in the improved GoogLeNet feature network to obtain a trained CT image classification model.
8. The GoogLeNet network-based medical image classification method according to claim 1, wherein S3 is specifically:
s3.1, inputting the test set into the CT image classification model to obtain normal or multiple lesion type CT images;
s3.2, respectively storing the normal CT images and the CT images of multiple lesion types in two different folders;
and S3.3, automatically assigning the corresponding labels to the normal and lesion-category CT images through the imageDatastore function in MATLAB, and then comparing the predicted and true labels to obtain the classification accuracy, precision, sensitivity and specificity.
CN202110608436.9A 2021-06-01 2021-06-01 Medical image classification method based on GoogLeNet network Pending CN113255794A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110608436.9A CN113255794A (en) 2021-06-01 2021-06-01 Medical image classification method based on GoogLeNet network

Publications (1)

Publication Number Publication Date
CN113255794A (en) 2021-08-13

Family

ID=77185720

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110608436.9A Pending CN113255794A (en) 2021-06-01 2021-06-01 Medical image classification method based on GoogLeNet network

Country Status (1)

Country Link
CN (1) CN113255794A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113707312A (en) * 2021-09-16 2021-11-26 人工智能与数字经济广东省实验室(广州) Blood vessel quantitative identification method and device based on deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443268A (en) * 2019-05-30 2019-11-12 杭州电子科技大学 A kind of benign pernicious classification method of liver cancer CT image based on deep learning
CN111104961A (en) * 2019-10-31 2020-05-05 太原理工大学 Method for classifying breast cancer based on improved MobileNet network
CN111563542A (en) * 2020-04-24 2020-08-21 空间信息产业发展股份有限公司 Automatic plant classification method based on convolutional neural network
CN111709425A (en) * 2020-05-26 2020-09-25 漳州卫生职业学院 Lung CT image classification method based on feature migration

Similar Documents

Publication Publication Date Title
CN110930416B (en) MRI image prostate segmentation method based on U-shaped network
CN111563897B (en) Breast nuclear magnetic image tumor segmentation method and device based on weak supervision learning
CN113902761B (en) Knowledge distillation-based unsupervised segmentation method for lung disease focus
CN111275686B (en) Method and device for generating medical image data for artificial neural network training
CN112700461B (en) System for pulmonary nodule detection and characterization class identification
Popescu et al. Retinal blood vessel segmentation using pix2pix gan
CN113539402B (en) Multi-mode image automatic sketching model migration method
CN111767952A (en) Interpretable classification method for benign and malignant pulmonary nodules
CN112785581A (en) Training method and device for extracting and training large blood vessel CTA (computed tomography angiography) imaging based on deep learning
CN115661029A (en) Pulmonary nodule detection and identification system based on YOLOv5
CN111383759A (en) Automatic pneumonia diagnosis system
CN114445328A (en) Medical image brain tumor detection method and system based on improved Faster R-CNN
CN113255794A (en) Medical image classification method based on GoogLeNet network
CN113902702A (en) Pulmonary nodule benign and malignant auxiliary diagnosis system based on computed tomography
CN116385467B (en) Cerebrovascular segmentation method based on self-supervision learning and related equipment
CN112488971A (en) Medical image fusion method for generating countermeasure network based on spatial attention mechanism and depth convolution
CN117036288A (en) Tumor subtype diagnosis method for full-slice pathological image
CN116883341A (en) Liver tumor CT image automatic segmentation method based on deep learning
CN115526898A (en) Medical image segmentation method
Zhou et al. Pcrlv2: A unified visual information preservation framework for self-supervised pre-training in medical image analysis
CN114463339A (en) Medical image segmentation method based on self-attention
CN114359308A (en) Aortic dissection method based on edge response and nonlinear loss
CN117197156B (en) Lesion segmentation method and system based on double decoders UNet and Transformer
CN117649400B (en) Image histology analysis method and system under abnormality detection framework
CN117437514B (en) Colposcope image mode conversion method based on CycleGan

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2021-08-13)