CN112750115A - Multi-modal cervical carcinoma pre-lesion image recognition method based on graph neural network - Google Patents


Info

Publication number
CN112750115A
CN112750115A
Authority
CN
China
Prior art keywords
image
neural network
feature
attention
images
Prior art date
Legal status
Pending
Application number
CN202110054445.8A
Other languages
Chinese (zh)
Inventor
Shen Xingfa
Li Shufeng
Yan Ling
Zhao Qingbiao
Liu Lili
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University
Priority to CN202110054445.8A
Publication of CN112750115A
Legal status: Pending

Classifications

    • G06T 7/0012 Biomedical image inspection (G06T Image data processing or generation, in general; G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06N 3/045 Combinations of networks (G06N Computing arrangements based on specific computational models; G06N 3/00 Biological models; G06N 3/02 Neural networks; G06N 3/04 Architecture)
    • G06N 3/08 Learning methods
    • G06T 2207/10068 Endoscopic image (G06T 2207/10 Image acquisition modality)
    • G06T 2207/30004 Biomedical image processing (G06T 2207/30 Subject of image)


Abstract

The invention discloses a multi-modal method for recognizing precancerous cervical lesion images based on a graph convolutional neural network. First, colposcopy images are collected, and each patient's colposcopic images are labeled according to the pathological diagnosis report. A neural network, BG-CNN, is then constructed, comprising a feature extraction network formed by two ResNet18s and a relationship extraction network formed by a graph convolutional neural network. The labeled colposcopic images are input into BG-CNN for training, with the loss function set to the cross-entropy loss, finally yielding a trained model. The disclosed method fully incorporates physicians' clinical diagnostic experience: it combines the acetic acid experiment images and iodine experiment images and integrates a convolutional network with a graph convolutional neural network, so that the associated information between the two image types of the same patient is learned automatically and the colposcopic images are discriminated jointly, markedly improving recognition accuracy.

Description

Multi-modal cervical carcinoma pre-lesion image recognition method based on graph neural network
Technical Field
The invention belongs to the field of medical imaging and relates to a deep-learning method for identifying whether a cervical image captured under a colposcope shows a precancerous lesion.
Background
In current clinical practice in China, cervical cancer screening determines whether a subject has cervical cancer or a precancerous cervical lesion by biopsy. Precancerous cervical lesions are divided into two states, LSIL and HSIL. HSIL, a high-grade precancerous lesion, requires timely resection to prevent further development into cervical cancer. LSIL, a low-grade precancerous lesion, requires only conservative treatment and can return to normal through the patient's own immune system, aided by improved lifestyle and hygiene. Colposcopy is the approach most hospitals use for precancerous cervical lesion screening: the colposcope examines the cervix under 3-7x magnification, and applying a 3% dilute acetic acid solution and Lugol's iodine solution makes abnormal pathological tissue clearly distinguishable from normal tissue. However, the accuracy of colposcopic screening for precancerous lesions depends to a large extent on the physician's subjective judgment, and even some experienced physicians achieve a clinical specificity of only 48%. A scientific, accurate, and fast colposcopic diagnosis method would therefore ease the clinical demand for large numbers of highly experienced physicians.
Some deep-learning methods already exist for screening and analyzing cervical disease images. However, most of them target only the images taken after the acetic acid experiment and ignore the fact that clinicians combine images obtained with different experimental reagents for joint diagnosis.
The disclosed method differs from prior work on recognizing precancerous cervical lesion images under the colposcope. It fully incorporates physicians' clinical diagnostic experience by combining acetic acid and iodine experiment images, and it integrates a convolutional network with a graph convolutional neural network, so that the associated information between the two image types of the same patient is learned automatically and the colposcopic images are discriminated jointly. Recognition accuracy is markedly improved.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides a multi-modal method for recognizing precancerous cervical lesion images based on a graph convolutional neural network, which fully integrates the advantages of convolutional neural networks and graph neural networks to improve the accuracy of colposcopic diagnosis of precancerous cervical lesions.
The proposed neural network BG-CNN mainly comprises two parts: a feature extraction network formed by two ResNet18s, and a relationship extraction network formed by a graph convolutional neural network. The technical solution comprises the following steps:
step one, collecting colposcopy images, and labeling the colposcopy images of each patient according to a pathological diagnosis report.
Step two, constructing two 18-layer ResNets as feature extraction networks to extract the feature map F_a of the acetic acid experiment image and the feature map F_b of the iodine experiment image, respectively. The two feature maps are then passed through two attention modules to generate two attention maps A_a and A_b.
Step three, using attention-based Bilinear Attention Pooling to combine the feature map and attention map of each image extracted in step two, generating a two-dimensional feature matrix for the acetic acid experiment image and for the iodine experiment image, respectively.
Step four, taking each row of the feature matrices from step three as a graph node, and using the K-nearest-neighbor algorithm to find the K nodes closest to each node by Euclidean distance to construct an adjacency matrix. A two-layer graph convolutional neural network is then used to aggregate features between nodes. Finally, the final feature representation is obtained by element-wise multiplication of the features learned by the graph convolutional layers with the features learned by ResNet18.
Step five, inputting the labeled colposcopic images obtained in step one into the neural network BG-CNN constructed in steps two to four for training, with the loss function set to the cross-entropy loss, finally obtaining a trained model. Colposcopic images are then input into the trained model for image detection.
The invention has the following beneficial effects:
The disclosed method fully incorporates physicians' clinical diagnostic experience by combining acetic acid and iodine experiment images, and it integrates a convolutional network with a graph convolutional neural network, so that the associated information between the two image types of the same patient is learned automatically and the colposcopic images are discriminated jointly. Recognition accuracy is markedly improved.
Drawings
FIG. 1 is a diagram of a neural network BG-CNN according to the present invention.
Detailed Description
The method of the present invention is further described below with reference to the accompanying drawings.
The specific operation of labeling the colposcopic images in step one is as follows:
Each patient is classified into one of two categories, LSIL- and HSIL+, based on the pathological diagnosis report. LSIL- comprises normal patients and low-grade squamous intraepithelial lesion (LSIL) patients, and HSIL+ comprises high-grade squamous intraepithelial lesion (HSIL) patients and cervical cancer patients. One acetic acid experiment image and one iodine experiment image are taken from each patient's colposcopy images to form the training set.
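As an illustration of this labeling rule, a minimal Python sketch; the diagnosis strings and the function name are hypothetical, not taken from the patent:

```python
# Hypothetical sketch of the step-one labeling rule: pathology diagnoses are
# collapsed into the two training classes LSIL- (label 0) and HSIL+ (label 1).

def label_patient(diagnosis: str) -> int:
    """Map a pathological diagnosis to the binary training label."""
    lsil_minus = {"normal", "LSIL"}            # no lesion or low-grade lesion
    hsil_plus = {"HSIL", "cervical cancer"}    # high-grade lesion or cancer
    if diagnosis in lsil_minus:
        return 0  # LSIL-
    if diagnosis in hsil_plus:
        return 1  # HSIL+
    raise ValueError(f"unrecognized diagnosis: {diagnosis}")

labels = [label_patient(d) for d in ["normal", "LSIL", "HSIL", "cervical cancer"]]
```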
The detailed process of obtaining the feature map and attention map of each image, as described in step two, is as follows:
2-1. Define the acetic acid experiment image input to the network as X and the iodine experiment image as Y, and denote the two ResNet18 feature extraction networks as Φ_a(·) and Φ_b(·). The acetic acid and iodine experiment image feature maps can then be expressed as:

F_a = Φ_a(X)
F_b = Φ_b(Y)
2-2. The attention module consists of a CNN layer with 3x3 convolution kernels and a BN layer, denoted Atten(·). The attention maps generated from the feature maps of the acetic acid and iodine experiment images can be expressed as:

A_a = relu(Atten(F_a))
A_b = relu(Atten(F_b))

Each attention map A = [a_1, a_2, ..., a_m] has a dimension m determined by the number of convolution kernels of the CNN layer; relu(·) is the activation function.
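A minimal NumPy sketch of such an attention module, assuming 'same' zero padding for the 3x3 convolution and an inference-style per-map batch normalization; function names and shapes are illustrative, not from the patent:

```python
import numpy as np

def conv3x3(F, W):
    # F: (C, H, W_) feature map; W: (m, C, 3, 3) kernels; 'same' zero padding.
    C, H, Wd = F.shape
    m = W.shape[0]
    Fp = np.pad(F, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((m, H, Wd))
    for k in range(m):
        for i in range(H):
            for j in range(Wd):
                out[k, i, j] = np.sum(Fp[:, i:i + 3, j:j + 3] * W[k])
    return out

def attention_maps(F, W, eps=1e-5):
    # Atten(.): 3x3 conv -> per-map normalization (stand-in for BN) -> relu.
    Z = conv3x3(F, W)
    mu = Z.mean(axis=(1, 2), keepdims=True)
    var = Z.var(axis=(1, 2), keepdims=True)
    Z = (Z - mu) / np.sqrt(var + eps)
    return np.maximum(Z, 0.0)  # relu, yielding A = [a_1, ..., a_m]
```

The number of kernels m fixes the number of attention sub-vectors, matching the text above.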
The detailed process of generating the two-dimensional feature matrix from the feature map of each image by Bilinear Attention Pooling, as described in step three, is as follows:
In the attention maps A_a and A_b, each sub-vector a_i can be regarded as one part of the corresponding acetic acid or iodine experiment image, so the feature of each part can be obtained by element-wise multiplication of the feature map F with the i-th attention sub-vector a_i. The formula can be expressed as:

f_i = Γ(F, a_i) = g(a_i ⊙ F), i = 1, 2, ..., m

where Γ(·) denotes Bilinear Attention Pooling, ⊙ denotes element-wise multiplication, a_i is the i-th sub-vector of the attention map, and g(·) is the global average pooling function.
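The per-part pooling above can be sketched in NumPy as follows; F is a C x H x W feature map and A an m x H x W stack of attention maps (the names and shapes are assumptions for illustration):

```python
import numpy as np

def bilinear_attention_pooling(F, A):
    """Gamma(F, A): element-wise multiply each attention map a_i with F and
    global-average-pool, yielding an m x C part-feature matrix."""
    m = A.shape[0]
    C = F.shape[0]
    out = np.zeros((m, C))
    for i in range(m):
        out[i] = (A[i][None, :, :] * F).mean(axis=(1, 2))  # g(a_i ⊙ F)
    return out
```

Each row of the result is one part-feature vector, i.e. one graph node used in step four.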
The specific process of constructing the adjacency matrix with the K-nearest-neighbor algorithm and of performing relationship aggregation with graph convolution, as described in step four, is as follows:
4-1. Step three yields m part-feature vectors for each of the acetic acid and iodine experiment images, giving 2m feature vectors in total. Taking each vector as a graph node, the adjacency matrix K of the whole graph structure is defined as:

K_ij = 1 if f_j ∈ KNN(f_i), and K_ij = 0 otherwise

where KNN(f_i) denotes the K nearest neighbors of node f_i by Euclidean distance, and f_j denotes any node other than f_i.
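A NumPy sketch of the adjacency construction in 4-1; the helper name is hypothetical, and distance ties are broken arbitrarily by the sort:

```python
import numpy as np

def knn_adjacency(feats, k):
    """Build K_ij = 1 iff f_j is among the k Euclidean nearest neighbors of f_i."""
    n = feats.shape[0]
    # Pairwise Euclidean distance matrix.
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a node is not its own neighbor
    K = np.zeros((n, n))
    for i in range(n):
        K[i, np.argsort(d[i])[:k]] = 1.0
    return K
```

Note the result is not necessarily symmetric, since nearest-neighbor relations are directed.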
4-2. Define a node set V and an edge set E. The node set contains the 2m nodes, i.e., f_i ∈ V; when the adjacency matrix entry K_ij = 1, then (f_i, f_j) ∈ E, i.e., there is an edge between f_i and f_j, which yields a complete graph structure G = (V, E). The graph G is then fed into a two-layer graph convolutional neural network to aggregate the features of neighboring nodes. The update formula for each layer is:

H^(l+1) = relu(K̂ H^(l) W^(l))

where H^(l) is the node feature representation at layer l, with H^(0) = [f_1, f_2, ..., f_2m]; K̂ is the normalized adjacency matrix K; and W^(l) is the learnable weight matrix of layer l.
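The two-layer update can be sketched in NumPy; the symmetric normalization with an added self-loop is an assumption, since the patent only states that K̂ is the normalized adjacency matrix:

```python
import numpy as np

def normalize_adj(K):
    # Assumed: K_hat = D^{-1/2} (K + I) D^{-1/2}; the self-loop is standard
    # practice for GCNs but is not spelled out in the text.
    Kt = K + np.eye(K.shape[0])
    d = Kt.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(d))
    return Dinv @ Kt @ Dinv

def gcn_two_layer(H0, K, W1, W2):
    # H^(l+1) = relu(K_hat H^(l) W^(l)), applied twice.
    relu = lambda x: np.maximum(x, 0.0)
    Khat = normalize_adj(K)
    H1 = relu(Khat @ H0 @ W1)
    return relu(Khat @ H1 @ W2)  # H^(2)
```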
4-3. The final relationship feature is expressed as:

y = H^(2) ⊙ [F_a, F_b]

where H^(2) is the feature representation learned after the two-layer graph convolutional network, and F_a and F_b are the feature maps of the acetic acid and iodine experiment images obtained from the ResNet18 networks.
Step five: the labeled colposcopic images obtained in step one are input into the neural network BG-CNN constructed in steps two to four for training, with the loss function set to the cross-entropy loss, finally obtaining a trained model. Colposcopic images are then input into the trained model for image detection.
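The cross-entropy loss used in step five can be sketched in NumPy as a numerically stabilized softmax cross-entropy over the two classes LSIL-/HSIL+; in practice a framework's built-in loss would be used:

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean softmax cross-entropy; logits: (N, 2), labels: length-N int array."""
    z = logits - logits.max(axis=1, keepdims=True)  # shift for numerical stability
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))  # log-softmax
    return -logp[np.arange(len(labels)), labels].mean()
```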

Claims (5)

1. A multi-modal method for recognizing precancerous cervical lesion images based on a graph convolutional neural network, characterized by comprising the following steps:
step one, collecting colposcopy images, and labeling each patient's colposcopic images according to the pathological diagnosis report;
step two, constructing two 18-layer ResNets as feature extraction networks to extract the feature map F_a of the acetic acid experiment image and the feature map F_b of the iodine experiment image, respectively; then passing the two feature maps through two attention modules to generate two attention maps A_a and A_b;
step three, using attention-based Bilinear Attention Pooling to combine the feature map and attention map of each image extracted in step two, generating a two-dimensional feature matrix for the acetic acid experiment image and for the iodine experiment image, respectively;
step four, taking each row of the feature matrices from step three as a graph node, and using the K-nearest-neighbor algorithm to find the K nodes closest to each node by Euclidean distance to construct an adjacency matrix; then using a two-layer graph convolutional neural network to aggregate features between nodes; finally, obtaining the final feature representation by element-wise multiplication of the features learned by the graph convolutional layers with the features learned by ResNet18;
step five, inputting the labeled colposcopic images obtained in step one into the neural network BG-CNN constructed in steps two to four for training, with the loss function set to the cross-entropy loss, finally obtaining a trained model; colposcopic images are then input into the trained model for image detection.
2. The method for recognizing multi-modal precancerous cervical lesion images based on the graph neural network according to claim 1, wherein the specific operation of labeling the colposcopic images in step one is as follows:
each patient is classified into one of two categories, LSIL- and HSIL+; LSIL- comprises normal patients and low-grade squamous intraepithelial lesion (LSIL) patients, and HSIL+ comprises high-grade squamous intraepithelial lesion (HSIL) patients and cervical cancer patients; one acetic acid experiment image and one iodine experiment image are taken from each patient's colposcopy images to form the training set.
3. The method for recognizing multi-modal precancerous cervical lesion images based on the graph neural network according to claim 2, wherein the detailed process of obtaining the feature map and attention map of each image in step two is as follows:
2-1. Define the acetic acid experiment image input to the network as X and the iodine experiment image as Y, and denote the two ResNet18 feature extraction networks as Φ_a(·) and Φ_b(·); the acetic acid and iodine experiment image feature maps can then be expressed as:

F_a = Φ_a(X)
F_b = Φ_b(Y)
2-2. The attention module consists of a CNN layer with 3x3 convolution kernels and a BN layer, denoted Atten(·); the attention maps generated from the feature maps of the acetic acid and iodine experiment images can be expressed as:

A_a = relu(Atten(F_a))
A_b = relu(Atten(F_b))

Each attention map A = [a_1, a_2, ..., a_m] has a dimension m determined by the number of convolution kernels of the CNN layer; relu(·) is the activation function.
4. The method for recognizing multi-modal precancerous cervical lesion images based on the graph neural network according to claim 3, wherein the detailed process of generating the two-dimensional feature matrix from the feature map of each image by Bilinear Attention Pooling in step three is as follows:
in the attention maps A_a and A_b, each sub-vector a_i can be regarded as one part of the corresponding acetic acid or iodine experiment image, so the feature of each part is obtained by element-wise multiplication of the feature map F with the i-th attention sub-vector a_i; the formula can be expressed as:

f_i = Γ(F, a_i) = g(a_i ⊙ F), i = 1, 2, ..., m

where Γ(·) denotes Bilinear Attention Pooling; ⊙ denotes element-wise multiplication; a_i is the i-th sub-vector of the attention map; g(·) is the global average pooling function.
5. The method for recognizing multi-modal precancerous cervical lesion images based on the graph neural network according to claim 4, wherein the specific process of constructing the adjacency matrix with the K-nearest-neighbor algorithm and of performing relationship aggregation with graph convolution in step four is as follows:
4-1. Step three yields m part-feature vectors for each of the acetic acid and iodine experiment images, giving 2m feature vectors in total; taking each vector as a graph node, the adjacency matrix K of the whole graph structure is defined as:

K_ij = 1 if f_j ∈ KNN(f_i), and K_ij = 0 otherwise

where KNN(f_i) denotes the K nearest neighbors of node f_i by Euclidean distance, and f_j denotes any node other than f_i;
4-2. Define a node set V and an edge set E; the node set contains the 2m nodes, i.e., f_i ∈ V; when the adjacency matrix entry K_ij = 1, then (f_i, f_j) ∈ E, i.e., there is an edge between f_i and f_j, which yields a complete graph structure G = (V, E); the graph G is then fed into a two-layer graph convolutional neural network to aggregate the features of neighboring nodes; the update formula for each layer is:

H^(l+1) = relu(K̂ H^(l) W^(l))

where H^(l) is the node feature representation at layer l, with H^(0) = [f_1, f_2, ..., f_2m]; K̂ is the normalized adjacency matrix K; W^(l) is the learnable weight matrix of layer l;
4-3. The final relationship feature is expressed as:

y = H^(2) ⊙ [F_a, F_b]

where H^(2) is the feature representation learned after the two-layer graph convolutional network, and F_a and F_b are the feature maps of the acetic acid and iodine experiment images obtained from the ResNet18 networks.
CN202110054445.8A 2021-01-15 2021-01-15 Multi-modal cervical carcinoma pre-lesion image recognition method based on graph neural network Pending CN112750115A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110054445.8A CN112750115A (en) 2021-01-15 2021-01-15 Multi-modal cervical carcinoma pre-lesion image recognition method based on graph neural network


Publications (1)

Publication Number Publication Date
CN112750115A (en) 2021-05-04

Family

ID=75652104

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110054445.8A Pending CN112750115A (en) 2021-01-15 2021-01-15 Multi-modal cervical carcinoma pre-lesion image recognition method based on graph neural network

Country Status (1)

Country Link
CN (1) CN112750115A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108388841A (en) * 2018-01-30 2018-08-10 浙江大学 Cervical biopsy area recognizing method and device based on multiple features deep neural network
CN109543719A (en) * 2018-10-30 2019-03-29 浙江大学 Uterine neck atypia lesion diagnostic model and device based on multi-modal attention model
CN109859159A (en) * 2018-11-28 2019-06-07 浙江大学 A kind of cervical lesions region segmentation method and device based on multi-modal segmentation network
CN110826576A (en) * 2019-10-10 2020-02-21 浙江大学 Cervical lesion prediction system based on multi-mode feature level fusion
US10692602B1 (en) * 2017-09-18 2020-06-23 Deeptradiology, Inc. Structuring free text medical reports with forced taxonomies
CN111738113A (en) * 2020-06-10 2020-10-02 杭州电子科技大学 Road extraction method of high-resolution remote sensing image based on double-attention machine system and semantic constraint


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113223723A (en) * 2021-05-11 2021-08-06 胡敏雄 Method for predicting multi-modal kidney tumor kidney protection operation difficulty and complications
CN113223723B (en) * 2021-05-11 2023-08-25 福建省立医院 Method for predicting difficulty and complications of kidney-protecting operation of multi-mode kidney tumor
CN113591629A (en) * 2021-07-16 2021-11-02 深圳职业技术学院 Finger three-mode fusion recognition method, system, device and storage medium
CN113591629B (en) * 2021-07-16 2023-06-27 深圳职业技术学院 Finger tri-modal fusion recognition method, system, device and storage medium
CN113469119A (en) * 2021-07-20 2021-10-01 合肥工业大学 Cervical cell image classification method based on visual converter and graph convolution network
CN113469119B (en) * 2021-07-20 2022-10-04 合肥工业大学 Cervical cell image classification method based on visual converter and image convolution network
CN114841970A (en) * 2022-05-09 2022-08-02 北京字节跳动网络技术有限公司 Inspection image recognition method and device, readable medium and electronic equipment
CN114841970B (en) * 2022-05-09 2023-07-18 抖音视界有限公司 Identification method and device for inspection image, readable medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN112750115A (en) Multi-modal cervical carcinoma pre-lesion image recognition method based on graph neural network
AU2019311336B2 (en) Computer classification of biological tissue
WO2018120942A1 (en) System and method for automatically detecting lesions in medical image by means of multi-model fusion
CN111429407B (en) Chest X-ray disease detection device and method based on double-channel separation network
CN109636805B (en) Cervical image lesion area segmentation device and method based on classification prior
CN108388841B (en) Cervical biopsy region identification method and device based on multi-feature deep neural network
Li et al. Lesion-attention pyramid network for diabetic retinopathy grading
CN109102491A (en) A kind of gastroscope image automated collection systems and method
CN106056595A (en) Method for automatically identifying whether thyroid nodule is benign or malignant based on deep convolutional neural network
Luo et al. Retinal image classification by self-supervised fuzzy clustering network
CN110826576B (en) Cervical lesion prediction system based on multi-mode feature level fusion
CN113344864A (en) Ultrasonic thyroid nodule benign and malignant prediction method based on deep learning
Lei et al. Automated detection of retinopathy of prematurity by deep attention network
CN113610118A (en) Fundus image classification method, device, equipment and medium based on multitask course learning
CN115760835A (en) Medical image classification method of graph convolution network
Ay et al. Automated classification of nasal polyps in endoscopy video-frames using handcrafted and CNN features
Noor et al. GastroNet: A robust attention‐based deep learning and cosine similarity feature selection framework for gastrointestinal disease classification from endoscopic images
KR102407248B1 (en) Deep Learning based Gastric Classification System using Data Augmentation and Image Segmentation
Xue et al. CT-based COPD identification using multiple instance learning with two-stage attention
CN116664911A (en) Breast tumor image classification method based on interpretable deep learning
CN112419246A (en) Depth detection network for quantifying esophageal mucosa IPCLs blood vessel morphological distribution
CN110277166B (en) Auxiliary diagnosis system and method for palace laparoscope
CN111768845B (en) Pulmonary nodule auxiliary detection method based on optimal multi-scale perception
Song et al. Classification of cervical lesion images based on CNN and transfer learning
Li et al. Multi-source data fusion for recognition of cervical precancerous lesions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Yan Ling; Shen Xingfa; Li Shufeng; Zhao Qingbiao; Liu Lili
Inventor before: Shen Xingfa; Li Shufeng; Yan Ling; Zhao Qingbiao; Liu Lili