CN113066054B - Cervical OCT image feature visualization method for computer-aided diagnosis - Google Patents

Cervical OCT image feature visualization method for computer-aided diagnosis

Info

Publication number
CN113066054B
CN113066054B (application CN202110268171.2A)
Authority
CN
China
Prior art keywords
cervical tissue
cervical
feature
model
classification model
Prior art date
Legal status
Active
Application number
CN202110268171.2A
Other languages
Chinese (zh)
Other versions
CN113066054A (en)
Inventor
马于涛
余沁怡
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202110268171.2A priority Critical patent/CN113066054B/en
Publication of CN113066054A publication Critical patent/CN113066054A/en
Application granted granted Critical
Publication of CN113066054B publication Critical patent/CN113066054B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10101 Optical tomography; Optical coherence tomography [OCT]


Abstract

The invention discloses a visualization method that extracts edge features of cervical tissue from the parameters of a convolutional neural network (CNN) classification model for cervical OCT images. By depicting the edges of the cervical tissue features the model has learned, the method improves model interpretability from the perspective of edge features and thereby better assists doctors in lesion diagnosis. The method comprises the following steps: 1) collecting cervical OCT images and training a CNN classification model; 2) performing data enhancement on the trained parameters of the cervical tissue classification model; 3) performing bidirectional gradient convolution on the activated model neurons according to the different cervical tissue classification results and extracting edge features; 4) creating a visual heat map from the edge feature extraction results.

Description

Cervical OCT image feature visualization method for computer-aided diagnosis
Technical Field
The invention provides a cervical OCT image feature visualization method for computer-aided diagnosis, belonging to the fields of deep learning interpretability and medical imaging.
Background
The rapid development of deep learning is driving the artificial intelligence transformation of many industries, and "deep learning +" technical paradigms are emerging across high-tech fields. In medical imaging, artificial-intelligence-based auxiliary diagnosis has achieved excellent diagnostic performance in both scientific research and clinical practice.
Cervical cancer is one of the most common diseases in women, and optical coherence tomography (OCT), which provides doctors with high-resolution cross-sectional images of tissue, is a major means of detecting it. Building on the characteristics of the detection process, the research community has incorporated deep-learning image classification models into the screening workflow, accelerating diagnosis with detection accuracy comparable to that of top human experts.
However, a deep learning model is a typical "input-output" model: beyond the high accuracy it achieves on a dataset, the algorithm itself is hard to understand, the "black box" inside the model remains difficult to open, and the model cannot provide humans with the basis on which it produces a given result. A completely uninterpretable model is limited in many application areas because it cannot supply more reliable supporting information; users want to know what knowledge the model has learned from the data and how it reaches its final decision.
Through deep learning visualization algorithms, the user can be shown why the model produced a given result. In medical imaging in particular, accurately presenting the basis of a deep learning model's diagnosis to doctors is an essential step. However, cervical OCT images exhibit high granularity and are grayscale, so conventional visualization algorithms struggle to meet the interpretability requirements of this task, and no visualization method currently exists for OCT-based cervical tissue detection models.
Disclosure of Invention
In view of the above background and the poor performance of mainstream visualization algorithms on cervical OCT images, which are highly granular and grayscale, the method performs bidirectional gradient convolution on the computed feature data to extract the edges of the features learned by the deep learning model being explained. The invention provides a cervical OCT image feature visualization method for computer-aided diagnosis, comprising the following steps:
1) acquiring OCT images of female cervical tissue as training samples
The cervical OCT image is a two-dimensional or three-dimensional structural image of cervical tissue obtained by optical coherence tomography (OCT) scanning;
2) training the image classification model by using the training sample, and determining the trained network parameters, wherein the method comprises the following substeps:
21) constructing an image classification model and randomly initializing network parameters
The image classification model is a deep learning network with a convolutional neural network as its backbone, comprising convolutional layers, pooling layers, fully-connected layers and a regression classification layer; the parameters are the convolution kernels of the convolutional layers and the weight and bias vectors of the hidden and output layers;
22) repeatedly iterating the training of the network and updating the image classification model parameters until the fluctuation intervals of all weights w and bias vectors b are smaller than a threshold ε;
3) creating a characteristic visualization heat map according to the input cervical tissue OCT map, comprising the following sub-steps:
31) extracting the score y_c of the trained model before the regression classification layer, where c is the cervical tissue category decided by the image classification model;
32) extracting the feature mapping parameter matrices A_k of the convolutional layer to be visualized, where k indexes the K feature maps output by the convolutional layer;
33) calculating the gradient of the class score with respect to each feature map: ∂y_c/∂A_k;
34) applying parameter enhancement operations to A_k, including normalization, threshold rejection, binarization and noise reduction, and then performing bidirectional gradient convolution to obtain the feature mapping matrix A'_k;
35) multiplying the corresponding results of 33) and 34), normalizing, accumulating all the results, then passing through a ReLU function and normalizing to obtain the cervical tissue feature parameter values;
36) visually displaying the cervical tissue feature parameter values obtained in 35) as a color heat map.
Further, the image classification model in the step 2) is a convolutional neural network classification model.
Further, in the step 34), the specific process is as follows:
341) normalizing and threshold-rejecting the k feature maps obtained by passing the input cervical OCT image through a given convolutional layer
The data of each feature map are normalized to an interval [m, n], and data below a set threshold z are rejected by setting them to 0;
342) binarization denoising of the normalized feature map group
Based on the new feature map data distribution interval [p, q] obtained from 341), each feature value is offset towards p or q, redistributing the data and reducing noise;
343) bidirectional gradient convolution extraction of edge features
According to 342), gradient convolution in the x and y directions is performed on the k feature maps with convolution kernels G_x and G_y, which are 3 × 3 horizontal and vertical gradient factors respectively; the two convolution results are combined into the gradient G = √(G_x² + G_y²), which extracts the edge features of each feature map, i.e. the feature mapping matrix A'_k;
Further, the value of the threshold in 341) is determined by the histogram distribution of the feature map data.
Further, the proportion of the offset in 342) is determined by the histogram distribution of the feature map data.
Further, the convolution kernels G_x, G_y in 343) are respectively (the standard 3 × 3 Sobel operators):
G_x = [ -1 0 +1 ; -2 0 +2 ; -1 0 +1 ]
G_y = [ -1 -2 -1 ; 0 0 0 ; +1 +2 +1 ]
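As a concrete illustration, sub-steps 341) and 343) above can be sketched in Python with NumPy. The interval [m, n] = [-0.5, 0.5], the threshold z and the Sobel-style kernels are assumptions for illustration only; the histogram-based offset denoising of 342) is omitted for brevity.

```python
import numpy as np

def extract_edge_features(feature_maps, m=-0.5, n=0.5, z=0.1722):
    """Sketch of steps 341) and 343): normalize each feature map to [m, n],
    zero out values below threshold z, then apply bidirectional (x / y)
    gradient convolution and combine both results into an edge map."""
    # 3x3 horizontal and vertical gradient factors (Sobel operators assumed).
    gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gy = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

    edges = []
    for a in feature_maps:                # a: one (H, W) feature map A_k
        a = np.asarray(a, dtype=float)
        lo, hi = a.min(), a.max()
        a = (a - lo) / (hi - lo + 1e-8) * (n - m) + m  # normalize to [m, n]
        a = np.where(a < z, 0.0, a)       # threshold rejection (set to 0)
        p = np.pad(a, 1)                  # zero padding for 'same' output size
        h, w = a.shape
        dx = np.zeros_like(a)
        dy = np.zeros_like(a)
        for i in range(h):
            for j in range(w):
                win = p[i:i + 3, j:j + 3]
                dx[i, j] = (win * gx).sum()
                dy[i, j] = (win * gy).sum()
        edges.append(np.sqrt(dx ** 2 + dy ** 2))  # G = sqrt(Gx^2 + Gy^2)
    return np.stack(edges)                # edge maps A'_k, one per input map
```

A vertical step edge in a feature map then produces a strong response along the boundary column and zero response in flat regions, which is exactly the behavior the bidirectional gradient convolution is meant to isolate.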
Further, in the step 35), the specific process is as follows:
351) multiplying the score gradient of each feature map by the data-processed feature mapping matrix A'_k
The score gradient of each class, ∂y_c/∂A_k, is multiplied by the feature mapping matrix A'_k processed in 34), yielding a weighted edge feature map of each feature mapping matrix's contribution to the different categories;
352) summing all weighted edge feature maps through a ReLU function
The group of edge feature maps is passed through a ReLU function and summed to obtain the complete edge feature mapping matrix
L_c = Σ_k ReLU( (∂y_c/∂A_k) · A'_k )
where c is the cervical tissue category judged by the image classification model; the result is then normalized for heat map visualization.
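A minimal sketch of the weighting and summation in 351) and 352), assuming the gradient maps ∂y_c/∂A_k and the processed edge maps A'_k are already available as NumPy arrays of shape (K, H, W):

```python
import numpy as np

def combine_edge_maps(grads, edge_maps):
    """Sketch of steps 351)-352): weight each processed edge map A'_k by the
    gradient of the class score y_c w.r.t. that feature map, pass each
    weighted map through ReLU, sum over k, and normalize to [0, 1] for the
    heat map. In the real model the gradients come from backpropagating y_c."""
    weighted = grads * edge_maps          # (K, H, W): (dy_c/dA_k) * A'_k
    relu = np.maximum(weighted, 0.0)      # ReLU suppresses negative evidence
    heat = relu.sum(axis=0)               # accumulate over the K feature maps
    heat -= heat.min()
    if heat.max() > 0:
        heat /= heat.max()                # normalize for visualization
    return heat
```

The ReLU before summation means edge pixels whose gradients point against class c are discarded, so only edges that support the predicted cervical tissue category survive into the heat map.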
Compared with the prior art, the invention has the following advantages and beneficial effects. On one hand, it provides a new visualization method for CNN feature maps based on extracting their edge features. On the other hand, it introduces this visualization analysis into a deep learning model for classifying cervical OCT images, and the edge-feature-based method disclosed by the invention strengthens the visualization of the model beyond existing visualization effects.
Drawings
To more clearly illustrate the embodiments of the present invention or the solutions in the prior art, the drawings used in their description are briefly introduced below.
FIG. 1 is a schematic flow chart of a feature visualization method.
FIG. 2 is a flow chart of an embodiment of the present invention.
FIG. 3 shows VGG19 model parameters.
FIG. 4 compares mainstream visualization methods such as Grad-CAM and Guided-BackProp with the present method; the second row contains four visualizations based on feature map edges.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a method for visualizing characteristics of a cervical OCT image for computer-aided diagnosis, including the following steps:
1) acquiring OCT image of female cervical tissue as training sample
In this embodiment, 3D cervical tissue samples were obtained by optical coherence tomography scanning: 3172 inflammation, 2464 cyst, 2067 ectropion, 5539 high-grade lesion and 364 cancer samples;
2) training the VGG19 classification model by using the training samples, and determining the trained network parameters;
21) constructing an image classification model and randomly initializing network parameters
Based on the sample categories, the model is set up as a five-class classifier. The accuracy verification method is two-fold cross-validation with backpropagation of weights; the convergence optimization algorithm is Adam with a convergence threshold of 1e-08; to prevent overfitting of the objective function, Dropout is used with probability 0.4; the pooling method is global average pooling; the fully-connected layer is 512-dimensional; and Batch Normalization is adopted to mitigate vanishing gradients;
the input of the model is a 224 x 224 three-dimensional cervical tissue OCT image, and noise reduction and background elimination processing are performed before the input;
22) repeatedly iterating the training of the network and updating the model parameters until the fluctuation intervals of all weights w and biases b are smaller than the threshold ε
In this embodiment, the number of training iterations epoch is set to 200, the size of each Batch is 32, and the flow and output parameters of model training are shown in fig. 2;
Finally, the parameters of an iteration with an accuracy of 0.8438 are selected as the model parameters for the visualization computation;
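The iterate-until-stable criterion of step 22) can be sketched as follows; `step_fn` is a hypothetical stand-in for one epoch of backpropagation, and the epsilon and epoch values follow the embodiment:

```python
import numpy as np

def train_until_stable(step_fn, w0, epsilon=1e-8, max_epochs=200):
    """Sketch of step 22): iterate parameter updates until the fluctuation of
    every weight/bias between consecutive iterations falls below epsilon, or
    the maximum epoch count (200 in the embodiment) is reached."""
    w = np.asarray(w0, dtype=float)
    for epoch in range(max_epochs):
        w_new = step_fn(w)                            # one epoch of updates
        if np.max(np.abs(w_new - w)) < epsilon:       # all fluctuations small
            return w_new, epoch + 1
        w = w_new
    return w, max_epochs
```

With a contracting update such as `lambda w: w * 0.5`, the loop halts well before the epoch limit once successive parameter changes drop below epsilon.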
3) creating a characteristic visualization heat map according to the input cervical tissue OCT map;
An OCT image of cervical tissue is input, and the model judges the tissue condition to be high-grade;
31) extracting the score y_c of the trained model before the softmax regression classification layer, where c is the cervical tissue category decided by the model;
In this example, the scores are those of the five classes;
32) extracting the feature mapping parameter matrices A_k of the convolutional layer to be visualized, where k indexes the K feature maps output by the convolutional layer;
The output of the last convolutional layer, named 'block5_conv4', is extracted as the feature maps A_k;
33) calculating the gradient of the class score with respect to each feature map: ∂y_c/∂A_k;
34) applying parameter enhancement operations to A_k, including normalization, threshold rejection, binarization and noise reduction, and then performing bidirectional gradient convolution to obtain the feature mapping matrix A'_k;
In this embodiment, the Normalize function is used directly to normalize the data of each feature map to the interval [-0.5, 0.5], and data below the set threshold 0.1722 are rejected by setting them to 0. The threshold value is determined from the histogram distribution of the feature map data; values smaller than 0.1722 are considered not to contribute to classifying the cervical tissue OCT image;
Based on the new feature map data distribution interval [0, 0.5] obtained in the previous step, each feature value is offset towards 0 or 0.5 to redistribute the data and reduce noise. In this embodiment, the offset proportion is determined by the histogram distribution of the feature map data: the reciprocal of (frequency of a value / total number of values) serves as the reference proportion of the offset. That is, values with higher frequency contribute more to the overall decision and are offset less, which concentrates the data distribution and makes the gradient information more prominent;
According to the feature maps newly obtained in the previous step, gradient convolution in the x and y directions is performed on the 512 feature maps; the convolution kernels G_x, G_y are 3 × 3 horizontal and vertical gradient factors respectively, and in this embodiment are:
G_x = [ -1 0 +1 ; -2 0 +2 ; -1 0 +1 ]
G_y = [ -1 -2 -1 ; 0 0 0 ; +1 +2 +1 ]
For each value, the two convolution results are combined into the gradient G = √(G_x² + G_y²), extracting the edge feature map A'_k of each feature map;
35) The results corresponding to 33) and 34) are multiplied and then normalized, and all the results are accumulated after passing through a ReLU function;
351) multiplying the score gradient of each feature map by the data-processed edge feature map A'_k
The score gradient of each class, ∂y_c/∂A_k, is multiplied by the feature mapping matrix A'_k processed in 34), yielding a weighted edge feature map of each feature mapping matrix's contribution to the different categories; this operation reduces the influence on the visualization result of edge features that contribute little to the classification result;
352) summing all weighted edge feature maps through a ReLU function
The group of edge feature maps obtained in 351) is passed through a ReLU function and summed to obtain the complete edge feature mapping matrix
L_c = Σ_k ReLU( (∂y_c/∂A_k) · A'_k )
where c is the cervical tissue category judged by the model, used for heat map visualization;
In this embodiment, the ReLU function f(x) = max(0, x) is used, and the summed weighted edge feature maps are normalized to give the edge feature map used for visualization;
36) the cervical tissue feature parameter values obtained in 35) are displayed visually as a color heat map;
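A minimal sketch of mapping the normalized parameter values of step 36) to heat-map colors; the blue-to-red ramp here is an illustrative assumption (real implementations typically overlay a library colormap such as 'jet' from matplotlib or OpenCV on the original OCT image):

```python
import numpy as np

def to_heat_map(values):
    """Map normalized cervical tissue feature parameter values in [0, 1] to
    RGB colors with a simple blue-to-red ramp: cold (blue) = low contribution,
    hot (red) = high contribution."""
    v = np.clip(np.asarray(values, dtype=float), 0.0, 1.0)
    rgb = np.zeros(v.shape + (3,))
    rgb[..., 0] = v           # red channel grows with the parameter value
    rgb[..., 2] = 1.0 - v     # blue channel fades as the value rises
    return (rgb * 255).astype(np.uint8)
```

Applied to the edge feature mapping matrix, regions the model relied on for its "high-grade" judgment render hot while irrelevant background stays cold.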
to demonstrate the effectiveness of the present invention, the input image is classified using the model of the first embodiment, the input "high-level" significant region of the cervical tissue OCT image is visualized using the visualization algorithm of the present invention, while other mainstream visualization methods such as Crad-CAM (row 1 column 2), Grad-CAM + + (row 1 column 3), Score-CAM (row 1 column 4), Faster-Score-CAM (row 1 column 5), Guided-BackPropagation (row 3 column 1), Guided-Grad-CAM (row 3 column 2), Guided-Grad-CAM + + (row 3 column 3), Guided-Score-CAM (row 3 column 4), Guided-Fster-Score-CAM (row 3 column 5) are shown in FIG. 4, and compared to the method (row 2 column 2-4) in which the results in FIG. 4 demonstrate that the method can accurately map the significant region edges of the image (the box), the model can be used as a basis for judging that the type result is in a high level, and can be used for model interpretative analysis, the salient region is basically consistent with other methods, and for a cervical tissue OCT image of machine-assisted diagnosis, the focus region concerned in model judgment can be more clearly depicted.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.

Claims (6)

1. A cervical tissue OCT image feature visualization method for computer-aided diagnosis is characterized by comprising the following steps:
1) acquiring OCT images of female cervical tissue as training samples
The cervical OCT image is a two-dimensional or three-dimensional structural image of cervical tissue obtained by optical coherence tomography (OCT) scanning;
2) training the image classification model by using the training sample, and determining the trained network parameters, wherein the method comprises the following substeps:
21) constructing an image classification model and randomly initializing network parameters
The image classification model is a deep learning network with a convolutional neural network as its backbone, comprising convolutional layers, pooling layers, fully-connected layers and a regression classification layer; the parameters are the convolution kernels of the convolutional layers and the weight and bias vectors of the hidden and output layers;
22) repeatedly iterating the training of the network and updating the image classification model parameters until the fluctuation intervals of all weights w and bias vectors b are smaller than a threshold ε;
3) creating a characteristic visualization heat map according to the input cervical tissue OCT map, comprising the following sub-steps:
31) extracting the score y_c of the trained model before the regression classification layer, where c is the cervical tissue category decided by the image classification model;
32) extracting the feature mapping parameter matrices A_k of the convolutional layer to be visualized, where k indexes the K feature maps output by the convolutional layer;
33) calculating the gradient of the class score with respect to each feature map: ∂y_c/∂A_k;
34) applying parameter enhancement operations to A_k, including normalization, threshold rejection, binarization and noise reduction, and then performing bidirectional gradient convolution to obtain the feature mapping matrix A'_k;
In the step 34), the specific process is as follows:
341) normalizing and threshold-rejecting the k feature maps obtained by passing the input cervical OCT image through a given convolutional layer
The data of each feature map are normalized to an interval [m, n], and data below a set threshold z are rejected by setting them to 0;
342) binarization denoising of the normalized feature map group
Based on the new feature map data distribution interval [p, q] obtained from 341), each feature value is offset towards p or q, redistributing the data and reducing noise;
343) bidirectional gradient convolution extraction of edge features
According to 342), gradient convolution in the x and y directions is performed on the k feature maps with convolution kernels G_x and G_y, which are 3 × 3 horizontal and vertical gradient factors respectively; the two convolution results are combined into the gradient G = √(G_x² + G_y²), extracting the edge features of each feature map, i.e. the feature mapping matrix A'_k;
35) the corresponding results of 33) and 34) are multiplied and normalized, and all the results are accumulated, passed through a ReLU function and normalized to give the cervical tissue feature parameter values;
36) the cervical tissue feature parameter values obtained in step 35) are displayed visually as a color heat map.
2. The cervical tissue OCT image feature visualization method for computer-aided diagnosis according to claim 1, characterized in that the image classification model in step 2) is a convolutional neural network classification model.
3. The cervical tissue OCT image feature visualization method for computer-aided diagnosis according to claim 1, characterized in that the value of the threshold in 341) is determined by the histogram distribution of the feature map data.
4. The cervical tissue OCT image feature visualization method for computer-aided diagnosis according to claim 1, characterized in that the proportion of the offset in 342) is determined by the histogram distribution of the feature map data.
5. The cervical tissue OCT image feature visualization method for computer-aided diagnosis according to claim 1, characterized in that the convolution kernels G_x, G_y in 343) are respectively:
G_x = [ -1 0 +1 ; -2 0 +2 ; -1 0 +1 ]
G_y = [ -1 -2 -1 ; 0 0 0 ; +1 +2 +1 ]
6. The cervical tissue OCT image feature visualization method for computer-aided diagnosis according to claim 1, characterized in that the specific process in step 35) is as follows:
351) multiplying the score gradient of each feature map by the data-processed feature mapping matrix A'_k
The score gradient of each class, ∂y_c/∂A_k, is multiplied by the feature mapping matrix A'_k processed in 34), yielding a weighted edge feature map of each feature mapping matrix's contribution to the different categories;
352) summing all weighted edge feature maps through a ReLU function
The group of edge feature maps is passed through a ReLU function and summed to obtain the complete edge feature mapping matrix
L_c = Σ_k ReLU( (∂y_c/∂A_k) · A'_k )
where c is the cervical tissue category judged by the image classification model; the result is then normalized for heat map visualization.
CN202110268171.2A 2021-03-12 2021-03-12 Cervical OCT image feature visualization method for computer-aided diagnosis Active CN113066054B (en)

Publications (2)

Publication Number Publication Date
CN113066054A CN113066054A (en) 2021-07-02
CN113066054B true CN113066054B (en) 2022-03-15



Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555845A (en) * 2019-09-27 2019-12-10 上海鹰瞳医疗科技有限公司 Fundus OCT image identification method and equipment
CN111417334A (en) * 2017-11-30 2020-07-14 爱尔康公司 Improved segmentation in optical coherence tomography imaging
CN111932665A (en) * 2020-06-15 2020-11-13 浙江工贸职业技术学院 Hepatic vessel three-dimensional reconstruction and visualization method based on vessel tubular model

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
AU2009305771B2 (en) * 2008-10-14 2013-08-15 Lightlab Imaging, Inc. Methods for stent strut detection and related measurement and display using optical coherence tomography
US11122981B2 (en) * 2019-05-17 2021-09-21 Massachusetts Institute of Technology Arterial wall characterization in optical coherence tomography imaging


Non-Patent Citations (2)

Title
Fully automated detection, grading and 3D modeling of maculopathy from OCT volumes;Hassan B, Hassan T;《2019 2nd International Conference on Communication, Computing and Digital systems (C-CODE)》;20190404;第252-257页 *
Signal enhancement and speckle reduction for port-wine stains based on optical coherence tomography; Yin Daiqiang et al.; Chinese Journal of Lasers; Sept. 2013; Vol. 40, No. 9; pp. 1-6 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant