CN112734769B - Medical image segmentation and quantitative analysis method based on interactive information guided deep learning method, computer device and storage medium - Google Patents
- Publication number: CN112734769B (application CN202011642172.0A)
- Authority
- CN
- China
- Prior art keywords
- segmentation
- cnv
- model
- feature
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/11: Region-based segmentation
- G06F18/22: Pattern recognition; matching criteria, e.g. proximity measures
- G06N3/04: Neural networks; architecture, e.g. interconnection topology
- G06N3/08: Neural networks; learning methods
- G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/181: Segmentation; edge detection involving edge growing or edge linking
- G06T7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
- G06T7/66: Analysis of geometric attributes of image moments or centre of gravity
- G06T2207/10101: Optical tomography; optical coherence tomography [OCT]
- G06T2207/30041: Biomedical image processing; eye, retina, ophthalmic
Abstract
The invention relates to an interactive-information-guided deep learning method, a medical image segmentation and quantitative analysis method, a computer device, and a storage medium. The deep learning method comprises the following steps: (1) construct a deep convolutional neural network and train it directly on existing CNV-labeled data to obtain an automatic segmentation model; (2) without changing the backbone network structure, combine simulated interaction information on the training set, add a channel to the network input feature map to represent the interaction information, and fine-tune with the same CNV masks to obtain model A; (3) at test or deployment time, a doctor provides an OCT image and the automatic segmentation model automatically detects and segments the CNV in it; if the doctor is satisfied with the segmentation, the procedure stops; otherwise, the doctor inputs interaction information into model A to obtain a more accurate result. The invention improves segmentation performance and is more robust and more accurate.
Description
Technical Field
The invention relates to a medical image segmentation and quantitative analysis method based on an interactive-information-guided deep learning method, a computer device, and a storage medium, belonging to the technical field of computer image processing.
Background
Diabetic retinopathy (DR) is a common ocular complication of diabetes mellitus and a leading cause of blindness in diabetic patients; its severity is generally manifested by retinal microvascular degeneration. Clinically, optical coherence tomography (OCT) is commonly used to observe, assess, and grade the progression of retinal vasculopathy in DR patients. Among its manifestations, choroidal neovascularization (CNV) is an abnormal proliferation of choroidal blood vessels in diabetic patients that seriously threatens the normal function of retinal tissue, so observing and detecting CNV with OCT has important clinical value for diagnosis. To study the relationship between CNV and DR progression, CNV must be segmented, extracted, and quantitatively analyzed from OCT images. Traditional manual segmentation of CNV is time-consuming and labor-intensive and prone to inter-observer bias. With the application of deep learning to medical image analysis, several methods for automatic CNV segmentation have been proposed. However, OCT images vary greatly across lesion grades and patients and contain diverse structures; relying solely on automatic segmentation is risky and rarely yields satisfactory results for all OCT images.
Traditional interactive segmentation methods, such as seeded region growing, graph cuts, and active contour models, are difficult to apply directly to accurate CNV segmentation because retinal OCT images usually exhibit speckle noise, low contrast, and weak edges.
The techniques currently employed are listed below:
Chinese patent document CN106558030A discloses a choroid segmentation method for three-dimensional wide-field swept-source optical coherence tomography: image enhancement; optic disc region detection, in which the upper retinal surface and the inner/outer retinal interface are detected with an improved operator, and a two-dimensional projection of the optic disc region is obtained by detecting the absence of the inner/outer retinal interface; upper choroidal surface segmentation, in which, below the inner/outer retinal interface and with the optic disc region removed, the upper choroidal surface is obtained with a three-dimensional multi-resolution graph search algorithm based on a gradient cost function; lower choroidal surface segmentation, in which, below the upper choroidal surface and with the optic disc region removed, an initial segmentation of the lower choroidal surface is obtained with the same algorithm; finally, a cost function combining gradient and region information is computed from the initial segmentation, and an accurate segmentation result is obtained with a three-dimensional graph search algorithm.
Chinese patent document CN110415253A discloses a point-interactive medical image segmentation method based on a deep neural network: starting from a tumor center position provided by an expert, 16 image patches of size 32×32 are densely sampled in a center-outward pattern to form a patch sequence; a deep segmentation network with sequence learning learns the trend of change inside and outside the target, determines the target edge, and segments the renal tumor. The method overcomes the low contrast, variable target position, and blurred target edges of medical images, and is suitable for organ and tumor segmentation tasks.
Chinese patent document CN111292338A discloses a method and system for segmenting choroidal neovascularization from fundus OCT images: fundus OCT images containing choroidal neovascularization lesions are collected, a U-Net-like neural network model is constructed, and it is trained on the OCT images and lesion labels. In the testing stage, the OCT image to be segmented is input directly into the network to obtain the choroidal neovascularization.
The above patents have drawbacks or deficiencies: 1) traditional CNV segmentation methods suffer from slow computation, difficult parameter tuning, and poor adaptability; 2) CNV segmentation methods relying on deep learning alone are limited mainly by the difficulty of obtaining labeled training data, which hurts generalization, and their segmentation performance drops markedly on complex pathological CNV images; 3) current methods combining interaction information with deep learning process image patches, which is complex, and the resulting target edge lacks a continuity guarantee, reducing accuracy.
Disclosure of Invention
Aiming at the deficiencies of the prior art, the invention provides a medical image segmentation and quantitative analysis method based on an interactive-information-guided deep learning method.
The significance of the invention is that, by combining deep learning with interaction information, doctors can obtain more accurate CNV segmentation results with a minimum of interaction.
Interpretation of terms:
1. ResNet34: a widely used, high-performing residual network with 34 layers; see He K, Zhang X, Ren S, Sun J, "Deep Residual Learning for Image Recognition," CVPR 2016.
2. ISAC-Net: a slight modification of SAC-Net. The first-layer convolution of SAC-Net has 3 input channels, while that of ISAC-Net has 4, because the ISAC-Net input is a 4-channel image: the 3-channel original OCT image plus the heatmap described later, stacked together as the network input.
The technical scheme of the invention is as follows:
The invention discloses a deep learning method guided by interaction information and a medical image segmentation and quantitative analysis method, comprising the following steps:
(1) Construct a deep convolutional neural network and train it directly on the existing CNV-labeled data to obtain an automatic segmentation model;
the method for generating the CNV annotation data comprises the following steps: and marking a CNV region on the OCT image to obtain a mask image of the CNV target, wherein in the mask image of the CNV target, 1 represents that the corresponding point belongs to the CNV region, and 0 represents the other points. The mask images of these CNV targets will be used to train the deep convolutional neural network.
(2) Without changing the backbone network structure, combine simulated interaction information on the training set, add a channel to the network input feature map to represent the interaction information, and fine-tune with the same CNV masks to obtain model A;
(3) At test or deployment time, a doctor provides an OCT image, and the automatic segmentation model automatically detects and segments the CNV in it. If the doctor is satisfied with the segmentation, the procedure stops; otherwise, the doctor interactively clicks the uppermost, leftmost, and rightmost points of the CNV region boundary, which are input to model A as interaction information to obtain a more accurate result.
Preferably, the deep convolutional neural network is the automatic segmentation network SAC-Net, a semantic segmentation neural network with a fully symmetric encoder-decoder architecture; the encoder adopts ResNet34 with its fully connected layer removed.
In SAC-Net, the encoder and decoder are connected through a self-attention module A, and mutually symmetric feature maps of the encoder and decoder (i.e., feature maps with the same resolution) are connected through several attention skip connection modules AC.
When training SAC-Net, the shallow features are first transformed nonlinearly, reducing the dimension at each spatial position (the number of channels of the feature map); an attention weight matrix is computed and applied to the shallow feature map to obtain a transformed feature map, which is then added to the deep feature map, realizing the skip connection through the AC module. Through the AC modules, the proposed SAC-Net extracts more context information and improves segmentation accuracy.
Further preferably, the specific implementation process includes:
(1) Let the shallow feature map be X ∈ R^{C×H×W} and the deep feature map be Y ∈ R^{C×H×W}, where R denotes the real numbers, C is the number of channels, H the height, and W the width of the feature map, and R^{C×H×W} is the space of third-order real tensors of size C×H×W. The attention skip connection module AC maps X with k 1×1 convolution kernels (learned automatically during training) to Z = f(X; W_k) ∈ R^{k×H×W}, where the function f denotes the convolution of the k 1×1 kernels with the shallow feature map X, and Z is the result of the 1×1 convolution.
(2) Using Z, compute the similarity between each point and all other points, block-wise, with cosine distance as the similarity measure. Specifically, the AC module extracts a 3×3 feature block centered at each spatial position p of the tensor Z; there are H×W positions, and each position is a k-dimensional vector, one component per channel of Z (analogous to the RGB three-dimensional vector of a pixel in a color image), so each extracted feature block has dimension 3×3×k.
According to the cosine distance formula, the similarity s_{ij} between feature block i and feature block j is s_{ij} = <Z_i, Z_j> / (||Z_i|| · ||Z_j||), where Z_i denotes the 3×3 block centered at position i and Z_j the 3×3 block centered at position j. The distances between feature blocks are computed by convolving each feature block with the original feature map Z.
(3) Compute the similarity matrix. From Z, compute S = (s_{ij})_{N×N}, N = H×W; the i-th row of S contains the similarities between position i and all remaining positions. Apply softmax normalization to each row of S, i.e. s̃_{ij} = exp(s_{ij}) / Σ_{j'} exp(s_{ij'}), where s̃_{ij} is the element in row i and column j of the normalized matrix.
(4) Compute the connection feature map. Let D = reshape(X; N, C) ∈ R^{N×C} denote the feature tensor X ∈ R^{C×H×W} reorganized into a matrix, each row being the C-dimensional feature vector of one position. The connection feature map is the matrix product Ỹ = SD of the similarity matrix S and the matrix D. Through Ỹ = SD, the feature at each position becomes a weighted sum of the features at all remaining positions; in the similarity matrix S, positions with the same or similar semantic context are weighted heavily, so long-range similar semantic context can be found and attended to.
(5) Add the connection feature map Ỹ and the deep feature map Y; this counteracts the loss of position information that accompanies the growth of semantic information as the network deepens, yielding segmentation results with more accurate localization.
Further preferably, computing the distances between feature blocks by convolving each feature block with the original feature map Z means: feature block i is taken as a convolution kernel and convolved with Z, so that a single convolution yields the distances between block i and all remaining blocks.
Further preferably, k = 11, to reduce the computational load.
Preferably, the computation of the self-attention module A is the same as that of the AC module, except that no skip connection is needed; that is, the output of module A is the connection feature map Ỹ as computed in the AC module.
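The AC-module computation described in steps (1) through (5) can be sketched numerically as follows. This is a simplified illustration, with random stand-in weights for the learned 1×1 kernels and zero padding for the 3×3 block extraction (both assumptions, since the patent does not specify them), not the authors' implementation:

```python
import numpy as np

def attention_connection(X, Y, k=11, seed=0):
    # AC module sketch: X is the shallow feature map (C,H,W), Y the deep one (C,H,W).
    C, H, W = X.shape
    N = H * W
    rng = np.random.default_rng(seed)
    Wk = 0.1 * rng.standard_normal((k, C))        # stand-in for k learned 1x1 kernels
    Z = np.tensordot(Wk, X, axes=([1], [0]))      # (k,H,W): 1x1 conv = channel mixing
    # 3x3xk feature block around every position (zero padding at the border)
    Zp = np.pad(Z, ((0, 0), (1, 1), (1, 1)))
    blocks = np.stack([Zp[:, i:i + H, j:j + W]
                       for i in range(3) for j in range(3)])  # (9,k,H,W)
    B = blocks.reshape(9 * k, N).T                # one 3x3xk block per row, (N, 9k)
    Bn = B / (np.linalg.norm(B, axis=1, keepdims=True) + 1e-8)
    S = Bn @ Bn.T                                 # cosine similarity matrix, (N,N)
    # row-wise softmax, then weighted sum of shallow features
    S = np.exp(S - S.max(axis=1, keepdims=True))
    S /= S.sum(axis=1, keepdims=True)
    D = X.reshape(C, N).T                         # (N,C): one feature vector per position
    Y_tilde = (S @ D).T.reshape(C, H, W)          # connection feature map
    return Y + Y_tilde                            # skip addition with the deep features
```

With a constant shallow feature map, the row-normalized weighting reduces to an average of identical vectors, which makes the sketch easy to sanity-check.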
Preferably, model A is based on the existing automatic segmentation network SAC-Net with the number of input channels changed to 4: the 3 channels of the OCT image plus an additional Gaussian heatmap channel G.
Preferably, in step (2), combining simulated interaction information on the training set, adding a channel to the network input feature map to represent the interaction information, and fine-tuning with the same CNV masks to obtain model A specifically comprises:
a. using the CNV annotation data obtained in step (1) as the training set;
b. from the CNV annotation data of step a, computing 3 positions of the CNV region on each CNV mask image, namely the uppermost, leftmost, and rightmost positions, and using these 3 positions to compute the heatmap corresponding to each OCT image, i.e., the simulated interaction information;
c. training with the CNV annotation data and the corresponding heatmaps to obtain model A.
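Step b above can be sketched as a small numpy helper (the function name is my own, not the patent's) that reads the uppermost, leftmost, and rightmost CNV pixels off a mask to simulate the doctor's three clicks:

```python
import numpy as np

def simulated_clicks(mask):
    # Return (x, y) of the uppermost, leftmost, and rightmost CNV pixels of a binary mask.
    ys, xs = np.nonzero(mask)                     # row (y) and column (x) indices of CNV pixels
    top = (int(xs[np.argmin(ys)]), int(ys.min()))
    left = (int(xs.min()), int(ys[np.argmin(xs)]))
    right = (int(xs.max()), int(ys[np.argmax(xs)]))
    return top, left, right
```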
Preferably, in step (3), the doctor interactively clicks the uppermost, leftmost, and rightmost points of the CNV region boundary as interaction information for model A, obtaining a more accurate result. Specifically:
Suppose the user clicks the 3 boundary points P1, P2, P3 (the uppermost, leftmost, and rightmost points of the CNV region boundary) during interaction. At each boundary point a heatmap of radius R = 20 is generated as follows: for any boundary point P = (x0, y0) among P1, P2, P3, i.e., column x0 and row y0, a Gaussian function G_p centered at P is constructed as in formula (I):
G_p(x, y) = exp( -((x - x0)^2 + (y - y0)^2) / (2σ^2) )    (I)
In formula (I), the parameter σ = 10 and exp is the exponential function with the natural constant e as base. When the distance from point (x, y) to P exceeds the radius R, G_p is truncated, i.e., G_p(x, y) = 0 when (x - x0)^2 + (y - y0)^2 > R^2. Evaluating over all points (x, y), x = 1, 2, ..., W, y = 1, 2, ..., H, gives the heatmap G_p, whose value at (x, y) is G_p(x, y).
In this way, 3 heatmaps G_p1, G_p2, G_p3 are generated for the uppermost, leftmost, and rightmost points P1, P2, P3 and superposed into the total heatmap G = G_p1 + G_p2 + G_p3, which is the heatmap input required by model A, the interactive segmentation model ISAC-Net.
The generated heatmap and the OCT image together constitute a 4-channel input image, which is fed into the interactive segmentation model ISAC-Net to obtain the CNV mask.
Preferably, in step (3), if the doctor is unsatisfied with the automatic segmentation and an accurate segmentation mask satisfying the doctor is reached after interaction with model A, the sample is added to a hard-sample library and the automatic segmentation model is trained on it online, enhancing its generalization ability.
A computer device comprising a memory storing a computer program and a processor that, when executing the program, implements the steps of the above interactive-information-guided deep learning method and medical image segmentation and quantitative analysis method.
A computer-readable storage medium on which a computer program is stored which, when executed by a processor, carries out the steps of the above interactive-information-guided deep learning method and medical image segmentation and quantitative analysis method.
The invention has the beneficial effects that:
1. The deep learning method is combined with interaction information, making full use of existing CNV annotation data for model training to obtain an automatic segmentation model and a semi-automatic model A with interaction-information input, minimizing doctor interaction while maximizing segmentation accuracy. CNV can be segmented accurately for most OCT images without unnecessary interaction, and the automatic segmentation model keeps learning from interactive feedback, so segmentation performance improves continuously and the whole method becomes more robust and more accurate.
2. The existing CNV labeling data are fully utilized, the generalization capability of deep learning on CNV segmentation is improved, and the method is suitable for complex and changeable OCT images.
3. By combining interaction information that is as simple as possible, even lesion types and severities that do not appear in the training set can be segmented accurately.
4. The invention performs interactive adjustment only when the doctor is unsatisfied with the pre-segmentation result; clicking just 3 points of the CNV region completes the interaction input and yields a more accurate adjusted result. Three points is the minimum; if the doctor is willing to provide more boundary points, the segmentation is even better.
5. The invention learns continuously: if the doctor performs interactive adjustment, the automatic segmentation model generalizes relatively poorly to the current OCT image; therefore, the CNV mask output by model A during interactive adjustment is used to form a new (data, label) pair on which the automatic segmentation model is retrained, fine-tuning its parameters and further improving its generalization. The benefit is that the automatic segmentation model and model A feed back into each other and keep improving.
Drawings
FIG. 1 is a block diagram of an automatic segmentation model according to the present invention;
FIG. 2 is a block diagram of the structure of model A, an improved interaction part network model ISAC-Net, according to the present invention;
FIG. 3 is a schematic flow chart of the interactive-information-guided deep learning method and medical image segmentation and quantitative analysis method of the present invention;
FIG. 4 is a schematic diagram of an input image 1;
FIG. 5 is a diagram illustrating the CNV segmentation result obtained with the SAC-Net model for the input of FIG. 4;
FIG. 6 is a schematic diagram of an input image 2;
FIG. 7 is a schematic view illustrating an interaction process and a segmentation result visualization;
FIG. 8 is a diagram illustrating the CNV segmentation result obtained with the interactive segmentation model ISAC-Net (model A) for the input of FIG. 7.
Detailed Description
The invention is further described below with reference to the drawings and examples, without being limited thereto.
Example 1
As shown in fig. 3, the automatic segmentation network and the interactive segmentation network of the present invention are both used to segment a CNV from an OCT image, and the quantitative analysis mainly includes calculating the area of the CNV. The method comprises the following steps:
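The area computation mentioned here reduces to counting mask pixels and scaling by the physical pixel size; a minimal sketch (the pixel spacings are hypothetical parameters, not values from the patent):

```python
import numpy as np

def cnv_area_mm2(mask, pixel_w_mm, pixel_h_mm):
    # CNV area = number of CNV pixels x physical area of one pixel.
    return float(mask.sum()) * pixel_w_mm * pixel_h_mm
```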
(1) Construct a deep convolutional neural network and train it directly on the existing CNV-labeled data to obtain an automatic segmentation model;
the method for generating the CNV annotation data comprises the following steps: and marking a CNV area on the OCT image to obtain a mask image of the CNV target, wherein in the mask image of the CNV target, 1 indicates that the corresponding point belongs to the CNV area, and 0 indicates the other points. The mask images of these CNV targets will be used to train the deep convolutional neural network.
(2) Without changing the backbone network structure, combine simulated interaction information on the training set, add a channel to the network input feature map to represent the interaction information, and fine-tune with the same CNV masks to obtain model A;
(3) At test or deployment time, a doctor provides an OCT image, and the automatic segmentation model automatically detects and segments the CNV in it. If the doctor is satisfied with the segmentation, the procedure stops; otherwise, the doctor interactively clicks the uppermost, leftmost, and rightmost points of the CNV region boundary, which are input to model A as interaction information to obtain a more accurate result.
Example 2
The difference between this embodiment and the interactive-information-guided deep learning method and medical image segmentation and quantitative analysis method of Embodiment 1 is as follows:
as shown in fig. 1, the deep convolutional neural Network is an automatic segmentation Network SAC-Net (Skip attribute Connection Network), and the structure of the SAC-Net Network is shown in fig. 1, and is a semantic segmentation neural Network of a completely symmetric coding-decoding architecture, a ResNet34 is adopted in a coding part, and a full Connection layer in the ResNet34 is removed;
in the automatic segmentation network SAC-Net, an encoding part and a decoding part are connected through a self-attention module A, and feature graphs which are symmetrical to each other, namely have the same resolution, of the encoding part and the decoding part are connected through a plurality of attention jump connection modules AC.
When training the automatic segmentation network SAC-Net, the shallow features are first transformed nonlinearly to reduce the dimension at each spatial position, i.e. the number of channels of the feature map; an attention weight matrix is then computed and used to weight the shallow feature map, yielding a transformed feature map. The transformed feature map is added to the deep feature map, and the skip connection is realized by the attention skip connection module AC. Through these AC modules, the proposed SAC-Net extracts more context information and improves segmentation accuracy.
The specific implementation process comprises the following steps:
(1) Let the shallow feature map be X ∈ R^(C×H×W) and the deep feature map be Y ∈ R^(C×H×W), where R denotes the real numbers; C, H and W are the dimensions of the feature map X (number of channels, height and width), and R^(C×H×W) denotes the space of third-order real tensors of size C×H×W. Using k 1×1 convolution kernels, the attention skip connection module AC maps X to Z = f(X, W; k) ∈ R^(k×H×W), where the function f denotes the convolution of the k 1×1 kernels (learned automatically during training) with the shallow feature map X, and Z is the result of the 1×1 convolution;
(2) Using Z, the similarity of each point to all other points is computed on feature blocks, with cosine distance as the similarity measure. Specifically, the attention skip connection module AC extracts a 3×3 feature block centered at each position p in Z; there are H×W positions, each lying in a k-dimensional space, so each extracted feature block has dimension 3×3×k. Here a position p is a spatial position of the tensor Z, which has k channels each of size H×W; p is any one of the H×W positions, and each position is a k-dimensional vector spanning the k channels, just as each point of an RGB color image is a three-dimensional RGB vector. Here k = 11.
According to the cosine distance formula, the similarity s_ij between feature block i and feature block j is s_ij = ⟨Z_i, Z_j⟩ / (‖Z_i‖ · ‖Z_j‖), where Z_i denotes the 3×3 image block centered at feature block i and Z_j the 3×3 image block centered at feature block j;
the distances between feature blocks are computed by convolving each feature block with the original feature map Z: taking feature block i as a convolution kernel and convolving it with Z computes, in a single convolution, the distance between block i and all the remaining feature blocks.
(3) Compute the connection feature map:
a similarity matrix S = (s_ij)_{N×N} with N = H×W is computed from Z; the i-th row of S holds the similarities of the i-th position to all other positions.
Each row of S is softmax-normalized, i.e. the entry in row i and column j becomes s_ij ← exp(s_ij) / Σ_{j'=1}^{N} exp(s_ij'), the normalized value of row i, column j of the matrix S.
Let D = reshape(X; N, C) ∈ R^(N×C) denote the feature tensor X ∈ R^(C×H×W) reorganized into a matrix in which each row is the feature (a C-dimensional vector) of one location; SD denotes the matrix product of the similarity matrix S and the matrix D.
By computing SD, the feature of each location becomes a weighted sum of the features of all the remaining locations; in the similarity matrix S, positions with the same or similar semantic context receive large weights, so similar semantic contexts can be found, or attended to, over long distances.
(4) The connection feature map, reorganized back to C×H×W, is added to the deep feature map Y; this counteracts the loss of position information that accompanies the growth of semantic information as the network deepens, yielding a segmentation result with more accurate localization.
For the self-attention module A, the computation is the same as for the AC module, except that no skip connection is required; that is, the output of module A is the connection feature map computed as in the AC module.
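A minimal NumPy sketch of the AC-module computation described above: the 1×1 convolution producing Z, the 3×3 patch extraction, patch-wise cosine similarity, row softmax, and the weighted sum SD. The function name, zero padding at the border, and the loop-based patch extraction are assumptions made for clarity; a real implementation would realize the distance computation as a convolution on GPU, as the text describes.

```python
import numpy as np

def ac_module(X, W1x1, eps=1e-8):
    """Sketch of the attention skip connection (AC) computation.
    X: shallow feature map, shape (C, H, W); W1x1: (k, C) 1x1-conv weights."""
    C, H, W = X.shape
    N = H * W
    # (1) 1x1 convolution: Z = f(X) with k output channels
    Z = np.tensordot(W1x1, X, axes=([1], [0]))       # (k, H, W)
    k = Z.shape[0]
    # (2) extract a 3x3xk feature block around every position (zero padded)
    Zp = np.pad(Z, ((0, 0), (1, 1), (1, 1)))
    patches = np.zeros((N, 3 * 3 * k))
    for i in range(H):
        for j in range(W):
            patches[i * W + j] = Zp[:, i:i + 3, j:j + 3].ravel()
    # cosine similarity between every pair of feature blocks
    norm = np.linalg.norm(patches, axis=1, keepdims=True) + eps
    Pn = patches / norm
    S = Pn @ Pn.T                                     # (N, N) similarity matrix
    # (3) softmax-normalize each row of S
    S = np.exp(S - S.max(axis=1, keepdims=True))
    S = S / S.sum(axis=1, keepdims=True)
    # weighted sum SD: each location becomes a similarity-weighted
    # combination of the features of all locations
    D = X.reshape(C, N).T                             # (N, C)
    return (S @ D).T.reshape(C, H, W)                 # connection feature map
```

In SAC-Net this output would then be added to the deep feature map Y of the same resolution.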
Model A changes the number of input channels of the existing automatic segmentation network model SAC-Net: the number of input channels is 4, comprising the 3-channel image, i.e. the OCT image, plus an additional Gaussian map channel G (a heat map).
After opening an OCT image, a doctor expects the model or algorithm to deliver a CNV segmentation result immediately; the invention therefore first attempts automatic segmentation with SAC-Net. If the CNV is segmented well and the doctor is satisfied, the task is finished and the CNV area is read directly from the segmented CNV mask image. Otherwise, if the doctor is not satisfied with the automatic result, the doctor selects 3 points on the OCT image to complete the interaction; a heat map G is generated from these 3 points, and G together with the original OCT image forms a 4-channel input image that is fed into the interactive segmentation network ISAC-Net. ISAC-Net is a slight modification of SAC-Net in which only the number of input channels of the first convolutional layer is changed to 4. The structural block diagram of model A, i.e. the improved interactive network model ISAC-Net, is shown in FIG. 2.
In step (2), combining simulated interaction information on the training set, adding a channel representing the interaction information to the network input feature map, and fine-tuning with the same CNV masks to obtain model A specifically comprises:
a. The CNV annotation data obtained in step (1) are used as the training set. The OCT images, for example 100 of them, are annotated, yielding 100 CNV mask images. The OCT images and CNV mask images constitute the training set of the automatic segmentation network SAC-Net.
b. Training the interactive segmentation network ISAC-Net requires interaction information, since the goal is to let the doctor interact to improve accuracy when the automatic segmentation network SAC-Net falls short. Because no doctor can actually click the 3 points during training, the uppermost, leftmost and rightmost positions of the CNV region on each CNV mask image are computed from the CNV annotation data of step a, and these 3 positions are used to compute the heat map, i.e. the simulated interaction information, for each OCT image. For example, from the 100 CNV-annotated images, the 3 positions of the CNV region on each image are computed and used to generate the corresponding heat map, giving 100 heat maps.
c. Model A is obtained by training on the CNV annotation data and the corresponding heat maps. Thus, when training ISAC-Net, the training set consists of 100 4-channel images, each composed of the original OCT image (3 channels) and the corresponding heat map (1 channel).
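The simulated interaction of step b, computing the uppermost, leftmost and rightmost points of the CNV region from a binary mask image, might look like the following sketch (the function name and tie-breaking by first occurrence are assumptions, not specified in the text):

```python
import numpy as np

def extreme_points(mask):
    """Return the uppermost, leftmost and rightmost points (row, col)
    of the CNV region in a binary mask, used to simulate the doctor's
    3 clicks when building the ISAC-Net training set."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None                              # no CNV region annotated
    top = (rows.min(), cols[rows.argmin()])      # uppermost point
    left = (rows[cols.argmin()], cols.min())     # leftmost point
    right = (rows[cols.argmax()], cols.max())    # rightmost point
    return top, left, right
```

Each returned point would then seed one Gaussian of the heat map described below.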
In step (3), allowing the doctor to interactively click the uppermost, leftmost and rightmost points of the CNV region boundary as interaction information for model A, so as to obtain a more accurate result, specifically comprises:
Assume the user clicks 3 boundary points P1, P2 and P3 at the top, left and right of the CNV region boundary during interaction; a heat map of radius R = 20 is generated at each boundary point. Specifically, for any boundary point P = (x0, y0) among P1, P2 and P3, i.e. column x0 and row y0, a Gaussian function G_p(x, y) centered at P is constructed as formula (I):

G_p(x, y) = exp( −((x − x0)² + (y − y0)²) / (2σ²) )  (I)

In formula (I), the parameter σ = 10 and exp denotes the exponential function with the natural constant e as its base. When the distance between point (x, y) and point P exceeds the radius R, G_p is truncated, i.e. G_p(x, y) = 0 whenever (x − x0)² + (y − y0)² > R². For all points (x, y) with x = 1, 2, …, W and y = 1, 2, …, H, the resulting heat map G_p takes the value G_p(x, y) at point (x, y);
By the above method, 3 heat maps G_P1, G_P2 and G_P3 are generated for the top, left and right points P1, P2 and P3, and superimposed into a total heat map G = G_P1 + G_P2 + G_P3, which is the input heat map required by model A, i.e. the interactive segmentation model ISAC-Net;
the generated heat map and the OCT image together form a 4-channel input image, which is fed into the interactive segmentation model ISAC-Net to obtain the CNV mask.
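A sketch of the heat-map construction and the 4-channel input assembly, under the stated parameters R = 20 and σ = 10. The Gaussian is the standard truncated Gaussian implied by formula (I); the function names, 0-based indexing, and the (column, row) point convention are assumptions for illustration:

```python
import numpy as np

def heatmap(points, H, W, R=20, sigma=10.0):
    """Truncated-Gaussian heat map G: one Gaussian of radius R and width
    sigma per clicked boundary point, summed over all points."""
    ys, xs = np.mgrid[0:H, 0:W]                  # y = row index, x = column index
    G = np.zeros((H, W))
    for (x0, y0) in points:                      # point given as (column, row)
        d2 = (xs - x0) ** 2 + (ys - y0) ** 2
        g = np.exp(-d2 / (2.0 * sigma ** 2))
        g[d2 > R ** 2] = 0.0                     # truncate outside radius R
        G += g
    return G

def four_channel_input(oct_rgb, G):
    """Stack the 3-channel OCT image with the heat map into a 4-channel input."""
    return np.concatenate([oct_rgb, G[None]], axis=0)
```

With the 3 clicked boundary points, `heatmap([...], H, W)` yields the total map G = G_P1 + G_P2 + G_P3 in one pass.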
FIG. 4 is a schematic diagram of input image 1; FIG. 5 shows the CNV segmentation result obtained by the SAC-Net model from the input of FIG. 4; the area is 3730 pixels.
FIG. 6 is a schematic diagram of input image 2; FIG. 7 visualizes the interaction process and segmentation result, where the dots are the interaction points provided by the doctor. FIG. 8 shows the CNV segmentation result obtained by the interactive segmentation model ISAC-Net, i.e. model A, from the input of FIG. 7; the area is 1255 pixels.
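The pixel areas reported for FIGS. 5 and 8 correspond to counting the foreground pixels of the predicted mask; a minimal sketch of this quantitative analysis step (the `pixel_area` physical scaling factor is an assumption, not part of the original description):

```python
import numpy as np

def cnv_area(mask, pixel_area=1.0):
    """CNV area from a binary mask: the number of mask pixels, optionally
    scaled by the physical area covered by one pixel."""
    return int(mask.sum()) * pixel_area
```

For a mask image this returns an area in pixels by default, matching the figures' reporting convention.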
The method combines deep learning with interaction information and makes full use of existing CNV annotation data to train the models, obtaining an automatic segmentation model and a semi-automatic model A that accepts interaction input; it reduces doctor interaction as much as possible while improving segmentation accuracy as much as possible. For most OCT images the CNV can be segmented accurately without unnecessary interaction, and the automatic segmentation model keeps learning from interactive feedback, continuously improving its segmentation performance, so that the whole method becomes more robust and more accurate.
Example 3
Embodiment 3 follows the interactive-information-guided deep learning method for medical image segmentation and quantitative analysis of Embodiment 2, with the following difference:
in step (3), if the doctor is not satisfied with the automatic segmentation and an accurate segmentation mask satisfying the doctor is reached only after interaction with model A, the sample is added to a hard-sample library, and the automatic segmentation model is trained and updated online on it to enhance its generalization ability.
Example 4
A computer device comprising a memory and a processor, the memory storing a computer program, the processor executing the program to implement the steps of the interactive-information-guided deep learning method for medical image segmentation and quantitative analysis of any one of Embodiments 1 to 3.
Example 5
A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of the interactive-information-guided deep learning method for medical image segmentation and quantitative analysis of any one of Embodiments 1 to 3.
Claims (10)
1. An interactive-information-guided deep learning method for medical image segmentation and quantitative analysis, characterized by comprising the following steps:
(1) Constructing a deep convolutional neural network and training it directly on existing CNV annotation data to obtain an automatic segmentation model; the deep convolutional neural network is the automatic segmentation network SAC-Net, a semantic segmentation network with a fully symmetric encoder-decoder structure whose encoder adopts ResNet34 with its fully connected layer removed;
in the automatic segmentation network SAC-Net, the encoder and decoder are connected through a self-attention module A, and mutually symmetric feature maps of the encoder and decoder, i.e. feature maps of the same resolution, are connected through several attention skip connection modules AC;
when training the automatic segmentation network SAC-Net, the shallow features are first transformed nonlinearly to reduce the dimension at each spatial position, i.e. the number of channels of the feature map; an attention weight matrix is computed and used to weight the shallow feature map to obtain a transformed feature map; the transformed feature map is added to the deep feature map, and the skip connection is realized by the attention skip connection module AC;
(2) Combining simulated interaction information on the training set, adding a channel representing the interaction information to the network input feature map, and performing fine-tuning training with the same CNV masks to obtain model A;
(3) During testing or use, an OCT image is provided and automatically detected and segmented by the automatic segmentation model; if the segmentation is satisfactory, the process stops, otherwise the uppermost, leftmost and rightmost points of the CNV region boundary are interactively located and fed as 3 points of interaction information to model A to obtain a more accurate result.
2. The interactive-information-guided deep learning method for medical image segmentation and quantitative analysis according to claim 1, wherein the automatic segmentation network SAC-Net is implemented by the following process:
(1) let the shallow feature map be X ∈ R^(C×H×W) and the deep feature map be Y ∈ R^(C×H×W), where R denotes the real numbers; C, H and W are the dimensions of the feature map X (number of channels, height and width), and R^(C×H×W) denotes the space of third-order real tensors of size C×H×W; using k 1×1 convolution kernels, the attention skip connection module AC maps X to Z = f(X, W; k) ∈ R^(k×H×W), where the function f denotes the convolution of the k 1×1 kernels with the shallow feature map X and Z is the result of the 1×1 convolution;
(2) using Z, the similarity of each point to all other points is computed on feature blocks, with cosine distance as the similarity measure; specifically, the attention skip connection module AC extracts a 3×3 feature block centered at each position p in Z; there are H×W positions, each lying in a k-dimensional space, so each extracted feature block has dimension 3×3×k; a position p is a spatial position of the tensor Z, which has k channels each of size H×W; p is any one of the H×W positions, and each position is a k-dimensional vector spanning the k channels;
according to the cosine distance formula, the similarity s_ij between feature block i and feature block j is s_ij = ⟨Z_i, Z_j⟩ / (‖Z_i‖ · ‖Z_j‖), where Z_i denotes the 3×3 image block centered at feature block i and Z_j the 3×3 image block centered at feature block j;
the distances between feature blocks are computed by convolving each feature block with the original feature map Z;
(3) computing the connection feature map:
a similarity matrix S = (s_ij)_{N×N} with N = H×W is computed from Z; the i-th row of S holds the similarities of the i-th position to all other positions;
each row of S is softmax-normalized, i.e. the entry in row i and column j becomes s_ij ← exp(s_ij) / Σ_{j'=1}^{N} exp(s_ij'), the normalized value of row i, column j of the matrix S;
D = reshape(X; N, C) ∈ R^(N×C) denotes the feature tensor X ∈ R^(C×H×W) reorganized into a matrix in which each row is the feature of one location; SD denotes the matrix product of the similarity matrix S and the matrix D;
by computing SD, the feature of each position becomes a weighted sum of the features of all the remaining positions.
3. The interactive-information-guided deep learning method for medical image segmentation and quantitative analysis according to claim 1, wherein the distance between feature blocks is computed by convolving each feature block with the original feature map Z: taking feature block i as a convolution kernel and convolving it with Z computes, in a single convolution, the distance between block i and the remaining feature blocks; k = 11.
4. The interactive-information-guided deep learning method for medical image segmentation and quantitative analysis according to claim 1, wherein the CNV annotation data are generated by marking the CNV region on the OCT image to obtain a mask image of the CNV target, in which 1 indicates that the corresponding point belongs to the CNV region and 0 marks all other points.
5. The interactive-information-guided deep learning method for medical image segmentation and quantitative analysis according to claim 1, wherein model A is obtained by changing the number of input channels of the automatic segmentation network model SAC-Net to 4, comprising the 3-channel image, i.e. the OCT image, plus an additional Gaussian map channel G.
6. The interactive-information-guided deep learning method for medical image segmentation and quantitative analysis according to claim 1, wherein in step (2), combining simulated interaction information on the training set, adding a channel representing the interaction information to the network input feature map, and fine-tuning with the same CNV masks to obtain model A specifically comprises:
a. using the CNV annotation data obtained in step (1) as the training set;
b. based on the CNV annotation data of step a, computing the uppermost, leftmost and rightmost positions of the CNV region on each CNV mask image, and using these 3 positions to compute the heat map, i.e. the simulated interaction information, corresponding to each OCT image;
c. training on the CNV annotation data and the corresponding heat maps to obtain model A.
7. The interactive-information-guided deep learning method for medical image segmentation and quantitative analysis according to claim 1, wherein in step (3), allowing the uppermost, leftmost and rightmost points of the CNV region boundary to be interactively located as interaction information for model A to obtain a more accurate result specifically comprises:
assuming the user clicks 3 boundary points P1, P2 and P3 at the top, left and right of the CNV region boundary during interaction, a heat map of radius R = 20 is generated at each boundary point; specifically, for any boundary point P = (x0, y0) among P1, P2 and P3, i.e. column x0 and row y0, a Gaussian function G_p(x, y) centered at P is constructed as formula (I):

G_p(x, y) = exp( −((x − x0)² + (y − y0)²) / (2σ²) )  (I)

in formula (I), the parameter σ = 10 and exp denotes the exponential function with the natural constant e as its base; when the distance between point (x, y) and point P exceeds the radius R, G_p is truncated, i.e. G_p(x, y) = 0 whenever (x − x0)² + (y − y0)² > R²; for all points (x, y) with x = 1, 2, …, W and y = 1, 2, …, H, the resulting heat map G_p takes the value G_p(x, y) at point (x, y);
by the above method, 3 heat maps G_P1, G_P2 and G_P3 are generated for the top, left and right points P1, P2 and P3, and superimposed into a total heat map G = G_P1 + G_P2 + G_P3, which is the input heat map required by model A, i.e. the interactive segmentation model ISAC-Net;
the generated heat map and the OCT image form a 4-channel input image, which is fed into the interactive segmentation model ISAC-Net to obtain the CNV mask; model A changes the number of input channels of the automatic segmentation network model SAC-Net to 4, comprising the 3-channel image, i.e. the OCT image, plus the additional Gaussian map channel G.
8. The interactive-information-guided deep learning method for medical image segmentation and quantitative analysis according to any one of claims 1 to 7, wherein in step (3), if the segmentation is not satisfactory to the doctor and an accurate segmentation mask satisfying the doctor is reached only after interaction with model A, the sample is added to a hard-sample library and the automatic segmentation model is trained and updated online to enhance its generalization ability.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the program, implements the steps of the interactive-information-guided deep learning method for medical image segmentation and quantitative analysis according to any one of claims 1 to 8.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the interactive-information-guided deep learning method for medical image segmentation and quantitative analysis according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011642172.0A CN112734769B (en) | 2020-12-31 | 2020-12-31 | Medical image segmentation and quantitative analysis method based on interactive information guided deep learning method, computer device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112734769A CN112734769A (en) | 2021-04-30 |
CN112734769B true CN112734769B (en) | 2022-11-04 |
Family
ID=75609111
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115170807B (en) * | 2022-09-05 | 2022-12-02 | 浙江大华技术股份有限公司 | Image segmentation and model training method, device, equipment and medium |
CN117036696A (en) * | 2023-07-21 | 2023-11-10 | 清华大学深圳国际研究生院 | Image segmentation method, device, equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110222722A (en) * | 2019-05-14 | 2019-09-10 | 华南理工大学 | Interactive image stylization processing method, calculates equipment and storage medium at system |
CN111127447A (en) * | 2019-12-26 | 2020-05-08 | 河南工业大学 | Blood vessel segmentation network and method based on generative confrontation network |
CN111583285A (en) * | 2020-05-12 | 2020-08-25 | 武汉科技大学 | Liver image semantic segmentation method based on edge attention strategy |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |