CN113269778B - Image weak supervision segmentation method based on iteration - Google Patents


Info

Publication number
CN113269778B
Authority
CN
China
Prior art keywords
image
iteration
weak supervision
region
network
Prior art date
Legal status
Active
Application number
CN202110683693.9A
Other languages
Chinese (zh)
Other versions
CN113269778A (en)
Inventor
郭翌
刘若韵
汪源源
周世崇
常才
Current Assignee
Fudan University
Original Assignee
Fudan University
Priority date
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN202110683693.9A priority Critical patent/CN113269778B/en
Publication of CN113269778A publication Critical patent/CN113269778A/en
Application granted granted Critical
Publication of CN113269778B publication Critical patent/CN113269778B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/0012: Biomedical image inspection (under G06T 7/00 Image analysis)
    • G06F 18/23213: Clustering using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
    • G06N 3/045: Neural network architectures; combinations of networks
    • G06N 3/048: Activation functions
    • G06N 3/08: Learning methods
    • G06T 7/11: Region-based segmentation
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, connectivity analysis
    • G06T 2207/10132: Ultrasound image (image acquisition modality)
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20092, G06T 2207/20104: Interactive image processing based on input by user; interactive definition of region of interest [ROI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

An iteration-based image weak supervision segmentation method uses a thyroid ultrasound image containing a localization bounding box as weak supervision information, obtains training labels through a probability gradient labeling scheme, continuously updates the parameters and training labels of a deep learning weakly supervised segmentation network through iterative training, and finally segments the image to be processed with the trained network. Through optimization of the iterative network, the method converts the initial localization label into a final segmentation result without manual intervention, achieving accurate segmentation of a specific region in the thyroid ultrasound image under weakly supervised conditions that require no manual annotation.

Description

Image weak supervision segmentation method based on iteration
Technical Field
The invention relates to a technology in the field of image processing, in particular to an image weak supervision segmentation method based on iteration.
Background
Most existing deep learning image segmentation methods are fully supervised: their segmentation accuracy depends on large amounts of high-quality annotated data, which is time-consuming and labor-intensive to produce and prone to subjective bias. There is therefore a growing need for weakly supervised methods for automatic segmentation of thyroid ultrasound images, in order to improve diagnostic performance and reduce human intervention.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides an iteration-based image weak supervision segmentation method. Through optimization of an iterative network, it converts an initial localization label into a final segmentation result without manual intervention, and achieves accurate segmentation of a specific region in an image under weakly supervised conditions that require no manual annotation.
The invention is realized by the following technical scheme:
the invention relates to an iteration-based image weak supervision segmentation method, which comprises the steps of taking a thyroid ultrasound image containing a positioning boundary frame as weak supervision information, obtaining a training label by utilizing a probability gradient labeling mode, continuously updating parameters and training labels of a deep learning weak supervision segmentation network designed aiming at the characteristics of the ultrasound image in an iteration training mode, and finally segmenting an image to be processed by adopting the trained network.
The probability gradient labeling scheme is as follows: a region of interest (RoI) is randomly selected near the bounding box of the specific region in the initial image annotated with a localization bounding box; K-means pixel clustering is applied and the largest connected region is selected as the initial localization label; a central fixed region is determined within the initial localization label, and the remaining peripheral part is treated as the non-observation region. Gradient transformation is then applied to the non-observation region, i.e., a continuous probability gradient decrease from the inside of the region outward, so that the original binary label is converted into a probability gradient label whose internal pixel values lie in the interval [0,1].
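The probability gradient labeling step can be sketched in code. The sketch below is illustrative, not the patent's implementation: it builds the gradient by successive morphological erosion of the binary initial label, and both the `keep_frac` parameter (standing in for the central fixed region, which the detailed description defines as the inner 60% of the labeled area) and the linear decay profile are assumptions.

```python
import numpy as np

def probability_gradient_label(mask, keep_frac=0.6):
    """Convert a binary initial localization label into a probability
    gradient label: an inner central region keeps value 1.0, while the
    outer ring decays continuously toward 0 from the inside out.

    The gradient is built by repeatedly eroding the mask with a
    4-neighbourhood; the erosion step at which a pixel disappears gives
    its depth, and depths below keep_frac of the maximum are mapped
    linearly into [0, 1).
    """
    mask = mask.astype(bool)
    depth = np.zeros(mask.shape, dtype=float)
    current = mask.copy()
    step = 0
    while current.any():
        depth[current] = step          # survivors of this erosion step
        # 4-neighbourhood erosion via padded shifts
        p = np.pad(current, 1, constant_values=False)
        current = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
                   & p[1:-1, :-2] & p[1:-1, 2:])
        step += 1
    max_depth = depth.max()
    if max_depth == 0:                 # degenerate single-layer mask
        return mask.astype(float)
    rel = depth / max_depth            # 0 at the boundary, 1 at the core
    label = np.where(rel >= keep_frac, 1.0, rel / keep_frac)
    return np.where(mask, label, 0.0)
```

Applied to a square initial label, the center keeps probability 1.0 and the values fall off continuously toward the boundary.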
The deep learning weakly supervised segmentation network comprises: a gamma conversion module, a batch normalization layer, a feature selection module, a pixel comparison module and a pooling module, wherein: the gamma conversion module applies nonlinear processing to the image; the batch normalization layer normalizes the data distribution in the image, preserving the nonlinear expression capability of the network model; the feature selection module computes the correlation among feature vectors obtained by max pooling, average pooling and mapping conversion of feature maps extracted from (1) the central fixed region of the thyroid ultrasound image, (2) the predicted specific region output between layers and (3) the background region, then screens out, from the feature maps extracted from the predicted specific region, those with high correlation to the central fixed region and low correlation to the background, and weights them to obtain a weighted feature map, thereby optimizing the feature extraction process of the network; the pixel comparison module sets an iteration termination condition for each image to save training time; the pooling module down-samples the feature maps to reduce feature dimensionality, which reduces over-fitting and improves the fault tolerance of the model.
The nonlinear processing enhances the gray values of darker areas in the image, specifically: gamma conversion with a gamma value of 2 is applied to the image input to the network to improve the contrast of the whole input image.
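A minimal sketch of the gamma conversion. Note that the convention is assumed: the patent states that a gamma value of 2 enhances darker regions, which matches out = in^(1/gamma); under the alternative convention out = in^gamma, a value of 2 would darken the image instead.

```python
import numpy as np

def gamma_correct(img, gamma=2.0):
    """Gamma transformation of an 8-bit grayscale image.

    With the convention out = in ** (1 / gamma), a gamma of 2 lifts
    darker gray values, matching the stated goal of enhancing dim
    regions of the ultrasound image. The exact convention used in the
    patent is an assumption.
    """
    x = img.astype(np.float64) / 255.0   # normalize to [0, 1]
    y = np.power(x, 1.0 / gamma)         # lifts dark values when gamma > 1
    return (y * 255.0).round().astype(np.uint8)
```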
The feature maps are extracted as follows: each convolution layer in the feature selection module comprises a plurality of convolution kernels, which scan the whole image from left to right and from top to bottom in sequence to produce the feature maps.
The screening is as follows: feature maps having a high correlation with the central fixed region and a low correlation with the background are selected from the feature maps extracted from the predicted specific region, and weighted.
The specific region includes, but is not limited to, a region of interest manually marked before surgery, a tumor region and/or an image region to be analyzed and studied in detail.
The iteration termination condition is as follows: each time the gamma conversion module, batch normalization layer, feature selection module and pooling module complete one full round of iterative training, the pixel comparison module computes, for each input image, the sum of the per-pixel prediction probability differences between two consecutive iterations; when this sum falls below a threshold β, iteration for that image stops. Setting a termination condition per image saves training time and improves the efficiency of the optimization process.
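The termination test of the pixel comparison module can be sketched as follows; the threshold β is not disclosed in the patent, so `beta` here is a placeholder.

```python
import numpy as np

def should_stop(prev_pred, curr_pred, beta):
    """Pixel comparison module: stop iterating on an image when the sum
    of per-pixel prediction-probability differences between two
    consecutive iterations falls below the threshold beta (a
    placeholder value; the patent does not disclose it)."""
    diff_sum = np.abs(curr_pred - prev_pred).sum()
    return diff_sum < beta
```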
For the weighting, the weighting coefficients are produced by a convolution layer with a Sigmoid activation function.
The deep learning weakly supervised segmentation network adopts a distribution loss function in iterative training: L_D = λ1·L_dice + λ2·L_focal, wherein the region optimization part is the Dice loss
L_dice = 1 - 2·Σ(p_t·p_0) / (Σp_t + Σp_0),
the edge optimization part is the focal loss L_focal = -(1 - p_t)^γ·log(p_t), λ1 and λ2 are constants, and p_t and p_0 are respectively the pixel values of corresponding points in the predicted segmentation result and the current training label.
Technical effects
The invention as a whole addresses the dependence of existing image segmentation techniques on manual annotation. In a weakly supervised setting without manual labeling, the method both makes full use of the specific-region localization information provided by the bounding box and, through continuous iterative optimization of the segmentation network, obtains accurate segmentation results with a final Dice coefficient of about 85%, freeing medical image analysis from its dependence on pixel-level annotation.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flow chart of an initial tag acquisition process;
in the figure: (a), (b), (c), (d) and (e) respectively represent the region of interest, the K-means pixel clustering result, the largest connected region, the probability gradient label and the gold standard.
FIG. 3 is a diagram of a weakly supervised split network architecture;
in the figure: (a) is the left half and (b) is the right half;
FIG. 4 is a schematic diagram of a feature selection module;
in the figure: a. b and c represent the central fixed region, the predicted region and the background, respectively. Feature maps 1, 2 and 3 represent feature maps extracted from a, b and c, respectively;
FIG. 5 is a schematic diagram illustrating the effects of the embodiment;
the images show the iterative optimization results for different types of thyroid ultrasound images: the first row is an ultrasound image in which the edges of the specific region are blurred, and the second row is an image with clear edges;
FIG. 6 is a diagram of some exemplary weakly supervised segmentation results;
in the figure: the first column is the original ultrasound image, the second column is the physician's gold standard, and the third column is the thyroid ultrasound image results.
Detailed Description
As shown in fig. 1, this embodiment relates to an iteration-based image weak supervised segmentation method, which specifically includes the steps of:
step 1, using the thyroid ultrasound image containing the positioning bounding box as weak supervision information, and obtaining a training label by using a probability gradient labeling mode, wherein the method specifically comprises the following steps:
1.1 In the initial image (1024 × 768 pixels) annotated with a localization bounding box, randomly select a 256 × 256 pixel region near the bounding box of the specific region as the region of interest (RoI).
1.2 Apply K-means pixel clustering to the region of interest and select the largest connected region as the initial localization label.
1.3 Treat the inner 60% of the specific area in the initial localization label as the central fixed region and the remaining peripheral part as the uncertain outer region; then apply gradient transformation to the uncertain outer region, i.e., a continuous probability gradient decrease from the inside of the region outward, so that the original binary label is converted into a probability gradient label whose internal pixel values lie in the interval [0,1].
Step 2, continuously updating the deep learning weak supervision segmentation network parameters and the training labels in an iterative training mode, and specifically comprising the following steps of:
2.1 To address the heavy noise interference in ultrasound images, a feature selection module is introduced into the weakly supervised segmentation network, specifically: after obtaining feature representations at different levels, max pooling and average pooling are applied to the feature maps extracted from the central fixed region, the predicted region and the background region of the image, and the pooled maps are converted into feature vectors; the correlations among these feature vectors are then computed; finally, feature maps with high correlation to the central fixed region and low correlation to the background are screened out from the feature maps extracted from the predicted region and weighted, yielding the feature map used for the pooling operation.
For the weighting, the weighting coefficients are produced by a convolution layer with a Sigmoid activation function.
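A rough sketch of the channel weighting performed by the feature selection module. The per-channel descriptors (max plus average pooling inside each region) follow the text above, but the correlation measure and the plain sigmoid used in place of the Sigmoid-activated convolution layer are simplifying assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feature_select(feat, center_mask, bg_mask):
    """Illustrative feature selection: for each channel of the feature
    map `feat` (C, H, W), build a scalar descriptor from max- and
    average-pooled responses inside the central fixed region and the
    background region. Channels whose center response dominates their
    background response get a weight near 1, others near 0; the sigmoid
    stands in for the patent's Sigmoid-activated convolution layer
    (an assumption)."""
    num_channels = feat.shape[0]
    weights = np.empty(num_channels)
    for c in range(num_channels):
        ch = feat[c]
        center, bg = ch[center_mask], ch[bg_mask]
        # max + average pooling inside each region -> scalar descriptors
        center_desc = 0.5 * (center.max() + center.mean())
        bg_desc = 0.5 * (bg.max() + bg.mean())
        # high response in the center, low in the background -> large weight
        weights[c] = sigmoid(center_desc - bg_desc)
    return feat * weights[:, None, None]   # reweight each channel
```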
2.2 To address the fact that different types of thyroid ultrasound images require different numbers of iterations, this embodiment controls the iterative process through the pixel comparison module, specifically: after each full round of iterative training of the deep learning weakly supervised segmentation network, the sum of the per-pixel prediction probability differences between two consecutive iterations is computed for each input image, and when this sum falls below a threshold, the iterative optimization process for that image stops.
2.3 To address the uncertainty of the segmentation result at the edges of the specific region, the iterative network is trained with a distribution loss function L_D = λ1·L_dice + λ2·L_focal, wherein the region optimization part is the Dice loss
L_dice = 1 - 2·Σ(p_t·p_0) / (Σp_t + Σp_0),
and the edge optimization part is the focal loss L_focal = -(1 - p_t)^γ·log(p_t), where λ1 and λ2 are constants, and p_t and p_0 are respectively the pixel values of corresponding points in the predicted segmentation result and the current training label.
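The distribution loss can be sketched as follows. The focal term follows the formula given above; the Dice term appears in the patent only as a figure, so the standard soft-Dice formulation is assumed, and the constants `lam1`, `lam2` and `gamma` are placeholders, not values disclosed in the patent.

```python
import numpy as np

def distribution_loss(pred, label, lam1=0.5, lam2=0.5, gamma=2.0, eps=1e-7):
    """Sketch of L_D = lam1 * L_dice + lam2 * L_focal.

    pred and label hold per-pixel probabilities in [0, 1]. The Dice term
    is the standard soft-Dice loss (assumed); the focal term uses the
    usual p_t convention (probability assigned to the labeled class).
    """
    p, g = pred.ravel(), label.ravel()
    # region optimization: soft Dice loss
    dice = 1.0 - (2.0 * (p * g).sum() + eps) / (p.sum() + g.sum() + eps)
    # edge optimization: focal loss, averaged over pixels
    pt = np.where(g > 0.5, p, 1.0 - p)
    pt = np.clip(pt, eps, 1.0 - eps)
    focal = (-((1.0 - pt) ** gamma) * np.log(pt)).mean()
    return lam1 * dice + lam2 * focal
```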
Step 3, segmenting the image to be processed by adopting the trained network, which specifically comprises the following steps:
3.1 Preprocess the test image, specifically: in the initial image (1024 × 768 pixels) annotated with a localization bounding box, randomly select a 256 × 256 pixel region near the bounding box of the specific region as the region of interest (RoI) and use it as the input to the network model.
3.2 inputting the thyroid ultrasound image with the size of 256 multiplied by 256 pixels into the trained segmentation network to obtain the corresponding specific region segmentation result.
In practical experiments, 500 sample thyroid ultrasound images were collected from the ultrasound department of the affiliated tumor hospital of Fudan University; an RoI of suitable size (256 × 256 pixels) was selected from each original image as the input to the neural network. For each thyroid ultrasound image, the target region was manually outlined as the gold standard by a radiologist with more than ten years of clinical experience.
In the embodiment, the above processing was applied to the 500 thyroid ultrasound images, and the segmentation accuracy reached 85%. The segmentation results are shown in fig. 6, which includes six examples (a)-(f); the first column is the original ultrasound image, the second column is the gold standard labeled by the doctor, and the third column is the weakly supervised segmentation result for the specific region.
In the invention, the localization bounding box of the specific region serves as the weak supervision information of the segmentation network, the input images are trained iteratively, and a pixel comparison module is provided to control the iterative process. The task of weakly supervised segmentation of a specific region in an image is thereby completed efficiently without manual annotation. The method both makes full use of the localization information provided by the bounding box and, through continuous iterative optimization of the segmentation network, obtains accurate segmentation results with a final Dice coefficient of about 85%, reducing the labor cost of deep learning image segmentation by a factor of 14 and freeing image analysis from its dependence on pixel-level annotation.
The foregoing embodiments may be modified in many different ways by those skilled in the art without departing from the spirit and scope of the invention, which is defined by the appended claims and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (4)

1. An image weak supervision segmentation method based on iteration is characterized in that a thyroid ultrasound image containing a positioning boundary frame is used as weak supervision information, a training label is obtained in a probability gradient labeling mode, parameters and a training label of a deep learning weak supervision segmentation network designed aiming at the characteristics of the ultrasound image are continuously updated in an iteration training mode, and finally the trained network is adopted to segment an image to be processed;
the deep learning weak supervision segmentation network comprises: a gamma conversion module, a batch normalization layer, a feature selection module, a pixel comparison module and a pooling module, wherein: the gamma conversion module applies nonlinear processing to the image; the batch normalization layer normalizes the data distribution in the image, ensuring the nonlinear expression capability of the network model; the feature selection module, through the pooling module, applies max pooling, average pooling and mapping conversion to feature maps extracted from (1) the central fixed region of the thyroid ultrasound image, (2) the predicted specific region output between layers and (3) the background region to generate the correlation among the corresponding feature vectors, and screens out, from the feature maps extracted from the predicted specific region output between layers, those with high correlation to the central fixed region and low correlation to the background for weighting, obtaining a weighted feature map and thereby optimizing the feature extraction process of the network; the pixel comparison module sets an iteration termination condition for each image to save training time; the pooling module down-samples the feature maps to obtain a feature dimension reduction result, reducing over-fitting and improving the fault tolerance of the model;
the probability gradient marking mode is as follows: randomly selecting an ROI (region of interest) near a boundary frame of a specific region in an initial image marked with a positioning boundary frame, adopting K-means pixel clustering operation and selecting a maximum connected region as an initial positioning label, determining an observation center fixed region in the initial positioning label, and taking the rest peripheral parts as non-observation regions; then, gradient transformation is carried out on the non-observation region, namely continuous probability gradient descending processing is carried out from the inside to the outside of the region, so that the original binary label is converted into a probability gradient label with the internal pixel value within the [0,1] interval;
the iteration termination condition is that after the deep learning weak supervision segmentation network finishes one complete iteration training each time, the sum of the prediction probability difference values of each pixel in each input image between two continuous iterations is calculated by using the pixel comparison module, the iteration is stopped when the sum of the difference values is smaller than a threshold value beta, and the iteration termination condition is set for each image so as to achieve the effects of saving the training time and improving the efficiency of the optimization process.
2. The image weakly supervised segmentation method based on iteration as claimed in claim 1, wherein the non-linear processing enhances the gray scale value of a darker area in the image, and specifically comprises: gamma conversion with gamma value of 2 is carried out on the image of the input network to improve the contrast of the whole input image.
3. The image weak supervision segmentation method based on iteration as claimed in claim 1, characterized in that the screening is: feature maps having a high correlation with the central fixed region and a low correlation with the background are selected from the feature maps extracted from the predicted specific region, and weighted.
4. The iterative-based image weakly supervised segmentation method of claim 1, wherein the weighting is obtained by a convolution layer with Sigmoid activation function.
CN202110683693.9A 2021-06-21 2021-06-21 Image weak supervision segmentation method based on iteration Active CN113269778B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110683693.9A CN113269778B (en) 2021-06-21 2021-06-21 Image weak supervision segmentation method based on iteration


Publications (2)

Publication Number Publication Date
CN113269778A CN113269778A (en) 2021-08-17
CN113269778B true CN113269778B (en) 2022-11-29

Family

ID=77235431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110683693.9A Active CN113269778B (en) 2021-06-21 2021-06-21 Image weak supervision segmentation method based on iteration

Country Status (1)

Country Link
CN (1) CN113269778B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844228A (en) * 2016-03-21 2016-08-10 北京航空航天大学 Remote sensing image cloud detection method based on convolution nerve network
CN108776969A (en) * 2018-05-24 2018-11-09 复旦大学 Breast ultrasound image lesion segmentation approach based on full convolutional network
CN109785344A (en) * 2019-01-22 2019-05-21 成都大学 The remote sensing image segmentation method of binary channel residual error network based on feature recalibration
CN111445488A (en) * 2020-04-22 2020-07-24 南京大学 Method for automatically identifying and segmenting salt body through weak supervised learning
CN111583287A (en) * 2020-04-23 2020-08-25 浙江大学 Deep learning model training method for fine portrait picture segmentation
WO2021076605A1 (en) * 2019-10-14 2021-04-22 Ventana Medical Systems, Inc. Weakly supervised multi-task learning for cell detection and segmentation

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111563897B (en) * 2020-04-13 2024-01-05 北京理工大学 Breast nuclear magnetic image tumor segmentation method and device based on weak supervision learning
CN112668579A (en) * 2020-12-24 2021-04-16 西安电子科技大学 Weak supervision semantic segmentation method based on self-adaptive affinity and class distribution


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"U-Net: Convolutional Networks for Biomedical Image Segmentation"; Ronneberger O et al.; Springer; 2015-12-31 *
"Nodule Localization in Thyroid Ultrasound Images with a Joint-Training Convolutional Neural Network"; Ruoyun Liu et al.; Journal of Digital Imaging; 2020-06-30 *
"Object tracking with online feature selection"; Yang Huixian et al.; Application Research of Computers; 2010-03-15 (No. 03) *
"Classification of adrenal tumors based on contrast-enhanced CT images"; Tang San et al.; Chinese Journal of Scientific Instrument; 2014-12-15 *

Also Published As

Publication number Publication date
CN113269778A (en) 2021-08-17

Similar Documents

Publication Publication Date Title
CN108268870B (en) Multi-scale feature fusion ultrasonic image semantic segmentation method based on counterstudy
CN111784671B (en) Pathological image focus region detection method based on multi-scale deep learning
CN111951288B (en) Skin cancer lesion segmentation method based on deep learning
Liu et al. A framework of wound segmentation based on deep convolutional networks
CN111563897B (en) Breast nuclear magnetic image tumor segmentation method and device based on weak supervision learning
CN111488914A (en) Alzheimer disease classification and prediction system based on multitask learning
CN110766670A (en) Mammary gland molybdenum target image tumor localization algorithm based on deep convolutional neural network
CN112862805B (en) Automatic auditory neuroma image segmentation method and system
CN111311574A (en) Terahertz lesion detection method and system based on artificial intelligence
CN113139977B (en) Mouth cavity curve image wisdom tooth segmentation method based on YOLO and U-Net
CN116884623B (en) Medical rehabilitation prediction system based on laser scanning imaging
CN111179275A (en) Medical ultrasonic image segmentation method
CN113902945A (en) Multi-modal breast magnetic resonance image classification method and system
Zhao et al. Attention residual convolution neural network based on U-net (AttentionResU-Net) for retina vessel segmentation
Yang et al. RADCU-Net: Residual attention and dual-supervision cascaded U-Net for retinal blood vessel segmentation
CN111383222A (en) Intervertebral disc MRI image intelligent diagnosis system based on deep learning
CN115546466A (en) Weak supervision image target positioning method based on multi-scale significant feature fusion
CN113643297B (en) Computer-aided age analysis method based on neural network
CN113539402B (en) Multi-mode image automatic sketching model migration method
CN110992309B (en) Fundus image segmentation method based on deep information transfer network
CN113269778B (en) Image weak supervision segmentation method based on iteration
CN116468923A (en) Image strengthening method and device based on weighted resampling clustering instability
CN112686912B (en) Acute stroke lesion segmentation method based on gradual learning and mixed samples
CN114663421A (en) Retina image intelligent analysis system and method based on information migration and ordered classification
Shahzad et al. Semantic segmentation of anaemic RBCs using multilevel deep convolutional encoder-decoder network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant