CN112241954B - Full-view self-adaptive segmentation network configuration method based on lump differentiation classification - Google Patents

Full-view self-adaptive segmentation network configuration method based on lump differentiation classification Download PDF

Info

Publication number
CN112241954B
CN112241954B (application CN202011140808.1A)
Authority
CN
China
Prior art keywords
tumor
full
lump
image
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011140808.1A
Other languages
Chinese (zh)
Other versions
CN112241954A (en)
Inventor
陈颖昭
焦佳佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Maritime University
Original Assignee
Shanghai Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Maritime University filed Critical Shanghai Maritime University
Priority to CN202011140808.1A priority Critical patent/CN112241954B/en
Publication of CN112241954A publication Critical patent/CN112241954A/en
Application granted granted Critical
Publication of CN112241954B publication Critical patent/CN112241954B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G06T 7/0012 — Image analysis; biomedical image inspection
    • G06N 3/045 — Neural networks; combinations of networks
    • G06N 3/08 — Neural networks; learning methods
    • G06T 5/30 — Image enhancement or restoration; erosion or dilatation, e.g. thinning
    • G06T 5/40 — Image enhancement or restoration using histogram techniques
    • G06T 7/11 — Segmentation; region-based segmentation
    • G06T 7/13 — Segmentation; edge detection
    • G06T 7/194 — Segmentation involving foreground-background segmentation
    • G06T 2207/10004 — Image acquisition modality: still image; photographic image
    • G06T 2207/20081 — Special algorithmic details: training; learning
    • G06T 2207/20084 — Special algorithmic details: artificial neural networks [ANN]
    • G06T 2207/20104 — Interactive definition of region of interest [ROI]
    • G06T 2207/30096 — Subject of image: tumor; lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a full-field adaptive segmentation network configuration method based on lump differentiation classification. The method comprises: preprocessing the full-field image to enhance local contrast and suppress noise; performing a morphological erosion operation on the preprocessed image to shrink the boundary of each mass and obtain a target image; feeding the target image into a generative adversarial network to classify it as irregular multi-mass, smooth multi-mass, irregular single-mass, or smooth single-mass; performing a morphological dilation operation on the classified image to obtain an image with a classification label; designing four segmentation network models and segmenting the masses; and measuring the segmentation effect of the full-field adaptive segmentation network according to segmentation indices. Compared with manual extraction of the target mass region of interest, the proposed method is more intelligent and efficient.

Description

Full-view self-adaptive segmentation network configuration method based on lump differentiation classification
Technical Field
The invention relates to the technical field of image segmentation, in particular to a full-view self-adaptive segmentation network configuration method based on lump differentiation classification.
Background
In recent years, with the continuous development of computer vision, image segmentation techniques have been applied across many industries, and breast mass segmentation in particular has attracted the attention of many researchers. Mammography (breast molybdenum-target imaging) is currently the most common and effective method of early breast cancer screening. Radiologists are often influenced by subjective factors and diagnostic experience when reading mammograms, which leads to variability between and within observers. Computer-aided detection and diagnosis of abnormalities such as masses and calcifications in mammograms therefore plays an important role, and designing an effective breast mass segmentation support system is correspondingly important.
Over the past few decades, a great deal of research has been devoted to mass segmentation in mammograms, and deep learning in particular has made considerable progress. However, most current breast mass segmentation is performed only after the target mass region of interest has been extracted manually or by a separate detection stage, and manually extracting the region containing the mass is tedious and difficult work for radiologists. An automatic breast mass segmentation technique that operates over the full field of view therefore has high application value; moreover, little research has addressed the simultaneous identification and segmentation of multiple breast masses.
Disclosure of Invention
The invention aims to provide a full-field adaptive segmentation network configuration method based on lump differentiation classification, in order to solve the problem that extracting the target mass region of interest manually or by means of a separate detection technique is tedious and inefficient.
In order to solve the above technical problem, the technical scheme of the invention is as follows. The full-field adaptive segmentation network configuration method based on lump differentiation classification comprises the following steps:
Step 1: preprocess the full-field image, enhancing local contrast and suppressing noise;
Step 2: perform a morphological erosion operation on the preprocessed image, shrinking the boundary of each mass to obtain a target image;
Step 3: feed the target image into a generative adversarial network to classify it as irregular multi-mass, smooth multi-mass, irregular single-mass, or smooth single-mass;
Step 4: perform a morphological dilation operation on the classified image, compensating for the mass shrinkage caused by the erosion in step 2, to obtain an image with a classification label;
Step 5: according to the characteristics of the four classes of target images in step 3, design four segmentation network models and segment the masses;
Step 6: measure the segmentation effect of the full-field adaptive segmentation network based on lump differentiation classification according to the segmentation indices.
Further, in step 1, preprocessing the full-field image comprises: histogram equalization, bilateral filtering, and gamma transformation.
Further, in step 5, model design and mass segmentation are performed for the four segmentation networks. For irregular multi-mass images, an attention gate (AG) module and an atrous spatial pyramid pooling (ASPP) module are added to the R2U-Net backbone, and the original recurrent convolution layers are replaced by cross-feature accumulation (CFA) modules. For smooth multi-mass images, AG modules and a multi-mass sensing module are added to the R2U-Net backbone, where each convolution layer in the multi-mass sensing module builds feature maps with three kernels of different sizes. For irregular single-mass images, an AG module is added to the R2U-Net backbone, and the superpixel image is concatenated with the original image as network input to extract the edge contour information of the irregular mass. For smooth single-mass images, an AG module is added to the R2U-Net backbone; the AG-trained network suppresses irrelevant regions and highlights useful features, automatically learning to focus on target structures of different shapes and sizes.
Further, the segmentation indices include sensitivity, specificity, accuracy, precision, Dice coefficient, and Jaccard similarity coefficient.
The full-field adaptive segmentation network configuration method based on lump differentiation classification provided by the invention fully considers mass segmentation over the whole medical image; compared with manual extraction of the target mass region of interest, it is more intelligent and efficient. The erosion and dilation operations of morphological processing improve the accuracy of mass classification while essentially preserving the original morphology of the masses, which improves the segmentation result. The classify-then-segment adaptive network configuration divides the masses into four classes with the classification model and segments them with different segmentation networks, which greatly improves segmentation indices such as precision and recall.
Drawings
The invention is further described below with reference to the accompanying drawings:
fig. 1 is a flowchart illustrating steps of a method for configuring a full-field adaptive segmentation network based on mass differentiation classification according to an embodiment of the present invention.
Detailed Description
The invention provides a full-field adaptive segmentation network configuration method based on lump differentiation classification, which is described in further detail below with reference to the accompanying drawings and specific embodiments. Advantages and features of the invention will become more apparent from the following description and the claims. It is noted that the drawings are in a greatly simplified form and are not drawn to scale; they are intended only to facilitate a convenient and clear description of the embodiments of the invention.
The core idea of the invention is that the full-field adaptive segmentation network configuration method based on lump differentiation classification fully considers mass segmentation over the whole medical image; compared with manual extraction of the target mass region of interest, it is more intelligent and efficient. The erosion and dilation operations of morphological processing improve the accuracy of mass classification while essentially preserving the original morphology of the masses, which improves the segmentation result. The classify-then-segment adaptive network configuration divides the masses into four classes with the classification model and segments them with different segmentation networks, which greatly improves segmentation indices such as precision and recall.
In accordance with the above technical scheme, the invention provides a full-field adaptive segmentation network configuration method based on lump differentiation classification. Fig. 1 is a flowchart of the steps of the method according to an embodiment of the invention. Referring to fig. 1, the method comprises the following steps:
s11: preprocessing the full-view image, enhancing the local contrast of the image and reducing other noise;
s12: performing morphological corrosion operation on the preprocessed image, and reducing the boundary of each tumor to obtain a target image;
s13: classifying the target image into an irregular multi-tumor, a smooth multi-tumor, an irregular single-tumor and a smooth single-tumor by feeding the target image into a generated countermeasure network;
s14: performing morphological expansion operation on the classified images, and reducing the tumor shrinkage caused by the corrosion operation performed in the step S12 to obtain images with classification labels;
s15: respectively designing four segmentation network models and segmenting the tumor according to the characteristics of the four types of target images in the S13;
s16: further, in S15, a dynamic timer is set according to the priority of the node, and the node waiting time is shorter when the priority is higher.
In the embodiment of the invention, the image is first converted to grayscale, labels and other interference in the background are removed by local thresholding and small-region removal, and the breast and pectoral muscle regions are retained. Histogram equalization, bilateral filtering, and gamma transformation are then applied to increase local contrast, smooth and denoise the image, and enhance it.
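Two of these preprocessing steps, histogram equalization and gamma transformation, can be sketched in NumPy. This is a minimal illustration, not the patent's implementation: the 8-bit input assumption and the gamma value are illustrative choices.

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level through the normalized cumulative distribution.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

def gamma_transform(img, gamma=0.5):
    """Power-law (gamma) transform; gamma < 1 brightens dark regions."""
    normalized = img.astype(np.float64) / 255.0
    return np.round(255.0 * normalized ** gamma).astype(np.uint8)
```

Applied in sequence, these stretch local contrast before the morphological operations of step 2.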
In step 2, the erosion operation is defined as follows: with the origin of the flat structuring element b located at (x, y), the erosion of the image f at (x, y) by b is the minimum of f over the region covered by b:

[f ⊖ b](x, y) = min_{(s, t) ∈ b} f(x + s, y + t)

That is, to compute the erosion of f by b, the origin of the structuring element is placed at the position of each pixel of the image, and the eroded value at each position is the minimum of all values of f within the region overlapped by b. The erosion operation yields the target image, i.e. the image whose content is to be classified.
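The erosion just defined can be sketched with a minimal NumPy implementation; the k×k flat structuring element and the edge-replication padding are illustrative choices, not specified by the patent.

```python
import numpy as np

def erode(img, k=3):
    """Grayscale erosion with a flat k x k structuring element:
    each output pixel is the minimum of f over the window covered by b."""
    pad = k // 2
    # Replicate the border so edge pixels see a full-size window.
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out
```

On a binary mass mask, this shrinks each mass boundary by roughly k // 2 pixels, which is exactly the boundary-reduction effect step 2 relies on.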
In step 3, the breast masses are classified by a semi-coupled generative adversarial network, which, compared with a conventional convolutional neural network, achieves higher accuracy, reduces computational cost, and improves the robustness of adversarial training.
In step 4, the dilation operation is defined as follows: with the origin of the flat structuring element b located at (x, y), the dilation of the image f at (x, y) by b is the maximum of f over the region covered by b:

[f ⊕ b](x, y) = max_{(s, t) ∈ b} f(x − s, y − t)

The target image obtained at this point is an image whose content is to be segmented and which carries a classification label.
In step 5, for irregular multi-mass images, an attention gate (AG) module and an atrous spatial pyramid pooling (ASPP) module are added to the R2U-Net backbone, and the original recurrent convolution layers are replaced by cross-feature accumulation (CFA) modules, so that the information of irregular multiple masses can be extracted effectively. For smooth multi-mass images, AG modules and a multi-mass sensing module (MSM) are added to the R2U-Net backbone; each convolution layer in the MSM builds feature maps with three kernels of different sizes, extracting image features from different receptive fields, so that the feature maps carry multi-scale context information and preserve fine mass-location information. For irregular single-mass images, an AG module is added to the R2U-Net backbone, and, to strengthen the contour and provide structural information, the superpixel image is concatenated with the original image as network input, so that the edge contour information of the irregular mass is better extracted. For smooth single-mass images, an AG module is added to the R2U-Net backbone; the AG-trained network suppresses irrelevant regions and highlights useful features, and the model automatically learns to focus on target structures of different shapes and sizes.
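The adaptive-configuration idea of step 5 (pick the segmentation network that matches the class produced by the GAN classifier) can be sketched as a dispatch table. The label strings and builder functions below are hypothetical names for illustration; the patent does not specify an implementation.

```python
# Hypothetical builders; each would construct the corresponding network
# variant described in step 5. Here they just return a description string.
def build_irregular_multi():   return "R2U-Net + AG + ASPP + CFA"
def build_smooth_multi():      return "R2U-Net + AG + MSM"
def build_irregular_single():  return "R2U-Net + AG + superpixel input"
def build_smooth_single():     return "R2U-Net + AG"

SEGMENTERS = {
    "irregular_multi": build_irregular_multi,
    "smooth_multi": build_smooth_multi,
    "irregular_single": build_irregular_single,
    "smooth_single": build_smooth_single,
}

def configure_segmenter(label):
    """Adaptive configuration: map the classifier's label to the
    matching segmentation network."""
    try:
        return SEGMENTERS[label]()
    except KeyError:
        raise ValueError(f"unknown mass class: {label!r}")
```

A registry like this keeps the classify-then-segment pipeline a single lookup rather than a chain of conditionals.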
The purpose of breast image segmentation is to obtain, for each pixel, a decision on whether it belongs to a mass or to the background. In step 6, comparing the ground truth (GT) with the segmentation result (SR) gives four cases: true positive (TP), the number of pixels correctly assigned to the positive (lesion) class; false positive (FP), the number of background pixels wrongly assigned to the positive class; true negative (TN), the number of pixels correctly assigned to the negative class; and false negative (FN), the number of pixels wrongly assigned to the negative class. The most commonly used evaluation criteria are sensitivity (SE), specificity (SP), accuracy (Acc), precision (positive predictive value, PPV), F-measure (F1), Dice coefficient (DC), and Jaccard similarity coefficient (JC), defined as follows:

SE = TP / (TP + FN)
SP = TN / (TN + FP)
Acc = (TP + TN) / (TP + TN + FP + FN)
PPV = TP / (TP + FP)
F1 = 2 · PPV · SE / (PPV + SE)
DC = 2 · TP / (2 · TP + FP + FN)
JC = TP / (TP + FP + FN)
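Under the standard confusion-matrix definitions above, the indices can be computed as a plain-Python sketch; the function name and the flat 0/1 mask encoding are assumptions for illustration.

```python
def segmentation_metrics(gt, sr):
    """Pixel-wise metrics from a ground-truth mask gt and a segmentation
    result sr, given as flat sequences of 0/1 labels."""
    tp = sum(1 for g, s in zip(gt, sr) if g == 1 and s == 1)
    fp = sum(1 for g, s in zip(gt, sr) if g == 0 and s == 1)
    tn = sum(1 for g, s in zip(gt, sr) if g == 0 and s == 0)
    fn = sum(1 for g, s in zip(gt, sr) if g == 1 and s == 0)
    se = tp / (tp + fn)                       # sensitivity / recall
    sp = tn / (tn + fp)                       # specificity
    acc = (tp + tn) / (tp + tn + fp + fn)     # accuracy
    ppv = tp / (tp + fp)                      # precision (PPV)
    f1 = 2 * ppv * se / (ppv + se)            # F-measure
    dc = 2 * tp / (2 * tp + fp + fn)          # Dice coefficient
    jc = tp / (tp + fp + fn)                  # Jaccard similarity
    return dict(SE=se, SP=sp, Acc=acc, PPV=ppv, F1=f1, DC=dc, JC=jc)
```

Note that DC and JC ignore true negatives, which makes them more informative than accuracy when the mass occupies a small fraction of the image.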
through the evaluation of the segmentation indexes, the segmentation effect of the full-field self-adaptive segmentation network based on the lump differentiation classification is good.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (3)

1. A full-field adaptive segmentation network configuration method based on lump differentiation classification, characterized by comprising the following steps:
Step 1: preprocess the full-field image, enhancing local contrast and suppressing noise;
Step 2: perform a morphological erosion operation on the preprocessed image, shrinking the boundary of each mass to obtain a target image;
Step 3: feed the target image into a generative adversarial network to classify it as irregular multi-mass, smooth multi-mass, irregular single-mass, or smooth single-mass;
Step 4: perform a morphological dilation operation on the classified image, compensating for the mass shrinkage caused by the erosion in step 2, to obtain an image with a classification label;
Step 5: according to the characteristics of the four classes of target images in step 3, design four segmentation network models and segment the masses: for irregular multi-mass images, an attention gate module and an atrous spatial pyramid pooling module are added to the R2U-Net backbone, and the original recurrent convolution layers are replaced by cross-feature accumulation modules; for smooth multi-mass images, attention gate modules and a multi-mass sensing module are added to the R2U-Net backbone, each convolution layer in the multi-mass sensing module building feature maps with three kernels of different sizes; for irregular single-mass images, an attention gate module is added to the R2U-Net backbone, and the superpixel image is concatenated with the original image as network input to extract the edge contour information of the irregular mass; for smooth single-mass images, an attention gate module is added to the R2U-Net backbone, the network trained with it suppressing irrelevant regions, highlighting useful features, and automatically learning to focus on target structures of different shapes and sizes;
Step 6: measure the segmentation effect of the full-field adaptive segmentation network based on lump differentiation classification according to the segmentation indices.
2. The full-field adaptive segmentation network configuration method based on lump differentiation classification according to claim 1, characterized in that in step 1, preprocessing the full-field image comprises: histogram equalization, bilateral filtering, and gamma transformation.
3. The full-field adaptive segmentation network configuration method based on lump differentiation classification according to claim 1, characterized in that in step 6, the segmentation indices include sensitivity, specificity, accuracy, precision, Dice coefficient, and Jaccard similarity coefficient.
CN202011140808.1A 2020-10-22 2020-10-22 Full-view self-adaptive segmentation network configuration method based on lump differentiation classification Active CN112241954B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011140808.1A CN112241954B (en) 2020-10-22 2020-10-22 Full-view self-adaptive segmentation network configuration method based on lump differentiation classification


Publications (2)

Publication Number Publication Date
CN112241954A CN112241954A (en) 2021-01-19
CN112241954B true CN112241954B (en) 2024-03-15

Family

ID=74169900

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011140808.1A Active CN112241954B (en) 2020-10-22 2020-10-22 Full-view self-adaptive segmentation network configuration method based on lump differentiation classification

Country Status (1)

Country Link
CN (1) CN112241954B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107886514A (en) * 2017-11-22 2018-04-06 浙江中医药大学 Breast molybdenum target image lump semantic segmentation method based on depth residual error network
CN110414539A (en) * 2019-08-05 2019-11-05 腾讯科技(深圳)有限公司 A kind of method and relevant apparatus for extracting characterization information
CN110490850A (en) * 2019-02-14 2019-11-22 腾讯科技(深圳)有限公司 A kind of lump method for detecting area, device and Medical Image Processing equipment
WO2020019671A1 (en) * 2018-07-23 2020-01-30 哈尔滨工业大学(深圳) Breast lump detection and classification system and computer-readable storage medium
CN111192245A (en) * 2019-12-26 2020-05-22 河南工业大学 Brain tumor segmentation network and method based on U-Net network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on breast mass segmentation and classification based on an edge-free active contour model with an adaptive energy offset field; 王孝义, 邢素霞, 王瑜, 曹宇, 申楠, 潘子妍; Chinese Journal of Medical Physics, No. 8; full text *

Also Published As

Publication number Publication date
CN112241954A (en) 2021-01-19

Similar Documents

Publication Publication Date Title
CN110334706B (en) Image target identification method and device
CN109840521B (en) Integrated license plate recognition method based on deep learning
CN109636824B (en) Multi-target counting method based on image recognition technology
CN105809175A (en) Encephaledema segmentation method and system based on support vector machine algorithm
CN115311507B (en) Building board classification method based on data processing
CN110070545B (en) Method for automatically extracting urban built-up area by urban texture feature density
CN104217213A (en) Medical image multi-stage classification method based on symmetry theory
CN115272647A (en) Lung image recognition processing method and system
CN116758336A (en) Medical image intelligent analysis system based on artificial intelligence
CN108921172A (en) Image processing apparatus and method based on support vector machines
CN115294377A (en) System and method for identifying road cracks
CN109272522B (en) A kind of image thinning dividing method based on local feature
CN111127407B (en) Fourier transform-based style migration forged image detection device and method
CN113177554A (en) Thyroid nodule identification and segmentation method, system, storage medium and equipment
CN105844299B (en) A kind of image classification method based on bag of words
CN112241954B (en) Full-view self-adaptive segmentation network configuration method based on lump differentiation classification
CN110363240A (en) A kind of medical image classification method and system
CN113763407B (en) Nodule edge analysis method of ultrasonic image
CN113870194B (en) Breast tumor ultrasonic image processing device with fusion of deep layer characteristics and shallow layer LBP characteristics
CN112926670A (en) Garbage classification system and method based on transfer learning
CN114140830A (en) Repeated identification inhibition method based on circulating tumor cell image
CN111783571A (en) Cervical cell automatic classification model establishment and cervical cell automatic classification method
Prasad et al. A multi-classifier and decision fusion framework for robust classification of mammographic masses
CN111414956B (en) Multi-example learning identification method for fuzzy mode in lung CT image
CN113516022B (en) Fine-grained classification system for cervical cells

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant