CN112241954B - Full-view self-adaptive segmentation network configuration method based on lump differentiation classification - Google Patents
Full-view self-adaptive segmentation network configuration method based on lump differentiation classification
- Publication number
- CN112241954B CN112241954B CN202011140808.1A CN202011140808A CN112241954B CN 112241954 B CN112241954 B CN 112241954B CN 202011140808 A CN202011140808 A CN 202011140808A CN 112241954 B CN112241954 B CN 112241954B
- Authority
- CN
- China
- Prior art keywords
- tumor
- full
- lump
- image
- segmentation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/0012—Biomedical image inspection
- G06N3/045—Neural networks: combinations of networks
- G06N3/08—Neural networks: learning methods
- G06T5/30—Erosion or dilatation, e.g. thinning
- G06T5/40—Image enhancement or restoration using histogram techniques
- G06T7/11—Region-based segmentation
- G06T7/13—Edge detection
- G06T7/194—Foreground-background segmentation
- G06T2207/10004—Still image; photographic image
- G06T2207/20081—Training; learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; lesion
Abstract
The invention provides a full-view self-adaptive segmentation network configuration method based on lump differentiation classification, comprising the steps of: preprocessing the full-view image to enhance local contrast and reduce noise; performing a morphological erosion operation on the preprocessed image to shrink the boundary of each mass and obtain a target image; feeding the target image into a generative adversarial network to classify it as irregular multi-mass, smooth multi-mass, irregular single-mass, or smooth single-mass; performing a morphological dilation operation on the classified images to obtain images carrying classification labels; designing four segmentation network models and segmenting the masses; and measuring the segmentation effect of the full-view adaptive segmentation network according to segmentation indices. Compared with manually extracting the target mass region of interest, the method provided by the invention is more intelligent and efficient.
Description
Technical Field
The invention relates to the technical field of image segmentation, in particular to a full-view self-adaptive segmentation network configuration method based on lump differentiation classification.
Background
In recent years, with the continued development of computer vision, image segmentation techniques have been applied across many industries, and breast mass segmentation in particular has attracted the attention of many researchers. Mammography (molybdenum-target screening) is currently the most common and effective method of early breast cancer screening. Radiologists are often influenced by subjective factors or diagnostic experience when analyzing mammograms, so variability arises between and within observers. Computer-aided detection and diagnosis of abnormalities such as masses and calcifications in mammograms therefore plays an important role, and designing an effective breast mass segmentation assistance system is correspondingly important.
Over the past few decades, a great deal of research has been devoted to breast mass segmentation in mammograms, and deep learning in particular has produced much progress. However, most current breast mass segmentation is performed only after the target mass region of interest has been extracted manually or by a detection technique, and manually extracting the region containing a mass is tedious and difficult work for radiologists. An automatic breast mass segmentation technique operating over the full field of view therefore has high application value, yet there has been little research on simultaneously identifying and segmenting multiple breast masses.
Disclosure of Invention
The invention aims to provide a full-view self-adaptive segmentation network configuration method based on lump differentiation classification, so as to solve the problem that extracting the target mass region of interest manually or by means of a detection technique is tedious and inefficient.
In order to solve the above technical problem, the technical scheme of the invention is as follows. The full-view adaptive segmentation network configuration method based on lump differentiation classification comprises the following steps:
Step 1: preprocess the full-view image, enhancing local contrast and reducing noise;
Step 2: perform a morphological erosion operation on the preprocessed image, shrinking the boundary of each mass to obtain a target image;
Step 3: feed the target image into a generative adversarial network to classify it as irregular multi-mass, smooth multi-mass, irregular single-mass, or smooth single-mass;
Step 4: perform a morphological dilation operation on the classified images, reversing the shrinkage caused by the erosion in step 2, to obtain images with classification labels;
Step 5: design four segmentation network models according to the characteristics of the four target-image categories of step 3, and segment the masses;
Step 6: measure the segmentation effect of the full-field self-adaptive segmentation network based on lump differentiation classification according to the segmentation indices.
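The six-step scheme above can be sketched as a single dispatch pipeline. The Python skeleton below is an illustrative sketch only: every stage function, category name, and the stand-in thresholding segmenters are hypothetical placeholders, not the patent's actual implementation.

```python
import numpy as np

def preprocess(img):
    # Step 1 (placeholder): contrast enhancement + denoising would go here.
    return img

def erode(img):
    # Step 2 (placeholder): morphological erosion to shrink mass boundaries.
    return img

def classify(img):
    # Step 3 (placeholder): a GAN-based classifier would return one of the
    # four categories; a fixed label is returned here for illustration.
    return "smooth_single"

def dilate(img):
    # Step 4 (placeholder): morphological dilation to undo the shrinkage.
    return img

# Step 5: one segmentation model per category; simple thresholding stands in
# for the four category-specific networks described in the patent.
SEGMENTERS = {
    "irregular_multi":  lambda im: im > im.mean(),
    "smooth_multi":     lambda im: im > im.mean(),
    "irregular_single": lambda im: im > im.mean(),
    "smooth_single":    lambda im: im > im.mean(),
}

def segment_full_field(img):
    """Preprocess, erode, classify, dilate, then dispatch to the segmentation
    network configured for the predicted mass category."""
    x = preprocess(img)
    label = classify(erode(x))
    restored = dilate(x)
    return label, SEGMENTERS[label](restored)

label, mask = segment_full_field(np.arange(16.0).reshape(4, 4))
```

The key design point this sketch captures is the classify-then-segment dispatch: the category label selects which segmentation model runs, rather than one model handling all mass shapes.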
Further, in step 1, preprocessing the full-view image comprises histogram equalization, bilateral filtering, and gamma correction.
Further, in step 5, the four segmentation networks are designed and used for mass segmentation as follows. For irregular multi-mass images, an attention-gate module and an atrous spatial pyramid pooling module are added on the basis of an R2U-Net network, and the original recurrent convolutional layers are replaced with cross-feature accumulation modules. For smooth multi-mass images, an attention-gate module and a multi-mass perception module are added on the basis of the R2U-Net network; each convolutional layer in the multi-mass perception module builds feature maps with kernels of three different sizes. For irregular single-mass images, an attention-gate module is added on the basis of the R2U-Net network, and the superpixel image is concatenated with the original image as the network input to extract the edge-contour information of the irregular mass. For smooth single-mass images, an attention-gate module is added on the basis of the R2U-Net network; the attention-trained network suppresses irrelevant regions, highlights useful features, and automatically learns to focus on target structures of different shapes and sizes.
Further, the segmentation indices include sensitivity, specificity, accuracy, precision, Dice coefficient, and Jaccard similarity coefficient.
The full-view adaptive segmentation network configuration method based on lump differentiation classification provided by the invention fully considers segmentation of masses within the whole medical image and, compared with manually extracting the target mass region of interest, is more intelligent and efficient. The method adopts morphological erosion and dilation, which improves the accuracy of mass classification while essentially preserving the original form of each mass, thereby improving the segmentation result. By adaptively configuring the network to classify first and then segment, dividing the masses into four categories with the classification model and segmenting them with different segmentation networks, segmentation indices such as accuracy and precision are greatly improved.
Drawings
The invention is further described below with reference to the accompanying drawings:
fig. 1 is a flowchart illustrating steps of a method for configuring a full-field adaptive segmentation network based on mass differentiation classification according to an embodiment of the present invention.
Detailed Description
The invention provides a full-field adaptive segmentation network configuration method based on lump differentiation classification, which is described in further detail below with reference to the accompanying drawings and specific embodiments. The advantages and features of the invention will become more apparent from the following description and claims. It should be noted that the drawings are in a highly simplified form and use imprecise proportions; they serve only to facilitate a convenient and clear description of the embodiments of the invention.
The core idea of the invention is that the full-view self-adaptive segmentation network configuration method based on lump differentiation classification fully considers segmentation of masses within the whole medical image and, compared with manual extraction of the target mass region of interest, is more intelligent and efficient. Erosion and dilation from morphological processing improve the accuracy of mass classification while essentially preserving each mass's original form, improving the segmentation result. The adaptive classify-then-segment network configuration, which divides masses into four categories with the classification model and segments them with different segmentation networks, greatly improves segmentation indices such as accuracy and precision.
According to the technical scheme, the invention provides a full-field self-adaptive segmentation network configuration method based on lump differentiation classification, and fig. 1 is a flow chart of steps of the full-field self-adaptive segmentation network configuration method based on lump differentiation classification provided by the embodiment of the invention. Referring to fig. 1, the full-view adaptive segmentation network configuration method based on mass differentiation classification includes the steps of:
S11: preprocessing the full-view image, enhancing local contrast and reducing noise;
S12: performing a morphological erosion operation on the preprocessed image and shrinking the boundary of each mass to obtain a target image;
S13: feeding the target image into a generative adversarial network to classify it as irregular multi-mass, smooth multi-mass, irregular single-mass, or smooth single-mass;
S14: performing a morphological dilation operation on the classified images, reversing the shrinkage caused by the erosion in step S12, to obtain images with classification labels;
S15: designing four segmentation network models according to the characteristics of the four target-image categories of S13, and segmenting the masses;
S16: measuring the segmentation effect of the full-field self-adaptive segmentation network based on lump differentiation classification according to the segmentation indices.
In the embodiment of the invention, the image is first converted to grayscale, labels and other background interference are removed by local thresholding and small-region removal, and the breast and pectoral-muscle regions are retained. Histogram equalization, bilateral filtering, and gamma correction are then applied to increase local contrast, smooth and denoise the image, and enhance it.
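As an illustration of two of these preprocessing operations, the following NumPy sketch implements histogram equalization and gamma correction for an 8-bit grayscale image. The filtering step is omitted for brevity, and the function names are ours, not the patent's.

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization for an 8-bit grayscale image: remap intensities
    so the cumulative distribution becomes approximately uniform."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)         # lookup table
    return lut[img]

def gamma_correct(img, gamma=0.5):
    """Gamma correction: gamma < 1 brightens dark regions, increasing
    visibility of low-intensity detail."""
    norm = img.astype(np.float64) / 255.0
    return np.round((norm ** gamma) * 255).astype(np.uint8)

img = np.array([[10, 10, 20], [20, 200, 200], [200, 240, 240]], dtype=np.uint8)
eq = hist_equalize(img)
bright = gamma_correct(img)
```

On the toy image, equalization stretches the clustered intensities across the full 0-255 range, and the gamma curve lifts the darkest pixels most.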
In step 2, the erosion operation is defined as follows: when the origin of a flat structuring element b is located at position (x, y), the erosion of the image f at (x, y) by b is the minimum value of f over the region covered by b:

[f ⊖ b](x, y) = min_{(s,t)∈b} f(x + s, y + t)

That is, to compute the erosion of f by b, the origin of the structuring element is placed at each pixel position in turn, and the erosion value at that position is the minimum of all values of f within the region overlapped by b. The erosion operation yields the target image, i.e., the image whose content is to be classified.
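The erosion definition above amounts to a sliding-window minimum filter. The NumPy sketch below is a minimal illustration with a flat square structuring element, not the patent's code; edge pixels are handled by padding with +inf so the padding never wins the minimum.

```python
import numpy as np

def erode(f, k=3):
    """Grayscale erosion of image f by a flat k x k structuring element:
    each output pixel is the minimum of f over its k x k neighborhood."""
    r = k // 2
    padded = np.pad(f.astype(float), r, constant_values=np.inf)
    out = np.full(f.shape, np.inf)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out = np.minimum(out, padded[r + dy : r + dy + f.shape[0],
                                         r + dx : r + dx + f.shape[1]])
    return out

# A bright 3x3 "mass" on a dark background shrinks to a single pixel,
# which is exactly the boundary-shrinking effect used in step 2.
f = np.zeros((5, 5))
f[1:4, 1:4] = 1.0
shrunk = erode(f)
```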
In step 3, breast masses are classified by a semi-coupled generative adversarial network, which achieves higher precision than a conventional convolutional neural network, reduces computational cost, and improves the robustness of adversarial training.
In step 4, the dilation operation is defined as follows: when the origin of the flat structuring element b is located at position (x, y), the dilation of the image f at (x, y) by b is the maximum value of f over the region covered by b:

[f ⊕ b](x, y) = max_{(s,t)∈b} f(x − s, y − t)

The target image obtained at this point is an image whose content is to be segmented and which carries a classification label.
In step 5, for irregular multi-mass images, an attention-gate (AG) module is added on the basis of the R2U-Net network together with an atrous spatial pyramid pooling (ASPP) module, and the original recurrent convolutional layers are replaced with cross-feature accumulation (CFA) modules, so that information about irregular multiple masses can be extracted effectively. For smooth multi-mass images, an AG module and a multi-mass sensing module (MSM) are added on the basis of the R2U-Net network; each convolutional layer in the MSM builds feature maps with kernels of three different sizes, extracting image features at different receptive fields, so that the feature maps carry multi-scale context and preserve fine mass-position information. For irregular single-mass images, an AG module is added on the basis of the R2U-Net network and, to strengthen the contour and provide structural information, the superpixel image is concatenated with the original image as the network input, so that the edge-contour information of the irregular mass is better extracted. For smooth single-mass images, an AG module is added on the basis of the R2U-Net network; a network trained with AG suppresses irrelevant regions, highlights useful features, and automatically learns to focus on target structures of different shapes and sizes.
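The attention-gate mechanism shared by all four networks can be illustrated with a small NumPy sketch in the spirit of additive attention gates for U-Net-style models: a gating signal reweights the feature map so that irrelevant regions are suppressed. All weights here are random stand-ins for trained 1x1-convolution parameters, and the layout and names are our assumptions, not the patent's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_gate(x, g, w_x, w_g, psi):
    """Minimal additive attention gate: channel-wise 1x1 projections of the
    feature map x and gating signal g are summed, passed through ReLU and a
    sigmoid, and the resulting per-pixel coefficients rescale x."""
    q = np.maximum(x @ w_x + g @ w_g, 0.0)       # ReLU(W_x x + W_g g)
    alpha = 1.0 / (1.0 + np.exp(-(q @ psi)))     # sigmoid(psi(q)), in (0, 1)
    return x * alpha                             # attenuate irrelevant pixels

h, w, c, c_int = 4, 4, 8, 4
x = rng.standard_normal((h, w, c))               # feature map (HWC layout)
g = rng.standard_normal((h, w, c))               # gating signal, same grid
out = attention_gate(x, g,
                     w_x=rng.standard_normal((c, c_int)),
                     w_g=rng.standard_normal((c, c_int)),
                     psi=rng.standard_normal((c_int, 1)))
```

Because the attention coefficients lie strictly between 0 and 1, the gate can only attenuate features, never amplify them; this is what "suppresses irrelevant regions and highlights useful features" means in relative terms.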
The purpose of breast image segmentation is to obtain, for each pixel, a decision as to whether it belongs to a mass or to the background. In step 6, comparing the ground truth (GT) with the segmentation result (SR) gives four cases: true positives (TP), the number of pixels correctly assigned to the positive (mass) class; false positives (FP), the number of background pixels wrongly assigned to the positive class; true negatives (TN), the number of pixels correctly assigned to the negative class; and false negatives (FN), the number of pixels wrongly assigned to the negative class. The most commonly used criteria for evaluating performance are sensitivity (SE), specificity (SP), accuracy (Acc), precision (PPV), F-measure (F1), Dice coefficient (DC), and Jaccard similarity coefficient (JC), defined as:

SE = TP / (TP + FN)
SP = TN / (TN + FP)
Acc = (TP + TN) / (TP + TN + FP + FN)
PPV = TP / (TP + FP)
F1 = 2 · PPV · SE / (PPV + SE)
DC = 2 · TP / (2 · TP + FP + FN)
JC = TP / (TP + FP + FN)
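The pixel-count definitions above translate directly into code. The following sketch computes most of the listed indices from a ground-truth mask and a segmentation result on a toy 2x4 example; the function name is ours.

```python
import numpy as np

def segmentation_metrics(gt, sr):
    """Pixel-wise comparison of ground truth (GT) and segmentation result
    (SR) masks, yielding segmentation indices from TP/FP/TN/FN counts."""
    gt, sr = gt.astype(bool), sr.astype(bool)
    tp = np.sum(gt & sr)     # mass pixels correctly detected
    fp = np.sum(~gt & sr)    # background pixels called mass
    tn = np.sum(~gt & ~sr)   # background correctly rejected
    fn = np.sum(gt & ~sr)    # mass pixels missed
    return {
        "SE":  tp / (tp + fn),                  # sensitivity (recall)
        "SP":  tn / (tn + fp),                  # specificity
        "Acc": (tp + tn) / (tp + tn + fp + fn), # accuracy
        "PPV": tp / (tp + fp),                  # precision
        "DC":  2 * tp / (2 * tp + fp + fn),     # Dice coefficient
        "JC":  tp / (tp + fp + fn),             # Jaccard similarity
    }

gt = np.array([[1, 1, 0, 0], [1, 1, 0, 0]])
sr = np.array([[1, 1, 1, 0], [1, 0, 0, 0]])
m = segmentation_metrics(gt, sr)
```

Here TP = 3, FP = 1, TN = 3, FN = 1, so Dice = 6/8 = 0.75 and Jaccard = 3/5 = 0.6, illustrating that Dice is always at least as large as Jaccard on the same masks.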
through the evaluation of the segmentation indexes, the segmentation effect of the full-field self-adaptive segmentation network based on the lump differentiation classification is good.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (3)
1. A full-view self-adaptive segmentation network configuration method based on lump differentiation classification is characterized by comprising the following steps:
step 1: preprocessing the full-view image, enhancing the local contrast of the image and reducing noise;
step 2: performing a morphological erosion operation on the preprocessed image and shrinking the boundary of each mass to obtain a target image;
step 3: classifying the target image as irregular multi-mass, smooth multi-mass, irregular single-mass, or smooth single-mass by feeding it into a generative adversarial network;
step 4: performing a morphological dilation operation on the classified images, reversing the mass shrinkage caused by the erosion operation of step 2, to obtain images with classification labels;
step 5: respectively designing four segmentation network models according to the characteristics of the four categories of target images of step 3 and segmenting the masses, wherein: for an irregular multi-mass image, an attention-gate module and an atrous spatial pyramid pooling module are added on the basis of an R2U-Net network, and the original recurrent convolutional layers are replaced with cross-feature accumulation modules; for a smooth multi-mass image, an attention-gate module and a multi-mass perception module are added on the basis of the R2U-Net network, each convolutional layer in the multi-mass perception module constructing feature maps with kernels of three different sizes; for an irregular single-mass image, an attention-gate module is added on the basis of the R2U-Net network, and the superpixel image is concatenated with the original image as the network input so as to extract the edge-contour information of the irregular mass; for a smooth single-mass image, an attention-gate module is added on the basis of the R2U-Net network, the network trained with the attention-gate module suppressing irrelevant regions, highlighting useful features, and automatically learning to focus on target structures of different shapes and sizes;
step 6: measuring the segmentation effect of the full-field self-adaptive segmentation network based on lump differentiation classification according to the segmentation indices.
2. The method of configuring a full-field adaptive segmentation network based on mass differentiation classification as set forth in claim 1, wherein in step 1, preprocessing the full-field image comprises: histogram equalization, bilateral filtering, and gamma correction.
3. The method of claim 1, wherein in step 6, the segmentation indices include sensitivity, specificity, accuracy, precision, Dice coefficient, and Jaccard similarity coefficient.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011140808.1A CN112241954B (en) | 2020-10-22 | 2020-10-22 | Full-view self-adaptive segmentation network configuration method based on lump differentiation classification |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112241954A CN112241954A (en) | 2021-01-19 |
CN112241954B true CN112241954B (en) | 2024-03-15 |
Family
ID=74169900
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011140808.1A Active CN112241954B (en) | 2020-10-22 | 2020-10-22 | Full-view self-adaptive segmentation network configuration method based on lump differentiation classification |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112241954B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107886514A (en) * | 2017-11-22 | 2018-04-06 | 浙江中医药大学 | Breast molybdenum target image lump semantic segmentation method based on depth residual error network |
CN110414539A (en) * | 2019-08-05 | 2019-11-05 | 腾讯科技(深圳)有限公司 | A kind of method and relevant apparatus for extracting characterization information |
CN110490850A (en) * | 2019-02-14 | 2019-11-22 | 腾讯科技(深圳)有限公司 | A kind of lump method for detecting area, device and Medical Image Processing equipment |
WO2020019671A1 (en) * | 2018-07-23 | 2020-01-30 | 哈尔滨工业大学(深圳) | Breast lump detection and classification system and computer-readable storage medium |
CN111192245A (en) * | 2019-12-26 | 2020-05-22 | 河南工业大学 | Brain tumor segmentation network and method based on U-Net network |
- 2020-10-22: Application CN202011140808.1A filed; granted as CN112241954B (status: active)
Non-Patent Citations (1)
- Research on breast mass segmentation and classification methods based on an adaptive energy offset field edge-free active contour model; Wang Xiaoyi, Xing Suxia, Wang Yu, Cao Yu, Shen Nan, Pan Ziyan; Chinese Journal of Medical Physics, No. 8 (cited by examiner, full text)
Also Published As
Publication number | Publication date |
---|---|
CN112241954A (en) | 2021-01-19 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |