CN109859159B - Cervical lesion region segmentation method and device based on multi-mode segmentation network - Google Patents
- Publication number
- CN109859159B (application CN201811469200.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- acetic acid
- segmentation
- iodine
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a cervical lesion region segmentation method and device based on a multi-modal segmentation network, belonging to the technical field of medical image processing. The features of an acetic acid image and an iodine image are fused in a cross-connection manner during the feature extraction process of the two images. To fuse the features of the two images, the acetic acid image features from one convolution block are concatenated at the channel level with the iodine image features from the same block, and the result enters the next convolution block of the iodine image branch for subsequent feature learning; similarly, the iodine image features from one convolution block are concatenated with the acetic acid image features from the same block, and the result enters the next convolution block of the acetic acid image branch for subsequent feature learning. This cross connection continues up to the fifth convolution block, so the acetic acid and iodine features output by the fifth convolution block substantially retain the characteristics of both images. The features learned by the acetic acid image branch and the iodine image branch then enter the segmentation part of their respective FCN models to perform segmentation prediction.
Description
Technical Field
The invention relates to the technical field of medical image processing, in particular to a cervical lesion region segmentation method and device based on a multi-mode segmentation network.
Background
Cervical cancer is one of the leading threats to women's lives worldwide and is currently the only human malignant tumor with a definitively established etiology. Some screening techniques prevent cervical cancer by detecting squamous intraepithelial lesions (SIL). Squamous intraepithelial lesions are divided into two categories: high-grade squamous intraepithelial lesions (HSIL) and low-grade squamous intraepithelial lesions (LSIL). Further progression of HSIL leads to cervical cancer. Therefore, HSIL requires further therapeutic and preventive measures in clinical practice, whereas LSIL only requires regular follow-up visits with mild treatment.
Screening for cervical lesions is mainly performed by HPV testing, Pap testing, digital cervicography, and colposcopy. The multi-modal colposcopic images employed in the present invention come from colposcopy. Colposcopy proceeds as follows: the cervix is directly exposed, and 0.9% physiological saline, 3%-5% acetic acid solution, and compound iodine solution are applied to its surface in sequence. From the captured cervical images, the squamocolumnar junction and the columnar epithelial region are carefully examined for lesion foci, the nature and type of any lesions are evaluated, and their extent is determined. This information finally guides the selection of an accurate biopsy site, avoiding blind biopsy and improving the biopsy positive rate and diagnostic accuracy.
However, the diagnostic result of colposcopy depends largely on the subjective experience of the doctor, and the accuracy of this judgment directly affects the biopsy positive rate and diagnostic accuracy. With the development of artificial intelligence and medical image analysis, many machine learning and deep learning methods have been applied to computer-aided diagnosis of medical images, helping doctors make more accurate diagnoses.
Semantic segmentation of images is an important research direction in the field of computer vision; its task is to segment an object, a specific region, or a scene in an image. In medical images, semantic segmentation is often used to segment lesions, organs, cells, or tissues, facilitating subsequent lesion grading or diagnosis by a physician. In the 2015 CVPR paper "Fully Convolutional Networks for Semantic Segmentation", Jonathan Long et al. proposed performing the semantic segmentation task with fully convolutional networks (FCN), using convolution and deconvolution in place of fully connected layers. This achieved breakthrough results, and FCN has become one of the main approaches to semantic segmentation.
During colposcopic examination, doctors repeatedly compare the changes of the cervical epithelium after acetic acid is applied and after iodine solution is applied (the colposcopic image captured after applying acetic acid is called the acetic acid image; the one captured after applying iodine solution, the iodine image), looking for regions abnormal in both images in order to identify lesions and improve the accuracy of biopsy. Accordingly, the present invention builds on the FCN method and the acetic acid and iodine images of cervical colposcopy to provide a cervical lesion segmentation technique based on a multi-modal segmentation network, assisting doctors in identifying cervical lesions under colposcopy.
Disclosure of Invention
The invention aims to provide a cervical lesion region segmentation method and device based on a multi-mode segmentation network, which improve the diagnosis efficiency and accuracy of cervical lesions.
In order to achieve the above object, the cervical lesion region segmentation method based on the multi-modal segmentation network provided by the present invention comprises the following steps:
1) acquiring acetic acid images and iodine images of a plurality of cervical samples, and marking lesion areas of all the images to obtain a cervical sample training set;
2) inputting acetic acid images of the same group of images in the training set into a first FCN network, inputting iodine images into a second FCN network, wherein the first FCN network and the second FCN network have the same structure and jointly form a multi-modal cervical lesion area segmentation network;
3) cross-connecting the feature extraction parts of the first FCN network and the second FCN network, and fusing the features of the acetic acid image and the iodine image to obtain a feature map of the acetic acid image and a feature map of the iodine image;
4) segmenting the characteristic diagram of the acetic acid image at the segmentation part of the first FCN network to obtain an acetic acid image segmentation result; segmenting the feature map of the iodine image at the segmentation part of the second FCN network to obtain an iodine image segmentation result;
5) calculating the loss of the multi-modal cervical lesion region segmentation network from the segmentation results of the acetic acid image and the iodine image and their corresponding lesion-region annotations, and updating the parameters of the multi-modal cervical lesion region segmentation network according to the loss;
6) inputting the acetic acid image and the iodine image of the next group of images into the first FCN network and the second FCN network respectively, and repeating steps 3)-5) to train the multi-modal cervical lesion region segmentation network until convergence, obtaining a multi-modal cervical lesion region segmentation model;
7) inputting an acetic acid image and an iodine image of a cervical sample to be detected into a multi-modal cervical lesion region segmentation model to obtain a segmentation prediction map.
In this technical scheme, building on the prior art, the features of the acetic acid image and the iodine image are fused through cross connection of their feature maps; the potential correlation between the two images is thereby fully exploited and learned, promoting the learning of lesion features and improving segmentation accuracy. The final segmentation map assists the doctor's diagnosis and improves diagnostic efficiency.
To tune the learning rate of the model and improve its generalization performance, the method preferably further comprises the following steps:
obtaining a cervical sample verification set by the method in the step 1);
each time the multi-modal cervical lesion region segmentation model completes a training iteration, validating it using the acetic acid image and the iodine image of the same group of images in the validation set, and adjusting the model learning rate.
Preferably, the method further comprises the following steps:
obtaining a cervical sample test set by the method in the step 1);
and testing the trained multi-modal cervical lesion region segmentation model by using the acetic acid image and the iodine image of the same group of images in the test set, and calculating the segmentation accuracy, recall, and mIoU value.
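The patent does not spell out the metric formulas, so the following is a minimal NumPy sketch of the standard definitions of precision (segmentation accuracy), recall, and mIoU for a binary lesion mask; the function name and the two-class (lesion/background) averaging are assumptions, not taken from the patent.

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Standard per-image metrics for a binary lesion mask.

    pred, target: boolean arrays of the same shape (True = lesion).
    Returns (precision, recall, miou); mIoU averages the IoU of the
    lesion class and the background class.
    """
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    iou_lesion = tp / max(tp + fp + fn, 1)
    iou_background = tn / max(tn + fp + fn, 1)
    miou = (iou_lesion + iou_background) / 2
    return precision, recall, miou

# Toy 4-pixel example: prediction marks 2 pixels, target marks 2, overlap 1.
pred = np.array([True, True, False, False])
target = np.array([True, False, True, False])
p, r, m = segmentation_metrics(pred, target)
# precision = 1/2, recall = 1/2, IoU_lesion = 1/3, IoU_background = 1/3, mIoU = 1/3
```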
Preferably, the data ratio of the training set, the validation set, and the test set is 7:2:1.
In clinical diagnosis, doctors often examine the aceto-white region of the acetic acid image and the iodine-unstained region of the iodine image at the same time, repeatedly comparing the two to find common lesion regions and improve the accuracy of biopsy. There is therefore a certain correlation between the acetic acid image features and the iodine image features. To simulate the doctor's actual diagnostic procedure as closely as possible, step 3) preferably includes:
the acetic acid image features extracted by the Nth convolution block of the first FCN network are fused with the iodine image features extracted by the Nth convolution block of the second FCN network before the convolution operation of the (N+1)th convolution block of the second FCN network;
meanwhile, the iodine image features extracted by the Nth convolution block of the second FCN network are fused with the acetic acid image features extracted by the Nth convolution block of the first FCN network before the convolution operation of the (N+1)th convolution block of the first FCN network;
the feature map of the acetic acid image and the feature map of the iodine image are obtained once all convolution operations are completed.
Preferably, the acetic acid image features and the iodine image features are fused through concatenation of the feature channels.
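The channel-level concatenation can be sketched with NumPy as follows; the feature shapes (64 channels, 56x56) are illustrative assumptions, and a real implementation would use a deep learning framework.

```python
import numpy as np

# Hypothetical block-N outputs of the two branches: (channels, height, width).
acetic_feat = np.ones((64, 56, 56))   # acetic acid branch
iodine_feat = np.zeros((64, 56, 56))  # iodine branch

# Cross connection: before block N+1, each branch receives its own features
# concatenated along the channel dimension with the other branch's features.
acetic_next_in = np.concatenate([acetic_feat, iodine_feat], axis=0)
iodine_next_in = np.concatenate([iodine_feat, acetic_feat], axis=0)

# The channel count doubles (64 -> 128); the spatial size is unchanged.
```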
Preferably, the feature extraction part of the first and second FCN networks is based on VGG-16 and comprises five convolution blocks, each followed by a pooling layer.
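Since each of the five VGG-16 convolution blocks ends in a 2x2 pooling layer, the spatial resolution halves five times; assuming a 224x224 input (the standard VGG-16 input size, not stated in the patent), the sizes can be traced directly:

```python
# Trace the spatial size through the five pooling layers (224x224 input assumed).
size = 224
sizes = []
for block in range(5):
    size //= 2          # each convolution block ends in 2x2 max-pooling
    sizes.append(size)
# sizes == [112, 56, 28, 14, 7]
```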
Preferably, in step 4), segmenting the feature map of the acetic acid image or the iodine image includes:
upsampling the pooled output of the fifth convolution block by a factor of two, then adding it element by element to the pooled output of the fourth convolution block;
upsampling the result by a factor of two, adding it element by element to the pooled output of the third convolution block, and upsampling again to obtain the final segmentation result.
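The two-step skip fusion above can be sketched with NumPy, using nearest-neighbour repetition as a shape-level stand-in for the learned deconvolution upsampling of an FCN; the 224x224 input size and single channel are assumptions, and the standard FCN-8s finishes with an 8x upsampling to restore the input resolution.

```python
import numpy as np

def upsample2x(x):
    # Shape-level stand-in for 2x learned deconvolution upsampling.
    return x.repeat(2, axis=-2).repeat(2, axis=-1)

# Pooled outputs of convolution blocks 3, 4, 5 for a 224x224 input.
pool3 = np.random.rand(1, 28, 28)
pool4 = np.random.rand(1, 14, 14)
pool5 = np.random.rand(1, 7, 7)

x = upsample2x(pool5) + pool4                  # 7x7 -> 14x14, fuse with pool4
x = upsample2x(x) + pool3                      # 14x14 -> 28x28, fuse with pool3
out = x.repeat(8, axis=-2).repeat(8, axis=-1)  # final upsampling back to 224x224
```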
The invention provides a cervical lesion region segmentation device based on a multi-modal segmentation network, which comprises: a memory storing computer-executable instructions and the data used or produced in executing them; and a processor communicatively coupled to the memory and configured to execute the computer-executable instructions stored in the memory, which, when executed, implement the above cervical lesion region segmentation method based on a multi-modal segmentation network.
It should be noted that the "cervical lesion region segmentation method or apparatus" in the present invention is directed to segmentation of a corresponding region in an image.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides a cross connection mode to realize the fusion of the features of the acetic acid image and the iodine image in the feature extraction part of the acetic acid image and the iodine image, so that a network can learn the features of the acetic acid image and the iodine image, the two parts are mutually promoted to be fused, and the accuracy of the segmentation result of the acetic acid image and the iodine image is improved.
Drawings
Fig. 1 is a flow chart of a multi-modal cervical lesion region segmentation model training process according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an FCN network according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a multi-modal cervical lesion region segmentation model according to an embodiment of the present invention;
fig. 4 is a flowchart of segmentation prediction of a cervical sample to be detected according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described with reference to the following embodiments and accompanying drawings.
Examples
The cervical lesion region segmentation apparatus based on the multi-modal segmentation network of this embodiment includes: a memory storing computer-executable instructions and the data used or produced in executing them; and a processor communicatively coupled to the memory and configured to execute the computer-executable instructions stored in the memory, which, when executed, perform the following steps of a cervical lesion region segmentation method based on a multi-modal segmentation network:
the method comprises the following steps: screening and preprocessing of colposcopic images
During colposcopy, normal saline, 3%-5% acetic acid, and compound iodine solution are applied to the cervix in sequence, and the doctor captures several saline, acetic acid, and iodine images in the process. Since the saline serves only to clean away interfering substances so that the entire cervix is fully exposed, and causes no specific reaction of the cervical epithelium, this embodiment uses only the acetic acid image and the iodine image of each patient.
The acetic acid and iodine images may contain medical instruments, text, large areas of bleeding, or light reflections. To retain images of better quality and learn image features more effectively, one acetic acid image and one iodine image are screened out for each patient. The screened images are given to doctors for annotation of the lesion regions (HSIL and LSIL). All samples are divided into a training set, a validation set, and a test set with a data ratio of 7:2:1.
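The 7:2:1 split can be performed per patient, so that a patient's acetic acid and iodine images always land in the same subset. The helper below is an illustrative sketch using only the standard library; the function name and seed are assumptions.

```python
import random

def split_patients(patient_ids, seed=0):
    """Shuffle patients and split them 7:2:1 into train/val/test.

    Splitting is done by patient (each contributes one acetic acid image
    and one iodine image), keeping the two modalities of a patient together.
    """
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train = int(n * 0.7)
    n_val = int(n * 0.2)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train, val, test = split_patients(range(100))
# 100 patients -> 70 train / 20 validation / 10 test
```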
Step two: multi-modal cervical lesion region segmentation model training
The inputs of the multi-modal cervical lesion region segmentation model of this embodiment are the acetic acid image and the iodine image under colposcopy; during training, the acetic acid and iodine images of the training set are used. As shown in fig. 1, the whole training process is as follows: groups of acetic acid and iodine images of the same patient (typically in batches of 8) are input into the multi-modal cervical lesion segmentation model, and the model is trained so that its segmentation results approach the doctors' annotations as closely as possible. The loss between the model's prediction and the ground truth is computed by a loss function, the gradient is back-propagated, and the model parameters are updated; the next group of acetic acid and iodine images is then input and training continues in the same way, terminating once the model converges.
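The training loop just described can be sketched as follows. `forward`, `loss_fn`, and `update` are placeholder stubs standing in for the cross-connected two-branch FCN, its loss, and the gradient step, none of which are specified in code by the patent; the batch shapes are also illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(acetic_batch, iodine_batch, params):
    # Stub: the real model is the two cross-connected FCN branches.
    return rng.random(acetic_batch.shape), rng.random(iodine_batch.shape)

def loss_fn(pred_a, pred_i, mask_a, mask_i):
    # Total loss combines the segmentation losses of both branches.
    return float(np.mean((pred_a - mask_a) ** 2) + np.mean((pred_i - mask_i) ** 2))

def update(params, loss, lr=1e-3):
    return params  # stub for back-propagation and the parameter update

params, losses = {}, []
# Five mock batches of 8 paired 64x64 images with their annotated masks.
batches = [tuple(rng.random((8, 64, 64)) for _ in range(4)) for _ in range(5)]
for acetic, iodine, mask_a, mask_i in batches:
    pred_a, pred_i = forward(acetic, iodine, params)
    loss = loss_fn(pred_a, pred_i, mask_a, mask_i)  # loss vs. doctor labels
    params = update(params, loss)                   # gradient step would go here
    losses.append(loss)
# Training would repeat over the dataset until the loss converges.
```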
During training, the model is evaluated on the validation set: the validation data are input into the partially trained model, and hyperparameters such as the learning rate are adjusted according to indices such as the loss value, accuracy, and recall, so that the model generalizes better. The model finally outputs the segmentation results of the acetic acid image and the iodine image.
The specific structure of the multi-modal cervical lesion region segmentation model is shown in fig. 3. It is mainly based on the FCN segmentation network shown in fig. 2: the feature extraction part of the FCN is based on VGG-16, with five convolution blocks, each followed by a pooling layer. The latter part of the FCN is the upsampling part: the pooled output of the fifth convolution block is upsampled by a factor of two and added element by element to the pooled output of the fourth convolution block; the result is upsampled by a factor of two, added element by element to the pooled output of the third convolution block, and upsampled again to obtain the final segmentation prediction.
The multi-modal cervical lesion region segmentation model of this embodiment extends the FCN network, fusing the features of the acetic acid and iodine images mainly through cross connections introduced in the feature extraction part. The model has two branches, one for the acetic acid image and one for the iodine image; each branch is an FCN network, feature fusion is performed between the branches through cross connections, and the remainder of each branch is identical to a single FCN network.
In clinical diagnosis, doctors often examine the aceto-white region of the acetic acid image and the iodine-unstained region of the iodine image at the same time, repeatedly comparing the two to find common lesion regions and improve the accuracy of biopsy. There is therefore a certain correlation between the features of the acetic acid image and those of the iodine image. To simulate the doctor's actual diagnostic procedure as closely as possible, the features of the two images are fused in a cross-connection manner during feature extraction.
As shown in fig. 3, the first to fifth convolution blocks constitute the feature extraction part of the network, each convolution block's output being the result after pooling. To fuse the features of the two images, the acetic acid image features from one convolution block are concatenated at the channel level with the iodine image features from the same block, and the result enters the next convolution block of the iodine image branch for subsequent feature learning; similarly, the iodine image features are concatenated with the acetic acid image features, and the result enters the next convolution block of the acetic acid image branch. This cross connection continues up to the fifth convolution block, whose acetic acid and iodine outputs substantially retain the characteristics of both images. The features learned by the acetic acid branch and the iodine branch then enter the segmentation part of their respective FCN models (identical to the upsampling part of the FCN) for segmentation prediction.
Step three: cervical lesion region segmentation prediction
When a new colposcopic image pair of a patient is available (an acetic acid image and an iodine image, e.g. from the test set), the cervical lesion regions on both images can be segmented simply by inputting the images captured after treatment with 3%-5% acetic acid solution and compound iodine solution into the cervical lesion region segmentation model trained in step two; the specific flow is shown in fig. 4.
It should be noted that the "cervical lesion region segmentation method or apparatus" in the present embodiment is directed to segmentation of a corresponding region in an image.
Claims (7)
1. A cervical lesion region segmentation method based on a multi-modal segmentation network is characterized by comprising the following steps:
1) acquiring acetic acid images and iodine images of a plurality of cervical samples, and marking lesion areas of all the images to obtain a cervical sample training set;
2) inputting acetic acid images of the same group of images in the training set into a first FCN network, inputting iodine images into a second FCN network, wherein the first FCN network and the second FCN network have the same structure and jointly form a multi-modal cervical lesion area segmentation network;
3) cross-connecting the feature extraction parts of the first FCN network and the second FCN network, and fusing the features of the acetic acid image and the iodine image to obtain a feature map of the acetic acid image and a feature map of the iodine image;
4) segmenting the characteristic diagram of the acetic acid image at the segmentation part of the first FCN network to obtain an acetic acid image segmentation result; segmenting the feature map of the iodine image at the segmentation part of the second FCN network to obtain an iodine image segmentation result;
5) calculating the loss of the multi-modal cervical lesion area segmentation network according to the segmentation results of the acetic acid image and the iodine image and the marks of the segmentation results, and updating the parameters of the multi-modal cervical lesion area segmentation network according to the loss;
6) inputting the acetic acid image and the iodine image of the next group of images into the first FCN network and the second FCN network respectively, and repeating steps 3)-5) to train the multi-modal cervical lesion region segmentation network until convergence, obtaining a multi-modal cervical lesion region segmentation model;
7) inputting an acetic acid image and an iodine image of a cervical sample to be detected into a multi-modal cervical lesion region segmentation model to obtain a segmentation prediction map;
the step 3) comprises the following steps:
the acetic acid image features extracted by the Nth convolution block of the first FCN network are fused with the iodine image features extracted by the Nth convolution block of the second FCN network before the convolution operation of the (N+1)th convolution block of the second FCN network;
meanwhile, the iodine image features extracted by the Nth convolution block of the second FCN network are fused with the acetic acid image features extracted by the Nth convolution block of the first FCN network before the convolution operation of the (N+1)th convolution block of the first FCN network;
the feature map of the acetic acid image and the feature map of the iodine image are obtained once all convolution operations are completed.
2. The cervical lesion region segmentation method according to claim 1, further comprising the steps of:
obtaining a cervical sample verification set by the method in the step 1);
and each time the multi-modal cervical lesion region segmentation network completes a training iteration, verifying it using the acetic acid image and the iodine image of the same group of images in the verification set, and adjusting the network learning rate.
3. The cervical lesion region segmentation method according to claim 2, further comprising the steps of:
obtaining a cervical sample test set by the method in the step 1);
and testing the trained multi-modal cervical lesion region segmentation model by using the acetic acid image and the iodine image of the same group of images in the test set, and calculating the segmentation accuracy, recall, and mIoU value.
4. The cervical lesion region segmentation method of claim 1, wherein the acetic acid image features and the iodine image features are fused through concatenation of the feature channels.
5. The cervical lesion region segmentation method of claim 1, wherein the feature extraction part of the first and second FCN networks is based on VGG-16 and comprises five convolution blocks, each convolution block being followed by a pooling layer.
6. The cervical lesion region segmentation method according to claim 5, wherein segmenting the feature map of the acetic acid image or the iodine image in step 4) includes:
upsampling the pooled output of the fifth convolution block by a factor of two, then adding it element by element to the pooled output of the fourth convolution block;
upsampling the result by a factor of two, adding it element by element to the pooled output of the third convolution block, and upsampling again to obtain the final segmentation result.
7. A cervical lesion region segmentation apparatus based on a multi-modal segmentation network, comprising: a memory storing computer-executable instructions and the data used or produced in executing them; and a processor communicatively coupled to the memory and configured to execute the computer-executable instructions stored in the memory, wherein the computer-executable instructions, when executed, implement the multi-modal segmentation network-based cervical lesion region segmentation method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811469200.6A CN109859159B (en) | 2018-11-28 | 2018-11-28 | Cervical lesion region segmentation method and device based on multi-mode segmentation network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109859159A CN109859159A (en) | 2019-06-07 |
CN109859159B true CN109859159B (en) | 2020-10-13 |
Family
ID=66890554
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811469200.6A Active CN109859159B (en) | 2018-11-28 | 2018-11-28 | Cervical lesion region segmentation method and device based on multi-mode segmentation network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109859159B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110491479A (en) * | 2019-07-16 | 2019-11-22 | 北京邮电大学 | A kind of construction method of sclerotin status assessment model neural network based |
CN110826576B (en) * | 2019-10-10 | 2022-10-04 | 浙江大学 | Cervical lesion prediction system based on multi-mode feature level fusion |
CN112750115B (en) * | 2021-01-15 | 2024-06-04 | 浙江大学医学院附属邵逸夫医院 | Multi-mode cervical cancer pre-lesion image recognition method based on graph neural network |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103325128A (en) * | 2013-05-16 | 2013-09-25 | 深圳市理邦精密仪器股份有限公司 | Method and device intelligently identifying characteristics of images collected by colposcope |
CN108257129A (en) * | 2018-01-30 | 2018-07-06 | 浙江大学 | The recognition methods of cervical biopsy region aids and device based on multi-modal detection network |
CN108319977A (en) * | 2018-01-30 | 2018-07-24 | 浙江大学 | Cervical biopsy area recognizing method based on the multi-modal network of channel information and device |
CN108388841A (en) * | 2018-01-30 | 2018-08-10 | 浙江大学 | Cervical biopsy area recognizing method and device based on multiple features deep neural network |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106097347A (en) * | 2016-06-14 | 2016-11-09 | 福州大学 | A kind of multimodal medical image registration and method for visualizing |
CN110073404B (en) * | 2016-10-21 | 2023-03-21 | 南坦生物组学有限责任公司 | Digital histopathology and microdissection |
US10360499B2 (en) * | 2017-02-28 | 2019-07-23 | Anixa Diagnostics Corporation | Methods for using artificial neural network analysis on flow cytometry data for cancer diagnosis |
US10366491B2 (en) * | 2017-03-08 | 2019-07-30 | Siemens Healthcare Gmbh | Deep image-to-image recurrent network with shape basis for automatic vertebra labeling in large-scale 3D CT volumes |
Non-Patent Citations (1)
Title |
---|
FuseNet: Incorporating Depth into Semantic Segmentation via Fusion-Based CNN Architecture; Caner Hazirbas et al.; Computer Vision – ACCV 2016; 2016-11-24; pp. 222-237 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109543719B (en) | Cervical atypical lesion diagnosis model and device based on multi-modal attention model | |
CN109859159B (en) | Cervical lesion region segmentation method and device based on multi-mode segmentation network | |
US20220343623A1 (en) | Blood smear full-view intelligent analysis method, and blood cell segmentation model and recognition model construction method | |
CN108388841B (en) | Cervical biopsy region identification method and device based on multi-feature deep neural network | |
CN108464840B (en) | Automatic detection method and system for breast lumps | |
CN108257129B (en) | Cervical biopsy region auxiliary identification method and device based on multi-mode detection network | |
CN108319977B (en) | Cervical biopsy region identification method and device based on channel information multi-mode network | |
CN110647874B (en) | End-to-end blood cell identification model construction method and application | |
CN110689543A (en) | Improved convolutional neural network brain tumor image segmentation method based on attention mechanism | |
CN110647875B (en) | Method for segmenting and identifying model structure of blood cells and blood cell identification method | |
CN110826576B (en) | Cervical lesion prediction system based on multi-mode feature level fusion | |
JP2023544466A (en) | Training method and device for diagnostic model of lung adenocarcinoma and squamous cell carcinoma based on PET/CT | |
CN108764342B (en) | Semantic segmentation method for optic discs and optic cups in fundus image | |
JP7487418B2 (en) | Identifying autofluorescence artifacts in multiplexed immunofluorescence images | |
CN112419295A (en) | Medical image processing method, apparatus, computer device and storage medium | |
CN111079901A (en) | Acute stroke lesion segmentation method based on small sample learning | |
CN115546605A (en) | Training method and device based on image labeling and segmentation model | |
CN111754530B (en) | Prostate ultrasonic image segmentation classification method | |
CN115565698A (en) | Method and system for artificial intelligence assessment of kidney supply quality | |
CN111368669A (en) | Nonlinear optical image recognition method based on deep learning and feature enhancement | |
CN117611601B (en) | Text-assisted semi-supervised 3D medical image segmentation method | |
JP7492650B2 (en) | Automated identification of necrotic regions in digital images of multiplex immunofluorescence stained tissues | |
CN112741651B (en) | Method and system for processing ultrasonic image of endoscope | |
JP6246978B2 (en) | Method for detecting and quantifying fibrosis | |
CN115994999A (en) | Goblet cell semantic segmentation method and system based on boundary gradient attention network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||