CN110826576A - Cervical lesion prediction system based on multi-mode feature level fusion - Google Patents
- Publication number
- CN110826576A CN110826576A CN201910959387.6A CN201910959387A CN110826576A CN 110826576 A CN110826576 A CN 110826576A CN 201910959387 A CN201910959387 A CN 201910959387A CN 110826576 A CN110826576 A CN 110826576A
- Authority
- CN
- China
- Prior art keywords
- image
- acetic acid
- iodine
- extraction network
- cervical
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/29—Graphical models, e.g. Bayesian networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a cervical lesion prediction system based on multi-modal feature-level fusion, comprising a computer memory, a computer processor, and a computer program stored in the memory and executable on the processor. A cervical lesion prediction model stored in the memory comprises an acetic acid image feature extraction network, an iodine image feature extraction network, and auxiliary modules for fusing the extracted features. When executing the computer program, the processor performs the following steps: receiving an acetic acid image and an iodine image from colposcopy and cropping out the region containing the cervix; inputting the acetic acid image and the iodine image into the acetic acid and iodine image feature extraction networks of the model, respectively; after feature extraction, feeding the features into the respective auxiliary modules for feature fusion; and outputting a prediction result after computation. The invention makes the prediction result more accurate, assisting doctors in making a correct diagnosis.
Description
Technical Field
The invention belongs to the field of medical image processing, and particularly relates to a cervical lesion prediction system based on multi-modal feature level fusion.
Background
Cervical cancer is the second most common cancer of the female reproductive system, severely affecting patients' lives and quality of life. Cervical screening helps prevent cervical cancer by detecting squamous intraepithelial lesions, which are generally classified into two categories: low-grade squamous intraepithelial lesions (LSIL) and high-grade squamous intraepithelial lesions (HSIL). In clinical practice, an important goal of screening is to distinguish HSIL from normal/LSIL so that cervical cancer can be detected early, since most (about 60%) low-grade lesions regress to normal spontaneously, whereas high-grade lesions require treatment.
Colposcopy is a commonly used cervical cancer screening method: 5% acetic acid and compound iodine solution are applied in sequence to the cervical epithelium, and the cervix is then photographed several times. The acetic acid image records the response of the cervix to acetic acid (acetowhitening), and the iodine image shows the extent of iodine non-staining.
In existing cervical lesion identification methods, acetic acid image features are extracted manually, and normal/low-grade squamous intraepithelial lesions (Normal/LSIL) are separated from high-grade squamous intraepithelial lesions (HSIL) using a support vector machine (SVM), AdaBoost, or a random forest. Some work combines the acetic acid images with clinical examination results (e.g. HPV and Pap tests), computes a decision score for each modality separately using an SVM or k-nearest neighbors, and then integrates the decision scores of all modalities into the final decision. Xu et al. proposed a deep learning network to model the non-linear relationship between acetic acid images and clinical examination results (referred to as non-image data).
Chinese patent publication No. CN107220975A discloses an intelligent auxiliary cervical image determination system and its processing method, comprising a colposcope detection device and an auxiliary judgment device. The colposcope detection device acquires the cervical image to be examined, and the auxiliary judgment device performs comparative analysis on the image and its feature data, judging whether the image shows a normal cervix and, if not, which lesion type and which lesion feature parameters the image may correspond to.
However, the above cervical lesion identification methods use only a single image (either an acetic acid image or an iodine image), which cannot fully capture the characteristics of cervical lesions.
In clinical practice, physicians often analyze the acetic acid and iodine images together to locate potential lesions and reach a more accurate diagnosis, because the two images generally contain highly correlated information. For example, the acetowhitened areas in the acetic acid image can serve as supplementary evidence for the iodine-negative (non-stained) areas in the iodine image, and vice versa. Therefore, to better capture lesion features and stay consistent with actual diagnostic practice, the fusion of acetic acid and iodine images needs to be learned and explored to further improve the accuracy of cervical lesion identification.
Disclosure of Invention
To solve the problems in the prior art, the invention provides a cervical lesion prediction system based on multi-modal feature-level fusion. By fusing the cervical lesion information of the acetic acid image and the iodine image, the system predicts cervical lesions more accurately and assists doctors in making a correct diagnosis.
The technical scheme of the invention is as follows:
a cervical lesion prediction system based on multi-modal feature-level fusion, comprising a computer memory, a computer processor, and a computer program stored in the computer memory and executable on the computer processor, the computer memory having a cervical lesion prediction model stored therein, the model comprising an acetic acid image feature extraction network, an iodine image feature extraction network, and auxiliary modules for fusing the extracted features; wherein both the acetic acid image feature extraction network and the iodine image feature extraction network are based on a ResNet-50 network;
the computer processor, when executing the computer program, performs the steps of:
receiving an acetic acid image and an iodine image in colposcopy, and cutting out a region containing a cervix uteri;
inputting the acetic acid image and the iodine image into the acetic acid and iodine image feature extraction networks of the cervical lesion prediction model, respectively; after each convolution block extracts features, fusing them through the auxiliary modules and feeding the fused features into the next convolution block, and so on up to the fully connected layer, which finally outputs the cervical lesion prediction result.
According to the prediction system, the cervical lesion prediction model realizes multi-modal feature-level fusion: the input images are cervical colposcope images after application of acetic acid and iodine solution, and cervical lesions are identified by fusing the lesion information of the acetic acid image and the iodine image. This facilitates exploring the underlying mechanism of multi-modal fusion and promotes the development of intelligent cervical lesion recognition.
The cervical lesion prediction model is obtained by the following steps:
establishing a training set: screening one acetic acid image and one iodine image for each patient, labeling the images according to the patient's pathological result, detecting the region containing only the cervix with a Faster R-CNN model, cropping that region from the original image, and dividing the image data into training, validation, and test sets;
establishing a network structure: a ResNet-50 network serves as each of the acetic acid and iodine image feature extraction networks; the output of each convolution block of the two networks is input in turn to the corresponding auxiliary module for feature fusion; after the fused acetic acid and iodine image features each pass through a fully connected layer, the results of the two networks are integrated;
training the network structure: training the network with the acetic acid and iodine images of the training set, and adjusting the training parameters according to the model's performance on the validation set until the model converges, giving the trained cervical lesion prediction model.
In the process of establishing the training set, when labeling the images, the image data are divided into two categories: normal/low-grade lesion and high-grade lesion.
In the invention, the acetic acid image feature extraction network and the iodine image feature extraction network each comprise five convolution blocks (namely the five convolution blocks of ResNet-50) and one fully connected layer connected in sequence;
The acetic acid image passes through Conv2, Conv3, Conv4, and Conv5 of the acetic acid image feature extraction network to obtain the feature representations f_a^i (i = 2, ..., 5), which are input into the corresponding auxiliary modules to obtain the acetic acid gating features g_a^i to be fused; fusing these with the iodine image features yields the iodine image features enhanced by the acetic acid image.

Likewise, the iodine image passes through Conv2, Conv3, Conv4, and Conv5 of the iodine image feature extraction network to obtain the feature representations f_d^i, which are input into the corresponding auxiliary modules to obtain the iodine gating features g_d^i to be fused; fusing these with the acetic acid image features yields the acetic acid image features enhanced by the iodine image.

The feature flow of the above process is: the acetic acid image and the iodine image are input into their respective feature extraction networks; after Conv2 they yield the features f_a^2 and f_d^2, which are input into the respective auxiliary modules (opposite in direction but identical in structure) to obtain the acetic acid gating feature g_a^2 and the iodine gating feature g_d^2. These are fused with the iodine image feature f_d^2 and the acetic acid image feature f_a^2, respectively, giving the acetic-acid-enhanced iodine feature f'_d^2 = f_d^2 ⊙ g_a^2 + f_d^2 and the iodine-enhanced acetic acid feature f'_a^2 = f_a^2 ⊙ g_d^2 + f_a^2. The enhanced features are then input into the next convolution block, Conv3, and the above computation is repeated until the final fully connected layer.
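The residual fusion step can be sketched numerically. Below is a minimal NumPy sketch, assuming the gate is the sigmoid-normalized auxiliary-module output and using the multiply-then-add form described in the detailed description; the function name `residual_fuse` is illustrative, not from the patent.

```python
import numpy as np

def sigmoid(x):
    # Normalize gate values to [0, 1], as in the auxiliary module's final step.
    return 1.0 / (1.0 + np.exp(-x))

def residual_fuse(feat, gate_raw):
    """Residual fusion: the other modality's auxiliary-module output,
    squashed to [0, 1] by sigmoid, re-weights this modality's feature map,
    and the result is added back (multiply, then add): f' = f * g + f."""
    g = sigmoid(gate_raw)
    return feat * g + feat
```

For non-negative features the fused value lies between f and 2f, so the fusion re-weights but never suppresses the original feature below its own value.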
The auxiliary module has the specific structure that:
Firstly, a 1×1 convolution reduces the channel dimension, generally to 256 channels; the features then pass sequentially through a convolution layer and bottleneck blocks, after which a 1×1 convolution restores the dimension, where a convolution layer with a large kernel and a larger number of bottleneck blocks are used in shallow layers, and a small kernel with fewer bottleneck blocks in deep layers; finally, a sigmoid function normalizes the feature values to [0,1].
Among the five convolution blocks of the acetic acid and iodine image feature extraction networks, the auxiliary modules corresponding to Conv2, Conv3, Conv4, and Conv5 use a 7×7 convolution with 4 bottleneck blocks, a 5×5 convolution with 3 bottleneck blocks, a 3×3 convolution with 2 bottleneck blocks, and a 1×1 convolution with 1 bottleneck block, respectively.
In the process of training the network structure, the adopted loss function is as follows:
L = L_a + L_d

where L_a and L_d are the losses of the acetic acid image branch and the iodine image branch, respectively.
Compared with the prior art, the invention has the following beneficial effects:
the cervical lesion prediction model is based on a ResNet-50 skeleton, namely ResNet-50 is used for feature extraction on an acetic acid image and an iodine image; because the two images have certain correlation on characteristics and lesions, the cervical lesion prediction model provided by the invention realizes characteristic level fusion, realizes characteristic fusion in the process of extracting the characteristics of the acetic acid image and the iodine image, enables the acetic acid image and the iodine image to be mutually assisted, fully captures the potential relationship between the acetic acid image and the iodine image, and learns the characteristics beneficial to lesion recognition, thereby completing multi-modal cervical lesion prediction and greatly improving the accuracy of prediction.
Drawings
FIG. 1 is a schematic diagram of the procedure for detecting and cutting a cervical region when a training set is established according to the present invention;
fig. 2 is a schematic diagram of the overall structure of the cervical lesion prediction model according to the present invention;
fig. 3 is a flowchart illustrating the training of a cervical lesion prediction model according to an embodiment of the present invention.
Detailed Description
The invention will be described in further detail below with reference to the drawings and examples, which are intended to facilitate the understanding of the invention without limiting it in any way.
Step 1: preparation of acetic acid and iodine images
During colposcopy, doctors apply physiological saline, 3%-5% acetic acid solution, and compound iodine solution in turn to observe the reaction and changes of the patient's cervical epithelium and assess whether a lesion exists and its degree.
The method uses the acetic acid and iodine images from each patient's colposcopy. Because some images may contain medical instruments, text overlays, large-area bleeding, or specular reflection, one acetic acid image and one iodine image are screened out per patient in order to keep higher-quality images and better learn image features. The ground-truth label for each patient is obtained from the pathological result, i.e., the final diagnosis (normal, low-grade lesion, or high-grade lesion) is known for the given acetic acid and iodine images. Because the emphasis is on identifying high-grade cervical lesions, the invention divides the data into two classes, normal/low-grade lesion and high-grade lesion, i.e., a binary classification task.
In addition, to avoid interference from other material around the cervix, the invention uses a Faster R-CNN model to detect the region containing only the cervix, crops it from the original image, and takes the cropped acetic acid and iodine images as input to the multi-modal feature-level fusion network, as shown in fig. 1. All data are divided into training, validation, and test sets at a ratio of 7:2:1.
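The 7:2:1 split can be sketched as follows. This is a minimal illustration, not the patent's code; `split_patients` and the fixed seed are assumptions, and splitting is done at patient level so that each patient's paired acetic acid and iodine images stay in the same subset.

```python
import random

def split_patients(patient_ids, ratios=(7, 2, 1), seed=0):
    """Shuffle patient IDs and split them into train/val/test subsets
    at the given ratio (default 7:2:1)."""
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    total = sum(ratios)
    n = len(ids)
    n_train = n * ratios[0] // total
    n_val = n * ratios[1] // total
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]
```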
Step 2: construction of cervical lesion prediction model
As shown in fig. 2, the cervical lesion prediction model of the present invention is based primarily on the ResNet-50 model and an attention mechanism. The invention designs a ResNet-50 model for each of the acetic acid and iodine images and performs fusion during feature extraction, i.e., the auxiliary modules in FIG. 2. Denote by f_a^i the acetic acid image features taken from Conv2, Conv3, Conv4, and Conv5 of ResNet-50 (i = 2, ..., 5), and by f_d^i the corresponding iodine image features. Each f_a^i is input into its auxiliary module to obtain a gating feature g_a^i, which is combined with the iodine image feature by residual fusion (i.e., multiply, then add back), formulated as f'_d^i = f_d^i ⊙ g_a^i + f_d^i. At the same time, the iodine image features f_d^i pass through auxiliary modules of the same structure but opposite direction to obtain g_d^i, which are fused with the acetic acid image features to give the iodine-enhanced acetic acid features f'_a^i = f_a^i ⊙ g_d^i + f_a^i. Finally, the acetic acid and iodine image features each pass through a fully connected layer, and the results of the two networks are integrated to obtain the final cervical lesion classification result.
The structure of the auxiliary module is, in order: (1) a 1×1 convolution reduces the channel dimension to 256, cutting the parameter count and the complexity of subsequent computation; (2) because shallow features carry less semantic information, a large convolution kernel and more bottleneck blocks are used in shallow layers, while deep features are semantically rich, so a small kernel and fewer bottleneck blocks are used in deep layers: specifically a 7×7 convolution with 4 bottleneck blocks, a 5×5 convolution with 3 bottleneck blocks, a 3×3 convolution with 2 bottleneck blocks, and a 1×1 convolution with 1 bottleneck block; (3) a 1×1 convolution then restores the channel dimension to its value before the reduction; (4) finally, a sigmoid function normalizes the feature values to [0,1].
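The per-stage schedule above (larger kernels and more bottleneck blocks in shallow stages, smaller and fewer in deep ones) can be captured in a small configuration table. This is a sketch; the dictionary layout and names are assumptions for illustration.

```python
# Per-stage auxiliary-module configuration, as given in the description:
# shallow stages get a large kernel and more bottleneck blocks, deep stages
# a small kernel and fewer blocks.
AUX_CONFIG = {
    "Conv2": {"kernel": 7, "bottlenecks": 4},  # shallowest fused stage
    "Conv3": {"kernel": 5, "bottlenecks": 3},
    "Conv4": {"kernel": 3, "bottlenecks": 2},
    "Conv5": {"kernel": 1, "bottlenecks": 1},  # deepest stage
}
REDUCED_CHANNELS = 256  # the initial 1x1 convolution reduces channels to 256
```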
And step 3: training of cervical lesion prediction model
The inputs of the cervical lesion prediction model are the acetic acid and iodine images from step 1 (containing only the cervical region), and the network is trained with the acetic acid and iodine images of the training set. As shown in fig. 3, batches of acetic acid and iodine images (batch size 8) are input into the network, and the model is trained so that the predicted lesion class is as close as possible to the doctor's actual diagnosis, with the loss function:
L = L_a + L_d

where L_a and L_d are the losses of the acetic acid image and the iodine image, respectively. During training, the training parameters are adjusted according to the model's performance on the validation set to improve it further; the output of the model is the cervical lesion prediction result.
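The summed loss L = L_a + L_d can be sketched as below. The patent does not name the per-branch loss, so standard cross-entropy for the binary task is an assumption here, and `total_loss` is an illustrative name.

```python
import numpy as np

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class for one sample
    (binary task: 0 = normal/low-grade, 1 = high-grade)."""
    return -np.log(probs[label])

def total_loss(probs_acetic, probs_iodine, label):
    # L = L_a + L_d: the two branch losses are simply summed.
    return cross_entropy(probs_acetic, label) + cross_entropy(probs_iodine, label)
```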
And 4, step 4: cervical lesion prediction
When new colposcopic images of a patient are available (the acetic acid and iodine images of the test set), they are input into the trained multi-modal cervical lesion recognition network to obtain the cervical lesion prediction result.
The embodiments described above are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions and equivalents made within the scope of the principles of the present invention should be included in the scope of the present invention.
Claims (8)
1. A cervical lesion prediction system based on multi-modal feature-level fusion, comprising a computer memory, a computer processor, and a computer program stored in the computer memory and executable on the computer processor, characterized in that:
the computer memory is stored with a cervical lesion prediction model which comprises an acetic acid image feature extraction network, an iodine image feature extraction network and an auxiliary module for fusing the extracted features; wherein the acetic acid image feature extraction network and the iodine image feature extraction network are both based on a ResNet-50 network;
the computer processor, when executing the computer program, performs the steps of:
receiving an acetic acid image and an iodine image in colposcopy, and cutting out a region containing a cervix uteri;
inputting the acetic acid image and the iodine image into the acetic acid and iodine image feature extraction networks of the cervical lesion prediction model, respectively; after each convolution block extracts features, fusing them through the auxiliary modules and feeding the fused features into the next convolution block, and so on up to the fully connected layer, which finally outputs the cervical lesion prediction result.
2. The cervical lesion prediction system based on multi-modal feature-level fusion of claim 1, wherein the cervical lesion prediction model is obtained by:
establishing a training set: screening one acetic acid image and one iodine image for each patient, labeling the images according to the patient's pathological result, detecting the region containing only the cervix with a Faster R-CNN model, cropping that region from the original image, and dividing the image data into training, validation, and test sets;
establishing a network structure: a ResNet-50 network serves as each of the acetic acid and iodine image feature extraction networks; the output of each convolution block of the two networks is input in turn to the corresponding auxiliary module for feature fusion; after the fused acetic acid and iodine image features each pass through a fully connected layer, the results of the two networks are integrated;
training the network structure: training the network with the acetic acid and iodine images of the training set, and adjusting the training parameters according to the model's performance on the validation set until the model converges, giving the trained cervical lesion prediction model.
3. The cervical lesion prediction system based on multi-modal feature-level fusion of claim 2, wherein when labeling an image, the image data are classified into two categories: normal/low-grade lesion and high-grade lesion.
4. The cervical lesion prediction system based on multi-modal feature-level fusion according to claim 1 or 2, wherein the acetic acid image feature extraction network and the iodine image feature extraction network each comprise five convolution blocks and one fully connected layer connected in sequence;
the acetic acid image passes through Conv2, Conv3, Conv4, and Conv5 of the acetic acid image feature extraction network to obtain the feature representations f_a^i (i = 2, ..., 5), which are input into the corresponding auxiliary modules to obtain the acetic acid gating features g_a^i to be fused; fusing these with the iodine image features yields the iodine image features enhanced by the acetic acid image;

the iodine image passes through Conv2, Conv3, Conv4, and Conv5 of the iodine image feature extraction network to obtain the feature representations f_d^i, which are input into the corresponding auxiliary modules to obtain the iodine gating features g_d^i to be fused; fusing these with the acetic acid image features yields the acetic acid image features enhanced by the iodine image;

the feature flow of the above process is: the acetic acid image and the iodine image are input into their respective feature extraction networks; after Conv2 they yield the features f_a^2 and f_d^2, which are input into the respective auxiliary modules to obtain the acetic acid gating feature g_a^2 and the iodine gating feature g_d^2; these are fused with the iodine image feature f_d^2 and the acetic acid image feature f_a^2, respectively, giving the acetic-acid-enhanced iodine feature f'_d^2 = f_d^2 ⊙ g_a^2 + f_d^2 and the iodine-enhanced acetic acid feature f'_a^2 = f_a^2 ⊙ g_d^2 + f_a^2; the enhanced features are then input into the next convolution block, Conv3, and the above computation is repeated until the final fully connected layer.
5. The cervical lesion prediction system based on multi-modal feature-level fusion according to claim 4, wherein the detailed structure of the auxiliary module is as follows:
firstly, a 1×1 convolution reduces the channel dimension; the features then pass sequentially through a convolution layer and bottleneck blocks, after which a 1×1 convolution restores the dimension, where a convolution layer with a large kernel and a larger number of bottleneck blocks are used in shallow layers, and a small kernel with fewer bottleneck blocks in deep layers; finally, a sigmoid function normalizes the feature values to [0,1].
6. The cervical lesion prediction system based on multi-modal feature-level fusion of claim 5, wherein the number of channels is set to be reduced to 256 during dimensionality reduction of the channels.
7. The cervical lesion prediction system based on multi-modal feature-level fusion of claim 5, wherein in the five convolution blocks of the acetic acid image feature extraction network and the iodine image feature extraction network, the auxiliary modules corresponding to Conv2, Conv3, Conv4, and Conv5 use a 7×7 convolution with 4 bottleneck blocks, a 5×5 convolution with 3 bottleneck blocks, a 3×3 convolution with 2 bottleneck blocks, and a 1×1 convolution with 1 bottleneck block, respectively.
8. The cervical lesion prediction system based on multi-modal feature-level fusion according to claim 2, wherein the loss function used in training the network structure is:
L = L_a + L_d

where L_a and L_d are the losses of the acetic acid image and the iodine image, respectively.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910959387.6A CN110826576B (en) | 2019-10-10 | 2019-10-10 | Cervical lesion prediction system based on multi-mode feature level fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110826576A true CN110826576A (en) | 2020-02-21 |
CN110826576B CN110826576B (en) | 2022-10-04 |
Family
ID=69549019
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910959387.6A Active CN110826576B (en) | 2019-10-10 | 2019-10-10 | Cervical lesion prediction system based on multi-mode feature level fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110826576B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2523149A2 (en) * | 2011-05-11 | 2012-11-14 | Tata Consultancy Services Ltd. | A method and system for association and decision fusion of multimodal inputs |
US10127665B1 (en) * | 2017-07-31 | 2018-11-13 | Hefei University Of Technology | Intelligent assistant judgment system for images of cervix uteri and processing method thereof |
CN109034221A (en) * | 2018-07-13 | 2018-12-18 | 马丁 | A kind of processing method and its device of cervical cytology characteristics of image |
CN109543719A (en) * | 2018-10-30 | 2019-03-29 | 浙江大学 | Uterine neck atypia lesion diagnostic model and device based on multi-modal attention model |
CN109859159A (en) * | 2018-11-28 | 2019-06-07 | 浙江大学 | A kind of cervical lesions region segmentation method and device based on multi-modal segmentation network |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2523149A2 (en) * | 2011-05-11 | 2012-11-14 | Tata Consultancy Services Ltd. | A method and system for association and decision fusion of multimodal inputs |
US10127665B1 (en) * | 2017-07-31 | 2018-11-13 | Hefei University Of Technology | Intelligent assistant judgment system for images of cervix uteri and processing method thereof |
CN109034221A (en) * | 2018-07-13 | 2018-12-18 | 马丁 | Method and device for processing cervical cytology image features |
CN109543719A (en) * | 2018-10-30 | 2019-03-29 | 浙江大学 | Cervical atypical lesion diagnosis model and device based on multi-modal attention model |
CN109859159A (en) * | 2018-11-28 | 2019-06-07 | 浙江大学 | Cervical lesion region segmentation method and device based on multi-modal segmentation network |
Non-Patent Citations (1)
Title |
---|
T. Chen et al.: "Multi-Modal Fusion Learning for Cervical Dysplasia Diagnosis", 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019) * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111882514A (en) * | 2020-07-27 | 2020-11-03 | 中北大学 | Multi-modal medical image fusion method based on double-residual ultra-dense network |
CN112258447A (en) * | 2020-09-14 | 2021-01-22 | 北京航空航天大学 | Diagnostic information evaluation method and system based on multiple-staining pathological images |
CN112258447B (en) * | 2020-09-14 | 2023-12-22 | 北京航空航天大学 | Diagnostic information evaluation method and system based on multiple-staining pathological images |
CN112348059A (en) * | 2020-10-23 | 2021-02-09 | 北京航空航天大学 | Deep learning-based method and system for classifying multiple-staining pathological images |
CN112614099A (en) * | 2020-12-17 | 2021-04-06 | 杭州电子科技大学 | Cervical cancer lesion region detection method based on fast-RCNN model |
CN112750115A (en) * | 2021-01-15 | 2021-05-04 | 杭州电子科技大学 | Multi-modal cervical precancerous lesion image recognition method based on graph neural network |
CN112884707A (en) * | 2021-01-15 | 2021-06-01 | 复旦大学附属妇产科医院 | Cervical precancerous lesion detection system, equipment and medium based on colposcope |
CN112884707B (en) * | 2021-01-15 | 2023-05-05 | 复旦大学附属妇产科医院 | Cervical precancerous lesion detection system, equipment and medium based on colposcope |
CN112750115B (en) * | 2021-01-15 | 2024-06-04 | 浙江大学医学院附属邵逸夫医院 | Multi-modal cervical precancerous lesion image recognition method based on graph neural network |
Also Published As
Publication number | Publication date |
---|---|
CN110826576B (en) | 2022-10-04 |
Similar Documents
Publication | Title |
---|---|
CN110826576B (en) | Cervical lesion prediction system based on multi-mode feature level fusion | |
CN109543719B (en) | Cervical atypical lesion diagnosis model and device based on multi-modal attention model | |
US20220076420A1 (en) | Retinopathy recognition system | |
US11666210B2 (en) | System for recognizing diabetic retinopathy | |
CN111048170B (en) | Digestive endoscopy structured diagnosis report generation method and system based on image recognition | |
EP4006831A1 (en) | Image processing method and apparatus, server, medical image processing device and storage medium | |
CN108257129B (en) | Cervical biopsy region auxiliary identification method and device based on multi-mode detection network | |
CN109102491A (en) | Gastroscope image automated collection system and method | |
KR102155381B1 (en) | Method, apparatus and software program for cervical cancer decision using image analysis of artificial intelligence based technology | |
WO2019232910A1 (en) | Fundus image analysis method, computer device and storage medium | |
CN113240655B (en) | Method, storage medium and device for automatically detecting type of fundus image | |
US20210374953A1 (en) | Methods for automated detection of cervical pre-cancers with a low-cost, point-of-care, pocket colposcope | |
CN111524124A (en) | Digestive endoscopy image artificial intelligence auxiliary system for inflammatory bowel disease | |
CN113610118A (en) | Fundus image classification method, device, equipment and medium based on multitask course learning | |
CN113222957A (en) | Multi-class lesion high-speed detection method and system based on capsule endoscope images | |
CN114399465A (en) | Benign and malignant ulcer identification method and system | |
KR20210033902A (en) | Method, apparatus and software program for cervical cancer diagnosis using image analysis of artificial intelligence based technology | |
CN115661037A (en) | Capsule endoscope auxiliary detection method, device, system, equipment and medium | |
Mu et al. | Improved model of eye disease recognition based on VGG model | |
Kumar et al. | Segmentation of retinal lesions in fundus images: a patch based approach using encoder-decoder neural network | |
CN114330484A (en) | Method and system for classification and focus identification of diabetic retinopathy through weak supervision learning | |
Song et al. | Multi-model data fusion for cervical precancerous lesions detection | |
CN112396597A (en) | Method and device for rapidly screening unknown cause pneumonia images | |
CN112132782A (en) | Method and terminal for processing DME typing based on deep neural network | |
Xu et al. | Computer aided diagnosis of diabetic retinopathy based on multi-view joint learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||