CN108090906B - Cervical image processing method and device based on region nomination - Google Patents

Cervical image processing method and device based on region nomination

Publication number: CN108090906B
Authority: CN (China)
Prior art keywords: network, region, target, classification, cervical image
Legal status: Active (assumed by Google Patents; not a legal conclusion)
Application number: CN201810088291.2A
Other languages: Chinese (zh)
Other versions: CN108090906A
Inventors: 吴健, 应兴德, 陈婷婷, 马鑫军, 吕卫国, 袁春女, 姚晔俪, 王新宇, 吴边, 陈为, 吴福理, 吴朝晖
Current assignee: Zhejiang University (ZJU)
Original assignee: Zhejiang University (ZJU)
Application filed by Zhejiang University (ZJU)
Priority: CN201810088291.2A
Publication of CN108090906A (application), then grant and publication of CN108090906B
Legal status: Active


Classifications

    • G06T7/0012 Biomedical image inspection (under G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
    • G06F18/23213 Non-hierarchical clustering using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
    • G06F18/24 Classification techniques
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30096 Tumor; Lesion (under G06T2207/30004 Biomedical image processing)

Abstract

The invention discloses a cervical image processing device based on region nomination (i.e., region proposal), comprising: an image acquisition device for acquiring a cervical image treated with a 3%-5% acetic acid solution; a processor comprising a cervical image preprocessing module and a processing module, wherein the processing module contains a model network consisting of a feature extraction network, a region detection network and a region screening classification network, and outputs classification information and position information of a target region; a memory for storing the parameters of the model network in the processor; and a display device for displaying the classification information and position information of the target region output by the processor. A method for processing cervical images with this device is also disclosed, which makes it possible to distinguish normal acetowhite ("vinegar white") regions from lesion acetowhite regions in a cervical image.

Description

Cervical image processing method and device based on region nomination
Technical Field
The invention belongs to the field of image processing, and particularly relates to a cervical image processing method and device based on region nomination.
Background
Deep learning is a branch of machine learning based on representation learning of data. An observation can be represented in many ways, for example as a vector of per-pixel intensity values, or more abstractly as a set of edges, regions of particular shapes, and so on. Some representations make it much easier to learn a task from examples. A key benefit of deep learning is that it replaces hand-crafted feature engineering with efficient algorithms for unsupervised or semi-supervised feature learning and hierarchical feature extraction.
As research interest in deep learning has grown in recent years, deep learning has appeared in more and more image recognition applications, including the recognition of medical images. There have been many attempts to use detection networks to identify abnormal, possibly diseased regions in medical images. However, whether the detection model is one-stage or two-stage, it is designed primarily for the localization task; classification is only an auxiliary task. Relying on the detection model alone to both localize abnormal regions and grade them is therefore unreliable.
Disclosure of Invention
The invention provides a cervical image processing method and device based on region nomination, addressing the problem that in a cervical image treated with a 3%-5% acetic acid solution, many regions exhibit the acetowhite ("vinegar white") reaction, making it difficult to distinguish normal acetowhite regions from lesion acetowhite regions.
The technical scheme adopted by the invention is as follows:
a cervical image processing apparatus based on region nomination, comprising:
the image acquisition device is used for acquiring a cervical image treated by 3-5% acetic acid solution;
the processor comprises a cervical image preprocessing module and a processing module, wherein the processing module comprises a model network consisting of a feature extraction network, a region detection network and a region screening classification network and is used for outputting classification information and position information of a target region;
a memory for storing parameters of a model network in the processor;
and the display device is used for displaying the classification information and the position information of the target area output by the processor.
The cervical image preprocessing module is used for marking the cervical images acquired by the image acquisition device and processed by the 3% -5% acetic acid solution, and clustering the cervical image data by using a K-means method.
The feature extraction network consists of a deep residual network ResNet50 and a top-down pyramid network.
The deep residual network consists of 1 convolutional layer, 1 max pooling layer, and first, second, third and fourth residual convolution modules, connected in sequence.
The first to fourth residual convolution modules are respectively composed of 3, 4, 6 and 3 residual units.
Each residual unit consists of 3 convolutional layers; the feature map entering the first convolutional layer also flows directly, as a skip connection, to the output of the third convolutional layer, where it is added to that layer's output to form the output of the residual unit.
The pyramid network comprises 3 upsampling modules; each upsampling module consists of a bilinear interpolation layer and 2 convolutional layers connected in sequence. Its main purpose is to bring a high-level, low-resolution feature map to the same resolution as a low-level, high-resolution feature map so that the two can be added.
In the top-down pyramid structure, the output of each residual convolution module in the deep residual network ResNet50 is fused by element-wise addition with the upsampled output of the deeper residual convolution modules.
This pyramid-structured network design balances the importance of semantic information and detail information, which aids the discovery of smaller and thinner regions and ensures the comprehensiveness of the region nomination candidates. Considering that the target regions are unevenly distributed in size, the invention extracts region nominations of different scales from different fusion layers of the network; this more targeted way of obtaining nominations greatly improves the discovery of target regions.
The area detection network includes a classification subnetwork and a regression subnetwork.
The classification sub-network consists of 4 convolutional layers with 3×3 filters and stride 1, 1 further convolutional layer with a 3×3 filter and stride 1, and 1 sigmoid activation function layer, connected in sequence. The classification sub-network outputs the predicted classification information of the target region; the difference between the predicted classification information and the annotated ground-truth label is used to optimize the classification sub-network and the feature extraction network.
The regression sub-network consists of 4 convolutional layers with 3×3 filters and stride 1 followed by 1 convolutional layer with a 3×3 filter and stride 1, connected in sequence. The regression sub-network outputs the predicted position information of the target region; the difference between the predicted position information and the annotated ground-truth label is used to optimize the regression sub-network and the feature extraction network.
The region screening classification network consists of a high-level target classifier and a normal classifier; the two classifiers have identical structures and are arranged in parallel.
The high-level target classifier and the normal classifier each consist of 2 residual units, 1 global pooling layer, 1 fully connected layer and 1 sigmoid activation function layer, connected in sequence; each residual unit consists of 3 convolutional layers, and the feature map entering the first convolutional layer also flows directly, as a skip connection, to the output of the third convolutional layer, where it is added to that layer's output to form the output of the residual unit.
The function of the region screening classification network is to verify the correctness of the region nomination candidates predicted by the region detection network and to screen out incorrect nominations.
The invention also provides a method for processing the cervical image by adopting the cervical image processing device based on the region nomination, which comprises the following steps: inputting the cervical image processed by the 3% -5% acetic acid solution acquired by the image acquisition device into a region detection network trained by a processor, outputting classification information and position information of a predicted target region, extracting the target region from the input cervical image according to the position information of the predicted target region, inputting the target region and the classification information of the predicted target region into a region screening classification network, outputting the final classification information and position information of the target region, and displaying the final classification information and position information on a display device.
The training method for the model network comprises the following steps:
(1) acquiring a cervical image processed by 3-5% acetic acid solution by using an image acquisition device, marking the cervical image by using a cervical image preprocessing module, and clustering cervical image data by using a K-means method to obtain image clusters with similarity to form a training set;
(2) training of area detection networks
The feature extraction network consists of the deep residual network ResNet50 and a pyramid network; the corresponding network layers are initialized with deep residual network parameters pre-trained on ImageNet;
inputting the images in the training set into a feature extraction network, respectively inputting multi-scale feature maps output by the feature extraction network into a classification sub-network and a regression sub-network, outputting classification information of a predicted target region by the classification sub-network, outputting position information of the predicted target region by the regression sub-network, training until a loss function is converged, and storing parameters of the feature extraction network and the region detection network into a memory;
(3) training of regional screening classification networks
Positive and negative samples of high-level targets and of normal targets are sampled according to the ground-truth labels annotated in the training set, resized to the same resolution, and used as input to the region screening classification network to train the high-level target classifier and the normal classifier respectively; the network outputs the positive/negative sample labels. Training continues until the loss function converges, and the parameters of the region screening classification network are stored in the memory.
Compared with the prior art, the invention has the beneficial effects that:
(1) To detect the position of the target region accurately, the invention uses a feature pyramid network as the feature extraction network. An important property of the feature pyramid network is that it fuses information from shallow and deep network layers, striking a good balance between semantic information and detail information. A region detection network built on such a feature pyramid can detect the target regions on a cervical image comprehensively even when the regions vary widely in size. All target regions detected by the region detection network are treated as region nominations and placed into the region nomination candidate set.
(2) To reduce the false positive rate of the region detection network, the invention designs a region screening classification network whose purpose is to screen false positive nominations out of the region nomination candidate set and to improve the accuracy of target region detection. Because the region screening classification network is trained with the annotated ground-truth labels, it performs well at distinguishing normal acetowhite regions from diseased thick-acetowhite regions and can correctly screen out false nomination candidates.
Drawings
FIG. 1 is a schematic diagram of the structure of a feature extraction network and a regional detection network in accordance with the present invention;
FIG. 2 is a schematic diagram of a residual error unit according to the present invention;
fig. 3 is a schematic flow chart of a cervical image processing method of the present invention;
fig. 4 is a schematic flow chart of the area screening classification network according to the present invention.
Detailed Description
The invention will be described in further detail below with reference to the drawings and examples, which are intended to facilitate the understanding of the invention without limiting it in any way.
The invention provides a cervical image processing method and device based on region nomination for distinguishing normal acetowhite ("vinegar white") regions from lesion acetowhite regions in a cervical image, where a lesion acetowhite region is taken as the target region and the classification information of the target region comprises its level information and a confidence.
The level information of a target region is either high level or low level. A high-level target region has irregular thin acetowhite epithelium with map-like acetowhite borders, together with features such as fine mosaicism and fine punctate vessels; a low-level target region has thick acetowhite epithelium that whitens rapidly, accompanied by multiple cuff-shaped gland-opening crypts, coarse mosaicism and coarse punctate vessels.
The invention relates to a cervical image processing device based on region nomination, which specifically comprises:
the image acquisition device is used for acquiring a cervical image treated by 3-5% acetic acid solution;
the processor comprises a cervical image preprocessing module and a processing module, wherein the processing module comprises a model network consisting of a feature extraction network, a region detection network and a region screening classification network and is used for outputting classification information and position information of a target region;
a memory for storing parameters of a model network in the processor;
and the display device is used for displaying the classification information and the position information of the target area output by the processor.
As shown in fig. 1, the feature extraction network consists of the deep residual network ResNet50 and a top-down pyramid network.
The ResNet50 network consists of 1 convolutional layer with a 7×7 filter and stride 2, 1 max pooling layer with a 3×3 window and stride 2, and first, second, third and fourth residual convolution modules, connected in sequence.
The first to fourth residual convolution modules consist of 3, 4, 6 and 3 residual units respectively.
As shown in fig. 2, each residual unit consists of 3 convolutional layers with filter sizes 1×1, 3×3 and 1×1, all with stride 1 (except the first convolutional layer of the first residual unit of each residual convolution module, whose stride is 2). The feature map entering the first convolutional layer also flows directly, as a skip connection, to the output of the third convolutional layer, where it is added to that layer's output to form the output of the residual unit.
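To make the downsampling pattern concrete, the following sketch traces the spatial resolution of a feature map through the stem and the four residual modules described above. The kernel and padding values, and the placement of the stride-2 convolution, follow the text where stated; the paddings themselves are illustrative assumptions rather than values fixed by the patent.

```python
def conv_out(size, kernel, stride, pad):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

def resnet50_stage_sizes(size):
    """Trace the spatial resolution through the backbone described above:
    a 7x7 stride-2 convolution, a 3x3 stride-2 max pool, then four
    residual modules whose first unit (from the second module onward)
    downsamples with a stride-2 first convolution."""
    size = conv_out(size, 7, 2, 3)   # stem convolution (pad 3 assumed)
    size = conv_out(size, 3, 2, 1)   # max pooling (pad 1 assumed)
    sizes = []
    for module in range(4):
        if module > 0:               # stride-2 1x1 conv in first unit
            size = conv_out(size, 1, 2, 0)
        sizes.append(size)
    return sizes

print(resnet50_stage_sizes(512))  # [128, 64, 32, 16]
```

The four stage outputs sit at strides 4, 8, 16 and 32 relative to the input, which is what gives the pyramid network its multi-resolution levels to fuse.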
The pyramid network comprises 3 upsampling modules. Each upsampling module consists of 1 bilinear interpolation layer with magnification 2, 1 convolutional layer with a 3×3 filter and stride 1, and 1 convolutional layer with a 1×1 filter and stride 1, connected in sequence; its main purpose is to bring the high-level, low-resolution feature map to the same resolution as the low-level, high-resolution feature map so that the addition operation can be performed.
In the top-down pyramid structure, the output of each residual convolution module in the deep residual network ResNet50 is fused by element-wise addition with the upsampled output of the deeper residual convolution modules; the specific structure is shown in fig. 1.
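As a rough illustration of this top-down fusion, the sketch below upsamples a deeper, lower-resolution feature map to the resolution of a shallower one and adds the two. Nearest-neighbour repetition stands in for the bilinear interpolation layer, and the 3×3/1×1 convolutions of the upsampling module are omitted; this is a simplified sketch, not the patented implementation.

```python
import numpy as np

def upsample2x(fmap):
    """Double the spatial resolution of an (H, W) feature map.
    Nearest-neighbour repetition is a stand-in for the bilinear
    interpolation layer described in the text."""
    return np.repeat(np.repeat(fmap, 2, axis=0), 2, axis=1)

def fuse_top_down(deep, shallow):
    """Top-down pyramid fusion: the deeper, lower-resolution map is
    upsampled to the shallower map's resolution and added to it."""
    up = upsample2x(deep)
    assert up.shape == shallow.shape, "resolutions must match before addition"
    return up + shallow

deep = np.ones((8, 8))       # e.g. output of the fourth residual module
shallow = np.ones((16, 16))  # e.g. output of the third residual module
fused = fuse_top_down(deep, shallow)
print(fused.shape)  # (16, 16)
```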
This pyramid-structured network design balances the importance of semantic information and detail information, which aids the discovery of smaller and thinner regions and ensures the comprehensiveness of the region nomination candidates. Considering that the target regions are unevenly distributed in size, the invention extracts region nominations of different scales from different fusion layers of the network; this more targeted way of obtaining nominations greatly improves the discovery of target regions.
The region detection network includes a classification sub-network and a regression sub-network.
The classification sub-network consists of 4 convolutional layers with 3×3 filters and stride 1, 1 further convolutional layer with a 3×3 filter and stride 1, and 1 sigmoid activation function layer, connected in sequence. The classification sub-network outputs the predicted classification information of the target region; the difference between the predicted classification information and the annotated ground-truth label is used to optimize the classification sub-network and the feature extraction network.
The regression sub-network consists of 4 convolutional layers with 3×3 filters and stride 1 followed by 1 convolutional layer with a 3×3 filter and stride 1, connected in sequence. The regression sub-network outputs the predicted position information of the target region; the difference between the predicted position information and the annotated ground-truth label is used to optimize the regression sub-network and the feature extraction network.
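The two sub-networks produce dense per-location predictions: a sigmoid score per anchor per class from the classification head and four box offsets per anchor from the regression head. The anchor count and class count in this sketch are assumed for illustration; the patent does not state them.

```python
def detection_head_outputs(h, w, num_anchors, num_classes):
    """Number of outputs the classification and regression sub-networks
    produce for one feature-map level of size h x w: one sigmoid score
    per anchor per class, and 4 box offsets per anchor."""
    cls_outputs = h * w * num_anchors * num_classes
    reg_outputs = h * w * num_anchors * 4
    return cls_outputs, reg_outputs

# Hypothetical level: 16x16 feature map, 9 anchors, 2 target classes.
print(detection_head_outputs(16, 16, 9, 2))  # (4608, 9216)
```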
The region screening classification network consists of a high-level target classifier and a normal classifier; the two classifiers have identical structures and are arranged in parallel.
The high-level target classifier and the normal classifier each consist of, connected in sequence: 2 residual units, 1 global pooling layer whose pooling filter size equals the size of the input feature map and whose pooling stride is 1, 1 fully connected layer with 2 output channels, and 1 sigmoid activation function layer. Each residual unit consists of 3 convolutional layers with filter sizes 1×1, 3×3 and 1×1 and stride 1 (except the first convolutional layer of the first residual unit, whose stride is 2); the feature map entering the first convolutional layer also flows directly, as a skip connection, to the output of the third convolutional layer, where it is added to that layer's output to form the output of the residual unit.
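A minimal sketch of the tail of each screening classifier (global pooling, a 2-channel fully connected layer, sigmoid) might look as follows. Average pooling and the random weights are assumptions for illustration; the patent specifies only a "global pooling layer".

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def classifier_head(feature_map, weights, bias):
    """Tail of each screening classifier: a global pooling layer whose
    window equals the input feature map, a fully connected layer with
    2 output channels, and a sigmoid activation."""
    pooled = feature_map.mean(axis=(1, 2))  # global (average) pooling -> (C,)
    logits = weights @ pooled + bias        # fully connected -> (2,)
    return sigmoid(logits)

rng = np.random.default_rng(0)
fmap = rng.standard_normal((64, 7, 7))      # (channels, H, W)
w = rng.standard_normal((2, 64))
b = np.zeros(2)
scores = classifier_head(fmap, w, b)
print(scores.shape)  # (2,)
```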
The function of the region screening classification network is to verify the correctness of the region nomination candidate predicted by the region detection network and screen out the wrong region nomination.
As shown in fig. 3, the method for processing a cervical image by using the cervical image processing apparatus based on region nomination according to the present invention includes: inputting the cervical image processed by the 3% -5% acetic acid solution acquired by the image acquisition device into a region detection network trained by a processor, outputting classification information and position information of a predicted target region, extracting the target region from the input cervical image according to the position information of the predicted target region, inputting the target region and the classification information of the predicted target region into a region screening classification network, outputting the final classification information and position information of the target region, and displaying the final classification information and position information on a display device.
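Extracting the target region from the input image according to its predicted position information amounts to a crop. The (x0, y0, x1, y1) pixel-box format below is an assumption for illustration; the patent does not fix a coordinate convention.

```python
import numpy as np

def crop_region(image, box):
    """Extract a predicted target region from the cervical image using
    its predicted position information, given as an assumed
    (x0, y0, x1, y1) pixel box."""
    x0, y0, x1, y1 = box
    return image[y0:y1, x0:x1]

image = np.arange(100).reshape(10, 10)   # toy stand-in for a cervical image
region = crop_region(image, (2, 3, 6, 8))
print(region.shape)  # (5, 4)
```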
As shown in fig. 4, after a target region is extracted from the input cervical image according to its predicted position information, the target region and its predicted classification information are input together to the region screening classification network. If the region detection network predicted the region as a high-level target, the region is input to the high-level target classifier; otherwise it is input to the normal classifier. Taking the high-level target classifier as an example: if its prediction is high level, the region's prediction is retained (the final output is a high-level target); if its prediction is normal, the region is passed on to the normal classifier. If the normal classifier then predicts normal, the region's prediction is retained (the final output is a normal target); if the normal classifier predicts not normal, the predicted target region is discarded.
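The cascade just described can be summarized as a small decision function. The two classifier verdicts are passed in as booleans; the branch for regions the detection network did not predict as high-level is only partially specified in the text, so treating it symmetrically is an assumption of this sketch.

```python
def screen_region(detector_says_high, is_high, is_normal):
    """Cascade decision of the region screening classification network.
    `is_high` / `is_normal` are the boolean verdicts of the high-level
    target classifier and the normal classifier on the extracted region.
    Returns 'high', 'normal', or None (region discarded)."""
    if detector_says_high:
        if is_high:
            return 'high'                      # detector prediction confirmed
        return 'normal' if is_normal else None  # demoted or discarded
    # Regions not predicted high-level go straight to the normal
    # classifier (assumed symmetric handling).
    return 'normal' if is_normal else None
```

For example, a region the detector called high-level but that both classifiers reject is dropped, which is exactly how false positive nominations are screened out.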
The training method for the model network comprises the following steps:
(1) A cervical image treated with a 3%-5% acetic acid solution is acquired with the image acquisition device and annotated by the cervical image preprocessing module; the cervical image data are then clustered with the K-means method with K = 50 to obtain clusters of similar images, forming a training set comprising 1373 image clusters;
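The clustering step can be sketched with a minimal K-means implementation. Real inputs would be per-image feature vectors with K = 50 as stated above; the toy two-blob data and small K below are for illustration only.

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Minimal K-means: alternate between assigning each feature vector
    to its nearest centre and moving each centre to the mean of its
    assigned vectors."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # distance of every vector to every centre -> (n, k)
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated blobs; each should end up in its own cluster.
pts = np.vstack([np.zeros((10, 2)), np.full((10, 2), 5.0)])
labels, centers = kmeans(pts, k=2)
print(len(set(labels[:10].tolist())), len(set(labels[10:].tolist())))  # 1 1
```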
(2) training of area detection networks
Initializing a corresponding network layer by using a depth residual error network parameter pre-trained on ImageNet;
inputting the images in the training set into a feature extraction network, respectively inputting multi-scale feature maps output by the feature extraction network into a classification sub-network and a regression sub-network, outputting classification information of a predicted target region by the classification sub-network, outputting position information of the predicted target region by the regression sub-network, training until a loss function is converged, and storing parameters of the feature extraction network and the region detection network into a memory;
(3) training of regional screening classification networks
Positive and negative samples of high-level targets and of normal targets are sampled according to the ground-truth labels annotated in the training set, resized to the same resolution, and used as input to the region screening classification network to train the high-level target classifier and the normal classifier respectively; the network outputs the positive/negative sample labels. Training continues until the loss function converges, and the parameters of the region screening classification network are stored in the memory.
The above-mentioned embodiments are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions, equivalents, etc. made within the scope of the principles of the present invention should be included in the scope of the present invention.

Claims (7)

1. A cervical image processing apparatus based on region nomination, comprising:
the image acquisition device is used for acquiring a cervical image treated by 3-5% acetic acid solution;
the processor comprises a cervical image preprocessing module and a processing module, wherein the processing module comprises a model network consisting of a feature extraction network, a region detection network and a region screening classification network and is used for outputting classification information and position information of a target region;
the region detection network comprises a classification sub-network and a regression sub-network, the classification sub-network outputs the classification information of the predicted target region, and the regression sub-network outputs the position information of the predicted target region;
the classification information of the target region comprises level information and confidence of the target region, wherein the level information of the target region comprises a high level and a low level; the high-level target area is provided with irregular thin vinegar white epithelium and vinegar white with map-like boundaries and is also provided with fine mosaic and fine dot-shaped blood vessel characteristics; the low-level target area has thick vinegar white epithelium and high vinegar white appearance speed, and is accompanied with a plurality of sleeve-shaped gland opening crypts, thick mosaics and thick punctate blood vessels;
the regional screening and classifying network consists of a high-level target classifier and a normal classifier, wherein the high-level target classifier and the normal classifier have the same structure and are parallel in structure; the high-level target classifier and the normal classifier respectively consist of 2 residual error units, 1 global pooling layer, 1 full-link layer and 1 sigmoid activation function layer which are sequentially connected;
after a target area is extracted from an input cervical image according to the predicted position information of the target area, the target area and the predicted target area classification information are merged and input into an area screening classification network, if the area is predicted to be a high-level target in an area detection network, the area is input into a high-level target classifier, and if the area is not predicted to be the high-level target in the area detection network, the area is input into a normal classifier;
a memory for storing parameters of a model network in the processor;
and the display device is used for displaying the classification information and the position information of the target area output by the processor.
2. The cervical image processing apparatus based on region nomination of claim 1, wherein the cervical image preprocessing module is configured to label the cervical images, treated with a 3%-5% acetic acid solution, acquired by the image acquisition apparatus, and to cluster the cervical image data using a K-means method.
3. The cervical image processing apparatus based on region nomination of claim 1, wherein the feature extraction network is composed of a depth residual network ResNet50 and a top-down pyramid network.
4. The cervical image processing apparatus according to claim 3, wherein the depth residual network comprises 1 convolutional layer, 1 max pooling layer, a first residual convolutional module, a second residual convolutional module, a third residual convolutional module, and a fourth residual convolutional module, which are connected in sequence.
5. The cervical image processing apparatus according to claim 4, wherein the first through fourth residual convolution modules are respectively composed of 3, 4, 6, and 3 residual units.
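The residual structure of claims 4 and 5 reduces to the sketch below; plain callables stand in for the convolutional branches, which is an assumption made so the skeleton stays library-free:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_unit(x, transform):
    """One identity-shortcut residual unit: relu(transform(x) + x)."""
    return relu(transform(x) + x)

def residual_module(x, transforms):
    # A residual convolution module chains several residual units.
    for t in transforms:
        x = residual_unit(x, t)
    return x

# The four modules hold 3, 4, 6 and 3 units respectively, as in claim 5.
STAGE_DEPTHS = (3, 4, 6, 3)
```

With an identity transform each unit doubles its input before the ReLU, so a 3-unit module scales a positive input by 8; the shortcut is what keeps gradients flowing through all 16 units.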
6. A method for processing a cervical image by using the cervical image processing apparatus based on region nomination according to any one of claims 1 to 5, comprising: inputting the cervical image acquired by the image acquisition device after treatment with a 3%-5% acetic acid solution into the region detection network trained by the processor, which outputs the predicted classification information and position information of the target region; extracting the target region from the input cervical image according to the predicted position information; inputting the target region and its predicted classification information into the region screening and classification network, which outputs the final classification information and position information of the target region; and displaying the final classification information and position information on the display device.
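The two-stage flow of claim 6 (detect, then re-screen each proposal) can be outlined as follows; `detect` and `screen` are stand-in callables for the trained networks, and the 0.5 threshold is an assumption not stated in the claim:

```python
def process_cervical_image(image, detect, screen, threshold=0.5):
    """Two-stage flow: the detector proposes labelled boxes, the
    screening network re-scores each one, and low scores are dropped.
    `detect` and `screen` stand in for the trained model networks."""
    kept = []
    for box, label, _det_score in detect(image):
        if screen(image, box, label) >= threshold:
            kept.append((box, label))
    return kept

# toy stand-ins: one high-level proposal survives screening, one does not
detect = lambda img: [((0, 0, 40, 40), "high-level", 0.9),
                      ((40, 40, 80, 80), "normal", 0.8)]
screen = lambda img, box, label: 0.9 if label == "high-level" else 0.2
final = process_cervical_image(None, detect, screen)
```

The screening stage thus acts as a second opinion on the detector's proposals rather than producing boxes of its own.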
7. The method of claim 6, wherein the method of training the model network comprises:
(1) acquiring cervical images treated with a 3%-5% acetic acid solution by using the image acquisition device, labeling the cervical images with the cervical image preprocessing module, and clustering the cervical image data with a K-means method to obtain clusters of similar images that form the training set;
(2) training of the region detection network
The feature extraction network consists of the deep residual network ResNet50 and a pyramid network; the corresponding network layers are initialized with deep residual network parameters pre-trained on ImageNet;
inputting the images in the training set into the feature extraction network, feeding the multi-scale feature maps output by the feature extraction network into the classification sub-network and the regression sub-network respectively, the classification sub-network outputting the classification information of the target region and the regression sub-network outputting the position information of the target region; training until the loss function converges, and storing the parameters of the feature extraction network and the region detection network in the memory;
(3) training of the region screening and classification network
Sampling positive and negative samples of high-level targets and normal targets according to the ground-truth labels annotated in the training set, adjusting the sampled positive and negative samples to the same resolution, and using them as input to the region screening and classification network to train the high-level target classifier and the normal classifier respectively, the network outputting the positive/negative sample labels; training until the loss function converges, and storing the parameters of the region screening and classification network in the memory.
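The sampling-and-resizing step of stage (3) can be sketched as below; the `(x1, y1, x2, y2)` box convention, the 32-pixel output size, and nearest-neighbour resizing are assumptions, since the claim fixes only that all samples share one resolution:

```python
import numpy as np

def crop(image, box):
    x1, y1, x2, y2 = box
    return image[y1:y2, x1:x2]

def resize_nearest(patch, out_h, out_w):
    # nearest-neighbour resize so every sampled patch shares one resolution
    h, w = patch.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return patch[rows][:, cols]

def build_samples(image, boxes, labels, size=32):
    """Positive/negative patches cut from the image and resized to a
    common resolution, ready to train the screening classifiers."""
    patches = [resize_nearest(crop(image, b), size, size) for b in boxes]
    return np.stack(patches), np.asarray(labels)

img = np.arange(100 * 100, dtype=float).reshape(100, 100)
patches, labels = build_samples(img, [(0, 0, 50, 50), (10, 20, 40, 60)], [1, 0])
```

Stacking the equally sized patches gives a single batch tensor, which is why the resolution must be unified before training.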
CN201810088291.2A 2018-01-30 2018-01-30 Cervical image processing method and device based on region nomination Active CN108090906B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810088291.2A CN108090906B (en) 2018-01-30 2018-01-30 Cervical image processing method and device based on region nomination


Publications (2)

Publication Number Publication Date
CN108090906A (en) 2018-05-29
CN108090906B (en) 2021-04-20

Family

ID=62183425


Country Status (1)

Country Link
CN (1) CN108090906B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108961222A (en) * 2018-06-19 2018-12-07 江西大福医疗科技股份有限公司 A kind of cervical carcinoma early screening recognition methods based on gynecatoptron image
CN109145941B (en) * 2018-07-03 2021-03-09 怀光智能科技(武汉)有限公司 Irregular cervical cell mass image classification method and system
CN109034221A (en) * 2018-07-13 2018-12-18 Ma Ding A processing method and device for cervical cytology image features
CN109492530B (en) * 2018-10-10 2022-03-04 重庆大学 Robust visual object tracking method based on depth multi-scale space-time characteristics
CN111126421B (en) * 2018-10-31 2023-07-21 浙江宇视科技有限公司 Target detection method, device and readable storage medium
CN109636805B (en) * 2018-11-19 2022-04-01 浙江大学山东工业技术研究院 Cervical image lesion area segmentation device and method based on classification prior
CN109770928A (en) * 2019-02-27 2019-05-21 广州市妇女儿童医疗中心 The detection device and method of cervix opening degrees of expansion in stages of labor
CN110110748B (en) * 2019-03-29 2021-08-17 广州思德医疗科技有限公司 Original picture identification method and device
CN110197205B (en) * 2019-05-09 2022-04-22 三峡大学 Image identification method of multi-feature-source residual error network
CN110675391B (en) * 2019-09-27 2022-11-18 联想(北京)有限公司 Image processing method, apparatus, computing device, and medium
CN110688978A (en) * 2019-10-10 2020-01-14 广东工业大学 Pedestrian detection method, device, system and equipment
CN111160441B (en) * 2019-12-24 2024-03-26 上海联影智能医疗科技有限公司 Classification method, computer device, and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103096786A (en) * 2010-05-03 2013-05-08 International Science and Technology Medical Systems, LLC Image analysis for cervical neoplasia detection and diagnosis
CN106991673A (en) * 2017-05-18 2017-07-28 深思考人工智能机器人科技(北京)有限公司 A kind of cervical cell image rapid classification recognition methods of interpretation and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7664300B2 (en) * 2005-02-03 2010-02-16 Sti Medical Systems, Llc Uterine cervical cancer computer-aided-diagnosis (CAD)
EP2174266A2 (en) * 2007-08-03 2010-04-14 STI Medical Systems, LLC Computerized image analysis for a acetic acid induced cervical intraepithelial neoplasia
CN106874478A (en) * 2017-02-17 2017-06-20 重庆邮电大学 Parallelization random tags subset multi-tag file classification method based on Spark


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Fast R-CNN;Ross Girshick;《arXiv》;20150927;Sections 1-5 of the paper *
Feature Pyramid Networks for Object Detection;Tsung-Yi Lin et al.;《arXiv》;20170419;Sections 1-5 of the paper *
Multimodal Deep Learning for Cervical Dysplasia Diagnosis;Tao Xu et al.;《Medical Image Computing and Computer-Assisted Intervention - MICCAI 2016. 19th International Conference》;20161021;Abstract and Sections 1-3 of the paper *


Similar Documents

Publication Publication Date Title
CN108090906B (en) Cervical image processing method and device based on region nomination
CN108334848B (en) Tiny face recognition method based on generation countermeasure network
CN109344701B (en) Kinect-based dynamic gesture recognition method
CN111401384B (en) Transformer equipment defect image matching method
CN110909820B (en) Image classification method and system based on self-supervision learning
CN112418117B (en) Small target detection method based on unmanned aerial vehicle image
CN108038519B (en) Cervical image processing method and device based on dense feature pyramid network
CN104050471B (en) Natural scene character detection method and system
CN110598690B (en) End-to-end optical character detection and recognition method and system
CN111783576B (en) Pedestrian re-identification method based on improved YOLOv3 network and feature fusion
CN110929593B (en) Real-time significance pedestrian detection method based on detail discrimination
CN111931684A (en) Weak and small target detection method based on video satellite data identification features
CN111027456B (en) Mechanical water meter reading identification method based on image identification
Huang et al. Scribble-based boundary-aware network for weakly supervised salient object detection in remote sensing images
CN108921172B (en) Image processing device and method based on support vector machine
CN110853053A (en) Salient object detection method taking multiple candidate objects as semantic knowledge
CN110991374B (en) Fingerprint singular point detection method based on RCNN
Ling et al. A model for automatic recognition of vertical texts in natural scene images
CN104598881B (en) Feature based compresses the crooked scene character recognition method with feature selecting
CN114882204A (en) Automatic ship name recognition method
CN112364687A (en) Improved Faster R-CNN gas station electrostatic sign identification method and system
CN116258686A (en) Method for establishing colon polyp parting detection model based on image convolution feature capture
CN113628252A (en) Method for detecting gas cloud cluster leakage based on thermal imaging video
Rani et al. Object Detection in Natural Scene Images Using Thresholding Techniques
Kosala et al. Robust License Plate Detection in Complex Scene using MSER-Dominant Vertical Sobel.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant