CN109034221A - A processing method and device for cervical cytology image features - Google Patents
A processing method and device for cervical cytology image features
- Publication number: CN109034221A
- Application number: CN201810768766.2A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/2415 — Classification techniques relating to the classification model, based on parametric or probabilistic models (e.g. likelihood ratio)
- G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06V2201/03 — Recognition of patterns in medical or anatomical images
Abstract
The present invention discloses a processing method and device for cervical cytology image features, comprising: compressing cervical cytology image data to different resolutions and feeding it into a region proposal network to obtain region proposal boxes and a cervical cytology image feature map; in the feature map, selecting the features corresponding to each proposal box as input and obtaining a pooled feature map through a grid pooling layer; feeding the pooled feature map into a classification network to obtain the class probabilities of the region and the offset between the prediction box and the proposal box; calculating the loss of the region proposal network and the loss of the classification network to obtain the loss function; optimizing by back propagation to obtain a converged Faster RCNN model; and finally screening, by non-maximum suppression, the prediction boxes of the Faster RCNN models trained on images of different resolutions to obtain the final prediction boxes. The present invention can effectively improve the efficiency and accuracy with which doctors screen abnormal cells in cervical cytology images.
Description
Technical field
The invention belongs to the field of medical imaging data processing, and specifically relates to a processing method for cervical cytology image features and a device therefor.
Background art
In the clinical diagnosis of cervical cancer, the pathological diagnosis is regarded as the most authoritative and most accurate finding, and is the most important clinical index for determining whether a patient suffers from cancer. For cervical cancer pathocytology images, a professional pathologist moves the slide under the microscope and visually scans the entire slide, in order to find abnormal cells such as low-grade squamous intraepithelial lesion (LSIL) and high-grade squamous intraepithelial lesion (HSIL) cells among those that are negative for intraepithelial lesion or malignancy (NILM). Even for an experienced doctor this work is heavy and time-consuming, and as reading time grows, the missed-diagnosis rate rises accordingly.
Deep learning methods have achieved great success in the field of image processing, which makes it possible to use deep learning technology to locate lesions in medical imaging data. Computer aided diagnosis (CAD) systems based on deep learning are already widely applied, for example to identify and segment organs in CT images. Three-dimensional reconstruction and quantitative analysis of human tissue require the relevant regions to be segmented in advance; in addition, image segmentation helps to guide surgery and tumor radiotherapy and to evaluate treatment, and is therefore widely used.
Object detection is an important research direction of computer vision. Its task is to use computer algorithms to mark the positions of objects in an image with rectangular boxes and to predict the object categories. Object detection is heavily applied in scenes such as face recognition, security monitoring, medicine, aerospace, automatic driving and industrial manufacturing. In medical imaging, object detection is often used to detect lesions in CT images, organs in ultrasound or MRI images, and cells in pathological images.
In 1998 LeCun et al. first proposed the convolutional neural network (CNN) LeNet model, which was subsequently used by many US banks to recognize handwritten digits on checks. CNN models of various architectures such as VGG and ResNet have repeatedly won the ImageNet competition, and CNNs are now widely used in image processing and target recognition, becoming the general-purpose neural network of deep learning for image processing. CNNs are also widely used in object detection: the Faster RCNN proposed by Kaiming He et al. in 2015 not only increased speed over Fast RCNN but also performed well in precision; meanwhile, the SSD algorithm proposed by Wei Liu's team in 2015 is faster than Faster RCNN but slightly inferior in precision. Faster RCNN and SSD have become the two typical representatives of two-stage and single-stage object detection respectively.
However, because medical images differ greatly from natural images, directly applying general object detection methods to medical imaging often works poorly, so object detection still has a long way to go in medicine.
Summary of the invention
The present invention provides a processing method for cervical cytology image features based on the Faster RCNN algorithm, and correspondingly provides a detection method for abnormal cells in cervical cytology images. In the present invention, regular cells are normal human cells; abnormal cells, as opposed to normal cells, are cells with abnormal morphology.
A processing method for cervical cytology image features, comprising:
(1) preparing N-times magnified cervical cytology images and the annotation boxes of abnormal cells in the images as training data, N being an integer in the range of 10 to 40;
The present invention preferably takes N = 20 or N = 40, because 20x and 40x magnification match the magnification factors doctors commonly use under the microscope, which benefits from doctors' experience;
(2) compressing the training data obtained in step (1) to resolution R, enhancing the cervical cytology image data and feeding it into a region proposal network to obtain region proposal boxes and a cervical cytology image feature map, R being an integer of 500 to 2500, preferably 512, 1024 or 2048;
(3) in the cervical cytology image feature map obtained in step (2), selecting the features corresponding to each region proposal box as input and obtaining a pooled feature map through a grid pooling layer;
(4) feeding the pooled feature map into a classification network to obtain the class probabilities of the region and the offset between the prediction box and the proposal box;
(5) separately calculating the loss of the region proposal network in step (2) and the loss of the classification network in step (4), and summing them to obtain the final loss function L;
(6) optimizing L by back propagation so that the final loss function reaches a minimum, obtaining a converged Faster RCNN model.
(7) changing the compression resolution R in step (2) and repeating steps (2) to (6) to obtain multiple converged Faster RCNN models, then screening the prediction boxes of the multiple Faster RCNN models by non-maximum suppression, retaining the prediction boxes with high confidence;
The processing method of the invention may further include step (8): compressing an unannotated image to the resolution R of step (2) and feeding it into the region proposal network, which outputs proposal regions that may contain abnormal cells together with the feature map corresponding to each region; feeding the feature map through the grid pooling layer and then into the classification network to obtain the probability of each class and the offset between the final prediction box and the proposal box; taking the class with the maximum predicted probability as the final predicted class, computing the position of the final prediction box from the proposal box and the predicted offset, and screening the prediction boxes of the multiple models by non-maximum suppression to obtain the final prediction result;
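In step (8), the final box position is computed from the proposal box and the predicted offset. The patent does not spell out the offset parameterization, so the sketch below assumes the standard Faster RCNN convention (corner shift scaled by the box size, log-scaled width and height); the helper name `decode_box` is illustrative, not from the patent.

```python
import math

def decode_box(proposal, offset):
    """Decode a predicted offset (tx, ty, tw, th) against a proposal box.

    proposal is (x, y, w, h). The parameterization assumed here is the
    standard Faster RCNN one; the patent does not specify its own.
    """
    xa, ya, wa, ha = proposal
    tx, ty, tw, th = offset
    x = xa + wa * tx          # shift the corner by a fraction of the box size
    y = ya + ha * ty
    w = wa * math.exp(tw)     # scale width and height log-linearly
    h = ha * math.exp(th)
    return (x, y, w, h)
```

A zero offset returns the proposal box unchanged, which is the expected fixed point of the decoding.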
In step (1), each annotation box includes the abscissa and ordinate of the top-left corner of the box, the width and height of the box, and the class of the box; the classes of the boxes include high-grade squamous intraepithelial lesion, low-grade squamous intraepithelial lesion, atypical squamous cell, squamous cell carcinoma, etc.;
The data enhancement method of step (2) comprises the following specific steps:
(2-1) flipping the image and annotation boxes left-right;
(2-2) flipping the image and annotation boxes upside-down;
(2-3) applying a random brightness change to the image.
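Steps (2-1) to (2-3) can be sketched in plain Python on images stored as nested lists of pixels; the up-down flip is symmetric to the left-right one and is omitted. Boxes are (x, y, w, h) with (x, y) the top-left corner, matching the annotation format described above; the function names are illustrative.

```python
import random

def flip_lr(image, boxes):
    """Left-right flip of an image (list of pixel rows) and its boxes."""
    width = len(image[0])
    flipped = [row[::-1] for row in image]
    # a box starting at x with width w ends at x + w; after flipping,
    # its new left edge is width - (x + w)
    new_boxes = [(width - (x + w), y, w, h) for (x, y, w, h) in boxes]
    return flipped, new_boxes

def random_brightness(image, delta=0.1):
    """Add one random brightness offset to every pixel, clamped to [0, 1]."""
    shift = random.uniform(-delta, delta)
    return [[min(max(p + shift, 0.0), 1.0) for p in row] for row in image]
```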
The grid pooling layer of step (3) is computed as follows:
(3-1) dividing the input feature map into a k x k grid;
(3-2) averaging the feature values within each grid cell;
(3-3) obtaining a k x k pooled feature map, where k is an integer selected from 5 to 50, preferably 6, 7, 8, 9, 10, 11, 12 or 13, more preferably 7.
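Steps (3-1) to (3-3) amount to adaptive average pooling into a fixed k x k grid, so that proposal features of any size become a k x k map. A minimal sketch, assuming the input is at least k x k; the patent does not specify how uneven splits are rounded, so simple integer partitioning is used here.

```python
def grid_pool(feature_map, k):
    """Average-pool a 2-D feature map (list of rows) into a k x k grid."""
    h, w = len(feature_map), len(feature_map[0])
    pooled = []
    for i in range(k):
        row = []
        r0, r1 = i * h // k, max((i + 1) * h // k, i * h // k + 1)
        for j in range(k):
            c0, c1 = j * w // k, max((j + 1) * w // k, j * w // k + 1)
            cell = [feature_map[r][c] for r in range(r0, r1)
                                      for c in range(c0, c1)]
            row.append(sum(cell) / len(cell))  # step (3-2): mean per cell
        pooled.append(row)
    return pooled
```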
The base network of the region proposal network in step (2) and of the classification network in step (4) is a classical classification network such as VGG, ResNet or Inception;
The present invention preferably uses ResNet as the base network of the classification network and the region proposal network, because the residual modules in ResNet help gradients propagate backward during training and make training converge more easily. During training, the classification network and the region proposal network have identical base network structures and therefore share parameters;
The loss of the region proposal network in step (5) is specifically calculated as follows:
(5-1) calculating the Center Loss of the fully connected layer features of the base network;
The specific calculation formula of the Center Loss is:
L_C = (1/2) * Σ_{i=1}^{m} || x_i − c_{y_i} ||²
where L_C is the Center Loss to be calculated, m is the total number of features of the fully connected layer, x_i is the feature value at position i, and c_{y_i} is the feature center of the class y_i; the number of feature centers equals the number of classes;
(5-2) calculating the classification loss (Cross Entropy) of the region proposal network output;
(5-3) calculating the distance loss (Smooth L1 Loss) between the region proposals and the annotation boxes;
(5-4) adding the values of the above three steps to obtain the loss of the region proposal network;
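The Center Loss of step (5-1) can be sketched directly from its formula; the running update of the class centers during training, part of the original Center Loss method, is omitted, and the function signature is illustrative.

```python
def center_loss(features, labels, centers):
    """L_C = 1/2 * sum_i ||x_i - c_{y_i}||^2 over a batch of features.

    features: fully connected layer feature vectors x_i,
    labels:   class index y_i for each feature,
    centers:  one feature-center vector per class.
    """
    total = 0.0
    for x, y in zip(features, labels):
        c = centers[y]
        total += sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return 0.5 * total
```

Pulling each feature toward its class center makes features of the same class compact, which complements the cross-entropy classification term the loss is summed with.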
The loss of the classification network is specifically calculated as follows:
(5-5) calculating the classification loss (Cross Entropy) of the classification network output;
(5-6) calculating the distance loss (Smooth L1 Loss) between the offsets predicted by the classification network and the annotation boxes;
(5-7) adding the above two values to obtain the classification network loss;
The specific calculation formula of the Cross Entropy in steps (5-2) and (5-5) is:
L = − Σ_c y_c log ŷ_c
where y is the one-hot encoding of the class and ŷ is the actual output of the fully connected layer;
The Smooth L1 Loss in steps (5-3) and (5-6) is calculated as:
smooth_L1(x) = 0.5 x² if |x| < 1, otherwise |x| − 0.5
where x is the difference between the offset output by the network and the target offset;
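The two formulas above can be written out as one-sample sketches; the small epsilon guarding against log(0) is an implementation detail, not part of the patent's formula.

```python
import math

def cross_entropy(one_hot, probs, eps=1e-12):
    """L = -sum_c y_c * log(p_c) for one sample's one-hot label y."""
    return -sum(y * math.log(p + eps) for y, p in zip(one_hot, probs))

def smooth_l1(x):
    """Smooth L1 on the offset error x: quadratic near 0, linear beyond 1."""
    return 0.5 * x * x if abs(x) < 1.0 else abs(x) - 0.5
```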
The specific steps of step (7) are as follows:
(7-1) initializing the results set S to empty and setting the set of all prediction boxes to S';
(7-2) sorting all prediction boxes by confidence from high to low;
(7-3) selecting the prediction box B with the current highest confidence and moving it from S' into S;
(7-4) selecting the prediction boxes in S' whose region overlap with B exceeds th and deleting them from S', th being a decimal of 0.5 to 0.8, preferably 0.5 in the present invention;
(7-5) repeating (7-3) to (7-4) until no prediction box remains in S'; S then contains the retained prediction boxes;
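Steps (7-1) to (7-5) are standard non-maximum suppression. Below is a minimal sketch over (x, y, w, h, confidence) boxes, using intersection-over-union as the "region coincidence" measure; the patent does not name its overlap measure, so IoU is an assumption.

```python
def nms(boxes, iou_threshold=0.5):
    """Keep the most confident box, drop boxes overlapping it, repeat."""
    def iou(a, b):
        ax, ay, aw, ah, _ = a
        bx, by, bw, bh, _ = b
        ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
        iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
        inter = ix * iy
        union = aw * ah + bw * bh - inter
        return inter / union if union > 0 else 0.0

    remaining = sorted(boxes, key=lambda b: b[4], reverse=True)  # S', step (7-2)
    kept = []                                                    # S,  step (7-1)
    while remaining:
        best = remaining.pop(0)                                  # step (7-3)
        kept.append(best)
        remaining = [b for b in remaining
                     if iou(best, b) <= iou_threshold]           # step (7-4)
    return kept                                                  # step (7-5)
```

Because the models trained at different resolutions each emit their own prediction boxes, running this once over the pooled boxes of all models keeps a single most confident box per detected cell.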
The difference between the method of the invention and a conventional single-model Faster RCNN is that the present invention trains multiple Faster RCNN models at multiple resolutions and screens the final prediction boxes by non-maximum suppression, which makes the model more stable when predicting abnormal cells of different sizes and therefore improves the accuracy over a traditional Faster RCNN. To verify the validity of the proposed fusion method, an experiment was designed: using the same 7000 training cervical cytology images with lesion annotations, each single Faster RCNN model was trained to convergence according to the training method described in the present invention, and the sensitivity and specificity of each single model were calculated separately; then the results of the multiple models were fused by the proposed multi-resolution prediction box screening method, the sensitivity and specificity of the fusion model were calculated, and the two were compared. In the experiment, the proposed multi-resolution prediction box screening method improved on each single Faster RCNN model: sensitivity improved by 10.5% on average and specificity by 5.6%.
The present invention also provides a processing device for cervical cytology image features, comprising an image input module, an image pre-processing module, an image feature extraction module and an image feature processing module;
The image input module prepares N-times magnified cervical cytology images and the annotation boxes of abnormal cells in the images as training data, N being an integer in the range of 10 to 40;
The image pre-processing module enhances the cervical cytology image data of the training data obtained by the image input module and feeds it into the region proposal network, obtaining region proposal boxes and a cervical cytology image feature map;
The image feature extraction module, in the cervical cytology image feature map obtained by the image pre-processing module, selects the features corresponding to each region proposal box as input and obtains a pooled feature map through a grid pooling layer; the pooled feature map is then fed into the classification network to obtain the class probabilities of the region and the offset between the prediction box and the proposal box;
The image feature processing module separately calculates the loss of the region proposal network in the image pre-processing module and the loss of the classification network in the image feature extraction module, sums them to obtain the final loss function L, and optimizes L by back propagation so that the final loss function reaches a minimum, obtaining a converged Faster RCNN model; the compression resolution R is then changed to obtain multiple converged Faster RCNN models, and the prediction boxes of the multiple Faster RCNN models are screened by non-maximum suppression.
In the image input module, each annotation box includes the abscissa and ordinate of the top-left corner of the box, the width and height of the box, and the class of the box; the classes of the boxes include high-grade squamous intraepithelial lesion, low-grade squamous intraepithelial lesion, atypical squamous cell, squamous cell carcinoma, etc.;
The data enhancement method in the image pre-processing module comprises the following specific steps:
1. flipping the image and annotation boxes left-right;
2. flipping the image and annotation boxes upside-down;
3. applying a random brightness change to the image.
The grid pooling layer in the image feature extraction module is computed as follows:
1. dividing the input feature map into a k x k grid;
2. averaging the feature values within each grid cell;
3. obtaining a k x k pooled feature map, where k is an integer selected from 5 to 50, preferably 6, 7, 8, 9, 10, 11, 12 or 13, more preferably 7.
The base network of the region proposal network in the image pre-processing module and of the classification network in the image feature extraction module is a classical classification network such as VGG, ResNet or Inception;
The present invention preferably uses ResNet as the base network of the classification network and the region proposal network. During training, the classification network and the region proposal network have identical base network structures and therefore share parameters;
The loss of the region proposal network in the image feature processing module is specifically calculated as follows:
(5-1) calculating the Center Loss of the fully connected layer features of the base network;
The specific calculation formula of the Center Loss is:
L_C = (1/2) * Σ_{i=1}^{m} || x_i − c_{y_i} ||²
where L_C is the Center Loss to be calculated, m is the total number of features of the fully connected layer, x_i is the feature value at position i, and c_{y_i} is the feature center of the class y_i; the number of feature centers equals the number of classes;
(5-2) calculating the classification loss (Cross Entropy) of the region proposal network output;
(5-3) calculating the distance loss (Smooth L1 Loss) between the region proposals and the annotation boxes;
(5-4) adding the values of the above three steps to obtain the loss of the region proposal network;
The loss of the classification network in the image feature processing module is specifically calculated as follows:
(5-5) calculating the classification loss (Cross Entropy) of the classification network output;
(5-6) calculating the distance loss (Smooth L1 Loss) between the offsets predicted by the classification network and the annotation boxes;
(5-7) adding the above two values to obtain the classification network loss;
The specific calculation formula of the Cross Entropy in steps (5-2) and (5-5) is:
L = − Σ_c y_c log ŷ_c
where y is the one-hot encoding of the class and ŷ is the actual output of the fully connected layer;
The Smooth L1 Loss in steps (5-3) and (5-6) is calculated as:
smooth_L1(x) = 0.5 x² if |x| < 1, otherwise |x| − 0.5
where x is the difference between the offset output by the network and the target offset;
Brief description of the drawings
Fig. 1 shows an input cervical cytology image with annotation boxes and prediction boxes in a specific embodiment of the present invention.
Fig. 2 is a structural diagram of training a single Faster RCNN in a specific embodiment of the present invention.
Fig. 3 is a schematic diagram of screening multi-model prediction boxes in the present invention.
Detailed description of the embodiments
For a further understanding of the present invention, the detection method for abnormal cells in cervical cytology images provided by the invention is described below in detail with reference to specific embodiments, but the present invention is not limited thereto. Non-essential modifications and adaptations made by those skilled in the art under the core guiding concept of the present invention still fall within the protection scope of the present invention.
Embodiment 1. A processing method for cervical cytology image features, comprising:
(1) preparing 40-times magnified cervical cytology images and the annotation boxes of abnormal cells in the images as training data; each annotation box includes the abscissa and ordinate of the top-left corner of the box, the width and height of the box, and the class of the box; the classes of the boxes include high-grade squamous intraepithelial lesion, low-grade squamous intraepithelial lesion, atypical squamous cell, squamous cell carcinoma, etc.;
(2) compressing the training data obtained in step (1) to resolution R, enhancing the cervical cytology image data and feeding it into a region proposal network with ResNet as the base network, obtaining region proposal boxes and a cervical cytology image feature map;
The specific steps of the data enhancement method are:
(2-1) flipping the image and annotation boxes left-right;
(2-2) flipping the image and annotation boxes upside-down;
(2-3) applying a random brightness change to the image.
(3) in the cervical cytology image feature map obtained in step (2), selecting the features corresponding to each region proposal box as input and obtaining a pooled feature map through a grid pooling layer, computed as follows:
(3-1) dividing the input feature map into a 7 x 7 grid;
(3-2) averaging the feature values within each grid cell;
(3-3) obtaining a 7 x 7 pooled feature map.
(4) feeding the pooled feature map into the classification network to obtain the class probabilities of the region and the offset between the prediction box and the proposal box;
(5) separately calculating the loss of the region proposal network in step (2) and the loss of the classification network in step (4), and summing them to obtain the final loss function L; the loss of the region proposal network is specifically calculated as follows:
(5-1) calculating the Center Loss of the fully connected layer features of the base network using the formula:
L_C = (1/2) * Σ_{i=1}^{m} || x_i − c_{y_i} ||²
where L_C is the Center Loss to be calculated, m is the total number of features of the fully connected layer, x_i is the feature value at position i, and c_{y_i} is the feature center of the class y_i; the number of feature centers equals the number of classes;
(5-2) calculating the classification loss of the region proposal network output using the Cross Entropy formula;
(5-3) calculating the distance loss between the region proposals and the annotation boxes using the Smooth L1 formula;
(5-4) adding the values of the above three steps to obtain the loss of the region proposal network;
The classification network loss is specifically calculated as follows:
(5-5) calculating the classification loss (Cross Entropy) of the classification network output;
(5-6) calculating the distance loss (Smooth L1 Loss) between the offsets predicted by the classification network and the annotation boxes;
(5-7) adding the above two values to obtain the classification network loss;
The specific calculation formula of the Cross Entropy in steps (5-2) and (5-5) is:
L = − Σ_c y_c log ŷ_c
where y is the one-hot encoding of the class and ŷ is the actual output of the fully connected layer;
The Smooth L1 Loss in steps (5-3) and (5-6) is calculated as:
smooth_L1(x) = 0.5 x² if |x| < 1, otherwise |x| − 0.5
where x is the difference between the offset output by the network and the target offset;
(6) optimizing L by back propagation so that the final loss function reaches a minimum, obtaining a converged Faster RCNN model;
(7) changing the compression resolution R in step (2), letting R be 512, 1024 and 2048 respectively, and repeating steps (2) to (6) to obtain multiple converged Faster RCNN models; the prediction boxes of the multiple Faster RCNN models are screened by non-maximum suppression, retaining the prediction boxes with high confidence:
(7-1) initializing the results set S to empty and setting the set of all prediction boxes to S';
(7-2) sorting all prediction boxes by confidence from high to low;
(7-3) selecting the prediction box B with the current highest confidence and moving it from S' into S;
(7-4) selecting the prediction boxes in S' whose region overlap with B exceeds 0.6 and deleting them from S';
(7-5) repeating (7-3) to (7-4) until no prediction box remains in S'; S then contains the retained prediction boxes;
(8) compressing unannotated images to the resolution R of step (2) and feeding them into the region proposal network, which outputs proposal regions that may contain abnormal cells together with the feature map corresponding to each region; feeding the feature map through the grid pooling layer and then into the classification network to obtain the probability of each class and the offset between the final prediction box and the proposal box; taking the class with the maximum predicted probability as the final predicted class, computing the position of the final prediction box from the proposal box and the predicted offset, and screening the prediction boxes of the multiple models by non-maximum suppression to obtain the final prediction result.
Embodiment 2. A processing method for cervical cytology image features, comprising:
(1) preparing 20-times magnified cervical cytology images and the annotation boxes of abnormal cells in the images as training data; each annotation box includes the abscissa and ordinate of the top-left corner of the box, the width and height of the box, and the class of the box; the classes of the boxes include high-grade squamous intraepithelial lesion, low-grade squamous intraepithelial lesion, atypical squamous cell and squamous cell carcinoma;
(2) compressing the training data obtained in step (1) to resolution R, enhancing the cervical cytology image data and feeding it into a region proposal network with ResNet as the base network, obtaining region proposal boxes and a cervical cytology image feature map;
The specific steps of the data enhancement method are:
(2-1) flipping the image and annotation boxes left-right;
(2-2) flipping the image and annotation boxes upside-down;
(2-3) applying a random brightness change to the image.
(3) in the cervical cytology image feature map obtained in step (2), selecting the features corresponding to each region proposal box as input and obtaining a pooled feature map through a grid pooling layer, computed as follows:
(3-1) dividing the input feature map into a 10 x 10 grid;
(3-2) averaging the feature values within each grid cell;
(3-3) obtaining a 10 x 10 pooled feature map.
(4) by pond characteristic pattern input sorter network obtain the region class probability and prediction block with nominate frame it is inclined
It moves;
(5) loss of the loss of region referral networks and sorter network in step (4) in step (2), summation are calculated separately
Final loss function L is obtained, the loss of region referral networks method particularly includes:
(5-1) calculates the Center Loss of the full articulamentum feature of basic network using following formula:
Wherein, LCCenter Loss, m to be calculated represent the feature sum of full articulamentum, xiIndicate that i institute in position is right
The characteristic value answered,Indicate classification yiThe quantity of the eigencenter of representative, the eigencenter is identical as class categories number;
(5-2) calculate the classification loss of the region proposal network output using the Cross Entropy formula;
(5-3) calculate the distance loss between the region proposals and the annotation boxes using the Smooth L1 formula:

smoothL1(x) = 0.5x², if |x| < 1; |x| − 0.5, otherwise

wherein x is the difference between the offset output by the network and the target offset;
(5-4) add the values of the above three steps to obtain the loss of the region proposal network.
The loss of the classification network is specifically calculated as follows:
(5-5) calculate the Cross Entropy classification loss of the classification network output;
(5-6) calculate the Smooth L1 distance loss between the offsets predicted by the classification network and the annotation boxes;
(5-7) add the two resulting values to obtain the classification network loss.
The Cross Entropy and Smooth L1 Loss here are calculated in the same way as in (5-2) and (5-3).
The specific calculation formula for the Cross Entropy in steps (5-2) and (5-5) is:

L_CE = − Σ_i y_i · log(ŷ_i)

wherein y is the one-hot encoding of the class and ŷ is the actual output of the fully connected layer;
The Smooth L1 Loss in steps (5-3) and (5-6) is calculated as:

smoothL1(x) = 0.5x², if |x| < 1; |x| − 0.5, otherwise

wherein x is the difference between the offset output by the network and the target offset;
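The two loss terms shared by the region proposal network and the classification network can be sketched as below; ŷ is assumed to already be a softmax probability vector, and the small eps added inside the log is a numerical-safety assumption, not part of the patent's formula:

```python
import numpy as np

def cross_entropy(y_onehot, y_pred, eps=1e-12):
    """L = -sum_i y_i * log(yhat_i) for a one-hot label vector."""
    return -np.sum(y_onehot * np.log(y_pred + eps))

def smooth_l1(x):
    """Piecewise loss: 0.5*x^2 where |x| < 1, else |x| - 0.5, summed."""
    x = np.asarray(x, dtype=np.float64)
    return np.sum(np.where(np.abs(x) < 1, 0.5 * x ** 2, np.abs(x) - 0.5))
```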
(6) L is optimized using the back-propagation method so that the final loss function reaches a minimum, obtaining a converged Faster RCNN model;
(7) The compression resolution R in step (2) is changed, letting R be 500, 1000 and 2000 respectively, and steps (2) to (6) are repeated to obtain multiple converged Faster RCNN models; the prediction boxes of the multiple Faster RCNN models are screened using the non-maximum suppression method, and the prediction boxes with high confidence are retained:
(7-1) the result set S is initially set to empty, and the set of all prediction boxes is set to S';
(7-2) all prediction boxes are sorted by confidence from high to low;
(7-3) the prediction box B with the current highest confidence is selected and moved from S' into S;
(7-4) the prediction boxes in S' whose area overlap with B exceeds 0.5 are selected and deleted from S';
(7-5) steps (7-3) to (7-4) are repeated until no prediction boxes remain in S'; S is then the set of retained prediction boxes.
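Steps (7-1) to (7-5) describe the standard non-maximum suppression loop. A sketch with boxes as (x, y, w, h) tuples; the patent only says the "area coincidence" must exceed 0.5, so interpreting that overlap as intersection-over-union is an assumption here:

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, threshold=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    while order:
        best = order.pop(0)              # (7-3): highest remaining confidence
        kept.append(best)
        # (7-4): drop boxes whose overlap with the kept box is too large
        order = [i for i in order if iou(boxes[best], boxes[i]) <= threshold]
    return kept
```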
(8) The unannotated images are compressed to the resolution R of step (2) and input into the region proposal network, which outputs proposal regions that may contain abnormal cells together with the feature maps corresponding to those regions; the feature maps are passed through the grid pooling layer and then input into the classification network to obtain the probability of each class and the offset between the final prediction box and the proposal box; the class with the maximum predicted probability is taken as the final predicted class, and the position of the final prediction box is calculated from the proposal box position and the offset; the prediction boxes of the multiple models are screened using non-maximum suppression to obtain the final prediction result.
Claims (10)
1. A processing method of cervical cytology image features, comprising:
(1) preparing cervical cytology images magnified N times and the annotation boxes of abnormal cells in the images as training data, wherein N is an integer in the range of 10 to 40;
(2) compressing the training data obtained in step (1) to resolution R, and inputting the cervical cytology images into a region proposal network after data enhancement to obtain region proposal boxes and a cervical cytology image feature map;
(3) in the cervical cytology image feature map obtained in step (2), selecting the features corresponding to the region proposal boxes as input, and obtaining a pooled feature map through a grid pooling layer;
(4) inputting the pooled feature map into a classification network to obtain the class probabilities of the region and the offset between the prediction box and the proposal box;
(5) calculating separately the loss of the region proposal network in step (2) and the loss of the classification network in step (4), and summing them to obtain the final loss function L;
(6) optimizing L using the back-propagation method so that the final loss function reaches a minimum, obtaining a converged Faster RCNN model;
(7) changing the compression resolution R in step (2) and repeating steps (2) to (6) to obtain multiple converged Faster RCNN models; screening the prediction boxes of the multiple Faster RCNN models using the non-maximum suppression method and retaining the prediction boxes with high confidence, wherein R is an integer of 500 to 2500.
2. The processing method of cervical cytology image features according to claim 1, wherein in step (1), each annotation box includes the abscissa and ordinate of the box's upper-left corner, the width and height of the box, and the class corresponding to the box.
3. The processing method of cervical cytology image features according to claim 2, wherein the class corresponding to the box is selected from one or more of high-grade squamous intraepithelial lesion, low-grade squamous intraepithelial lesion, atypical squamous cells and squamous cell carcinoma.
4. The processing method of cervical cytology image features according to claim 1, wherein the specific steps of the data enhancement method in step (2) are as follows:
(2-1) flipping the image and its annotation boxes left-right;
(2-2) flipping the image and its annotation boxes up-down;
(2-3) applying a random brightness change to the image.
5. The processing method of cervical cytology image features according to claim 1, wherein the grid pooling layer in step (3) is specifically calculated as follows:
(3-1) dividing the input feature map into a k × k grid;
(3-2) averaging the feature values within each grid cell;
(3-3) obtaining a k × k pooled feature map.
6. The processing method of cervical cytology image features according to claim 1, wherein the base network of the region proposal network in step (2) and of the classification network in step (4) is a classical classification network, such as VGG, ResNet or Inception.
7. The processing method of cervical cytology image features according to claim 1, wherein the loss of the region proposal network in step (5) is specifically calculated as follows:
(5-1) calculating the Center Loss of the fully connected layer features of the base network;
(5-2) calculating the Cross Entropy classification loss of the region proposal network output;
(5-3) calculating the Smooth L1 distance loss between the region proposals and the annotation boxes;
(5-4) adding the values of the above three steps to obtain the loss of the region proposal network.
8. The processing method of cervical cytology image features according to claim 1, wherein the classification network loss in step (5) is specifically calculated as follows:
(5-5) calculating the Cross Entropy classification loss of the classification network output;
(5-6) calculating the Smooth L1 distance loss between the offsets predicted by the classification network and the annotation boxes;
(5-7) adding the two resulting values to obtain the classification network loss.
9. The processing method of cervical cytology image features according to claim 1, wherein the screening of the multiple Faster RCNN prediction boxes using non-maximum suppression in step (7) is specifically as follows:
(7-1) the result set S is initially set to empty, and the set of all prediction boxes is set to S';
(7-2) all prediction boxes are sorted by confidence from high to low;
(7-3) the prediction box B with the current highest confidence is selected and moved from S' into S;
(7-4) the prediction boxes in S' whose area overlap with B exceeds th are selected and deleted from S', wherein th is a decimal of 0.5 to 0.8;
(7-5) steps (7-3) to (7-4) are repeated until no prediction boxes remain in S'; S is then the set of retained prediction boxes.
10. A processing device of cervical cytology image features, comprising an image input module, an image pre-processing module, an image feature extraction module and an image feature processing module;
wherein the image input module prepares cervical cytology images magnified N times and the annotation boxes of abnormal cells in the images as training data, the N being an integer in the range of 10 to 40;
the image pre-processing module compresses the training data obtained by the image input module to resolution R, and inputs the cervical cytology images into a region proposal network after data enhancement to obtain region proposal boxes and a cervical cytology image feature map;
the image feature extraction module, in the cervical cytology image feature map obtained by the image pre-processing module, selects the features corresponding to the region proposal boxes as input, obtains a pooled feature map through a grid pooling layer, and then inputs the pooled feature map into a classification network to obtain the class probabilities of the region and the offset between the prediction box and the proposal box;
the image feature processing module calculates separately the loss of the region proposal network in the image pre-processing module and the loss of the classification network in the image feature extraction module, and sums them to obtain the final loss function L;
it optimizes L using the back-propagation method so that the final loss function reaches a minimum, obtaining a converged Faster RCNN model, and changes the compression resolution R to obtain multiple converged Faster RCNN models, whose prediction boxes are screened using the non-maximum suppression method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810768766.2A CN109034221A (en) | 2018-07-13 | 2018-07-13 | A kind of processing method and its device of cervical cytology characteristics of image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109034221A true CN109034221A (en) | 2018-12-18 |
Family
ID=64641353
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810768766.2A Pending CN109034221A (en) | 2018-07-13 | 2018-07-13 | A kind of processing method and its device of cervical cytology characteristics of image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109034221A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106096654A (en) * | 2016-06-13 | 2016-11-09 | 南京信息工程大学 | A kind of cell atypia automatic grading method tactful based on degree of depth study and combination |
CN107368859A (en) * | 2017-07-18 | 2017-11-21 | 北京华信佳音医疗科技发展有限责任公司 | Training method, verification method and the lesion pattern recognition device of lesion identification model |
CN108090906A (en) * | 2018-01-30 | 2018-05-29 | 浙江大学 | A kind of uterine neck image processing method and device based on region nomination |
CN108257129A (en) * | 2018-01-30 | 2018-07-06 | 浙江大学 | The recognition methods of cervical biopsy region aids and device based on multi-modal detection network |
Non-Patent Citations (3)
Title |
---|
HAO WANG ET AL.: "Face r-cnn", 《ARXIV:1706.01061V1 [CS.CV]》 * |
XU MEIQUAN ET AL.: "Cervical cytology intelligent diagnosis based on object detection technology", 《COMPUTER SCIENCE》 * |
SU SONGZHI ET AL.: "Pedestrian Detection: Theory and Practice", 31 March 2016, Xiamen University Press * |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110009599A (en) * | 2019-02-01 | 2019-07-12 | 腾讯科技(深圳)有限公司 | Liver masses detection method, device, equipment and storage medium |
CN110189293A (en) * | 2019-04-15 | 2019-08-30 | 广州锟元方青医疗科技有限公司 | Cell image processing method, device, storage medium and computer equipment |
CN110263656A (en) * | 2019-05-24 | 2019-09-20 | 南方科技大学 | A kind of cancer cell identification methods, devices and systems |
CN110263656B (en) * | 2019-05-24 | 2023-09-29 | 南方科技大学 | Cancer cell identification method, device and system |
CN110443781A (en) * | 2019-06-27 | 2019-11-12 | 杭州智团信息技术有限公司 | A kind of the AI assistant diagnosis system and method for liver number pathology |
CN110765855A (en) * | 2019-09-12 | 2020-02-07 | 杭州迪英加科技有限公司 | Pathological image processing method and system |
CN110648322B (en) * | 2019-09-25 | 2023-08-15 | 杭州智团信息技术有限公司 | Cervical abnormal cell detection method and system |
CN110648322A (en) * | 2019-09-25 | 2020-01-03 | 杭州智团信息技术有限公司 | Method and system for detecting abnormal cervical cells |
CN110826576A (en) * | 2019-10-10 | 2020-02-21 | 浙江大学 | Cervical lesion prediction system based on multi-mode feature level fusion |
CN110853021A (en) * | 2019-11-13 | 2020-02-28 | 江苏迪赛特医疗科技有限公司 | Construction of detection classification model of pathological squamous epithelial cells |
CN111383267A (en) * | 2020-03-03 | 2020-07-07 | 重庆金山医疗技术研究院有限公司 | Target relocation method, device and storage medium |
CN111383267B (en) * | 2020-03-03 | 2024-04-05 | 重庆金山医疗技术研究院有限公司 | Target repositioning method, device and storage medium |
CN113139540A (en) * | 2021-04-02 | 2021-07-20 | 北京邮电大学 | Backboard detection method and equipment |
CN113409923A (en) * | 2021-05-25 | 2021-09-17 | 济南大学 | Error correction method and system in bone marrow image individual cell automatic marking |
CN113409923B (en) * | 2021-05-25 | 2022-03-04 | 济南大学 | Error correction method and system in bone marrow image individual cell automatic marking |
CN113269190A (en) * | 2021-07-21 | 2021-08-17 | 中国平安人寿保险股份有限公司 | Data classification method and device based on artificial intelligence, computer equipment and medium |
CN113781455A (en) * | 2021-09-15 | 2021-12-10 | 平安科技(深圳)有限公司 | Cervical cell image abnormality detection method, device, equipment and medium |
CN113781455B (en) * | 2021-09-15 | 2023-12-26 | 平安科技(深圳)有限公司 | Cervical cell image anomaly detection method, device, equipment and medium |
CN114187277A (en) * | 2021-12-14 | 2022-03-15 | 赛维森(广州)医疗科技服务有限公司 | Deep learning-based thyroid cytology multi-type cell detection method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109034221A (en) | A kind of processing method and its device of cervical cytology characteristics of image | |
Feng et al. | CPFNet: Context pyramid fusion network for medical image segmentation | |
Yu et al. | Liver vessels segmentation based on 3d residual U-NET | |
CN110310281A (en) | Lung neoplasm detection and dividing method in a kind of Virtual Medical based on Mask-RCNN deep learning | |
CN108257135A (en) | The assistant diagnosis system of medical image features is understood based on deep learning method | |
Li et al. | Lung nodule detection with deep learning in 3D thoracic MR images | |
CN109063710A (en) | Based on the pyramidal 3D CNN nasopharyngeal carcinoma dividing method of Analysis On Multi-scale Features | |
CN103699904B (en) | The image computer auxiliary judgment method of multisequencing nuclear magnetic resonance image | |
CN109363698A (en) | A kind of method and device of breast image sign identification | |
CN106611413A (en) | Image segmentation method and system | |
Bicakci et al. | Metabolic imaging based sub-classification of lung cancer | |
CN109363697A (en) | A kind of method and device of breast image lesion identification | |
Lai et al. | DBT masses automatic segmentation using U-net neural networks | |
CN109447088A (en) | A kind of method and device of breast image identification | |
CN109727227A (en) | A kind of diagnosis of thyroid illness method based on SPECT image | |
CN109461144A (en) | A kind of method and device of breast image identification | |
Wang et al. | Multi-view fusion segmentation for brain glioma on CT images | |
CN114445328A (en) | Medical image brain tumor detection method and system based on improved Faster R-CNN | |
CN106709925A (en) | Method and device for locating vertebral block in medical image | |
Cao et al. | 3D convolutional neural networks fusion model for lung nodule detection onclinical CT scans | |
Liu et al. | CAM‐Wnet: An effective solution for accurate pulmonary embolism segmentation | |
Lu et al. | AugMS-Net: Augmented multiscale network for small cervical tumor segmentation from MRI volumes | |
Yektaei et al. | Diagnosis of lung cancer using multiscale convolutional neural network | |
Song et al. | Liver segmentation based on SKFCM and improved GrowCut for CT images | |
Wei et al. | An improved image segmentation algorithm ct superpixel grid using active contour |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20181218 |