CN108416379A - Method and apparatus for processing cervical cell images - Google Patents

Method and apparatus for processing cervical cell images

Info

Publication number
CN108416379A
CN108416379A (application CN201810169962.8A)
Authority
CN
China
Prior art keywords
image
image example
frame
deep learning
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810169962.8A
Other languages
Chinese (zh)
Inventor
万涛
徐通
丁鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Feather Care Cabbage Information Technology Co Ltd
Original Assignee
Beijing Feather Care Cabbage Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Feather Care Cabbage Information Technology Co Ltd
Priority to CN201810169962.8A
Publication of CN108416379A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Abstract

The present invention provides a method and apparatus for processing cervical cell images. The method of the present invention includes: Step S1: acquiring an image example and then obtaining calibration information corresponding to the image example; Step S2: generating a sample set from the image example and the image example calibration information; Step S3: training a deep learning network and optimizing its parameters on the sample set to obtain a convolutional neural network model; Step S4: testing a test image with the convolutional neural network model to detect target regions and their classification results. The method and apparatus of the present invention use deep learning to classify images automatically, solving technical problems such as the poor consistency and low accuracy of manual slide-reading classification, improving work efficiency, and reducing the probability of missed diagnoses.

Description

Method and apparatus for processing cervical cell images
Technical field
The present invention relates to the field of computer technology, and more particularly to a method and apparatus for processing cervical cell images.
Background technology
Liquid-based cytology, i.e. the ThinPrep Cytologic Test (TCT), uses a thin-layer liquid-based cell preparation to detect cervical cells and classify them cytologically. TCT fundamentally overcomes the technical problems of conventional exfoliated-cell smears, namely a high false-negative rate, a high cell-loss rate, and poor smear quality, raising the detection rate of cervical carcinoma to 95% or more; it is currently one of the more advanced cervical cancer cytology screening techniques in the world.
In a TCT examination, the specific procedure is as follows: the clinician collects a cervical cell sample with a dedicated collector and then rinses the collector in a vial containing cell preservation fluid, thereby retaining most of the cell sample. The patient's sample vial is sent to the laboratory, where the sample is dispersed and filtered by a fully automatic cell processor to reduce traces of blood, mucus, and inflammatory tissue, yielding a thin, intact layer of cells ready for further microscopic examination. A pathologist then examines the cervical smear under the microscope and classifies the cell sample according to the Bethesda (TBS) reporting system.
The current Bethesda reporting system for cervical cytology uses descriptive classification with the following four categories: no intraepithelial lesion or malignancy, various tumours of extra-uterine origin, glandular cell abnormality, and squamous cell abnormality. Squamous cell abnormality is further divided into four subcategories: low-grade squamous intraepithelial lesion, high-grade squamous intraepithelial lesion, squamous cell carcinoma, and atypical squamous cells. "Atypical squamous cells" are in turn divided into two classes: atypical squamous cells of undetermined significance (ASC-US) and atypical squamous cells that cannot exclude a high-grade squamous intraepithelial lesion. ASC-US itself covers two situations: cells tending towards a reactive change and cells with no other distinguishing features. The ASC-US category illustrates that pathologists often cannot make a fully deterministic classification decision when faced with a cytology or histology sample.
Technically, the existing scheme has at least the following defects:
(1) The current Cervical Cytology Bethesda reporting standard uses descriptive language for the different types and grades of squamous intraepithelial lesion; it usually requires comparing the sizes of the cell and the nucleus and observing the degree and uniformity of staining, and these indicators are often hard to quantify and depend on the subjective judgment of the slide reader;
(2) Extracting information from a cervical smear relies mainly on the experience of the slide reader and therefore suffers from strong subjectivity, poor consistency, and low accuracy;
(3) Manual slide reading requires observing the cervical cell sample under a microscope, constantly adjusting the objective magnification, and scanning the slide step by step, which is time-consuming, laborious, and prone to missed diagnoses.
Summary of the invention
In view of this, embodiments of the present invention provide a method and apparatus for processing cervical cell images, which can solve the technical problems of poor consistency, low accuracy, and insufficient precision in manual slide-reading classification.
To achieve the above object, according to a first aspect of the embodiments of the present invention, a method for processing cervical cell images is provided, including: Step S1: acquiring an image example and then obtaining calibration information corresponding to the image example; Step S2: generating a sample set from the image example and the image example calibration information; Step S3: training a deep learning network and optimizing its parameters on the sample set to obtain a convolutional neural network model; Step S4: testing a test image with the convolutional neural network model to detect target regions.
Optionally, the training of the deep learning network uses a feed-forward convolutional network to generate a fixed-size set of bounding boxes and scores for the object classes present in those boxes, and then uses a non-maximum suppression step to produce the final detections.
Optionally, the training of the deep learning network satisfies the following features: 6 convolutional feature layers are added to the end of a truncated base network; for a feature layer of size m × n with p channels, 3 × 3 × p convolution kernels are applied, where p, m, and n are natural numbers; and for each of the k boxes at a given position, C class scores and 4 offsets relative to the original default box are computed, producing (C + 4) × k × m × n outputs for an m × n feature map.
To achieve the above object, according to a second aspect of the embodiments of the present invention, an apparatus for processing cervical cell images is provided, including: a calibration module, configured to acquire an image example and then obtain calibration information corresponding to the image example; a sampling module, configured to generate a sample set from the image example and the image example calibration information; a deep learning module, configured to train a deep learning network and optimize its parameters on the sample set to obtain a convolutional neural network model; and a test module, configured to test a test image with the convolutional neural network model, detecting target regions and obtaining classification results.
Optionally, the deep learning module uses a feed-forward convolutional network to generate a fixed-size set of bounding boxes and scores for the object classes present in those boxes, and then uses a non-maximum suppression step to produce the final detections.
Optionally, in the deep learning module the deep learning network satisfies the following features: the end of a truncated base network has 6 convolutional feature layers; for a feature layer of size m × n with p channels, 3 × 3 × p convolution kernels are applied, where p, m, and n are natural numbers; and for each of the k boxes at a given position, C class scores and 4 offsets relative to the original default box are computed, producing (C + 4) × k × m × n outputs for an m × n feature map.
To achieve the above object, according to a third aspect of the embodiments of the present invention, an electronic device is provided, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of the present invention.
To achieve the above object, according to a fourth aspect of the embodiments of the present invention, a computer-readable medium is provided, on which a computer program is stored, the program, when executed by a processor, implementing the method of the present invention for processing cervical cell images.
Any one of the above embodiments of the invention uses deep learning to classify images automatically, improving work efficiency and reducing the probability of errors.
Further effects of the above optional modes are explained below in conjunction with the specific embodiments.
Description of the drawings
The accompanying drawings are provided for a better understanding of the present invention and do not constitute an undue limitation of the invention. In the drawings:
Fig. 1 is a schematic diagram of the main steps of a method for processing cervical cell images according to an embodiment of the present invention;
Fig. 2(a) is an example of coarse calibration according to an embodiment of the present invention, and Fig. 2(b) is an example of fine calibration according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the main modules of an apparatus for processing cervical cell images according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the hardware structure of an electronic device for implementing the method for processing cervical cell images according to an embodiment of the present invention.
Detailed description of the embodiments
Exemplary embodiments of the present invention are explained below with reference to the accompanying drawings, including various details of the embodiments of the present invention to aid understanding; they should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art should recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present invention. Likewise, descriptions of well-known functions and structures are omitted from the following description for clarity and conciseness.
As noted above, the prior art relies mainly on manual slide reading by experienced doctors; in particular, the doctor observes the tissue sample under a microscope, constantly adjusting the objective magnification and scanning the slide step by step, which is time-consuming and laborious. In addition, the diagnosis of a histological section relies mainly on the experience of the slide reader and suffers from strong subjectivity, poor consistency, low precision, and poor accuracy (for example, small metastases in the lymph-node metastasis of breast cancer are difficult to find and easily missed).
The present invention aims to provide an artificial-intelligence processing method and apparatus for cervical cell images. Using a machine learning model based on supervised deep learning with convolutional neural networks, a computer-aided diagnosis algorithm is developed to detect and classify the squamous cells in TCT liquid-based smears, so as to automatically identify atypical squamous cells and squamous intraepithelial lesions. This solves the problems of strong subjectivity, poor consistency, low accuracy, and poor precision in manual processing in the prior art, and has the advantages of being objective and fair, reproducible, highly accurate, highly precise, and time- and labour-saving.
Fig. 1 is a schematic diagram of the main steps of the method for processing cervical cell images according to an embodiment of the present invention. As shown in Fig. 1, the method of this embodiment mainly includes the following steps S1 to S4.
Step S1: acquire an image example and then obtain the calibration information corresponding to the image example. It should be noted that the image example of the embodiment of the present invention is the digital image corresponding to a cervical liquid-based cytology example smear.
The specific process of "acquiring an image example" may be: using a whole-slide digital scanner, the physical cervical liquid-based cytology example smear is converted into a high-resolution digital image; the digital image is easier to store and to analyse subsequently by computer.
" obtaining the corresponding image example calibration information of image example " can refer to specifically the image processing apparatus of the present invention Record the calibration content about image example of doctor's input.The corresponding descriptive mark in certain positions of the calibration content, that is, image Label.Calibration can be divided into thick calibration and thin calibration.
(1) Coarse calibration. Coarse calibration takes place before the physical smear is converted into a digital image. Coarse calibration means that, under the microscope, the doctor outlines the focal (lesion) areas on the physical cervical liquid-based cytology slide with a marker pen, yielding coarse calibration contour lines. Coarse calibration serves to roughly lock in the marked regions so that targets can be found quickly in the fine calibration phase.
(2) Fine calibration. Fine calibration takes place after the physical smear has been converted into a digital image. Fine calibration means that the doctor performs finer calibration on a computer inside the coarse calibration contour lines of the digital image. Specifically, the calibration consists of delineating the diseased cells in the focal area and, at the same time, attaching a label to each delineated cell indicating the cell type. The labels may include: atypical squamous cells of undetermined significance (ASC-US), atypical squamous cells that cannot exclude a high-grade squamous intraepithelial lesion (atypical squamous cells-cannot exclude HSIL, ASC-H), low-grade squamous intraepithelial lesion (low-grade squamous intraepithelial lesion, LSIL), high-grade squamous intraepithelial lesion (high-grade squamous intraepithelial lesion, HSIL), and atrophic cells.
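By way of illustration only, the label set above can be encoded as class indices before sample generation. The mapping below is a sketch: the index values, the short key names, and the inclusion of a "normal" class for positive samples are assumptions made for the sketch, not something prescribed by this embodiment.

```python
# Illustrative encoding of the fine-calibration labels as class indices.
# Key names and index values are assumptions made for this sketch only.
CELL_CLASSES = {
    "NORMAL": 0,    # normal squamous epithelial cell (positive samples in step S2)
    "ASC-US": 1,    # atypical squamous cells of undetermined significance
    "ASC-H": 2,     # atypical squamous cells, cannot exclude HSIL
    "LSIL": 3,      # low-grade squamous intraepithelial lesion
    "HSIL": 4,      # high-grade squamous intraepithelial lesion
    "ATROPHIC": 5,  # atrophic cells
}
```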
For example, Fig. 2(a) and Fig. 2(b) show a coarse calibration result and a fine calibration result respectively. The bold black marks in Fig. 2(a) are the coarse calibration result, circling the focal area. Fig. 2(b) is an enlargement of the central part of Fig. 2(a). In Fig. 2(b) fine calibration has additionally been performed: cells have been circled, and the cell type as well as the length and area of the calibrated region have been added.
Step S2: generate a sample set from the image example and the image example calibration information.
First, a sample specification needs to be preset, for example an image patch of 500 × 500 pixels; image patches are then sampled from the image example. The sample set is divided into two parts: one part consists of image patch samples containing normal squamous epithelial cells, i.e. positive samples; the other part consists of image patch samples whose label attributes obtained in step S1 indicate various kinds of diseased squamous cells, i.e. negative samples. It should be noted that the diseased cells in these negative samples include individually calibrated cells as well as calibrated cell clusters. A calibrated cell cluster can be treated as a whole and added to the sample set as a single sample.
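By way of illustration only, the patch sampling described above might look like the following sketch in Python. The annotation format (a bounding box plus a label per calibrated cell or cell cluster), the centring of each 500 × 500 patch on the annotation, and the function name are assumptions made for the sketch.

```python
PATCH = 500  # sample specification from the text: 500 x 500 pixel image patches


def sample_patches(slide, annotations):
    """Cut one 500x500 patch per calibrated cell or cell cluster.

    `slide` is the digitized smear as an HxWx3 array; `annotations` is an
    assumed list of dicts such as {"box": (x, y, w, h), "label": "HSIL"}.
    A calibrated cell cluster is kept whole, i.e. one patch and one label.
    Assumes the slide is larger than a single patch.
    """
    samples = []
    h, w = slide.shape[:2]
    for ann in annotations:
        x, y, bw, bh = ann["box"]
        cx, cy = x + bw // 2, y + bh // 2                  # centre of the annotation
        x0 = int(min(max(cx - PATCH // 2, 0), w - PATCH))  # keep the patch inside the slide
        y0 = int(min(max(cy - PATCH // 2, 0), h - PATCH))
        samples.append((slide[y0:y0 + PATCH, x0:x0 + PATCH], ann["label"]))
    return samples
```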
Step S3: train the deep learning network and optimize its parameters on the sample set to obtain a convolutional neural network model.
Specifically, the sample set obtained in step S2 is first randomly divided, according to a preset ratio, into a training sample set and a validation sample set. For example, 80% of the data may be used as training samples and 20% as validation samples. The training samples are used to train the model, and the validation samples are used to tune the model parameters. The parameters of the deep neural network are then learned from the training sample set, yielding a recognition model for atypical squamous cells and squamous intraepithelial lesions, and the model is subsequently optimized against the validation sample set.
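By way of illustration only, the 80/20 random split described above can be sketched as follows; the seed and function name are arbitrary choices for the sketch.

```python
import random


def split_samples(samples, train_ratio=0.8, seed=0):
    """Randomly split the sample set into training and validation subsets
    using the 80%/20% proportion given as an example in the text."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```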
The classification model for atypical squamous cells and squamous intraepithelial lesions is based on a deep learning convolutional neural network. In the detection stage, the input to the neural network is the calibrated cells or cell clusters of different types; each cell or cell cluster is stored separately in a rectangular box and serves as a ground-truth sample for training the network model. Meanwhile, rectangular boxes of different aspect ratios are used on feature maps of different scales to carry out the convolutional computation of the neural network. For the data contained in each rectangular box, the shape offsets and the probability of belonging to each cytological class are computed. In the training stage, these rectangular boxes of various aspect ratios are matched against the ground-truth data; the boxes that match the ground-truth data well are kept as positive samples of the training set, while boxes with a low matching degree are used as negative samples of the training set. The design of the specific deep neural network model is as follows.
The deep learning network is trained using a feed-forward convolutional network that produces a fixed-size set of bounding boxes and scores for the object classes present in those boxes, followed by a non-maximum suppression step that produces the final detections. The classical VGG-16 network can be used as the base, to which auxiliary structure is added to enable detection on multi-scale feature maps. For example, 6 convolutional feature layers can be added to the end of the truncated base network. These layers decrease in size progressively and provide predictions at multiple scales. The detection convolution model is different for each feature layer. Each added feature layer (or, optionally, an existing feature layer of the base network) can use a set of convolutional filters to produce a fixed set of predictions. For a feature layer of size m × n with p channels, 3 × 3 × p convolution kernels are applied to produce either a class score or a coordinate offset relative to a default box; at each of the m × n positions where the kernel is applied, one output value is produced. The bounding-box offset outputs are measured relative to a default box, and the default box position is in turn relative to the feature map. A set of default bounding boxes is associated with each feature map cell of the network. The default boxes tile the feature map in a convolutional manner, so that the position of each box instance relative to its corresponding cell is fixed. In each feature map cell, we predict the offsets relative to the default box shapes in the cell, as well as the per-class scores for the instance in each box. Specifically, for each of the k boxes at a given position, C class scores and 4 offsets relative to the original default box are computed. This requires a total of (C + 4) × k filters at each position of the feature map, producing (C + 4) × k × m × n outputs for an m × n feature map. The default boxes are similar to the anchor boxes used in Faster R-CNN, but the present invention applies them to feature maps of different resolutions. Using different default box shapes on multiple feature maps effectively discretizes the space of possible output box shapes.
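By way of illustration only, one such prediction head can be sketched in PyTorch as below: a single 3 × 3 convolution over a p-channel m × n feature map that emits (C + 4) × k values per location, i.e. C class scores and 4 offsets for each of the k default boxes. The framework, the layer sizes, and the class and box counts in the example call are assumptions for the sketch, not values fixed by this embodiment.

```python
import torch
import torch.nn as nn


class DetectionHead(nn.Module):
    """One SSD-style prediction head: a 3x3 x p convolution over an m x n,
    p-channel feature map producing (C + 4) * k outputs per location."""

    def __init__(self, in_channels: int, num_classes: int, num_default_boxes: int):
        super().__init__()
        self.k = num_default_boxes
        self.C = num_classes
        self.conv = nn.Conv2d(in_channels, (num_classes + 4) * num_default_boxes,
                              kernel_size=3, padding=1)

    def forward(self, feature_map: torch.Tensor) -> torch.Tensor:
        out = self.conv(feature_map)                 # [N, (C+4)*k, m, n]
        n, _, rows, cols = out.shape
        out = out.permute(0, 2, 3, 1).reshape(n, rows * cols * self.k, self.C + 4)
        return out                                   # per default box: C scores + 4 offsets


# Example: a 512-channel 8x8 feature map, 6 cell classes, 4 default boxes per cell
head = DetectionHead(512, num_classes=6, num_default_boxes=4)
predictions = head(torch.randn(1, 512, 8, 8))        # shape [1, 8*8*4, 10]
```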
In the training stage, the correspondence between the ground-truth labels and the default boxes needs to be established. For each ground-truth box, a selection is made from the default boxes, which vary in position, aspect ratio, and scale. Initially, each ground-truth box is matched to the default box with the best jaccard overlap. This is the matching strategy used by the original MultiBox, and it guarantees that every ground-truth box has one matched default box. Unlike MultiBox, however, default boxes are also matched to any ground truth whose jaccard overlap is higher than a threshold (0.5). Adding these matches simplifies the learning problem: the network can predict high confidences for multiple overlapping default boxes, rather than being required to pick only the one with the maximum overlap.
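By way of illustration only, the matching rule just described (the best default box per ground-truth box, plus every default box whose jaccard overlap exceeds 0.5) can be sketched as follows. The box representation and function names are assumptions for the sketch.

```python
def jaccard(box_a, box_b):
    """Jaccard overlap (intersection over union) of two (x1, y1, x2, y2) boxes."""
    iw = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    ih = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def match_default_boxes(default_boxes, truth_boxes, threshold=0.5):
    """Return indices of default boxes treated as positives: the single
    best-overlapping default box for each ground-truth box, plus every default
    box whose overlap with some ground-truth box exceeds the 0.5 threshold."""
    positives = set()
    for truth in truth_boxes:
        overlaps = [jaccard(d, truth) for d in default_boxes]
        positives.add(max(range(len(overlaps)), key=overlaps.__getitem__))
        positives.update(i for i, o in enumerate(overlaps) if o > threshold)
    return positives
```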
Most convolutional networks reduce the size of the feature maps as the layers get deeper. This not only reduces computation and memory consumption but also provides a degree of translation and scale invariance. To handle objects of different sizes, one approach is to convert the image into different sizes, process each size separately, and then combine the results. However, by making predictions from feature maps of several different layers within a single network, the same effect can be obtained while also sharing parameters across all object scales. Using feature maps from lower layers can improve semantic segmentation quality, because lower layers capture finer details of the input objects. Meanwhile, adding global context down-sampled from high-level feature maps can help smooth the segmentation results. In experiments, low-level and high-level feature maps can be used simultaneously for detection prediction; for example, the two exemplary feature maps used in this framework are 8 × 8 and 4 × 4. It should be noted that in practice a skilled person can flexibly use more feature maps at relatively small computational cost.
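By way of illustration only, default boxes on feature maps of several resolutions, such as the 8 × 8 and 4 × 4 maps mentioned above, can be generated as in the sketch below. The scale values and aspect ratios are placeholders; the text does not fix them.

```python
import itertools


def default_boxes(feature_sizes=(8, 4), scales=(0.3, 0.6), aspect_ratios=(1.0, 2.0, 0.5)):
    """Generate centre-form default boxes (cx, cy, w, h), normalized to [0, 1],
    on feature maps of several resolutions. One box per aspect ratio is placed
    at the centre of every feature map cell."""
    boxes = []
    for size, scale in zip(feature_sizes, scales):
        for row, col in itertools.product(range(size), repeat=2):
            cx, cy = (col + 0.5) / size, (row + 0.5) / size
            for ar in aspect_ratios:
                boxes.append((cx, cy, scale * ar ** 0.5, scale / ar ** 0.5))
    return boxes
```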
Step S4: test the test image with the convolutional neural network model and detect the target regions and their classification results. It should be noted that the test image preferably uses brand-new image data independent of the examples, so as to ensure that the intersection of the test sample set and the training sample set is empty. An independent test sample set guarantees the accuracy of the cell classification model's test results and the robustness of the resulting classification model. Step S4 may specifically include the following steps S41 and S42.
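Before turning to steps S41 and S42, and by way of illustration only, the non-maximum suppression step that produces the final detections (mentioned in the network description above) can be sketched as follows; the IoU threshold value is an assumption, as the text does not specify one.

```python
def box_iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0


def non_max_suppression(boxes, scores, iou_threshold=0.45):
    """Keep the highest-scoring boxes and drop lower-scoring boxes that overlap
    a kept box by more than the threshold; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if box_iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep
```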
Step S41: divide the test image into multiple test image patches, then input the test image patches into the cell classification model to obtain the cell classification result for each test image patch. The "cell classification result for each test image patch" refers to the probability values of the corresponding cell classes for that patch. Within one patch, each identified abnormal cell sample has probability values for the different classes it may belong to. If several classes of abnormal cells appear in the same patch, the class with the maximum probability value is taken as the class of that cell or cell cluster.
Step S42: according to the cell classification result of each test image patch, use a voting strategy to obtain the classification result of the test image. Specifically: based on the cell classification results of all the test image patches obtained in step S41, a vote-and-majority rule is applied; the numbers of cells or cell clusters of each class in all the test image patches of the test image are counted, and the class to which the largest number of target cells or cell clusters belong is taken as the final classification of the test image.
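By way of illustration only, steps S41 and S42 can be sketched together as below: each detection in a test image patch takes the class with the highest probability, and the whole image takes the majority class over all patches. The input format (one array of per-class probabilities per patch) is an assumption for the sketch.

```python
from collections import Counter

import numpy as np


def classify_test_image(block_probabilities):
    """Aggregate per-patch results into a whole-image class (steps S41/S42).

    `block_probabilities` is an assumed list with one (num_detections,
    num_classes) array per test image patch. Each detected cell or cluster is
    assigned its highest-probability class; the image-level result is the
    class with the most such cells or clusters across all patches.
    """
    votes = Counter()
    for probs in block_probabilities:
        probs = np.asarray(probs)
        if probs.size == 0:
            continue                                  # patch with no detections
        votes.update(np.argmax(probs, axis=1).tolist())
    return votes.most_common(1)[0][0] if votes else None
```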
The method of the embodiment of the present invention uses deep learning to classify images automatically, overcoming the poor consistency and low accuracy of manual slide reading, improving the working efficiency of doctors, and at the same time reducing the probability of errors caused by insufficient experience.
Fig. 3 is a schematic diagram of the main modules of the apparatus for processing cervical cell images according to an embodiment of the present invention. As shown in Fig. 3, the apparatus of this embodiment mainly includes a calibration module 301, a sampling module 302, a deep learning module 303, and a test module 304.
The calibration module 301 is configured to acquire an image example and then obtain the calibration information corresponding to the image example. It should be noted that the image example of the embodiment of the present invention is the digital image corresponding to a cervical liquid-based cytology example smear. The specific process of "acquiring an image example" may be: using a whole-slide digital scanner, the physical cervical liquid-based cytology example smear is converted into a high-resolution digital image, which is easier to store and to analyse subsequently by computer. "Obtaining the calibration information corresponding to the image example" may specifically mean that the image processing apparatus of the present invention records the calibration content about the image example entered by the doctor; the calibration content consists of descriptive labels corresponding to certain positions in the image. Calibration can be divided into coarse calibration and fine calibration. For details, refer to the description of the processing method of the present invention above.
The sampling module 302 is configured to generate the sample set from the image example and the image example calibration information. In the sampling module 302 a sample specification needs to be preset, for example an image patch of 500 × 500 pixels, and image patches are then sampled from the image example. The sample set is divided into two parts: one part consists of image patch samples containing normal squamous epithelial cells, i.e. positive samples; the other part consists of image patch samples whose label attributes obtained in step S1 indicate various kinds of diseased squamous cells, i.e. negative samples. It should be noted that the diseased cells in these negative samples include individually calibrated cells as well as calibrated cell clusters; a calibrated cell cluster can be treated as a whole and added to the sample set as a single sample.
The deep learning module 303 is configured to train the deep learning network and optimize its parameters on the sample set to obtain the convolutional neural network model. The deep learning module 303 may use a feed-forward convolutional network to generate a fixed-size set of bounding boxes and scores for the object classes present in those boxes, and then use a non-maximum suppression step to produce the final detections. In the deep learning module 303, the deep learning network satisfies the following features: the end of the truncated base network has 6 convolutional feature layers; for a feature layer of size m × n with p channels, 3 × 3 × p convolution kernels are applied, where p, m, and n are natural numbers; and for each of the k boxes at a given position, C class scores and 4 offsets relative to the original default box are computed, producing (C + 4) × k × m × n outputs for an m × n feature map. For details, refer to the description of the processing method of the present invention above.
The test module 304 is configured to test the test image with the convolutional neural network model and detect the target regions and their classification results. For details, refer to the description of the processing method of the present invention above.
According to embodiments of the invention, the present invention also provides an electronic device and a readable storage medium.
The electronic device of the present invention includes: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the processor, and the instructions are executed by the at least one processor so that the at least one processor carries out the method of processing cervical cell images provided by the present invention.
The computer-readable storage medium of the present invention stores computer instructions which cause a computer to execute the method of processing cervical cell images provided by the present invention.
Referring now to Fig. 4, a schematic structural diagram of an electronic device 400 suitable for implementing the embodiments of the present application is shown. The terminal shown in Fig. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 4, the terminal 400 includes a central processing unit (CPU) 401, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage section 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data required for the operation of the system 400. The CPU 401, the ROM 402, and the RAM 403 are connected to one another via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
The following components are connected to the I/O interface 405: an input section 406 including a keyboard, a mouse, and the like; an output section 407 including, for example, a cathode ray tube (CRT), a liquid crystal display (LCD), and a loudspeaker; a storage section 408 including a hard disk and the like; and a communication section 409 including a network interface card such as a LAN card or a modem. The communication section 409 performs communication processing via a network such as the Internet. A drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411, such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 410 as needed, so that a computer program read from it can be installed into the storage section 408 as needed.
In particular, according to the embodiments disclosed by the present invention, the process described above with reference to the flowchart may be implemented as a computer software program. For example, the embodiments disclosed by the present invention include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 409, and/or installed from the removable medium 411. When the computer program is executed by the central processing unit (CPU) 401, the above-described functions defined in the system of the present application are executed.
It should be noted that the computer-readable medium shown in the present application may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fibre, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in connection with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and may send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, RF, or any suitable combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functions, and operations that can be implemented by the systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations the functions marked in the blocks may occur in an order different from that shown in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in a block diagram or flowchart, and combinations of blocks in a block diagram or flowchart, can be implemented with a dedicated hardware-based system that performs the specified functions or operations, or with a combination of dedicated hardware and computer instructions.
The modules involved in the embodiments of the present application may be implemented in software or in hardware. The described modules may also be arranged in a processor; for example, a processor may be described as including a sending module, an acquisition module, a determination module, and a first processing module. The names of these modules do not in some cases constitute a limitation on the modules themselves; for example, the sending module may also be described as "a module that sends a picture acquisition request to the connected server side".
As another aspect, the present invention also provides a computer-readable medium, which may be included in the device described in the above embodiments, or may exist alone without being assembled into the device. The computer-readable medium carries one or more programs which, when executed by a processor, implement the image processing method proposed by the present invention.
The above specific embodiments do not constitute a limitation on the scope of protection of the present invention. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions may occur depending on design requirements and other factors. Any modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (8)

1. A method for processing cervical cell images, characterized by including:
Step S1: acquiring an image example, and then obtaining calibration information corresponding to the image example;
Step S2: generating a sample set from the image example and the image example calibration information;
Step S3: training a deep learning network and optimizing its parameters on the sample set to obtain a convolutional neural network model;
Step S4: testing a test image with the convolutional neural network model to detect target regions and their classification results.
2. The method according to claim 1, characterized in that the training of the deep learning network uses a feed-forward convolutional network to generate a fixed-size set of bounding boxes and scores for the object classes present in those boxes, and then uses a non-maximum suppression step to produce the final detections.
3. The method according to claim 1, characterized in that the training of the deep learning network satisfies the following features:
6 convolutional feature layers are added to the end of a truncated base network;
for a feature layer of size m × n with p channels, 3 × 3 × p convolution kernels are applied, where p, m, and n are natural numbers;
for each of the k boxes at a given position, C class scores and 4 offsets relative to the original default box are computed, producing (C + 4) × k × m × n outputs for an m × n feature map.
4. An apparatus for processing cervical cell images, characterized by including:
a calibration module, configured to acquire an image example and then obtain calibration information corresponding to the image example;
a sampling module, configured to generate a sample set from the image example and the image example calibration information;
a deep learning module, configured to train a deep learning network and optimize its parameters on the sample set to obtain a convolutional neural network model;
a test module, configured to test a test image with the convolutional neural network model to detect target regions and their classification results.
5. The apparatus according to claim 4, characterized in that the deep learning module uses a feed-forward convolutional network to generate a fixed-size set of bounding boxes and scores for the object classes present in those boxes, and then uses a non-maximum suppression step to produce the final detections.
6. The apparatus according to claim 4, characterized in that, in the deep learning module, the deep learning network satisfies the following features:
the end of a truncated base network has 6 convolutional feature layers;
for a feature layer of size m × n with p channels, 3 × 3 × p convolution kernels are applied, where p, m, and n are natural numbers;
for each of the k boxes at a given position, C class scores and 4 offsets relative to the original default box are computed, producing (C + 4) × k × m × n outputs for an m × n feature map.
7. An electronic device, characterized by including:
one or more processors;
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1 to 3.
8. A computer-readable medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the method according to any one of claims 1 to 3 is implemented.
CN201810169962.8A 2018-03-01 2018-03-01 Method and apparatus for handling cervical cell image Pending CN108416379A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810169962.8A CN108416379A (en) 2018-03-01 2018-03-01 Method and apparatus for handling cervical cell image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810169962.8A CN108416379A (en) 2018-03-01 2018-03-01 Method and apparatus for handling cervical cell image

Publications (1)

Publication Number Publication Date
CN108416379A true CN108416379A (en) 2018-08-17

Family

ID=63129579

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810169962.8A Pending CN108416379A (en) 2018-03-01 2018-03-01 Method and apparatus for handling cervical cell image

Country Status (1)

Country Link
CN (1) CN108416379A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780466A (en) * 2016-12-21 2017-05-31 广西师范大学 A kind of cervical cell image-recognizing method based on convolutional neural networks
CN106991673A (en) * 2017-05-18 2017-07-28 深思考人工智能机器人科技(北京)有限公司 A kind of cervical cell image rapid classification recognition methods of interpretation and system
CN107590797A (en) * 2017-07-26 2018-01-16 浙江工业大学 A kind of CT images pulmonary nodule detection method based on three-dimensional residual error neutral net
CN107492099A (en) * 2017-08-28 2017-12-19 京东方科技集团股份有限公司 Medical image analysis method, medical image analysis system and storage medium
CN107680088A (en) * 2017-09-30 2018-02-09 百度在线网络技术(北京)有限公司 Method and apparatus for analyzing medical image

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190641A (en) * 2018-08-29 2019-01-11 哈尔滨理工大学 A kind of cervical cell feature extracting method based on LDA and transfer learning
CN110909756A (en) * 2018-09-18 2020-03-24 苏宁 Convolutional neural network model training method and device for medical image recognition
CN112868026A (en) * 2018-10-18 2021-05-28 莱卡微系统Cms有限责任公司 Inference microscope
CN110009050A (en) * 2019-04-10 2019-07-12 杭州智团信息技术有限公司 A kind of classification method and device of cell
CN110598561A (en) * 2019-08-15 2019-12-20 平安科技(深圳)有限公司 Cell slide analysis method and device based on machine learning and storage medium
CN110689518A (en) * 2019-08-15 2020-01-14 平安科技(深圳)有限公司 Cervical cell image screening method and device, computer equipment and storage medium
CN110705632B (en) * 2019-09-27 2022-03-22 北京工业大学 Automatic labeling method for fluorescent karyotype of antinuclear antibody
CN110705632A (en) * 2019-09-27 2020-01-17 北京工业大学 Automatic labeling method for fluorescent karyotype of antinuclear antibody
CN111709485A (en) * 2020-06-19 2020-09-25 腾讯科技(深圳)有限公司 Medical image processing method and device and computer equipment
CN111709485B (en) * 2020-06-19 2023-10-31 腾讯科技(深圳)有限公司 Medical image processing method, device and computer equipment
CN113256628A (en) * 2021-07-05 2021-08-13 深圳科亚医疗科技有限公司 Apparatus and method for analysis management of cervical images, apparatus and storage medium
CN114418995A (en) * 2022-01-19 2022-04-29 生态环境部长江流域生态环境监督管理局生态环境监测与科学研究中心 Cascade algae cell statistical method based on microscope image
CN116364229A (en) * 2023-04-20 2023-06-30 北京透彻未来科技有限公司 Intelligent visual pathological report system for cervical cancer anterior lesion coning specimen
CN116364229B (en) * 2023-04-20 2023-11-10 北京透彻未来科技有限公司 Intelligent visual pathological report system for cervical cancer anterior lesion coning specimen

Similar Documents

Publication Publication Date Title
CN108416379A (en) Method and apparatus for handling cervical cell image
US11288795B2 (en) Assessing risk of breast cancer recurrence
US11842556B2 (en) Image analysis method, apparatus, program, and learned deep learning algorithm
Gertych et al. Machine learning approaches to analyze histological images of tissues from radical prostatectomies
CN109791693B (en) Digital pathology system and related workflow for providing visualized whole-slice image analysis
US20200388033A1 (en) System and method for automatic labeling of pathology images
Dov et al. Weakly supervised instance learning for thyroid malignancy prediction from whole slide cytopathology images
US20190042826A1 (en) Automatic nuclei segmentation in histopathology images
Mi et al. Deep learning-based multi-class classification of breast digital pathology images
CA2965431A1 (en) Computational pathology systems and methods for early-stage cancer prognosis
US11631171B2 (en) Automated detection and annotation of prostate cancer on histopathology slides
WO2020142461A1 (en) Translation of images of stained biological material
CN107832838A (en) The method and apparatus for evaluating cell smear sample satisfaction
Habtemariam et al. Cervix type and cervical cancer classification system using deep learning techniques
Ström et al. Pathologist-level grading of prostate biopsies with artificial intelligence
US20190087693A1 (en) Predicting recurrence in early stage non-small cell lung cancer (nsclc) using spatial arrangement of clusters of tumor infiltrating lymphocytes and cancer nuclei
US20220335736A1 (en) Systems and methods for automatically classifying cell types in medical images
WO2022150554A1 (en) Quantification of conditions on biomedical images across staining modalities using a multi-task deep learning framework
Yang et al. The devil is in the details: a small-lesion sensitive weakly supervised learning framework for prostate cancer detection and grading
WO2021164320A1 (en) Computer vision based catheter feature acquisition method and apparatus and intelligent microscope
WO2014006421A1 (en) Identification of mitotic cells within a tumor region
Choschzick et al. Deep learning for the standardized classification of Ki-67 in vulva carcinoma: A feasibility study
Kabir et al. The utility of a deep learning-based approach in Her-2/neu assessment in breast cancer
KR20230063147A (en) Efficient Lightweight CNN and Ensemble Machine Learning Classification of Prostate Tissue Using Multilevel Feature Analysis Method and System
Mete et al. Automatic identification of angiogenesis in double stained images of liver tissue

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180817