CN108510482A - Cervical carcinoma detection method, device, equipment and medium based on colposcopic images - Google Patents


Info

Publication number
CN108510482A
CN108510482A (application CN201810241901.8A; granted as CN108510482B)
Authority
CN
China
Prior art keywords
image
cervical carcinoma
colposcope
convolutional neural networks
Prior art date
Legal status
Granted
Application number
CN201810241901.8A
Other languages
Chinese (zh)
Other versions
CN108510482B (en)
Inventor
姚书忠
刘文彬
崔淑芬
张丽贞
Current Assignee
XIAMEN TOSS-XINGTEL GROUP Co Ltd
Original Assignee
XIAMEN TOSS-XINGTEL GROUP Co Ltd
Priority date
Filing date
Publication date
Application filed by XIAMEN TOSS-XINGTEL GROUP Co Ltd
Priority to CN201810241901.8A
Publication of CN108510482A
Application granted
Publication of CN108510482B
Status: Expired - Fee Related
Anticipated expiration


Classifications

    • G06T7/0012 Biomedical image inspection (under G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
    • G06T7/12 Edge-based segmentation (under G06T7/10 Segmentation; Edge detection)
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30096 Tumor; Lesion (under G06T2207/30004 Biomedical image processing)

Abstract

The invention discloses a cervical carcinoma detection method, device, terminal device and computer-readable storage medium based on colposcopic images. Within a cervical carcinoma detection model based on a dual-path convolutional neural network, the method locates and extracts the cervical-os region in an acquired colposcopic image to generate an ROI image containing the cervical lesion region; extracts segmentation edges from the ROI image with the dual-path convolutional neural network to generate a segmentation image; and grades the segmentation image with a classification convolutional neural network to output the lesion grade of the cervical carcinoma. Detection results are thus obtained quickly and accurately, helping inexperienced doctors judge lesion sites rapidly, find atypical lesion sites, and assess lesion degree and sampling position, which promotes the timely discovery of cervical carcinoma and precancerous lesions.

Description

Cervical carcinoma detection method, device, equipment and medium based on colposcopic images
Technical field
The present invention relates to the field of medical image processing, and more particularly to a cervical carcinoma detection method, device, terminal device and computer-readable storage medium based on colposcopic images.
Background art
Image segmentation algorithms have long existed in traditional digital image processing, chiefly threshold-based methods and their many variants. Image detection algorithms were later derived on top of segmentation, but the two operate at different granularities: detection mainly works at the image-region level, while segmentation must process every pixel and therefore treats the image more finely. In the deep learning field, segmentation algorithms based on convolutional networks are also increasingly common, the most widely used being the fully convolutional network (FCN).
Existing cervical pre-cancer screening follows the "three-step" protocol: liquid-based cytology testing (LCT) and human papillomavirus (HPV) detection first; abnormal cases then undergo cervical and vaginal biopsy under colposcopy, followed where necessary by surgical treatment. Cervical biopsy under colposcopy is an indispensable step in diagnosing cervical lesions. When a doctor performs colposcopy and carries out screening and biopsy directly from the image, the result depends on the doctor's training period and skill level: the doctor's image-interpretation ability directly affects the biopsy outcome. Judging and segmenting cervical carcinoma directly from digital images is the simplest and most direct method, but the result is limited by the physician's professional ability and has not been popularized. The most effective method at present is cytological diagnosis, which has gained computer-algorithm assistance and applies some conventional segmentation methods; constrained by those methods, however, the screening result still needs to be re-judged by a doctor, which is cumbersome.
In the course of implementing the embodiments of the present invention, the inventors found that the clinical methods of cervical carcinoma screening are mainly LCT/HPV, colposcopy and histopathology. Cytological screening has incorporated traditional image segmentation methods and achieved a certain effect, but it still misdiagnoses at times; colposcopy is also a common examination, but its accuracy depends on the doctor's experience level and training period, which directly affects the diagnostic result.
In digital image processing, segmentation is a commonly used algorithm, but the methods mentioned above each have drawbacks. Threshold segmentation is simple in principle but easily disturbed by the image's own intensity distribution and by noise; a threshold obtained from the grey-level histogram alone cannot produce a satisfactory segmentation and is strongly affected by illumination. Colposcopy requires additional fill light, which can seriously degrade the segmentation result.
FCN segmentation based on convolutional neural networks substantially improves this situation: it handles uneven illumination well and yields smoother segmentation. Its effect on small objects, however, is poor. Although deeper networks fit the sample distribution better, the deepening of the network layers makes the loss of small-object information ever more serious.
Summary of the invention
In view of the above problems, the purpose of the present invention is to provide a cervical carcinoma detection method, device, terminal device and computer-readable storage medium based on colposcopic images, so as to obtain cervical carcinoma detection results quickly and accurately, help inexperienced doctors judge lesion sites rapidly, find atypical lesion sites, and assess lesion degree and sampling position, thereby promoting the timely discovery of cervical carcinoma and precancerous lesions.
In a first aspect, an embodiment of the present invention provides a cervical carcinoma detection method based on colposcopic images, comprising the following steps:

In the cervical carcinoma detection model based on the dual-path convolutional neural network:

locating and extracting the cervical-os region in an acquired colposcopic image to generate an ROI image containing the cervical lesion region;

extracting segmentation edges from the ROI image with the dual-path convolutional neural network to generate a segmentation image;

grading the segmentation image with a classification convolutional neural network to output the lesion grade of the cervical carcinoma.
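The three steps above form a localize → segment → grade pipeline. A minimal sketch of that control flow follows; the function names and stub behaviour (center crop, intensity threshold, area-ratio grading) are illustrative placeholders, not the patent's models.

```python
import numpy as np

GRADES = ["normal cervix", "chronic cervicitis", "low-grade lesion",
          "high-grade lesion", "cervical carcinoma"]

def locate_roi(colposcopic_image):
    """Stub localizer: crop a central region as the cervical-os ROI."""
    h, w = colposcopic_image.shape[:2]
    return colposcopic_image[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

def segment_lesion(roi):
    """Stub segmenter: intensity threshold standing in for the
    dual-path network's heat map and edge extraction."""
    return (roi > roi.mean()).astype(np.uint8)

def classify_grade(segmentation):
    """Stub grader: map lesion-area ratio onto the five grades."""
    ratio = segmentation.mean()
    return GRADES[min(int(ratio * len(GRADES)), len(GRADES) - 1)]

def detect(colposcopic_image):
    roi = locate_roi(colposcopic_image)
    seg = segment_lesion(roi)
    return seg, classify_grade(seg)

image = np.random.default_rng(0).random((64, 64))
seg, grade = detect(image)
print(seg.shape, grade)
```

The composition mirrors the claim: each stage consumes only the previous stage's output, so each model can be trained and replaced independently.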
In a first implementation of the first aspect, extracting segmentation edges from the ROI image with the dual-path convolutional neural network to generate the segmentation image is specifically:

In the dual-path convolutional neural network:

performing convolution operations and multi-layer feature-value addition on the ROI image to obtain a heat map of the same size as the ROI image;

performing deconvolution operations on the heat map and adding the feature values of high-level and low-level features, so as to extract the edges and generate the segmentation image.
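The deconvolution step upsamples the heat map back toward image resolution. A minimal numpy illustration of the underlying operation (zero-insertion followed by convolution, i.e. a transposed convolution) is given below; the kernel and stride are illustrative, not the patent's parameters.

```python
import numpy as np

def transposed_conv2d(feature, kernel, stride=2):
    """Upsample a 2-D feature map by inserting (stride - 1) zeros
    between entries, then correlating with the kernel -- the basic
    operation behind a deconvolution layer."""
    h, w = feature.shape
    kh, kw = kernel.shape
    up = np.zeros((h * stride, w * stride))
    up[::stride, ::stride] = feature
    out = np.zeros((up.shape[0] - kh + 1, up.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(up[i:i + kh, j:j + kw] * kernel)
    return out

heat = np.ones((4, 4))            # low-resolution heat map
kernel = np.full((2, 2), 0.25)    # simple smoothing kernel
up = transposed_conv2d(heat, kernel)
print(up.shape)  # (7, 7)
```

In a real network the kernel weights are learned, and the low-level feature map of matching size would be added to `up` before the next layer, as the text describes.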
According to the first implementation, in a second implementation of the first aspect, the value of each pixel in the heat map indicates the probability that the pixel at the same position of the colposcopic image is a lesion pixel;

grading the segmentation image with the classification convolutional neural network to output the lesion grade of the cervical carcinoma is then specifically:

according to the segmentation image, performing pixel processing on each non-lesion pixel of the colposcopic image to generate a processed image;

performing multi-layer feature-dimension concatenation on the processed image with the classification convolutional neural network to obtain the lesion grade of the cervical carcinoma corresponding to the colposcopic image.
According to the second implementation, in a third implementation of the first aspect, performing pixel processing on each non-lesion pixel of the colposcopic image according to the segmentation image to generate the processed image is specifically:

according to the segmentation image, obtaining the position of each non-lesion pixel of the colposcopic image;

when a non-lesion pixel lies outside the segmentation edge of the cervical lesion region, setting the value of that non-lesion pixel to 0.
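This pixel-processing step is a masking operation: pixels outside the segmented region are zeroed so the classifier sees only the lesion. A small numpy sketch, with the mask layout chosen for illustration:

```python
import numpy as np

def mask_non_lesion(image, lesion_mask):
    """Keep pixels inside the segmented lesion region; set every
    pixel outside the segmentation edge to 0."""
    out = image.copy()
    out[lesion_mask == 0] = 0
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1                      # segmented lesion region
processed = mask_non_lesion(image, mask)
print(processed)  # only the 2x2 interior keeps its original values
```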
According to the second implementation, in a fourth implementation of the first aspect, the lesion grades of the cervical carcinoma comprise low-grade lesion, high-grade lesion, cervical carcinoma, chronic cervicitis and normal cervix.
In a fifth implementation of the first aspect, the training steps of the cervical carcinoma detection model based on the dual-path convolutional neural network comprise:

preprocessing acquired colposcopic sample images to generate a first sample;

expanding the first sample with a generative adversarial network to obtain a second sample;

locating and extracting the cervical-os region in the colposcopic images of the second sample with a deep-learning-based localization algorithm to generate training ROI images;

obtaining ROI ground-truth images in which the user has marked the lesion regions on the training ROI images;

training the dual-path convolutional neural network on the ROI ground-truth images to generate training segmentation images; wherein every layer of the dual-path convolutional neural network uses a convolution kernel of size 3, the total depth is 20 layers, and the network comprises convolutional and deconvolutional layers;

according to the training segmentation images, performing pixel processing on each non-lesion pixel of the second sample to obtain pixel-processed images;

obtaining the user's feedback labels on the pixel-processed images to generate a third sample;

training the classification convolutional neural network on the third sample to obtain the lesion grades of the cervical carcinoma; wherein every layer of the classification convolutional neural network uses a convolution kernel of size 3 and the total depth is 10 layers.
According to the fifth implementation, a sixth implementation of the first aspect further comprises:

computing a loss from the ROI ground-truth image and the training segmentation image; the loss function DLOSS is computed from x, the user-annotated segmentation-edge coordinates in the ROI ground-truth image, and y, the edge coordinates of the training segmentation image predicted by the deconvolution network.
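The exact form of DLOSS is not legible in this text; one plausible reconstruction consistent with its inputs (annotated edge coordinates x versus predicted edge coordinates y) is a mean squared coordinate distance. This is an assumption for illustration, not the patent's definition:

```python
import numpy as np

def edge_loss(x, y):
    """Assumed DLOSS: mean squared distance between annotated edge
    coordinates x and predicted edge coordinates y (paired points).
    Reconstructed form -- the patent's exact formula may differ."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.mean(np.sum((x - y) ** 2, axis=1)))

annotated = [(10, 12), (11, 13), (12, 14)]
predicted = [(10, 12), (12, 13), (12, 16)]
print(edge_loss(annotated, predicted))  # (0 + 1 + 4) / 3 = 1.666...
```

Any differentiable penalty on the x–y discrepancy would serve the same training role described here.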
In a second aspect, an embodiment of the present invention provides a cervical carcinoma detection device based on colposcopic images, comprising, within the cervical carcinoma detection model based on the dual-path convolutional neural network:

a localization and extraction module, for locating and extracting the cervical-os region in an acquired colposcopic image to generate an ROI image containing the cervical lesion region;

a segmentation-image acquisition module, for extracting segmentation edges from the ROI image with the dual-path convolutional neural network to generate a segmentation image;

a grading-result acquisition module, for grading the segmentation image with the classification convolutional neural network to output the lesion grade of the cervical carcinoma.
In a third aspect, an embodiment of the present invention provides a cervical carcinoma detection terminal device based on colposcopic images, comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, the processor implementing any of the above cervical carcinoma detection methods based on colposcopic images when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium comprising a stored computer program, wherein, when the computer program runs, the device where the computer-readable storage medium resides is controlled to execute any of the above cervical carcinoma detection methods based on colposcopic images.
The cervical carcinoma detection method, device, terminal device and computer-readable storage medium based on colposcopic images provided by the embodiments of the present invention have the following beneficial effects:

When an acquired colposcopic image is input to the cervical carcinoma detection model based on the dual-path convolutional neural network for detection, the model first locates and extracts the cervical-os region of the image to generate an ROI image containing the cervical lesion region, then extracts segmentation edges from the ROI image with the dual-path convolutional neural network to generate a segmentation image, and finally grades the segmentation image with the classification convolutional neural network to output the lesion grade of the cervical carcinoma. Through the dual-path convolutional neural network and multi-layer feature fusion, different neural network models are applied at different stages, small cancerous lesion regions can be segmented, and the lesion grade of cervical carcinoma is segmented and classified quickly and accurately. Doctors can operate the system after a small amount of training, the constraint of professional medical knowledge is greatly reduced, and hospitals without licensed practitioners or in remote districts can also carry out cervical carcinoma screening and diagnosis accurately. The system helps inexperienced doctors judge lesion sites quickly, find atypical lesion sites, and assess lesion degree and sampling position, promoting the timely discovery of cervical carcinoma and precancerous lesions. By identifying cervical images obtained through the colposcope with artificial intelligence, the present invention judges various cervical lesion sites promptly and accurately, guides doctors to obtain pathological tissue accurately for pathological examination, and may even substitute for traditional cytological examination, quickly finding cervical carcinoma and precancerous lesions, which has great social and medical value.
Description of the drawings
In order to more clearly illustrate the technical scheme of the present invention, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings may also be obtained from these drawings without creative effort.
Fig. 1 is a flow diagram of the cervical carcinoma detection method based on colposcopic images provided by the first embodiment of the present invention.

Fig. 2 is a schematic diagram of the colposcopic image obtained by the terminal device provided by the first embodiment of the present invention.

Fig. 3 is a schematic diagram of the ROI image containing the cervical lesion region provided by the first embodiment of the present invention.

Fig. 4 is a schematic diagram of the obtained heat map provided by the first embodiment of the present invention.

Fig. 5 is a schematic diagram of the obtained segmentation image provided by the first embodiment of the present invention.

Fig. 6 is a schematic diagram of the dual-path convolutional neural network provided by the first embodiment of the present invention.

Fig. 7 is a schematic diagram of the classification convolutional neural network provided by the first embodiment of the present invention.

Fig. 8 is a flow diagram of the training steps of the cervical carcinoma detection model based on the dual-path convolutional neural network provided by the second embodiment of the present invention.

Fig. 9 is a structural diagram of the cervical carcinoma detection device based on colposcopic images provided by the third embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below in combination with the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, the first embodiment of the present invention provides a cervical carcinoma detection method based on colposcopic images, which can be executed by a terminal device and comprises the following steps:

In the cervical carcinoma detection model based on the dual-path convolutional neural network:

S11: locating and extracting the cervical-os region in the acquired colposcopic image to generate an ROI image containing the cervical lesion region.
In the embodiments of the present invention, the terminal device can be a computing device such as a desktop computer, notebook, palmtop computer, smart tablet or cloud server. In particular, when the terminal device is a cloud server, for any colposcopic picture taken by a terminal in a hospital, the cloud server can return the segmentation picture and lesion grade within 2 s.
In the embodiments of the present invention, in the cervical carcinoma detection model based on the dual-path convolutional neural network, referring to Fig. 2, an acquired colposcopic picture is not guaranteed to contain only the cervical os; in most cases other body parts are also present. To eliminate the influence of these other parts, the position of the cervical os must first be located and the corresponding ROI image extracted as the picture for subsequent processing. The ROI image, i.e. the region of interest, designates the target to be read in; it reduces processing time, increases precision, and facilitates image processing. Referring to Fig. 3, the terminal device obtains the colposcopic image sent by the user and locates and extracts the cervical-os region with a deep-learning-based localization algorithm to generate the ROI image containing the cervical lesion region; this algorithm can locate the cervical region very robustly. It should be noted that the lesion region lies at the cervical os.
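Once the localizer has produced a bounding region for the cervical os, extracting the ROI is a crop. A minimal sketch follows; the `(top, left, height, width)` box format is an assumption, since the text does not specify the localizer's output:

```python
import numpy as np

def extract_roi(image, box):
    """Crop the ROI returned by the cervical-os localizer.
    `box` is an assumed (top, left, height, width) tuple."""
    top, left, h, w = box
    return image[top:top + h, left:left + w]

image = np.random.default_rng(1).random((480, 640, 3))  # stand-in frame
roi = extract_roi(image, (100, 200, 128, 128))
print(roi.shape)  # (128, 128, 3)
```

Because only the cropped region is passed downstream, the segmentation and classification networks never see the irrelevant body parts mentioned above.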
S12: extracting segmentation edges from the ROI image with the dual-path convolutional neural network to generate a segmentation image.
In the embodiments of the present invention, specifically, in the dual-path convolutional neural network: referring to Fig. 4, the terminal device performs convolution operations and multi-layer feature-value addition on the ROI image to obtain a heat map of the same size as the ROI image; referring to Fig. 5, it then performs deconvolution operations on the heat map and adds the feature values of high-level and low-level features, so as to extract the edges and generate the segmentation image. Referring to Fig. 6, where C denotes a convolutional layer and DC a deconvolutional layer, the terminal device performs convolution and multi-layer feature-value addition on the ROI image using a mechanism in which multi-layer features fuse with one another. This fusion is feature-value addition: two features of identical dimensions are added numerically, which does not change the dimensions of the feature map; the width and height of the feature map are kept consistent at every layer, and the pooling layers that would reduce the feature-map size (width and height) are abandoned. Feature-value addition means that, for two feature maps of identical dimensions, the values at the same position in the two maps are summed, preserving the width and height of the image; adding two features of the same dimensions thus yields a feature of those same dimensions. The output after a convolution operation is the feature of that layer; since the convolution does not reduce the width and height of the image, the feature dimensions are identical when features are added. The convolutional layers of the first half obtain the heat map of the lesion region of the ROI image; the width and height of the heat map are identical to those of the colposcopic image input by the user, and the value of each pixel of the heat map represents the probability that the pixel at that position of the colposcopic image is a lesion pixel. The outer edge of the heat map's lesion region is a coarse and fuzzy transitional region with no obvious edge, so the terminal device performs deconvolution operations on the heat map and adds the feature values of high-level and low-level features. As the convolutional network deepens, the features of small targets become less and less obvious and may even disappear, so the low-level features of the network are also needed: the low-level and high-level features at the same coordinates are added with weights to form a new feature map. As the features deepen, the low-level features slowly fade, but the loss of the original signal is substantially reduced; the heat map obtains a clear edge image through the deconvolution network of the second half, generating the segmentation image, in which different regions are presented in different colors. It should be noted that, since smaller convolution kernels preserve the fine details of the image, the kernel size of every layer is 3 and the padding added to the picture at each convolution operation is 2, with the weights initialized from a Xavier uniform distribution; the neural network preserves the parameters of every layer as network weights for the image convolution operation, i.e. the convolution operation convolves those weight parameters with the image. The total depth of the dual-path network is 20 layers, all layers are convolutional or deconvolutional, and the feature-map size is kept consistent.
S13: grading the segmentation image with the classification convolutional neural network to output the lesion grade of the cervical carcinoma.
In the embodiments of the present invention, the terminal device performs pixel processing on each non-lesion pixel of the colposcopic image according to the segmentation image to generate a processed image. Specifically, according to the segmentation image, the terminal device obtains the position of each non-lesion pixel of the colposcopic image; if a non-lesion pixel lies within the segmentation edge of the cervical lesion region, its value is unchanged; when a non-lesion pixel lies outside the segmentation edge of the cervical lesion region, the terminal device sets its value to 0. Referring to Fig. 7, the terminal device then performs multi-layer feature-dimension concatenation on the processed image with the classification convolutional neural network to obtain the lesion grade of the cervical carcinoma corresponding to the colposcopic image; the lesion grades comprise low-grade lesion, high-grade lesion, cervical carcinoma, chronic cervicitis and normal cervix. The classification convolutional neural network uses multi-layer feature fusion and smaller convolution kernels; its fusion method is an increase of the feature dimension, i.e. two features are spliced to form a feature with more dimensions, so concatenating two features of the same spatial size yields a feature whose channel dimension is the sum of theirs. Every layer of the classification convolutional neural network uses a convolution kernel of size 3, the weights are initialized from a Xavier uniform distribution, and the total depth is 10 layers; as shown in Fig. 7, C is a convolutional layer, P a pooling layer, and FC a fully connected layer.
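The classification network's fusion differs from the segmentation network's: instead of adding values, it splices features along the channel axis, growing the feature dimension. A numpy sketch with illustrative shapes:

```python
import numpy as np

def fuse_by_concatenation(feat_a, feat_b):
    """Classification-network fusion: splice two feature maps along
    the channel axis; spatial size is kept, channels are summed."""
    assert feat_a.shape[:2] == feat_b.shape[:2], "spatial dims must match"
    return np.concatenate([feat_a, feat_b], axis=-1)

a = np.zeros((8, 8, 16))
b = np.ones((8, 8, 16))
merged = fuse_by_concatenation(a, b)
print(merged.shape)  # (8, 8, 32)
```

Concatenation preserves both inputs intact (at the cost of more channels), whereas addition mixes them into one map of the original size; the two networks use whichever trade-off suits their task.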
In conclusion first embodiment of the invention provides a kind of cervical carcinoma detection method based on gynecatoptron image, When the gynecatoptron image of acquisition is input to the progress cervical carcinoma detection of the cervical carcinoma detection model based on two-way convolutional neural networks, In the cervical carcinoma detection model based on two-way convolutional neural networks:Opening of the cervix position is carried out to the gynecatoptron image of acquisition first Positioning and extraction, with generate include uterine neck carninomatosis hair region ROI image, then by two-way convolutional neural networks to described ROI image is split the extraction at edge, to generate cutting image, finally by classification convolutional neural networks to the cutting drawing As carrying out cancer grade separation, to export the lesion grade of cervical carcinoma, by two-way convolutional neural networks and Feature Fusion, Different neural network models is applied in different phase, in order to adapt to wisp segmentation, in each neural computing process In used multilayer feature integration technology, while remaining the feature of low layer and top layer, small cancerous lesion area can be partitioned into Domain fast and accurately Ground Split and sorts out the lesion grade of cervical carcinoma, in doctor can be carried out after a small amount of training Operation, can also be substantially reduced by the constraint of physician specialty knowledge, also may be used in the hospital of not medical practitioner or remote districts Accurately to carry out cervical carcinoma screening and diagnosis, the doctor that can help to lack experience quickly judges diseased region, finds atypia Diseased region judges lesion degree, materials position, to finding that cervical carcinoma and precancerous lesion play facilitation, this hair in time The bright uterine neck image for identifying that Via vagina mirror is obtained by artificial 
intelligence, timely and accurately judges various cervical lesions positions, instructs Doctor accurately obtains pathological tissues and carries out pathological examination, it might even be possible to substitute traditional cytolgical examination, quickly find uterine neck Cancer and precancerous lesion have prodigious society and medical value.
To facilitate understanding of the present invention, some preferred embodiments of the present invention are further described below.
Second embodiment of the invention:
Referring to Fig. 8, on the basis of the first embodiment of the present invention, the training steps of the cervical carcinoma detection model based on the dual-path convolutional neural network comprise:
S21: preprocessing the acquired colposcopic sample images to generate a first sample.
In the embodiments of the present invention, specifically, to ensure the reliability of colposcopic sample-image quality, data enhancement is performed on the original colposcopic sample images under the premise of not changing their contrast. The enhancement methods are mainly translation, flipping, and added noise: flipping rotates the picture in three directions and mirrors the original picture, and the added noise is common Gaussian noise, so as to form the preprocessed first sample.
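The enhancement methods named above can be sketched directly with numpy; the shift amount and noise scale are illustrative choices, not values from the text:

```python
import numpy as np

def augment(image, rng):
    """Contrast-preserving enhancements from the text: translation,
    three rotations plus the mirror image, and additive Gaussian
    noise. Returns the list of augmented copies."""
    out = []
    out.append(np.roll(image, shift=5, axis=1))             # translation
    out.extend(np.rot90(image, k) for k in (1, 2, 3))       # 3 rotations
    out.append(np.fliplr(image))                            # mirror image
    out.append(image + rng.normal(0.0, 0.05, image.shape))  # Gaussian noise
    return out

rng = np.random.default_rng(0)
sample = rng.random((32, 32))
augmented = augment(sample, rng)
print(len(augmented))  # 6
```

None of these operations rescales pixel intensities (the noise is zero-mean), which is how the augmentation avoids changing the sample's contrast.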
S22, performing sample expansion on the first sample according to a confrontation generation network, to obtain a second sample.
In the embodiment of the invention, since not very many patients undergo vaginoscopy, useful samples are scarce. To prevent network over-fitting caused by too few samples, sample expansion must be done before training. The terminal device performs sample expansion on the first sample according to a confrontation generation network (GAN), which can generate very lifelike samples. A GAN is a deep learning model, and one of the most promising methods of unsupervised learning on complex distributions in recent years; the model produces fairly good output through the mutual game learning of (at least) two modules in its framework: a generative model and a discriminative model.
S23, performing the positioning and extraction of the opening of the cervix on the gynecatoptron images in the second sample with a location algorithm based on deep learning, to generate training ROI images.
In the embodiment of the invention, when gynecatoptron pictures are acquired it cannot be guaranteed that only the opening of the cervix appears in the picture; in most cases other body parts appear as well. To eliminate the influence of these other positions, the position of the opening of the cervix is first located and a ROI picture is extracted as the sample picture for subsequent processing: the terminal device performs the positioning and extraction of the opening of the cervix on the gynecatoptron images in the second sample with the location algorithm based on deep learning, to generate the training ROI images.
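Once the location algorithm has predicted where the opening of the cervix lies, extracting the ROI picture amounts to cropping the predicted box. A minimal sketch follows; the `(top, left, height, width)` box format is an assumption for illustration, not specified by the patent:

```python
def crop_roi(img, box):
    """Crop the region of interest from an image (list of rows), given a
    (top, left, height, width) box predicted by the localization network."""
    top, left, h, w = box
    return [row[left:left + w] for row in img[top:top + h]]
```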
S24, obtaining the ROI-true pictures in which the user marks the disease hair regions on the training ROI images;
In the embodiment of the invention, after the training ROI images containing the disease hair region at the opening of the cervix are obtained, they are fed back to the user (i.e. a doctor). With the help of doctors with professional experience, the edge line of each disease hair region is marked out; the region surrounded by the edge line is the true disease hair region. The terminal device then obtains the ROI-true pictures in which the user has marked the disease hair regions on the training ROI images, and the coordinates of the marked real regions are saved as the training samples of the two-way convolutional neural networks.
S25, training the two-way convolutional neural networks according to the ROI-true pictures, to generate training cutting images; wherein, the convolution kernel of every layer of the two-way convolutional neural networks is 3, the total number of layers is 20, and each layer includes a convolutional layer and a deconvolution layer.
In the embodiment of the invention, the terminal device trains the two-way convolutional neural networks according to the ROI-true pictures to generate the training cutting images, using a mechanism in which multilayer features are fused with each other. Here the fusion is characteristic-value superposition: the features of two identical dimensions are added numerically, which does not change the feature map dimension. The width and height of every layer's feature map are kept consistent, and the sampling layers that would reduce the feature map size (width and height) are abandoned. Meanwhile, as the convolutional network deepens, the features of small targets become less and less obvious and may even disappear, so the low-level features of the network are also needed: low-level and high-level features at the same coordinates are superimposed with weights to form a new feature map. As the features deepen, the low-level features slowly fade, but the original degree of signal loss is substantially improved. Considering that smaller convolution kernels can keep fine details in the image, the kernel size of every layer is 3, the picture padding added at each convolution operation is 2, and the weights are initialized with a xavier uniform distribution. The total number of layers of the two-way convolutional neural networks is 20; all layers are convolutional layers and deconvolution layers, and the feature map sizes are consistent.
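The characteristic-value superposition described above, in which two feature maps of identical dimensions are added numerically without changing the map size, and same-coordinate low-level and high-level features are superimposed with weights, might be sketched as follows. The weight values are illustrative; the patent does not give them:

```python
def fuse_add(feat_a, feat_b):
    """Element-wise (digital) addition of two feature maps of identical
    width and height; the fused map keeps the same dimensions."""
    assert len(feat_a) == len(feat_b) and len(feat_a[0]) == len(feat_b[0])
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(feat_a, feat_b)]

def fuse_weighted(low, high, w_low=0.3, w_high=0.7):
    """Weighted superposition of same-coordinate low-level and high-level
    features; the weights here are assumed for illustration only."""
    return [[w_low * l + w_high * h for l, h in zip(rl, rh)]
            for rl, rh in zip(low, high)]
```

Because the fused map has the same width and height as its inputs, it can be fed to the next layer without any resampling, which is what lets the network keep small-target detail.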
In the embodiment of the invention, referring to Fig. 6, during training, in order to distinguish the boundary between normal regions and disease hair regions, two loss functions are used to score how good the model is; the total LOSS value is the sum of the two loss functions, and the smaller the LOSS, the better the model is trained. As shown in Fig. 6, a SoftMaxLoss loss function is used in the middle of the two-way convolutional neural networks (at the joint of the two networks). The convolutional layers of the first half produce a heat map of the disease hair region whose width and height are identical to those of the gynecatoptron sample image; the numerical value of each pixel of the heat map represents the probability that the pixel at the corresponding position of the gynecatoptron sample image belongs to the disease hair region. The outer edge of the heat map is the coarse and fuzzy transitional region of the disease hair region, without an obvious edge. The heat map then passes through the deconvolution network of the latter half to obtain an image with an obvious edge, namely the training cutting image, in which different regions are presented in different colors. The second loss function is connected behind the deconvolution layers and is used to judge the similarity between the training cutting image generated by the deconvolution network and the ROI-true picture; the loss is calculated from the ROI-true picture and the training cutting image. The similarity between the training cutting image generated by the deconvolution neural network and the ROI-true picture is assessed as follows:
1) Extracting the cutting edges of the training cutting image generated by the deconvolution network and of the ROI-true picture; the coordinates of the pixels forming each edge are expressed as two separate vectors.
2) Calculating the LOSS value between the two vectors according to the following formula, denoted DLOSS, wherein x indicates the cutting edge coordinates annotated by the user in the ROI-true picture and y indicates the edge coordinates of the training cutting image predicted by the deconvolution network.
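The DLOSS formula itself is not reproduced in the text above, so the following is only a hypothetical stand-in: one plausible edge-distance loss, the mean squared distance between corresponding annotated and predicted edge coordinates, would look like this (an assumption for illustration, not the patent's actual formula):

```python
def dloss(x, y):
    """Hypothetical stand-in for the DLOSS edge-similarity term: mean
    squared distance between corresponding annotated edge coordinates x
    and predicted edge coordinates y, given as (row, col) pairs."""
    assert len(x) == len(y)
    return sum((xr - yr) ** 2 + (xc - yc) ** 2
               for (xr, xc), (yr, yc) in zip(x, y)) / len(x)
```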
S26, performing pixel processing on each non-disease hair pixel in the second sample according to the training cutting images, to obtain processes pixel images.
In the embodiment of the invention, the position of each non-disease hair pixel of the gynecatoptron image is obtained according to the training cutting image. If the non-disease hair pixel lies within the cutting edge of the cervical carcinoma disease hair region, its pixel value is unchanged; if it lies outside the cutting edge, its pixel value is set to 0. The processes pixel image is formed after the processing is completed.
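The rule of step S26, keep the value of pixels inside the cutting edge and zero those outside, can be sketched with a membership predicate supplied by the segmentation result; the predicate interface is an assumed illustration, not the patent's own API:

```python
def zero_outside(img, inside):
    """Return a copy of img in which every pixel for which inside(r, c)
    is False is set to 0; pixels inside the lesion boundary keep their
    value. `inside` is a predicate derived from the cutting edge."""
    return [[p if inside(r, c) else 0 for c, p in enumerate(row)]
            for r, row in enumerate(img)]
```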
S27, obtaining the user's feedback labels on the processes pixel images, to generate a third sample.
In the embodiment of the invention, after the processing of the non-disease hair pixels is completed to form the processes pixel images, the terminal device feeds the processes pixel images back to the user (i.e. a doctor), and the medical practitioner labels the processes pixel images according to cancer grade: low-level lesion, high-level lesion, cervical carcinoma, chronic cervicitis and normal. The terminal device obtains the user's feedback labels on the processes pixel images to generate the third sample, which serves as the training sample of the classification convolutional neural networks.
S28, training the classification convolutional neural networks according to the third sample, to obtain the classified lesion grades of the cervical carcinoma; wherein, the convolution kernel of every layer of the classification convolutional neural networks is 3, and the total number of layers is 10.
In the embodiment of the invention, the terminal device trains the classification convolutional neural networks according to the third sample, to obtain the classified lesion grades of the cervical carcinoma. The classification convolutional neural networks also use a multilayer feature fusion method and smaller convolution kernels, but here the fusion method is an increase of the feature dimension: two features are spliced (concatenated) to form a feature with more dimensions. The kernel size of every layer of the classification convolutional neural networks is 3, the weights are initialized with a xavier uniform distribution, and the total number of layers is 10. As shown in Fig. 7, C is a convolutional layer, P is a sampling layer, and FC is a fully connected layer; finally a SoftMaxLoss loss function performs the loss calculation.
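The SoftMaxLoss used after the fully connected layer is the cross-entropy of a softmax over the class scores; for the five grades above it might be sketched as follows (a plain-Python illustration, not the patent's implementation):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of class scores."""
    m = max(logits)                      # subtract max for stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def softmax_loss(logits, label):
    """Cross-entropy of the softmax output against the true class index,
    i.e. the SoftMaxLoss placed after the fully connected layer."""
    return -math.log(softmax(logits)[label])
```

With five output scores (one per grade: low-level lesion, high-level lesion, cervical carcinoma, chronic cervicitis, normal), `softmax_loss(scores, true_grade)` is the per-sample training loss.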
In the embodiment of the invention, 1000 cervical carcinoma gynecatoptron pictures were used as the original training samples, 200 for each cancer grade, and 200 gynecatoptron pictures were used as the original test samples, 40 for each cancer grade; the 200 test samples are not among the 1000 training samples. After sample pre-processing and sample expansion, the training samples were expanded from 1000 to 10000, while the test samples were not expanded. On the test samples, three indexes are used to assess the segmentation accuracy: segmentation precision, over-segmentation rate and less-divided rate. The calculated segmentation precision reached 95% or more, the over-segmentation rate is less than 3%, the less-divided rate is less than 3%, and the grade classification accuracy is more than 90%. Segmentation precision and classification accuracy can be further improved, and the over-segmentation and less-divided rates further reduced, by subsequently increasing the training samples.
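The patent does not define its three assessment indexes precisely, so the following sketch uses one common set of assumed definitions: over-segmentation counts lesion pixels predicted where there is no true lesion, the less-divided (under-segmentation) rate counts missed true lesion pixels, both relative to the true lesion size, and precision is the fraction of predicted lesion pixels that are correct:

```python
def segmentation_rates(pred, true):
    """Assumed, illustrative definitions of the three indexes, computed on
    flattened binary masks (1 = lesion pixel, 0 = background)."""
    tp = sum(1 for p, t in zip(pred, true) if p and t)      # correct lesion pixels
    fp = sum(1 for p, t in zip(pred, true) if p and not t)  # over-segmented pixels
    fn = sum(1 for p, t in zip(pred, true) if t and not p)  # missed lesion pixels
    n_true, n_pred = tp + fn, tp + fp
    precision = tp / n_pred if n_pred else 1.0
    over = fp / n_true if n_true else 0.0
    under = fn / n_true if n_true else 0.0
    return precision, over, under
```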
Referring to Fig. 9, the third embodiment of the invention provides a cervical carcinoma detection device based on gynecatoptron images, which includes the following modules:
In the cervical carcinoma detection model based on two-way convolutional neural networks:
Extraction module 11 is positioned, positioning and extraction for carrying out opening of the cervix position to the gynecatoptron image of acquisition, with life At the ROI image for sending out region including uterine neck carninomatosis.
Cutting image acquisition module 12 is split edge for passing through two-way convolutional neural networks to the ROI image Extraction, to generate cutting image.
Level results acquisition module 13, for carrying out cancer grade to the cutting image by convolutional neural networks of classifying Classification, to export the lesion grade of cervical carcinoma.
In the first realization method of 3rd embodiment, the cutting image acquisition module 12 specifically includes:
In two-way convolutional neural networks:
Thermal map composes acquiring unit, and the characteristic value for carrying out convolution algorithm and multilayer feature to the ROI image is superimposed, with Obtain thermal map spectrum identical with the ROI image size.
Cutting image generation unit carries out de-convolution operation and high-level characteristic and low-level feature for being composed to the thermal map Characteristic value superposition, with extract edge generate cutting image.
The first realization method according to third embodiment, in second of realization method of 3rd embodiment, the heat The numerical value corresponding to each pixel in collection of illustrative plates is used to indicate that the pixel in the same position of the gynecatoptron image to belong to disease hair The probability of pixel.
Then the level results acquisition module 13 specifically includes:
Image generation unit is handled, for according to the cutting image, disease hair non-to each of the gynecatoptron image Pixel carries out processes pixel, to generate processing image.
Lesion grade acquiring unit, for carrying out multilayer feature to the processing image by convolutional neural networks of classifying The superposition calculation of characteristic dimension, to obtain the lesion grade of the cervical carcinoma corresponding to the gynecatoptron image.
Second of realization method according to third embodiment, in the third realization method of 3rd embodiment, the place Reason image generation unit specifically includes:
Position acquisition subelement, for according to the cutting image, obtaining the non-disease hair of each of described gynecatoptron image The position of pixel.
First pixel processing unit, when it is described it is non-disease hair pixel uterine neck carninomatosis send out region cutting edge other than when, will The pixel value of the non-disease hair pixel is set to 0.
According to the second realization method of the third embodiment, in the fourth realization method of the third embodiment, the lesion grade of the cervical carcinoma includes low-level lesion, high-level lesion, cervical carcinoma, chronic cervicitis and normal cervix.
In the 5th kind of realization method of 3rd embodiment, the cervical carcinoma based on two-way convolutional neural networks detects mould The training step of type includes:
First sample generation module is pre-processed for the gynecatoptron sample image to acquisition, to generate first sample.
Second sample generation module carries out sample expansion, to obtain for generating network according to confrontation to the first sample Take the second sample.
Training ROI image generation module, for the location algorithm based on deep learning to the vagina in second sample Mirror image carries out the positioning and extraction at opening of the cervix position, to generate trained ROI image.
ROI- true picture acquisition modules carry out disease hair area marking for obtaining user according to training ROI image ROI- true pictures.
Two-way convolutional neural networks training module is used for according to the ROI- true pictures to the two-way convolutional Neural net Network is trained, to generate trained cutting image;Wherein, the convolution kernel of two-way every layer of the convolutional neural networks is 3, total level It it is 20 layers, each layer includes convolutional layer and warp lamination.
Processes pixel image collection module is used for according to the trained cutting image, to each non-in second sample Disease hair pixel carries out processes pixel, to obtain processes pixel image.
Third sample generation module, the feedback label for obtaining user to the processes pixel image, to generate third Sample.
Classify convolutional neural networks training module, for according to the third sample to the classification convolutional neural networks into Row training, to obtain the lesion grade of the cervical carcinoma of classification;Wherein, the convolution kernel of every layer of the convolutional neural networks of classification is 3, Total level is 10 layers.
5th kind of realization method according to third embodiment further include in the 6th kind of realization method of 3rd embodiment:
Loss calculation module, for performing loss calculation according to the ROI-true pictures and the training cutting images; wherein the loss calculation function DLOSS is computed from x, the cutting edge coordinates annotated by the user in the ROI-true picture, and y, the edge coordinates of the training cutting image predicted by the deconvolution network.
The fourth embodiment of the invention provides a cervical carcinoma detection terminal equipment based on gynecatoptron images. The cervical carcinoma detection terminal equipment based on gynecatoptron images of this embodiment includes: a processor, a memory, and a computer program stored in the memory and runnable on the processor, such as a cervical carcinoma detection program based on gynecatoptron images. The processor, when executing the computer program, realizes the steps in each of the above embodiments of the cervical carcinoma detection method based on gynecatoptron images, such as step S11 shown in Fig. 1. Alternatively, the processor, when executing the computer program, realizes the functions of each module/unit in each of the above device embodiments, such as the cutting image acquisition module.
Illustratively, the computer program can be divided into one or more module/units, one or more A module/unit is stored in the memory, and is executed by the processor, to complete the present invention.It is one or more A module/unit can be the series of computation machine program instruction section that can complete specific function, and the instruction segment is for describing institute State implementation procedure of the computer program in the cervical carcinoma detection terminal equipment based on gynecatoptron image.
The cervical carcinoma detection terminal equipment based on gynecatoptron image can be desktop PC, notebook, palm The computing devices such as computer and cloud server.The cervical carcinoma detection terminal equipment based on gynecatoptron image may include, but not It is only limitted to, processor, memory.It will be understood by those skilled in the art that above-mentioned component is only based on the palace of gynecatoptron image The example of neck cancer detection terminal equipment does not constitute the restriction to the cervical carcinoma detection terminal equipment based on gynecatoptron image, can To include component more more or fewer than above-mentioned component, certain components or different components are either combined, such as described is based on The cervical carcinoma detection terminal equipment of gynecatoptron image can also include input-output equipment, network access equipment, bus etc..
The processor may be a central processing unit (Central Processing Unit, CPU), another general processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. The general processor may be a microprocessor, or any conventional processor. The processor is the control centre of the cervical carcinoma detection terminal equipment based on gynecatoptron images, and connects the various parts of the entire equipment by various interfaces and lines.
The memory can be used for storing the computer program and/or module; the processor realizes the various functions of the cervical carcinoma detection terminal equipment based on gynecatoptron images by running or executing the computer program and/or module stored in the memory and calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area can store an operating system and the application programs needed for at least one function (such as a sound playing function, an image playing function, etc.). In addition, the memory may include a high-speed random access memory, and may also include a non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash card (Flash Card), at least one disk memory, a flash memory device, or another volatile solid-state part.
Wherein, if module/unit of the cervical carcinoma detection terminal integration of equipments based on gynecatoptron image is with software The form of functional unit is realized and when sold or used as an independent product, can be stored in a computer-readable storage In medium.Based on this understanding, the present invention realizes all or part of flow in above-described embodiment method, can also pass through meter Calculation machine program is completed to instruct relevant hardware, and the computer program can be stored in a computer readable storage medium In, the computer program is when being executed by processor, it can be achieved that the step of above-mentioned each embodiment of the method.Wherein, the calculating Machine program includes computer program code, and the computer program code can be source code form, object identification code form, can hold Style of writing part or certain intermediate forms etc..The computer-readable medium may include:The computer program code can be carried Any entity or device, recording medium, USB flash disk, mobile hard disk, magnetic disc, CD, computer storage, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), electric carrier signal, telecommunications letter Number and software distribution medium etc..It should be noted that the content that the computer-readable medium includes can be managed according to the administration of justice Local legislation and the requirement of patent practice carry out increase and decrease appropriate, such as in certain jurisdictions, according to legislation and patent Practice, computer-readable medium do not include electric carrier signal and telecommunication signal.
It should be noted that the apparatus embodiments described above are merely exemplary, wherein described be used as separating component The unit of explanation may or may not be physically separated, and the component shown as unit can be or can also It is not physical unit, you can be located at a place, or may be distributed over multiple network units.It can be according to actual It needs that some or all of module therein is selected to achieve the purpose of the solution of this embodiment.In addition, device provided by the invention In embodiment attached drawing, the connection relation between module indicates there is communication connection between them, specifically can be implemented as one or A plurality of communication bus or signal wire.Those of ordinary skill in the art are without creative efforts, you can to understand And implement.
The above is the preferred embodiment of the present invention, it is noted that for those skilled in the art For, various improvements and modifications may be made without departing from the principle of the present invention, these improvements and modifications are also considered as Protection scope of the present invention.

Claims (10)

1. a kind of cervical carcinoma detection method based on gynecatoptron image, which is characterized in that include the following steps:
In the cervical carcinoma detection model based on two-way convolutional neural networks:
Performing the positioning and extraction of the opening of the cervix on the acquired gynecatoptron image, to generate a ROI image including the uterine neck carcinoma disease hair region;
The extraction at edge is split to the ROI image by two-way convolutional neural networks, to generate cutting image;
By classifying, convolutional neural networks carry out cancer grade separation to the cutting image, to export the lesion etc. of cervical carcinoma Grade.
2. the cervical carcinoma detection method according to claim 1 based on gynecatoptron image, which is characterized in that described by double The extraction at edge is split to the ROI image to convolutional neural networks, to generate cutting image, specially:
In two-way convolutional neural networks:
The characteristic value superposition that convolution algorithm and multilayer feature are carried out to the ROI image, to obtain and the ROI image size phase Same thermal map spectrum;
Progress de-convolution operation is composed to the thermal map and high-level characteristic is superimposed with the characteristic value of low-level feature, to extract edge life At cutting image.
3. the cervical carcinoma detection method according to claim 2 based on gynecatoptron image, which is characterized in that the thermal map spectrum In each pixel corresponding to numerical value be used to indicate that the pixel in the same position of the gynecatoptron image to belong to disease and sends out pixel Probability;
It is then described that cancer grade separation is carried out to the cutting image by convolutional neural networks of classifying, to export the disease of cervical carcinoma Become grade, specially:
According to the cutting image, disease hair pixel non-to each of the gynecatoptron image carries out processes pixel, to generate place Manage image;
By convolutional neural networks of classifying to the superposition calculation for handling image and carrying out the characteristic dimension of multilayer feature, to obtain The lesion grade of cervical carcinoma corresponding to the gynecatoptron image.
4. The cervical carcinoma detection method according to claim 3 based on gynecatoptron image, which is characterized in that the step of performing pixel processing on each non-disease hair pixel of the gynecatoptron image according to the cutting image, to generate the processing image, is specifically:
According to the cutting image, the position of the non-disease hair pixel of each of described gynecatoptron image is obtained;
When it is described it is non-disease hair pixel uterine neck carninomatosis send out region cutting edge other than when, by it is described it is non-disease hair pixel pixel value It is set to 0.
5. The cervical carcinoma detection method according to claim 3 based on gynecatoptron image, which is characterized in that the lesion grade of the cervical carcinoma includes low-level lesion, high-level lesion, cervical carcinoma, chronic cervicitis and normal cervix.
6. the cervical carcinoma detection method according to claim 1 based on gynecatoptron image, which is characterized in that described based on double Training step to the cervical carcinoma detection model of convolutional neural networks includes:
The gynecatoptron sample image of acquisition is pre-processed, to generate first sample;
Network is generated according to confrontation, sample expansion is carried out to the first sample, to obtain the second sample;
Based on the location algorithm of deep learning to the gynecatoptron image in second sample carry out opening of the cervix position positioning and Extraction, to generate trained ROI image;
Obtain the ROI- true pictures that user carries out disease hair area marking according to training ROI image;
The two-way convolutional neural networks are trained according to the ROI- true pictures, to generate trained cutting image;Its In, the convolution kernel of two-way every layer of the convolutional neural networks is 3, and total level is 20 layers, and each layer includes convolutional layer and deconvolution Layer;
According to the training cutting images, performing pixel processing on each non-disease hair pixel in the second sample, to obtain processes pixel images;
Feedback label of the user to the processes pixel image is obtained, to generate third sample;
The classification convolutional neural networks are trained according to the third sample, to obtain the lesion etc. of the cervical carcinoma of classification Grade;Wherein, the convolution kernel of every layer of the convolutional neural networks of classification is 3, and total level is 10 layers.
7. The cervical carcinoma detection method based on gynecatoptron image according to claim 6, which is characterized in that it further includes:
Performing loss calculation according to the ROI-true pictures and the training cutting images; wherein the loss calculation function DLOSS is computed from x, the cutting edge coordinates annotated by the user in the ROI-true picture, and y, the edge coordinates of the training cutting image predicted by the deconvolution network.
8. A cervical carcinoma detection device based on gynecatoptron image, which is characterized in that it includes:
In the cervical carcinoma detection model based on two-way convolutional neural networks:
Extraction module is positioned, positioning and extraction for carrying out opening of the cervix position to the gynecatoptron image of acquisition include to generate Uterine neck carninomatosis sends out the ROI image in region;
Cutting image acquisition module, the extraction for being split edge to the ROI image by two-way convolutional neural networks, To generate cutting image;
Level results acquisition module, for carrying out cancer grade separation to the cutting image by convolutional neural networks of classifying, To export the lesion grade of cervical carcinoma.
9. a kind of cervical carcinoma detection terminal equipment based on gynecatoptron image, including processor, memory and it is stored in described In memory and it is configured as the computer program executed by the processor, when the processor executes the computer program Realize the cervical carcinoma detection method based on gynecatoptron image as claimed in any of claims 1 to 7 in one of claims.
10. a kind of computer readable storage medium, which is characterized in that the computer readable storage medium includes the calculating of storage Machine program, wherein equipment where controlling the computer readable storage medium when the computer program is run is executed as weighed Profit requires the cervical carcinoma detection method based on gynecatoptron image described in any one of 1 to 7.
CN201810241901.8A 2018-03-22 2018-03-22 Cervical cancer detection device based on colposcope images Expired - Fee Related CN108510482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810241901.8A CN108510482B (en) 2018-03-22 2018-03-22 Cervical cancer detection device based on colposcope images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810241901.8A CN108510482B (en) 2018-03-22 2018-03-22 Cervical cancer detection device based on colposcope images

Publications (2)

Publication Number Publication Date
CN108510482A true CN108510482A (en) 2018-09-07
CN108510482B CN108510482B (en) 2020-12-04

Family

ID=63378218

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810241901.8A Expired - Fee Related CN108510482B (en) 2018-03-22 2018-03-22 Cervical cancer detection device based on colposcope images

Country Status (1)

Country Link
CN (1) CN108510482B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8488863B2 (en) * 2008-11-06 2013-07-16 Los Alamos National Security, Llc Combinational pixel-by-pixel and object-level classifying, segmenting, and agglomerating in performing quantitative image analysis that distinguishes between healthy non-cancerous and cancerous cell nuclei and delineates nuclear, cytoplasm, and stromal material objects from stained biological tissue materials
CN106339591A (en) * 2016-08-25 2017-01-18 汤平 Breast cancer prevention self-service health cloud service system based on deep convolutional neural network
CN106780466A (en) * 2016-12-21 2017-05-31 广西师范大学 A kind of cervical cell image-recognizing method based on convolutional neural networks
CN107609503A (en) * 2017-09-05 2018-01-19 刘宇红 Intelligent cancerous tumor cell identifying system and method, cloud platform, server, computer
US20180061046A1 (en) * 2016-08-31 2018-03-01 International Business Machines Corporation Skin lesion segmentation using deep convolution networks guided by local unsupervised learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TAO XU et al.: "Multimodal Deep Learning", MICCAI 2016 *
XIE Zhenzhu et al.: "Image super-resolution reconstruction with an edge-enhanced deep network", Journal of Image and Graphics *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11610310B2 (en) 2018-10-30 2023-03-21 Tencent Technology (Shenzhen) Company Limited Method, apparatus, system, and storage medium for recognizing medical image
CN109427060A (en) * 2018-10-30 2019-03-05 腾讯科技(深圳)有限公司 Image recognition method, apparatus, terminal device, and medical system
US11410306B2 (en) 2018-10-30 2022-08-09 Tencent Technology (Shenzhen) Company Limited Method, apparatus, system, and storage medium for recognizing medical image
CN109493340A (en) * 2018-11-28 2019-03-19 武汉大学人民医院(湖北省人民医院) Gastroscopic auxiliary diagnosis system and method for esophageal and gastric fundus varices
CN109584229A (en) * 2018-11-28 2019-04-05 武汉大学人民医院(湖北省人民医院) Real-time auxiliary diagnosis system and method for endoscopic retrograde cholangiopancreatography
CN111476794A (en) * 2019-01-24 2020-07-31 武汉兰丁医学高科技有限公司 UNET-based cervical pathological tissue segmentation method
CN111476794B (en) * 2019-01-24 2023-10-20 武汉兰丁智能医学股份有限公司 Cervical pathological tissue segmentation method based on UNET
CN110490850A (en) * 2019-02-14 2019-11-22 腾讯科技(深圳)有限公司 Mass region detection method and apparatus, and medical image processing device
CN109949271B (en) * 2019-02-14 2021-03-16 腾讯科技(深圳)有限公司 Detection method based on medical image, model training method and device
CN109949271A (en) * 2019-02-14 2019-06-28 腾讯科技(深圳)有限公司 Medical-image-based detection method, and model training method and device
CN110458883A (en) * 2019-03-07 2019-11-15 腾讯科技(深圳)有限公司 Medical image processing system, method, apparatus, and device
CN110033456B (en) * 2019-03-07 2021-07-09 腾讯科技(深圳)有限公司 Medical image processing method, device, equipment and system
CN110033456A (en) * 2019-03-07 2019-07-19 腾讯科技(深圳)有限公司 Medical image processing method, apparatus, device, and system
CN110334565A (en) * 2019-03-21 2019-10-15 江苏迪赛特医疗科技有限公司 Cervical neoplastic lesion classification system for microscopic pathology images
CN110033445A (en) * 2019-04-10 2019-07-19 司法鉴定科学研究院 Automatic identification system and method for forensic examination based on deep learning
CN110136113B (en) * 2019-05-14 2022-06-07 湖南大学 Vagina pathology image classification method based on convolutional neural network
CN110136113A (en) * 2019-05-14 2019-08-16 湖南大学 Vaginal pathology image classification method based on convolutional neural networks
CN110516665A (en) * 2019-08-23 2019-11-29 上海眼控科技股份有限公司 Neural network model construction method and system for recognizing superimposed text regions in images
CN110706794A (en) * 2019-09-26 2020-01-17 中国科学院深圳先进技术研究院 Medical image processing system and medical image processing method
TWI767506B (en) * 2020-02-26 2022-06-11 大陸商上海商湯智能科技有限公司 Image recognition method, training method and equipment of recognition model
CN113710166A (en) * 2020-03-19 2021-11-26 艾多特公司 Carotid artery ultrasonic diagnosis system
CN111436972A (en) * 2020-04-13 2020-07-24 王时灿 Three-dimensional ultrasonic gynecological disease diagnosis device
WO2021114832A1 (en) * 2020-05-28 2021-06-17 平安科技(深圳)有限公司 Sample image data enhancement method, apparatus, electronic device, and storage medium
CN111914841A (en) * 2020-08-07 2020-11-10 温州医科大学 CT image processing method and device
CN111914841B (en) * 2020-08-07 2023-10-13 温州医科大学 CT image processing method and device
WO2021139447A1 (en) * 2020-09-30 2021-07-15 平安科技(深圳)有限公司 Abnormal cervical cell detection apparatus and method
CN112435242A (en) * 2020-11-25 2021-03-02 江西中科九峰智慧医疗科技有限公司 Lung image processing method and device, electronic equipment and storage medium
CN112884707A (en) * 2021-01-15 2021-06-01 复旦大学附属妇产科医院 Cervical precancerous lesion detection system, equipment and medium based on colposcope

Also Published As

Publication number Publication date
CN108510482B (en) 2020-12-04

Similar Documents

Publication Publication Date Title
CN108510482A (en) Cervical cancer detection method, device, equipment, and medium based on colposcope images
CN105894517B (en) CT image liver segmentation method and system based on feature learning
CN106056595B (en) Auxiliary diagnosis system for automatic identification of benign and malignant thyroid nodules based on deep convolutional neural networks
CN106780448B (en) Benign/malignant classification system for ultrasound thyroid nodules based on transfer learning and feature fusion
Cao et al. Fracture detection in x-ray images through stacked random forests feature fusion
Cha et al. Urinary bladder segmentation in CT urography using deep‐learning convolutional neural network and level sets
Joshi et al. Classification of brain cancer using artificial neural network
CN108257135A (en) Auxiliary diagnosis system for interpreting medical image features based on deep learning
WO2021082691A1 (en) Segmentation method and apparatus for lesion area of eye OCT image, and terminal device
Xu et al. DeepLN: a framework for automatic lung nodule detection using multi-resolution CT screening images
CN109919928A (en) Medical image detection method, device, and storage medium
CN107506770A (en) Standard image generation method for diabetic retinopathy fundus photography
KR102058348B1 (en) Apparatus and method for classification of angiomyolipoma without visible fat and clear cell renal cell carcinoma in CT images using deep learning and shape features
CN109389129A (en) Image processing method, electronic device, and storage medium
CN107229952A (en) Image recognition method and device
CN107045721A (en) Method and device for extracting pulmonary vessels from chest CT images
Jony et al. Detection of lung cancer from CT scan images using GLCM and SVM
CN111754453A (en) Pulmonary tuberculosis detection method and system based on chest radiography image and storage medium
CN112263217B (en) Improved convolutional neural network-based non-melanoma skin cancer pathological image lesion area detection method
CN106530298A (en) Three-way-decision-based liver tumor CT image classification method
CN109902682A (en) Mammography X-ray image detection method based on residual convolutional neural networks
CN109492547A (en) Nodule recognition method, device, and storage medium
Ma et al. Automated pectoral muscle identification on MLO‐view mammograms: Comparison of deep neural network to conventional computer vision
CN108596174A (en) Lesion localization method for skin disease images
CN110188767A (en) Corneal disease image sequence feature extraction and classification method and device based on deep neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201204