CN110335267A - Cervical lesion region detection method - Google Patents

Cervical lesion region detection method

Info

Publication number
CN110335267A
CN110335267A (application CN201910602980.5A)
Authority
CN
China
Prior art keywords
candidate region
region
image
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910602980.5A
Other languages
Chinese (zh)
Inventor
柳培忠
柏兵
杜永兆
孙蓬明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Quanzhou Laborers Intelligent Technology Co Ltd
Huaqiao University
Original Assignee
Quanzhou Laborers Intelligent Technology Co Ltd
Huaqiao University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Quanzhou Laborers Intelligent Technology Co Ltd and Huaqiao University
Priority to CN201910602980.5A
Publication of CN110335267A
Current legal status: Pending

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/40 - Analysis of texture
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a cervical lesion region detection method in the field of cervical lesion detection, comprising the following steps: step S10, capture a cervical image with a colposcope and send it to a computer; step S20, preprocess the received image; step S30, select candidate regions on the preprocessed image with a region proposal network; step S40, perform regression computation on the candidate regions with a softmax function, and thereby demarcate the cervical lesion region. The invention has the advantage of improving the detection accuracy of cervical cancer lesion regions.

Description

Cervical lesion region detection method
Technical field
The present invention relates to the field of cervical lesion detection, and in particular to a cervical lesion region detection method.
Background technique
Cervical cancer is the fourth most common malignant tumour in women and the most common malignant tumour of the genital tract, with about 528,000 new cases and 266,000 deaths worldwide each year. The numbers of cases and deaths rise year by year while the affected population grows younger, so the prevention, treatment and diagnosis of cervical cancer and cervical lesions are of great importance. Large-scale, standardised cervical cancer screening of the general population is therefore one of the most effective ways to reduce cervical carcinogenesis and mortality.
Diagnosing cervical cancer requires capturing multiple cervical images over several phases with a colposcope. The colposcope has become an important clinical tool for screening CIN (cervical intraepithelial neoplasia) and early cervical cancer, and it directly affects the patient's diagnosis and treatment plan.
Colposcopy, however, is a morphological technique without a unified diagnostic standard, and its images have low specificity: even cervicitis or a simple HPV (human papillomavirus) infection can produce an abnormal image under the colposcope. Abnormal colposcopic images take many forms; they may appear as acetowhite epithelium or as atypical vessels, and several abnormalities may appear at the same time, so colposcopic diagnosis of cervical lesions has high sensitivity but low specificity. Subjective factors of the operator, in particular professional skill and clinical experience, strongly influence the examination result, and the depth and extent of cervical biopsy likewise affect diagnostic accuracy.
How to provide a cervical lesion region detection method that improves the detection accuracy of cervical cancer lesion regions has therefore become an urgent problem to be solved.
Summary of the invention
The technical problem to be solved by the present invention is to provide a cervical lesion region detection method that improves the detection accuracy of cervical cancer lesion regions.
The present invention is implemented as follows: a cervical lesion region detection method comprising the following steps:
Step S10: capture a cervical image with a colposcope and send it to a computer;
Step S20: preprocess the received image;
Step S30: select candidate regions on the preprocessed image with a region proposal network;
Step S40: perform regression computation on the candidate regions with a softmax function, and thereby demarcate the cervical lesion region.
Further, step S20 specifically comprises:
Step S21: normalise the received image;
Step S22: extract depth features from the normalised image.
Further, step S21 specifically comprises:
Step S211: scale the received image to an N*N RGB image, where N > 0;
Step S212: annotate cervical lesion regions on the scaled RGB image with standard dataset annotation software;
Step S213: feed the annotated images into a convolutional neural network to build a training model.
Further, step S22 specifically comprises:
Step S221: take the shallow features of the training model as the classification basis for lesion regions;
Step S222: extract depth features from the training model:
$x_L = f\big(\mathrm{BN}(W_L * f(\mathrm{BN}(W_{L-1} * x_{L-1})))\big) + x_{L-1}$,
where $x_{L-1}$ denotes the feature maps produced by all residual modules of the convolutional neural network taken as input, $W_L$ and $W_{L-1}$ denote two 3×3 convolution weight matrices in turn, $\mathrm{BN}(\cdot)$ denotes batch normalisation of the output data of each hidden layer of the convolutional neural network, $f(\cdot)$ denotes the ReLU activation function, and $*$ denotes the convolution operation of the convolutional neural network;
Step S223: compress the depth features after the convolution operation with the global average pooling of the Squeeze operation, so that the C feature layers are compressed into a 1*1*C sequence of real numbers:
$Z_c = F_{sq}(u_c) = \frac{1}{W \times H}\sum_{i=1}^{W}\sum_{j=1}^{H} u_c(i, j)$,
where C denotes the number of channels of the feature layer, W the width of the feature layer, H the height of the feature layer, $u_c$ a feature-layer channel, i and j are positive integers, and $F_{sq}(\cdot)$ denotes the Squeeze operation;
Step S224: with the Excitation operation, use $Z_c$ as the weight of the feature layer and reassign weights to every layer of depth features:
$F_{ex}(Z_c, W) = \sigma(W_2\,\delta(W_1 Z_c))$,
where $F_{ex}(\cdot)$ denotes the Excitation operation, $\sigma$ denotes the sigmoid function, $\delta$ denotes the ReLU activation function, $W_1$ denotes the parameters produced by the first fully connected layer, and $W_2$ denotes the parameters produced by the second fully connected layer.
Further, step S30 specifically comprises:
Step S31: feed an image of arbitrary scale into the region proposal network;
Step S32: generate a feature map from the arbitrary-scale image with the shared convolutional layers of the convolutional neural network;
Step S33: perform multi-scale convolution on the feature map to select candidate regions, and assign each candidate region a binary label marking whether it is a lesion region;
Step S34: the region proposal network outputs the set of candidate regions.
Further, step S33 specifically comprises:
Step S331: create a sliding window that slides randomly over the feature map to select features;
Step S332: centred on the centre of the sliding window, map 9 candidate regions of different scales onto the feature map using 3 scales and 3 aspect ratios;
Step S333: assign each candidate region a binary label marking whether it is a lesion region;
Step S334: compute the IoU overlap ratio of each candidate region with the target region; if it is ≥ 70%, set the binary label positive; if it is ≤ 30%, set the binary label negative; discard the rest.
Further, step S40 specifically comprises:
Step S41: define an image loss function, a classification loss function and a regression loss function based on the binary labels;
Step S42: take the reweighted feature layers as the input of the softmax function, perform regression computation on the candidate regions with the regression loss function, classify the regressed candidate regions with the classification loss function according to the classification basis, and demarcate the cervical lesion region with the image loss function.
Further, in step S41, the image loss function is:
$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \lambda\,\frac{1}{N_{reg}}\sum_i p_i^*\,L_{reg}(t_i, t_i^*)$,
where i denotes the index of a selected candidate region; $p_i$ denotes the probability that candidate region i is the target region; $p_i^*$ denotes the value of the binary label, with $p_i^* = 1$ if the candidate region is the target region and $p_i^* = 0$ otherwise; $t_i$ denotes the coordinate vector of the 4 endpoints of the candidate region; $t_i^*$ denotes the coordinate vector of the 4 endpoints of the real region; $N_{cls}$ denotes the normalisation parameter used when classifying the candidate regions; $N_{reg}$ denotes the parameter used when normalising the candidate regions; λ denotes a constant balance factor;
the classification loss function is:
$L_{cls}(p_i, p_i^*) = -\log\big[p_i^* p_i + (1 - p_i^*)(1 - p_i)\big]$;
the regression loss function is:
$L_{reg}(t_i, t_i^*) = \mathrm{smooth}_{L1}(t_i - t_i^*)$, with
$t_x = (x - x_a)/w_a$, $t_y = (y - y_a)/h_a$, $t_w = \log(w/w_a)$, $t_h = \log(h/h_a)$,
where x, y denote the centre coordinates of the sliding window, w the width of the sliding window, h the height of the sliding window; $x_a, y_a$ denote the centre coordinates of the candidate region, $w_a$ the width of the candidate region, $h_a$ the height of the candidate region; $x^*, y^*$ denote the centre coordinates of the target region, $w^*$ the width of the target region, $h^*$ the height of the target region.
The present invention has the following advantages:
The lesion region is learned by a convolutional neural network; the Squeeze operation better represents the importance of each depth feature, and the Excitation operation enhances useful features and suppresses unnecessary ones. This greatly improves the screening efficiency for lesion regions and the detection accuracy of cervical cancer lesion regions, helps doctors further diagnose the grade of cervical lesions, and is of great significance for improving the accuracy of cervical screening.
Detailed description of the invention
The present invention is further described below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a flow chart of a cervical lesion region detection method of the present invention.
Specific embodiment
Referring to Fig. 1, a preferred embodiment of the cervical lesion region detection method of the present invention comprises the following steps:
Step S10: capture a cervical image with a colposcope and send it to a computer;
Step S20: preprocess the received image;
Step S30: select candidate regions on the preprocessed image with a region proposal network (RPN);
Step S40: perform regression computation on the candidate regions with a softmax function, and thereby demarcate the cervical lesion region.
Step S20 specifically comprises:
Step S21: normalise the received image. Normalisation is a way of simplifying computation that turns a dimensional expression into a dimensionless one, transforming it into a scalar.
Step S22: extract depth features from the normalised image. The depth features are the features obtained in the convolutional neural network after multiple convolution and pooling operations.
Step S21 specifically comprises:
Step S211: scale the received image to an N*N RGB image, where N > 0;
Step S212: annotate cervical lesion regions on the scaled RGB image with the standard dataset annotation tool labelImg, outputting annotations in the format of the VOC dataset (a common object detection dataset);
Step S213: feed the annotated images into a convolutional neural network to build a training model.
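As an illustration, the scaling and normalisation of steps S211 and S21 can be sketched as follows. The output size of 224, the nearest-neighbour sampling and the division by 255 are assumptions for the sketch; the patent fixes neither N nor the normalisation formula.

```python
import numpy as np

def preprocess(image: np.ndarray, n: int = 224) -> np.ndarray:
    """Scale an RGB image of shape (H, W, 3) to n*n by nearest-neighbour
    sampling and normalise pixel values to [0, 1]. n=224 and the sampling
    method are illustrative choices, not fixed by the patent."""
    h, w, _ = image.shape
    rows = np.arange(n) * h // n          # source row for each output row
    cols = np.arange(n) * w // n          # source column for each output column
    resized = image[rows][:, cols].astype(np.float32)
    return resized / 255.0                # dimensionless values in [0, 1]
```

A real pipeline would typically use a library resampler (e.g. bilinear interpolation); the point here is only the shape and value-range contract of step S21.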
Step S22 specifically comprises:
Step S221: take the shallow features of the training model as the classification basis for lesion regions; the shallow features are the features after convolution and pooling in the first three layers of the convolutional neural network;
Step S222: extract depth features from the training model:
$x_L = f\big(\mathrm{BN}(W_L * f(\mathrm{BN}(W_{L-1} * x_{L-1})))\big) + x_{L-1}$,
where $x_{L-1}$ denotes the feature maps produced by all residual modules of the convolutional neural network taken as input, $W_L$ and $W_{L-1}$ denote two 3×3 convolution weight matrices in turn, $\mathrm{BN}(\cdot)$ denotes batch normalisation of the output data of each hidden layer of the convolutional neural network, $f(\cdot)$ denotes the ReLU activation function, and $*$ denotes the convolution operation of the convolutional neural network;
Step S223: compress the depth features after the convolution operation with the global average pooling of the Squeeze operation, so that the C feature layers are compressed into a 1*1*C sequence of real numbers; each real number then represents the importance of one feature layer:
$Z_c = F_{sq}(u_c) = \frac{1}{W \times H}\sum_{i=1}^{W}\sum_{j=1}^{H} u_c(i, j)$,
where C denotes the number of channels of the feature layer, W the width of the feature layer, H the height of the feature layer, $u_c$ a feature-layer channel, i and j are positive integers, and $F_{sq}(\cdot)$ denotes the Squeeze operation; the feature layer is the weight matrix produced by the convolution operation;
Step S224: with the Excitation operation, use $Z_c$ as the weight of the feature layer and reassign weights to every layer of depth features:
$F_{ex}(Z_c, W) = \sigma(W_2\,\delta(W_1 Z_c))$,
where $F_{ex}(\cdot)$ denotes the Excitation operation, $\sigma$ denotes the sigmoid function, $\delta$ denotes the ReLU activation function, $W_1$ denotes the parameters produced by the first fully connected layer, and $W_2$ denotes the parameters produced by the second fully connected layer. "Fully connected" means that every node of a feature layer is connected to all nodes of the previous feature layer, so as to combine all the features.
The Squeeze and Excitation operations belong to the SE (Squeeze-and-Excitation) module, which adaptively recalibrates channel-wise feature responses by explicitly modelling the interdependence between channels.
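A minimal NumPy sketch of the Squeeze and Excitation operations described above: global average pooling compresses each channel to one real number, and two fully connected layers with ReLU and sigmoid produce per-channel weights. The shapes of `w1` and `w2` (and the implied reduction ratio) are illustrative assumptions, and σ is taken to be the sigmoid function as in the standard SE module.

```python
import numpy as np

def se_block(u: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Squeeze-and-Excitation reweighting of a feature map u of shape
    (H, W, C). w1 with shape (C, C/r) and w2 with shape (C/r, C) stand in
    for the two fully connected layers; these shapes are assumptions."""
    # Squeeze: global average pooling compresses each channel to one real
    z = u.mean(axis=(0, 1))                      # Z_c, shape (C,)
    # Excitation: FC -> ReLU (delta) -> FC -> sigmoid (sigma)
    s = np.maximum(z @ w1, 0.0)
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))          # per-channel weights in (0, 1)
    # Scale: reweight every channel of the feature map
    return u * s
```

In a trained network `w1` and `w2` would be learned; here they only demonstrate the data flow of the module.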
Step S30 specifically comprises:
Step S31: feed an image of arbitrary scale into the region proposal network;
Step S32: generate a feature map from the arbitrary-scale image with the shared convolutional layers of the convolutional neural network; the shared convolutional layers are the feature layers generated by the convolutional neural network, used simultaneously for classification and regression, hence "shared";
Step S33: perform multi-scale convolution on the feature map to select candidate regions, and assign each candidate region a binary label marking whether it is a lesion region;
Step S34: the region proposal network outputs the set of candidate regions.
Step S33 specifically comprises:
Step S331: create a sliding window that slides randomly over the feature map to select features;
Step S332: centred on the centre of the sliding window, map 9 candidate regions of different scales onto the feature map using 3 scales and 3 aspect ratios;
Step S333: assign each candidate region a binary label marking whether it is a lesion region;
Step S334: compute the IoU overlap ratio of each candidate region with the target region; if it is ≥ 70%, set the binary label positive; if it is ≤ 30%, set the binary label negative; discard the rest, since they do not help in classifying whether a region is a lesion region.
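The candidate-region mapping of step S332 and the IoU labelling rule of step S334 can be sketched as follows. The base anchor size and the concrete scale and aspect-ratio values are illustrative assumptions; the patent only fixes their counts (3 scales, 3 ratios, 9 candidate regions per window position).

```python
import numpy as np

def anchors_at(cx, cy, base=16, scales=(1, 2, 4), ratios=(0.5, 1.0, 2.0)):
    """Map 9 candidate boxes (3 scales x 3 aspect ratios) centred on a
    sliding-window position, as in step S332. base/scales/ratios are
    illustrative values, not fixed by the patent."""
    boxes = []
    for s in scales:
        for r in ratios:
            w = base * s * np.sqrt(r)            # width grows with sqrt(ratio)
            h = base * s / np.sqrt(r)            # height shrinks accordingly
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union

def label(anchor, target):
    """Binary label of step S334: 1 if IoU >= 0.7, 0 if IoU <= 0.3,
    None (discarded) otherwise."""
    v = iou(anchor, target)
    if v >= 0.7:
        return 1
    if v <= 0.3:
        return 0
    return None
```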
Step S40 specifically comprises:
Step S41: define an image loss function, a classification loss function and a regression loss function based on the binary labels;
Step S42: take the reweighted feature layers as the input of the softmax classifier, perform regression computation on the candidate regions with the regression loss function, classify the regressed candidate regions with the classification loss function according to the classification basis, and demarcate the cervical lesion region with the image loss function.
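Step S42 uses the normalised exponential (softmax) function as the classifier. A generic, numerically stabilised implementation looks like this; the patent does not spell out the formula, so this is the standard form:

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Normalised exponential function: maps a vector of scores to a
    probability distribution. Subtracting the max stabilises exp()."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```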
In step S41, the image loss function is:
$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \lambda\,\frac{1}{N_{reg}}\sum_i p_i^*\,L_{reg}(t_i, t_i^*)$,
where i denotes the index of a selected candidate region; $p_i$ denotes the probability that candidate region i is the target region; $p_i^*$ denotes the value of the binary label, with $p_i^* = 1$ if the candidate region is the target region and $p_i^* = 0$ otherwise; $t_i$ denotes the coordinate vector of the 4 endpoints of the candidate region; $t_i^*$ denotes the coordinate vector of the 4 endpoints of the real region; $N_{cls}$ denotes the normalisation parameter used when classifying the candidate regions; $N_{reg}$ denotes the parameter used when normalising the candidate regions; λ denotes a constant balance factor;
the classification loss function is:
$L_{cls}(p_i, p_i^*) = -\log\big[p_i^* p_i + (1 - p_i^*)(1 - p_i)\big]$;
the regression loss function is:
$L_{reg}(t_i, t_i^*) = \mathrm{smooth}_{L1}(t_i - t_i^*)$, with
$t_x = (x - x_a)/w_a$, $t_y = (y - y_a)/h_a$, $t_w = \log(w/w_a)$, $t_h = \log(h/h_a)$,
where x, y denote the centre coordinates of the sliding window, w the width of the sliding window, h the height of the sliding window; $x_a, y_a$ denote the centre coordinates of the candidate region, $w_a$ the width of the candidate region, $h_a$ the height of the candidate region; $x^*, y^*$ denote the centre coordinates of the target region, $w^*$ the width of the target region, $h^*$ the height of the target region.
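The box parametrisation $t_x, t_y, t_w, t_h$ of step S41 can be computed directly from the definitions above. The smooth-L1 form of the regression loss is an assumption borrowed from the Faster R-CNN family on which this method builds; the patent itself does not reproduce the loss formula in the text.

```python
import numpy as np

def regression_targets(box, anchor):
    """Parametrise a box (x, y, w, h) relative to an anchor (x_a, y_a,
    w_a, h_a) as in step S41: t_x=(x-x_a)/w_a, t_y=(y-y_a)/h_a,
    t_w=log(w/w_a), t_h=log(h/h_a)."""
    x, y, w, h = box
    xa, ya, wa, ha = anchor
    return np.array([(x - xa) / wa, (y - ya) / ha,
                     np.log(w / wa), np.log(h / ha)])

def smooth_l1(t, t_star):
    """Smooth-L1 regression loss commonly paired with this
    parametrisation (an assumption, not quoted from the patent):
    0.5*d^2 for |d| < 1, |d| - 0.5 otherwise, summed over components."""
    d = np.abs(np.asarray(t) - np.asarray(t_star))
    return np.where(d < 1.0, 0.5 * d * d, d - 0.5).sum()
```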
In summary, the present invention has the following advantages:
The lesion region is learned by a convolutional neural network; the Squeeze operation better represents the importance of each depth feature, and the Excitation operation enhances useful features and suppresses unnecessary ones. This greatly improves the screening efficiency for lesion regions and the detection accuracy of cervical cancer lesion regions, helps doctors further diagnose the grade of cervical lesions, and is of great significance for improving the accuracy of cervical screening.
Although specific embodiments of the present invention have been described above, those familiar with the art should understand that the described embodiments are merely exemplary and are not intended to limit the scope of the present invention; equivalent modifications and variations made by those skilled in the art according to the spirit of the present invention shall fall within the scope protected by the claims of the present invention.

Claims (8)

1. A cervical lesion region detection method, characterised in that the method comprises the following steps:
Step S10: capture a cervical image with a colposcope and send it to a computer;
Step S20: preprocess the received image;
Step S30: select candidate regions on the preprocessed image with a region proposal network;
Step S40: perform regression computation on the candidate regions with a softmax function, and thereby demarcate the cervical lesion region.
2. The cervical lesion region detection method of claim 1, characterised in that step S20 specifically comprises:
Step S21: normalise the received image;
Step S22: extract depth features from the normalised image.
3. The cervical lesion region detection method of claim 2, characterised in that step S21 specifically comprises:
Step S211: scale the received image to an N*N RGB image, where N > 0;
Step S212: annotate cervical lesion regions on the scaled RGB image with standard dataset annotation software;
Step S213: feed the annotated images into a convolutional neural network to build a training model.
4. The cervical lesion region detection method of claim 3, characterised in that step S22 specifically comprises:
Step S221: take the shallow features of the training model as the classification basis for lesion regions;
Step S222: extract depth features from the training model:
$x_L = f\big(\mathrm{BN}(W_L * f(\mathrm{BN}(W_{L-1} * x_{L-1})))\big) + x_{L-1}$,
where $x_{L-1}$ denotes the feature maps produced by all residual modules of the convolutional neural network taken as input, $W_L$ and $W_{L-1}$ denote two 3×3 convolution weight matrices in turn, $\mathrm{BN}(\cdot)$ denotes batch normalisation of the output data of each hidden layer of the convolutional neural network, $f(\cdot)$ denotes the ReLU activation function, and $*$ denotes the convolution operation of the convolutional neural network;
Step S223: compress the depth features after the convolution operation with the global average pooling of the Squeeze operation, so that the C feature layers are compressed into a 1*1*C sequence of real numbers:
$Z_c = F_{sq}(u_c) = \frac{1}{W \times H}\sum_{i=1}^{W}\sum_{j=1}^{H} u_c(i, j)$,
where C denotes the number of channels of the feature layer, W the width of the feature layer, H the height of the feature layer, $u_c$ a feature-layer channel, i and j are positive integers, and $F_{sq}(\cdot)$ denotes the Squeeze operation;
Step S224: with the Excitation operation, use $Z_c$ as the weight of the feature layer and reassign weights to every layer of depth features:
$F_{ex}(Z_c, W) = \sigma(W_2\,\delta(W_1 Z_c))$,
where $F_{ex}(\cdot)$ denotes the Excitation operation, $\sigma$ denotes the sigmoid function, $\delta$ denotes the ReLU activation function, $W_1$ denotes the parameters produced by the first fully connected layer, and $W_2$ denotes the parameters produced by the second fully connected layer.
5. The cervical lesion region detection method of claim 1, characterised in that step S30 specifically comprises:
Step S31: feed an image of arbitrary scale into the region proposal network;
Step S32: generate a feature map from the arbitrary-scale image with the shared convolutional layers of the convolutional neural network;
Step S33: perform multi-scale convolution on the feature map to select candidate regions, and assign each candidate region a binary label marking whether it is a lesion region;
Step S34: the region proposal network outputs the set of candidate regions.
6. The cervical lesion region detection method of claim 5, characterised in that step S33 specifically comprises:
Step S331: create a sliding window that slides randomly over the feature map to select features;
Step S332: centred on the centre of the sliding window, map 9 candidate regions of different scales onto the feature map using 3 scales and 3 aspect ratios;
Step S333: assign each candidate region a binary label marking whether it is a lesion region;
Step S334: compute the IoU overlap ratio of each candidate region with the target region; if it is ≥ 70%, set the binary label positive; if it is ≤ 30%, set the binary label negative; discard the rest.
7. The cervical lesion region detection method of claim 6, characterised in that step S40 specifically comprises:
Step S41: define an image loss function, a classification loss function and a regression loss function based on the binary labels;
Step S42: take the reweighted feature layers as the input of the softmax function, perform regression computation on the candidate regions with the regression loss function, classify the regressed candidate regions with the classification loss function according to the classification basis, and demarcate the cervical lesion region with the image loss function.
8. The cervical lesion region detection method of claim 7, characterised in that in step S41 the image loss function is:
$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \lambda\,\frac{1}{N_{reg}}\sum_i p_i^*\,L_{reg}(t_i, t_i^*)$,
where i denotes the index of a selected candidate region; $p_i$ denotes the probability that candidate region i is the target region; $p_i^*$ denotes the value of the binary label, with $p_i^* = 1$ if the candidate region is the target region and $p_i^* = 0$ otherwise; $t_i$ denotes the coordinate vector of the 4 endpoints of the candidate region; $t_i^*$ denotes the coordinate vector of the 4 endpoints of the real region; $N_{cls}$ denotes the normalisation parameter used when classifying the candidate regions; $N_{reg}$ denotes the parameter used when normalising the candidate regions; λ denotes a constant balance factor;
the classification loss function is:
$L_{cls}(p_i, p_i^*) = -\log\big[p_i^* p_i + (1 - p_i^*)(1 - p_i)\big]$;
the regression loss function is:
$L_{reg}(t_i, t_i^*) = \mathrm{smooth}_{L1}(t_i - t_i^*)$, with
$t_x = (x - x_a)/w_a$, $t_y = (y - y_a)/h_a$, $t_w = \log(w/w_a)$, $t_h = \log(h/h_a)$,
where x, y denote the centre coordinates of the sliding window, w the width of the sliding window, h the height of the sliding window; $x_a, y_a$ denote the centre coordinates of the candidate region, $w_a$ the width of the candidate region, $h_a$ the height of the candidate region; $x^*, y^*$ denote the centre coordinates of the target region, $w^*$ the width of the target region, $h^*$ the height of the target region.
CN201910602980.5A 2019-07-05 2019-07-05 Cervical lesion region detection method Pending CN110335267A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910602980.5A CN110335267A (en) 2019-07-05 2019-07-05 Cervical lesion region detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910602980.5A CN110335267A (en) 2019-07-05 2019-07-05 Cervical lesion region detection method

Publications (1)

Publication Number Publication Date
CN110335267A true CN110335267A (en) 2019-10-15

Family

ID=68144293

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910602980.5A Pending CN110335267A (en) 2019-07-05 2019-07-05 Cervical lesion region detection method

Country Status (1)

Country Link
CN (1) CN110335267A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111415331A (en) * 2020-03-03 2020-07-14 北京深睿博联科技有限责任公司 Abnormality detection method and system based on category relation in positive chest radiograph
CN111415350A (en) * 2020-03-27 2020-07-14 福建省妇幼保健院 Colposcope image identification method for detecting cervical lesions
CN112200253A (en) * 2020-10-16 2021-01-08 武汉呵尔医疗科技发展有限公司 Cervical cell image classification method based on senet
CN112614099A (en) * 2020-12-17 2021-04-06 杭州电子科技大学 Cervical cancer lesion region detection method based on fast-RCNN model

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106340016A (en) * 2016-08-31 2017-01-18 湖南品信生物工程有限公司 DNA quantitative analysis method based on cell microscope image
US20170206431A1 (en) * 2016-01-20 2017-07-20 Microsoft Technology Licensing, Llc Object detection and classification in images
CN109190540A (en) * 2018-06-06 2019-01-11 腾讯科技(深圳)有限公司 Biopsy regions prediction technique, image-recognizing method, device and storage medium
CN109636805A (en) * 2018-11-19 2019-04-16 浙江大学山东工业技术研究院 A kind of uterine neck image lesion region segmenting device and method based on classification priori

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170206431A1 (en) * 2016-01-20 2017-07-20 Microsoft Technology Licensing, Llc Object detection and classification in images
CN106340016A (en) * 2016-08-31 2017-01-18 湖南品信生物工程有限公司 DNA quantitative analysis method based on cell microscope image
CN109190540A (en) * 2018-06-06 2019-01-11 腾讯科技(深圳)有限公司 Biopsy regions prediction technique, image-recognizing method, device and storage medium
CN109636805A (en) * 2018-11-19 2019-04-16 浙江大学山东工业技术研究院 A kind of uterine neck image lesion region segmenting device and method based on classification priori

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIE HU et al.: "Squeeze-and-Excitation Networks", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition *
FAN Weikang: "Pulmonary nodule detection based on improved Faster R-CNN" (基于改进 Faster R-CNN 的肺结节检测), China Master's Theses Full-text Database, Medicine and Health Sciences *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111415331A (en) * 2020-03-03 2020-07-14 北京深睿博联科技有限责任公司 Abnormality detection method and system based on category relation in positive chest radiograph
CN111415331B (en) * 2020-03-03 2023-05-23 北京深睿博联科技有限责任公司 Abnormal detection method and system based on category relation in positive chest radiography
CN111415350A (en) * 2020-03-27 2020-07-14 福建省妇幼保健院 Colposcope image identification method for detecting cervical lesions
CN111415350B (en) * 2020-03-27 2023-04-07 福建省妇幼保健院 Colposcope image identification method for detecting cervical lesions
CN112200253A (en) * 2020-10-16 2021-01-08 武汉呵尔医疗科技发展有限公司 Cervical cell image classification method based on senet
CN112614099A (en) * 2020-12-17 2021-04-06 杭州电子科技大学 Cervical cancer lesion region detection method based on fast-RCNN model

Similar Documents

Publication Publication Date Title
CN110335267A (en) Cervical lesion region detection method
Li et al. Joint multiple fully connected convolutional neural network with extreme learning machine for hepatocellular carcinoma nuclei grading
CN108305249A (en) The quick diagnosis and methods of marking of full size pathological section based on deep learning
Chan et al. Texture-map-based branch-collaborative network for oral cancer detection
CN111429407B (en) Chest X-ray disease detection device and method based on double-channel separation network
CN103096786A (en) Image analysis for cervical neoplasia detection and diagnosis
CN112184659A (en) Lung image processing method, device and equipment
JP7312510B1 (en) Whole-slide pathological image classification system and construction method considering tumor microenvironment
CN111369501B (en) Deep learning method for identifying oral squamous cell carcinoma based on visual features
Li et al. Mass detection in mammograms by bilateral analysis using convolution neural network
CN114677378B (en) Computer-aided diagnosis and treatment system based on ovarian tumor benign and malignant prediction model
Xu et al. Mammographic mass segmentation using multichannel and multiscale fully convolutional networks
Yuan et al. Pulmonary nodule detection using 3-d residual u-net oriented context-guided attention and multi-branch classification network
CN109871869A (en) A kind of Lung neoplasm classification method and its device
Yonekura et al. Glioblastoma multiforme tissue histopathology images based disease stage classification with deep CNN
CN101968851B (en) Medical image processing method based on dictionary studying upsampling
Azour et al. An efficient transfer and ensemble learning based computer aided breast abnormality diagnosis system
Fan et al. Colposcopic multimodal fusion for the classification of cervical lesions
CN110399899A (en) Uterine neck OCT image classification method based on capsule network
Abd-Alhalem et al. Cervical cancer classification based on a bilinear convolutional neural network approach and random projection
CN112017772B (en) Method and system for constructing disease cognitive model based on female leucorrhea
Xu et al. A High‐Precision Classification Method of Mammary Cancer Based on Improved DenseNet Driven by an Attention Mechanism
CN116580017A (en) Improved Mask-R-CNN lung nodule auxiliary detection method integrating double-path channel attention and cavity space attention
Wang et al. Controlling False-Positives in Automatic Lung Nodule Detection by Adding 3D Cuboid Attention to a Convolutional Neural Network
Li et al. MC‐UNet: Multimodule Concatenation Based on U‐Shape Network for Retinal Blood Vessels Segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191015