CN110853021A - Construction of detection classification model of pathological squamous epithelial cells - Google Patents


Info

Publication number
CN110853021A
Authority
CN
China
Prior art keywords
detection
judgment
model
visual field
classification
Prior art date
Legal status
Granted
Application number
CN201911108184.2A
Other languages
Chinese (zh)
Other versions
CN110853021B (en)
Inventor
李文勇
张立篪
陈巍
蹇秀红
王鹏
殷亚娟
陶军之
Current Assignee
Suzhou Dessert Pathological Diagnosis Center Co.,Ltd.
Original Assignee
Jiangsu Disset Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Jiangsu Disset Medical Technology Co Ltd filed Critical Jiangsu Disset Medical Technology Co Ltd
Priority to CN201911108184.2A priority Critical patent/CN110853021B/en
Publication of CN110853021A publication Critical patent/CN110853021A/en
Application granted granted Critical
Publication of CN110853021B publication Critical patent/CN110853021B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30024 Cell structures in vitro; Tissue sections in vitro
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses the construction of a detection and classification model for pathological squamous epithelial cells, which comprises the construction of an abnormal cell detection model, a visual field image judgment model, and a sample judgment model. The three models interlock: each performs optimized detection, detailed re-diagnosis, integrated re-diagnosis, and the like on the results of the preceding model, so that the diagnosis result is subject to multiple controls overall, its accuracy is ensured, and a complete sample diagnosis method is obtained.

Description

Construction of detection classification model of pathological squamous epithelial cells
Technical Field
The invention relates to the field of cell detection and classification, and in particular to the construction of a detection and classification model for pathological squamous epithelial cells.
Background
Squamous epithelial cells (squamous cells) cover mainly the surfaces of the lower ureter, bladder, urethra, vagina, and uterus. Their growth and differentiation are governed mainly by estrogen produced by the ovary, while progestogen promotes the shedding of the epithelial cells. Squamous epithelial cell change is common in cervical liquid-based cytology and is a normal change of cervical cells; a precancerous lesion is considered only when atypical squamous cell changes appear. Detecting and classifying pathological squamous epithelial cells is therefore difficult: manual diagnosis requires rich experience, still consumes a great deal of time, and is prone to misjudgment, so accurate screening cannot be achieved.
At present, computer-assisted detection of squamous epithelial cells based on image features can improve diagnostic efficiency, but because existing models are poorly constructed, the diagnosis results cannot guarantee high accuracy, and a diagnosis method covering the whole sample is lacking.
Disclosure of Invention
In order to solve the problems, the invention provides a method for constructing a detection classification model of pathological squamous epithelial cells, which comprises the steps of constructing an abnormal cell detection model, constructing a visual field diagram judgment model and constructing a sample judgment model;
the construction of the abnormal cell detection model comprises the following steps:
the first step: detecting suspected diseased cells, wherein the detection training comprises candidate frame extraction, classification and positioning, and reward-and-punishment convergence;
the second step: optimizing the detection result, including optimization training and optimization testing, wherein the optimization training comprises visual field image feature extraction, prediction, and comparison convergence;
the construction of the visual field image judgment model comprises the judgment of a single visual field image; the judgment of a single visual field image comprises judgment training and judgment testing; the judgment training comprises sub-image feature extraction, comprehensive prediction, and comparison convergence;
the construction of the sample judgment model comprises sample diagnosis based on a single visual field image; this sample diagnosis comprises sample diagnosis training and sample diagnosis testing; the sample diagnosis training comprises sequence composition, state integration, and comparison convergence.
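The three-stage pipeline outlined above can be sketched end to end as follows. This is a minimal illustration with stubbed-out stages; all function names, thresholds, and data shapes are placeholders and are not taken from the patent.

```python
def detect_abnormal_cells(field):
    # Stage 1 stub: pretend detection returns this field's candidate boxes
    return field["boxes"]

def judge_field(boxes):
    # Stage 2 stub: a field is judged positive when its best box is confident enough
    conf = max((b["conf"] for b in boxes), default=0.0)
    return {"positive": conf > 0.5, "conf": conf}

def judge_sample(field_results, top_k=10):
    # Stage 3: rank fields by positive confidence and integrate the top-k verdicts
    top = sorted(field_results, key=lambda r: r["conf"], reverse=True)[:top_k]
    return any(r["positive"] for r in top)

fields = [{"boxes": [{"conf": c}]} for c in (0.2, 0.9, 0.4)]
results = [judge_field(detect_abnormal_cells(f)) for f in fields]
print(judge_sample(results))  # True: one field is confidently positive
```

Each later stage consumes only the previous stage's output, which mirrors the "layer by layer" control the patent describes.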
As a preferred technical solution, the step of extracting candidate frames in the detection training includes: a visual field map of the cell image is input, and the detection network extracts candidate frames according to the generation and modification principles.
As a preferred technical solution, the step of classification and positioning in the detection training includes: extracting features corresponding to the pathological changes in the candidate frame based on the learning degree of the current detection network, obtaining a classification result of the candidate frame through feature selection and feature analysis, and adjusting the position of the candidate frame to obtain final positioning.
As a preferred technical solution, the step of extracting the characteristics of the view map in the optimization training includes: and inputting a visual field diagram of the detection frame generated in the first step, and extracting features related to true positive and false positive in the visual field diagram based on the learning degree of the current detection network.
As a preferred technical solution, the step of extracting the sub-graph features in the judgment training is as follows: and inputting the detection result in the abnormal cell detection model into a detection network as a view map, extracting a detection frame in the detection network as a sub-view map, and extracting the feature corresponding to the lesion in each sub-view map based on the learning degree of the current detection network.
As a preferred technical solution, the step of comprehensively predicting in the judgment training comprises: and comprehensively summarizing the features in the sub-view map into feature information, taking the feature information as the feature information of the whole view map, performing convolution, pooling and activation operations, inputting the feature information into a full-connection classification network, mapping the original pixel information of the picture into corresponding feature information, further mapping the feature information into classification information, and obtaining the judgment result of the whole view map.
As a preferred technical solution, the step of judging and testing is: and inputting a detection result output from the abnormal cell detection model as a visual field diagram into the trained visual field diagram judgment network, extracting a detection frame therein as a sub-visual field diagram, acquiring and summarizing features in the sub-visual field diagram, taking the integrated features as the features of the whole visual field diagram, performing convolution, pooling and activation operations, and inputting the features into the full-connection classification network to obtain the final judgment of the visual field diagram.
As a preferred technical solution, the sequence composing step in the sample diagnosis training comprises: and (4) arranging the visual field images according to the positive confidence coefficient according to the judgment result of the visual field image judgment model, and selecting the first 10 visual field images as a group of sequences.
As a preferred technical solution, the state integration in the sample diagnosis training comprises the following steps: and taking the obtained characteristics of the whole view field image in the view field image judgment model as representative image characteristics of the view field image, sequentially inputting one sequence in a group of sequences, combining the output of the previous position and the input of the current position together, using the combined output as the input of the RNN model of the current position, obtaining the output of the current position through convolution, pooling and activation operations of the RNN, continuing to the last position, obtaining the output of the last position, obtaining classification information through a full connection layer, and outputting a sample judgment result.
As a preferred technical solution, the steps of the sample judgment test are: input the integrated features of the 10 highest-confidence visual field images in a sample into the trained sample judgment network; run them through the RNN model, obtaining each position's output through the RNN's convolution, pooling, and activation operations up to the last position; and pass the result through a fully connected layer to obtain the judgment result of the current sample.
Beneficial effects: the construction of the detection and classification model of pathological squamous epithelial cells comprises the construction of an abnormal cell detection model, a visual field image judgment model, and a sample judgment model. The three models interlock and progress layer by layer: each performs optimized detection, detailed re-diagnosis, integrated re-diagnosis, and the like on the results of the preceding model, so that the diagnosis result is subject to multiple controls overall, its accuracy is ensured, and a complete sample diagnosis method is obtained.
Drawings
To further illustrate the beneficial effects of the construction of the pathological squamous epithelial cell detection and classification model provided in the present invention, the accompanying drawings are provided. Note that the drawings are only selected examples and are not intended to limit the claims; all other corresponding figures derivable from the drawings provided in this application fall within its scope.
FIG. 1 is a schematic flow chart of the abnormal cell detection model of the present invention.
FIG. 2 is a schematic flow chart of a visual field map determination model according to the present invention.
FIG. 3 is a schematic flow chart of a sample judgment model according to the present invention.
Detailed Description
The invention will be further illustrated with reference to the following specific examples. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Further, it should be understood that various changes or modifications of the present invention may be made by those skilled in the art after reading the teaching of the present invention, and such equivalents may fall within the scope of the present invention as defined in the appended claims.
Unless defined otherwise, all terms (including technical and scientific terms) used in disclosing the invention have the meaning commonly understood by one of ordinary skill in the art to which this invention belongs. By way of further guidance, definitions of terms are included to better understand the teachings of the present invention.
In order to solve the problems, the invention provides a method for constructing a detection classification model of pathological squamous epithelial cells, which comprises the steps of constructing an abnormal cell detection model, constructing a visual field diagram judgment model and constructing a sample judgment model;
the construction of the abnormal cell detection model comprises the following steps:
the first step is as follows: detecting suspected diseased cells;
the second step is that: and optimizing the detection result.
Construction of abnormal cell detection model
As shown in fig. 1, in the abnormal cell detection model a microscope image of cells is input into the detection network as a visual field image; through abnormal cell detection and abnormal cell refinement, the visual field image is judged and the abnormal cell detection result is obtained. Abnormal cell detection is the first step of the model construction, 'detecting suspected diseased cells', and is intended to locate and classify lesion-related features in the visual field image; abnormal cell refinement is the second step, 'optimizing the detection result', which optimizes the detections from the first step, confirming true positives and reducing false positives.
The first step: detecting cells suspected of being diseased
The step of detecting suspected diseased cells is based on the Faster R-CNN deep learning framework, using annotation boxes marked by professional physicians as supervision for detecting abnormal cells.
In some embodiments, the step of detecting suspected diseased cells comprises detection training and detection testing.
In some embodiments, the detection training includes extracting candidate boxes, class positioning, and reward and punishment convergence.
In some embodiments, the step of extracting the candidate box is: a visual field map of the cell image is input, and the detection network extracts candidate frames according to the generation and modification principles.
In some embodiments, the field of view map is 1024 x 1024 in size.
In some embodiments, the generation and modification rules include the scale and size of the candidate boxes.
In some embodiments, the generation principle defines an anchor as a pixel on the last feature map of the pre-trained network's convolutional layers; k candidate boxes are generated at each anchor, each corresponding to one combination of scale and aspect ratio.
In some embodiments, the generation principle uses 3 scales (128, 256, 512) and 3 aspect ratios (1:2, 1:1, 2:1).
Each anchor position therefore yields 9 candidate boxes according to the generation principle described above.
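The anchor scheme above (3 scales × 3 aspect ratios = 9 boxes per anchor position) can be sketched as follows. Treating the ratio as height/width and keeping each box's area near scale² is a common convention in Faster R-CNN-style implementations; the patent does not spell this detail out, so it is an assumption.

```python
def make_anchors(cx, cy, scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Generate one (x1, y1, x2, y2) candidate box per scale/ratio pair,
    centred on the anchor position (cx, cy); ratio is height/width."""
    boxes = []
    for s in scales:
        for r in ratios:
            w = s / r ** 0.5   # shrink width as the ratio grows...
            h = s * r ** 0.5   # ...so the box area stays close to s * s
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes

anchors = make_anchors(512.0, 512.0)
print(len(anchors))  # 9 candidate boxes at this anchor position
```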
In some embodiments, the modification principle uses annotation boxes previously marked by professional physicians to fine-tune and prune candidate boxes so that they meet the required size; finally, candidate boxes whose overlap exceeds a fixed threshold are merged by an overlap-based merging method, completing the modification of the candidate boxes.
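The overlap-based merging can be illustrated with a greedy, non-maximum-suppression-style pass. The exact merging procedure and the value of the fixed threshold are not given in the patent, so this is only a plausible sketch:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def merge_boxes(boxes, scores, thresh=0.5):
    """Greedy merge: keep the best-scoring box of each overlapping group."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in kept):
            kept.append(i)
    return [boxes[i] for i in kept]

candidates = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
merged = merge_boxes(candidates, [0.9, 0.8, 0.7])
print(merged)  # [(0, 0, 10, 10), (50, 50, 60, 60)]
```

The two heavily overlapping boxes collapse into the higher-scoring one, while the distant box survives untouched.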
In some embodiments, the step of categorizing the location is: extracting features corresponding to the pathological changes in the candidate frame based on the learning degree of the current detection network, obtaining a classification result of the candidate frame through feature selection and feature analysis, and adjusting the position of the candidate frame to obtain final positioning.
In some embodiments, the feature selection and feature analysis comprises convolution, pooling, activation; the convolution parameters are 3 × 256, 3 × 512, 3 × 1024; the pooling adopts a maximum pooling method; the activation employs the Relu function.
The most important part of the convolutional neural network is called the filter or kernel. The filter can convert a sub-node matrix on the current layer of neural network into a unit node matrix on the next layer of neural network. The unit node matrix refers to a node matrix with length and width of 1, but without limitation to depth. The length and width of the node matrix processed by the filter are manually specified, the size of the node matrix is also called the size of the filter, and the common sizes of the filter are 3 × 3 and 5 × 5. Because the depth of the filter process is consistent with the depth of the current layer neural network node matrix, although the node matrix is three-dimensional, the size of the filter only needs to specify two dimensions. Another setting in the filter that needs to be manually specified is the depth of the resulting matrix of unit nodes, which is referred to as the depth of the filter. In summary, the size of a filter refers to the size of the input node matrix of a filter, and the depth refers to the depth of the output unit node matrix. In the convolutional neural network, the parameters in the filter used by each convolutional layer are the same, and the shared filter parameters can prevent the content on the image from being influenced by the position.
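The filter behaviour described above — one shared kernel turning each patch of the input into a single output node — can be shown in a few lines. This is a generic illustration of parameter sharing, not the patent's network:

```python
import numpy as np

def conv2d_single(image, kernel):
    """Valid convolution of one 2-D image with one shared 2-D kernel.
    The same kernel weights are applied at every position, which is the
    parameter sharing described above."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            # each output node is a weighted sum of one kh x kw patch
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
k = np.ones((3, 3)) / 9.0            # a 3x3 averaging filter
print(conv2d_single(img, k).shape)   # (3, 3)
```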
The pooling layer is added between the convolution layers, so that the size of the matrix can be effectively reduced, and further, the parameters in the final full-connection layer are reduced, and therefore, the pooling layer can not only increase the calculation speed, but also prevent overfitting. The computation in the pooling layer filter is not a weighted sum of nodes, but rather a simpler maximum or average computation. The pooling layer operating with the maximum value is referred to as the maximum pooling layer, and the pooling layer operating with the average value is referred to as the average pooling layer.
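A minimal sketch of the two pooling variants described above — both shrink the matrix with a fixed, weight-free block operation:

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    """Non-overlapping pooling: shrink the matrix by taking the max (or mean)
    of each size x size block; there are no parameters to learn."""
    h, w = x.shape[0] // size, x.shape[1] // size
    blocks = x[:h * size, :w * size].reshape(h, size, w, size)
    return blocks.max(axis=(1, 3)) if mode == "max" else blocks.mean(axis=(1, 3))

x = np.array([[1., 2., 5., 6.],
              [3., 4., 7., 8.],
              [9., 8., 1., 2.],
              [7., 6., 3., 4.]])
print(pool2d(x))                 # max pooling:     [[4. 8.] [9. 4.]]
print(pool2d(x, mode="mean"))    # average pooling: [[2.5 6.5] [7.5 2.5]]
```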
Each neuron node in the neural network receives the output value of the neuron at the previous layer as the input value of the neuron, and transmits the input value to the next layer, and the neuron node at the input layer can directly transmit the input attribute value to the next layer (hidden layer or output layer). In a multi-layer neural network, there is a functional relationship between the output of an upper node and the input of a lower node, and this function is called an activation function. At present, the mainstream neural network mainly adopts a sigmoid function or a tanh function, the output is bounded, and the output can be easily used as the input of the next layer. Relu functions and their modifications, such as Leaky-ReLU, P-ReLU, R-ReLU, etc., have been used in recent years.
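The activation functions mentioned above can be written directly:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # like ReLU, but keeps a small slope for negative inputs
    return np.where(x > 0, x, alpha * x)

def sigmoid(x):
    # bounded in (0, 1), so the output is easy to feed to the next layer
    return 1.0 / (1.0 + np.exp(-x))

v = np.array([-2.0, 0.0, 3.0])
print(relu(v))        # [0. 0. 3.]
print(leaky_relu(v))  # [-0.02  0.    3.  ]
print(sigmoid(0.0))   # 0.5
```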
In some embodiments, the step of reward punishment convergence is: and comparing the classification result obtained by the detection network with the information marked by the doctor, and modifying the network parameters through reward and punishment until the network has the best convergence effect, so that the detection training is completed.
In some embodiments, the optimal convergence effect is that the loss on the training set oscillates, gradually converges, and remains stable.
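One simple way to operationalise "the loss oscillates, converges, and remains stable" is a moving-average criterion. The window size and tolerance below are assumptions for illustration; the patent does not specify a stopping rule:

```python
def has_converged(losses, window=5, tol=1e-3):
    """True when the mean loss over the last two windows differs by less than tol."""
    if len(losses) < 2 * window:
        return False
    recent = sum(losses[-window:]) / window
    prev = sum(losses[-2 * window:-window]) / window
    return abs(recent - prev) < tol

# early oscillation followed by a flat tail
losses = [1.0, 0.6, 0.8, 0.5, 0.55] + [0.51] * 10
print(has_converged(losses))  # True: the tail has flattened out
```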
In some embodiments, the step of detecting the test is: inputting a visual field diagram into a trained abnormal cell detection network, and obtaining detection and classification results and position information of a detection frame through convolution, pooling and activation operations; the convolution parameters are 3 × 256, 3 × 512, 3 × 1024; the pooling adopts a maximum pooling method; the activation employs the Relu function.
The second step: optimizing the detection results
The suspected diseased cell detection in the first step yields a certain number of false positives. The detected abnormal cells are therefore re-examined using a DenseNet-based deep learning framework to distinguish true positives from false positives and reduce false-positive detections.
In some embodiments, the step of optimizing the detection results comprises optimization training and optimization testing.
In some embodiments, the optimization training includes extracting visual field map features, making predictions, and alignment convergence.
In some embodiments, the step of extracting the view map features is: and inputting a visual field diagram of the detection frame generated in the first step, and extracting features related to true positive and false positive in the visual field diagram based on the learning degree of the current detection network.
In some embodiments, the step of making a prediction is: inputting the extracted features into a detection network, and mapping the original pixel information of the picture into a classification result, namely prediction, through convolution, pooling and activation operations; the convolution parameters are 1 × 256, 1 × 512, 3 × 256, 3 × 512, 3 × 1024; the pooling adopts a maximum pooling method and an average pooling method; the activation employs the Relu function.
In some embodiments, the step of comparison convergence is: compare the prediction obtained by the detection network with the physician-annotated result; where they disagree, the model automatically modifies the mapping relation until the network converges optimally, completing the optimization training.
In some embodiments, the step of the optimization test is: input a visual field image containing suspected diseased cells from the detection network into the trained abnormal cell detection network, extract the corresponding feature information in the visual field image, and obtain the classification result of the optimized detection through convolution, pooling, and activation operations.
Construction of visual field image judgment model
As shown in fig. 2, in the view map determination model, the detection result in the abnormal cell detection model is input into the detection network as a view map, the detection frame therein is extracted as a sub-view map, the features in the sub-view map are acquired and collected, the integrated features are used as the features of the whole view map, and the view map is determined to be abnormal through the full-connection network.
In some embodiments, the constructing of the view map determination model includes determining for a single view map.
Determination of single view
On the basis of the detection model, each visual field image is examined again using the detected boxes to complete the final judgment of the visual field image.
In some embodiments, the judging of the single-view map includes judgment training and judgment testing.
In some embodiments, the decision training includes sub-graph feature extraction, comprehensive prediction, and alignment convergence.
In some embodiments, the step of extracting the sub-graph features is: and inputting the detection result in the abnormal cell detection model into a detection network as a view map, extracting a detection frame in the detection network as a sub-view map, and extracting the feature corresponding to the lesion in each sub-view map based on the learning degree of the current detection network.
In some embodiments, the number of the sub-view maps is not less than 5, and if the number of the sub-view maps is less than 5, the sub-view map with the highest confidence level is copied and supplemented to 5.
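The copy-and-pad rule for sub-view images can be sketched as follows; the `(confidence, features)` tuple layout is an assumed representation, not the patent's data format:

```python
def pad_subviews(subviews, minimum=5):
    """Sort (confidence, features) sub-views by confidence; if there are
    fewer than `minimum`, duplicate the most confident one to make up the
    count. Assumes at least one sub-view was detected."""
    views = sorted(subviews, key=lambda v: v[0], reverse=True)
    while len(views) < minimum:
        views.append(views[0])
    return views

padded = pad_subviews([(0.9, "feat_a"), (0.4, "feat_b")])
print(len(padded), padded[2])  # 5 (0.9, 'feat_a')
```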
In some embodiments, the step of comprehensively predicting comprises: and comprehensively summarizing the features in the sub-view map into feature information, taking the feature information as the feature information of the whole view map, performing convolution, pooling and activation operations, inputting the feature information into a full-connection classification network, mapping the original pixel information of the picture into corresponding feature information, further mapping the feature information into classification information, and obtaining the judgment result of the whole view map.
Full connection in this application means that, in a fully connected neural network, the nodes of adjacent layers are all connected by edges; the fully connected layer integrates the extracted features and can consolidate the class-discriminative local information from the convolutional and pooling layers.
In some embodiments, the full connection is split into two layers, one layer being 256 nodes to 4096 nodes and a second layer being 4096 nodes to 2 nodes.
In some embodiments, the method used to synthesize the features in the sub-views is maximal pooling.
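Combining the two embodiments above, sub-view features can be integrated by element-wise max pooling and then classified by the 256 → 4096 → 2 fully connected layers. The random weights below are placeholders for trained parameters, and the 0/1 label meaning is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate(subview_feats):
    """Element-wise max over sub-view feature vectors ('maximal pooling'),
    yielding one 256-d feature for the whole visual field image."""
    return np.max(np.stack(subview_feats), axis=0)

def fc_classify(feat, w1, b1, w2, b2):
    """Two fully connected layers, 256 -> 4096 -> 2, with ReLU in between;
    e.g. 0 = normal, 1 = abnormal (assumed label order)."""
    hidden = np.maximum(0.0, feat @ w1 + b1)
    logits = hidden @ w2 + b2
    return int(np.argmax(logits))

subview_feats = [rng.standard_normal(256) for _ in range(5)]  # five sub-views
w1, b1 = rng.standard_normal((256, 4096)) * 0.01, np.zeros(4096)
w2, b2 = rng.standard_normal((4096, 2)) * 0.01, np.zeros(2)
field_feat = aggregate(subview_feats)
print(field_feat.shape, fc_classify(field_feat, w1, b1, w2, b2))
```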
In some embodiments, the step of alignment convergence is the same as the step of alignment convergence in the abnormal cell detection model.
In some embodiments, the step of determining the test is: and inputting a detection result output from the abnormal cell detection model as a visual field diagram into the trained visual field diagram judgment network, extracting a detection frame therein as a sub-visual field diagram, acquiring and summarizing features in the sub-visual field diagram, taking the integrated features as the features of the whole visual field diagram, performing convolution, pooling and activation operations, and inputting the features into the full-connection classification network to obtain the final judgment of the visual field diagram.
Sample judgment model
As shown in fig. 3, the sample judgment model selects the 10 visual field images with the highest positive confidence, inputs the integrated feature information obtained from the visual field image judgment model into the network, and integrates their states in order from high to low confidence using an RNN-based deep learning framework to complete diagnosis at the sample level.
In some embodiments, the sample assessment model comprises a sample diagnosis based on a single-field view map.
Sample diagnosis based on single view map
On the basis of the visual field image judgment model, the integrated feature information of the higher-confidence visual field images is synthesized in sequence to complete the sample diagnosis.
In some embodiments, the single-view based sample diagnosis includes sample diagnosis training and sample diagnosis testing.
In some embodiments, the sample diagnosis training comprises sequence composition, state integration, and comparison convergence.
In some embodiments, the step of composing the sequence is: and (4) arranging the visual field images according to the positive confidence coefficient according to the judgment result of the visual field image judgment model, and selecting the first 10 visual field images as a group of sequences.
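The sequence-composition step can be sketched directly; the field records and the `conf` key are assumed representations of the judgment model's per-image results:

```python
def compose_sequence(field_results, k=10):
    """Rank visual field images by positive confidence and keep the top k."""
    ranked = sorted(field_results, key=lambda r: r["conf"], reverse=True)
    return ranked[:k]

fields = [{"id": i, "conf": c} for i, c in enumerate(
    [0.31, 0.95, 0.12, 0.88, 0.76, 0.54, 0.99, 0.05, 0.67, 0.43, 0.81, 0.22])]
sequence = compose_sequence(fields)
print(len(sequence), sequence[0]["conf"])  # 10 0.99
```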
In some embodiments, the step of state integration is: take the features of the whole visual field image obtained in the visual field image judgment model as that image's representative features; input one sequence from a group of sequences in order; at each position, combine the previous position's output with the current input and feed the combination into the RNN model, obtaining the current position's output through the RNN's convolution, pooling, and activation operations; continue to the last position and take its output; then obtain classification information through the fully connected layers and output the sample judgment result. The convolution parameters are 1 × 256, 1 × 512, 3 × 256, 3 × 512, 3 × 1024; the pooling adopts the maximum pooling method; the activation adopts the sigmoid function; the full connection is divided into two layers, the first from 256 nodes to 1024 nodes and the second from 1024 nodes to 2 nodes.
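The state-integration loop can be sketched as a plain recurrent cell that folds the previous position's output into the current input. The hidden size and random weights below are illustrative placeholders, and the per-step convolution/pooling details are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rnn_integrate(sequence, w_in, w_rec, b):
    """Fold a sequence of field-image features into one state: each step
    combines the previous position's output with the current input."""
    state = np.zeros(w_rec.shape[0])
    for feat in sequence:        # highest-confidence field first
        state = sigmoid(feat @ w_in + state @ w_rec + b)
    return state                 # output of the last position

dim, hidden = 256, 128           # hidden size is an illustrative choice
sequence = [rng.standard_normal(dim) for _ in range(10)]  # top-10 field features
w_in = rng.standard_normal((dim, hidden)) * 0.05
w_rec = rng.standard_normal((hidden, hidden)) * 0.05
b = np.zeros(hidden)
final = rnn_integrate(sequence, w_in, w_rec, b)
print(final.shape)  # the last-position state, fed to the FC layers for the verdict
```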
In some embodiments, the step of alignment convergence is the same as the step of alignment convergence in the abnormal cell detection model.
In some embodiments, the steps of the sample judgment test are: input the integrated features of the 10 highest-confidence visual field images in a sample into the trained sample judgment network; run them through the RNN model, obtaining each position's output through the RNN's convolution, pooling, and activation operations up to the last position; and pass the result through a fully connected layer to obtain the judgment result of the current sample.
The detection and classification model of pathological squamous epithelial cells comprises an abnormal cell detection model, a visual field image judgment model, and a sample judgment model. The three models interlock and progress layer by layer: each performs optimized detection, detailed re-diagnosis, integrated re-diagnosis, and the like on the results of the preceding model, subjecting the diagnosis result to multiple controls overall, ensuring its accuracy, and yielding a complete sample diagnosis method.
Finally, it should be understood that the above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. The construction of a detection classification model of pathological squamous epithelial cells, characterized by comprising the construction of an abnormal cell detection model, the construction of a visual field image judgment model, and the construction of a sample judgment model;
the construction of the abnormal cell detection model comprises the following steps:
the first step: detecting suspected diseased cells, wherein the detection training comprises candidate frame extraction, classification positioning, and reward-and-punishment convergence;
the second step: optimizing the detection result, comprising optimization training and optimization testing, wherein the optimization training comprises visual field image feature extraction, prediction, and comparison convergence;
the construction of the visual field image judgment model comprises judgment of a single visual field image; the judgment of a single visual field image comprises judgment training and judgment testing; the judgment training comprises sub-image feature extraction, comprehensive prediction, and comparison convergence;
the construction of the sample judgment model comprises sample diagnosis based on a single visual field image; the sample diagnosis based on a single visual field image comprises sample diagnosis training and sample diagnosis testing; the sample diagnosis training comprises sequence composition, state integration, and alignment convergence.
2. The method for constructing the detection classification model of pathological squamous epithelial cells according to claim 1, wherein the step of candidate frame extraction in the detection training is as follows: a visual field image of the cell image is input, and the detection network extracts candidate frames according to generation and modification principles.
3. The method for constructing the detection classification model of pathological squamous epithelial cells according to claim 1, wherein the step of classification positioning in the detection training is as follows: features corresponding to lesions in the candidate frames are extracted based on the learning degree of the current detection network, a classification result for each candidate frame is obtained through feature selection and feature analysis, and the position of the candidate frame is adjusted to obtain the final positioning.
4. The method for constructing the detection classification model of pathological squamous epithelial cells according to claim 1, wherein the step of visual field image feature extraction in the optimization training is as follows: a visual field image containing the detection frames generated in the first step is input, and features related to true positives and false positives in the visual field image are extracted based on the learning degree of the current detection network.
5. The method for constructing the detection classification model of pathological squamous epithelial cells according to claim 1, wherein the step of sub-image feature extraction in the judgment training is as follows: the detection result of the abnormal cell detection model is input into the detection network as a visual field image, the detection frames therein are extracted as sub-visual-field images, and the features corresponding to lesions in each sub-visual-field image are extracted based on the learning degree of the current detection network.
6. The method for constructing the detection classification model of pathological squamous epithelial cells according to claim 1, wherein the step of comprehensive prediction in the judgment training is as follows: the features of the sub-visual-field images are comprehensively summarized into feature information, which is taken as the feature information of the whole visual field image; after convolution, pooling and activation operations, it is input into a fully connected classification network, which maps the original pixel information of the picture into corresponding feature information and further maps that feature information into classification information, yielding the judgment result for the whole visual field image.
7. The method for constructing the detection classification model of pathological squamous epithelial cells according to claim 1, wherein the step of the judgment test is as follows: a detection result output by the abnormal cell detection model is input as a visual field image into the trained visual field image judgment network, the detection frames therein are extracted as sub-visual-field images, the features of the sub-visual-field images are acquired and summarized, and the integrated features, taken as the features of the whole visual field image, undergo convolution, pooling and activation operations and are input into the fully connected classification network to obtain the final judgment of the visual field image.
8. The method for constructing the detection classification model of pathological squamous epithelial cells according to claim 1, wherein the step of sequence composition in the sample diagnosis training is as follows: according to the judgment results of the visual field image judgment model, the visual field images are ordered by positive confidence, and the first 10 visual field images are selected as a group of sequences.
9. The method for constructing the detection classification model of pathological squamous epithelial cells according to claim 1, wherein the step of state integration in the sample diagnosis training is as follows: the features of each whole visual field image obtained in the visual field image judgment model are taken as the representative image features of that visual field image, and one sequence from the group of sequences is input position by position; at each position, the output of the previous position is combined with the input of the current position and used as the input of the RNN model at the current position, and the output of the current position is obtained through the convolution, pooling and activation operations of the RNN; this continues until the output of the last position is obtained, after which classification information is obtained through a full connection layer and the sample judgment result is output.
10. The method for constructing the detection classification model of pathological squamous epithelial cells according to claim 1, wherein the step of the sample judgment test is as follows: the integrated features of the 10 highest-confidence visual field images in the sample are input into the trained sample judgment network; the RNN model performs convolution, pooling and activation operations to obtain the output of each position in turn until the output of the last position, which is passed through a full connection layer to obtain the judgment result for the current sample.
CN201911108184.2A 2019-11-13 2019-11-13 Construction of detection classification model of pathological squamous epithelial cells Active CN110853021B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911108184.2A CN110853021B (en) 2019-11-13 2019-11-13 Construction of detection classification model of pathological squamous epithelial cells

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911108184.2A CN110853021B (en) 2019-11-13 2019-11-13 Construction of detection classification model of pathological squamous epithelial cells

Publications (2)

Publication Number Publication Date
CN110853021A true CN110853021A (en) 2020-02-28
CN110853021B CN110853021B (en) 2020-11-24

Family

ID=69600197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911108184.2A Active CN110853021B (en) 2019-11-13 2019-11-13 Construction of detection classification model of pathological squamous epithelial cells

Country Status (1)

Country Link
CN (1) CN110853021B (en)



Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120155725A1 (en) * 2010-12-16 2012-06-21 Massachusetts Institute Of Technology Bayesian Inference of Particle Motion and Dynamics from Single Particle Tracking and Fluorescence Correlation Spectroscopy
US9965891B2 (en) * 2014-04-16 2018-05-08 Heartflow, Inc. Systems and methods for image-based object modeling using multiple image acquisitions or reconstructions
CN106845717A (en) * 2017-01-24 2017-06-13 哈尔滨工业大学 A kind of energy efficiency evaluation method based on multi-model convergence strategy
CN110349156A (en) * 2017-11-30 2019-10-18 腾讯科技(深圳)有限公司 The recognition methods of lesion characteristics and device, storage medium in the picture of eyeground
US20190205760A1 (en) * 2017-12-31 2019-07-04 Definiens Ag Using a First Stain to Train a Model to Predict the Region Stained by a Second Stain
CN108346145A (en) * 2018-01-31 2018-07-31 浙江大学 The recognition methods of unconventional cell in a kind of pathological section
CN109034221A (en) * 2018-07-13 2018-12-18 马丁 A kind of processing method and its device of cervical cytology characteristics of image
CN109614869A (en) * 2018-11-10 2019-04-12 天津大学 A kind of pathological image classification method based on multi-scale compress rewards and punishments network
CN109544534A (en) * 2018-11-26 2019-03-29 上海联影智能医疗科技有限公司 A kind of lesion image detection device, method and computer readable storage medium
CN110021425A (en) * 2019-01-31 2019-07-16 湖南品信生物工程有限公司 A kind of relatively detector and its construction method and cervical cancer cell detection method
CN110334565A (en) * 2019-03-21 2019-10-15 江苏迪赛特医疗科技有限公司 A kind of uterine neck neoplastic lesions categorizing system of microscope pathological photograph
CN110070173A (en) * 2019-03-26 2019-07-30 山东女子学院 A kind of deep neural network dividing method based on sub-pieces in length and breadth
CN110211108A (en) * 2019-05-29 2019-09-06 武汉兰丁医学高科技有限公司 A kind of novel abnormal cervical cells automatic identifying method based on Feulgen colouring method
CN110335248A (en) * 2019-05-31 2019-10-15 上海联影智能医疗科技有限公司 Medical image lesion detection method, device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU Yi: "Histopathological Image Analysis Based on Deep Learning", China Master's Theses Full-text Database (electronic journal) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113345286A (en) * 2021-08-03 2021-09-03 南京大经中医药信息技术有限公司 Teacher-and-bearing teaching system and method integrating AI technology and video technology
WO2023034301A1 (en) * 2021-09-01 2023-03-09 Emed Labs, Llc Image processing and presentation techniques for enhanced proctoring sessions
CN115205793A (en) * 2022-09-15 2022-10-18 广东电网有限责任公司肇庆供电局 Electric power machine room smoke detection method and device based on deep learning secondary confirmation
CN115205793B (en) * 2022-09-15 2023-01-24 广东电网有限责任公司肇庆供电局 Electric power machine room smoke detection method and device based on deep learning secondary confirmation

Also Published As

Publication number Publication date
CN110853021B (en) 2020-11-24

Similar Documents

Publication Publication Date Title
CN109145939B (en) Semantic segmentation method for small-target sensitive dual-channel convolutional neural network
CN108305249B (en) Rapid diagnosis and scoring method of full-scale pathological section based on deep learning
CN110853021B (en) Construction of detection classification model of pathological squamous epithelial cells
CN108830209B (en) Remote sensing image road extraction method based on generation countermeasure network
CN111488921B (en) Intelligent analysis system and method for panoramic digital pathological image
CN108830326B (en) Automatic segmentation method and device for MRI (magnetic resonance imaging) image
CN110021425B (en) Comparison detector, construction method thereof and cervical cancer cell detection method
CN111783590A (en) Multi-class small target detection method based on metric learning
CN109285139A (en) A kind of x-ray imaging weld inspection method based on deep learning
CN113378796B (en) Cervical cell full-section classification method based on context modeling
CN109871875B (en) Building change detection method based on deep learning
CN109508360A (en) A kind of polynary flow data space-time autocorrelation analysis method of geography based on cellular automata
CN111723780A (en) Directional migration method and system of cross-domain data based on high-resolution remote sensing image
CN110334565A (en) A kind of uterine neck neoplastic lesions categorizing system of microscope pathological photograph
CN109711426A (en) A kind of pathological picture sorter and method based on GAN and transfer learning
CN110738132B (en) Target detection quality blind evaluation method with discriminant perception capability
CN105427309A (en) Multiscale hierarchical processing method for extracting object-oriented high-spatial resolution remote sensing information
CN108629369A (en) A kind of Visible Urine Sediment Components automatic identifying method based on Trimmed SSD
CN108446616B (en) Road extraction method based on full convolution neural network ensemble learning
CN109472801A (en) It is a kind of for multiple dimensioned neuromorphic detection and dividing method
CN113052228A (en) Liver cancer pathological section classification method based on SE-Incepton
CN114510594A (en) Traditional pattern subgraph retrieval method based on self-attention mechanism
CN111599444A (en) Intelligent tongue diagnosis detection method and device, intelligent terminal and storage medium
CN114445356A (en) Multi-resolution-based full-field pathological section image tumor rapid positioning method
CN104978569A (en) Sparse representation based incremental face recognition method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Li Wenyong

Inventor after: Zhang Ligeng

Inventor after: Chen Wei

Inventor after: Shi Hong Hong

Inventor after: Wang Peng

Inventor after: Yin Yajuan

Inventor after: Tao Junzhi

Inventor before: Li Wenyong

Inventor before: Zhang Lichi

Inventor before: Chen Wei

Inventor before: Shi Hong Hong

Inventor before: Wang Peng

Inventor before: Yin Yajuan

Inventor before: Tao Junzhi

CB03 Change of inventor or designer information
TR01 Transfer of patent right

Effective date of registration: 20240528

Address after: Room 101-1F, Room 101-2F and Room 101-3F, Building 3, No. 168 Shengpu Road, Suzhou Industrial Park, Jiangsu Province, 215000

Patentee after: Suzhou Dessert Pathological Diagnosis Center Co.,Ltd.

Country or region after: China

Address before: Room 402, Building G, No. 388 Ruoshui Road, Industrial Park, Suzhou City, Jiangsu Province, 215000

Patentee before: Jiangsu Disset Medical Technology Co.,Ltd.

Country or region before: China

TR01 Transfer of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Construction of a detection and classification model for pathological squamous epithelial cells

Granted publication date: 20201124

Pledgee: Zheshang Bank Co.,Ltd. Suzhou Branch

Pledgor: Suzhou Dessert Pathological Diagnosis Center Co.,Ltd.

Registration number: Y2024990000211

PE01 Entry into force of the registration of the contract for pledge of patent right