CN110853021B - Construction of detection classification model of pathological squamous epithelial cells - Google Patents

Construction of detection classification model of pathological squamous epithelial cells

Info

Publication number
CN110853021B
Authority
CN
China
Prior art keywords
detection
visual field
model
judgment
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911108184.2A
Other languages
Chinese (zh)
Other versions
CN110853021A (en)
Inventor
李文勇
张立篪
陈巍
蹇秀红
王鹏
殷亚娟
陶军之
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Disset Medical Technology Co ltd
Original Assignee
Jiangsu Disset Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Disset Medical Technology Co ltd
Priority to CN201911108184.2A
Publication of CN110853021A
Application granted
Publication of CN110853021B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G06T7/0012 — Image analysis; inspection of images; biomedical image inspection
    • G06F18/241 — Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06T2207/20081 — Image analysis indexing scheme; training; learning
    • G06T2207/20084 — Image analysis indexing scheme; artificial neural networks [ANN]
    • G06T2207/30024 — Biomedical image processing; cell structures in vitro; tissue sections in vitro
    • G06T2207/30096 — Biomedical image processing; tumor; lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses the construction of a detection classification model of pathological squamous epithelial cells, which comprises the construction of an abnormal cell detection model, the construction of a visual field image judgment model and the construction of a sample judgment model. The three models are interlocked and progress layer by layer: each performs optimized detection, detailed re-diagnosis, integrated re-diagnosis and the like on the basis of the previous model's detection, so that multiple controls are applied to the diagnosis result as a whole, the accuracy of the diagnosis result is ensured, and a complete sample diagnosis method is obtained.

Description

Construction of detection classification model of pathological squamous epithelial cells
Technical Field
The invention relates to the field of cell detection and classification, in particular to the construction of a detection and classification model of pathological squamous epithelial cells.
Background
Squamous epithelial cells mainly originate from the lower ureter, the bladder, the urethra, the vaginal surface and the uterus, whose surfaces are covered with squamous epithelium. Their growth and differentiation are mainly influenced by estrogen produced by the ovary, while progestogen promotes the shedding of epithelial cells. Squamous epithelial cell lesions are a common finding in cervical liquid-based cytology examinations and are a normal change of cervical cells; the possibility of a precancerous lesion is considered only when atypical squamous cell lesions appear. Detecting and classifying pathological squamous epithelial cells is therefore difficult: manual diagnosis requires extensive experience, still consumes a great deal of time, and is prone to misjudgment, so accurate screening cannot be achieved.
At present, computer-assisted methods for detecting squamous epithelial cells based on image features can improve diagnostic efficiency, but because existing models are constructed unreasonably, the diagnostic results cannot guarantee high accuracy, and a diagnosis method for the whole sample is lacking.
Disclosure of Invention
In order to solve the problems, the invention provides a method for constructing a detection classification model of pathological squamous epithelial cells, which comprises the steps of constructing an abnormal cell detection model, constructing a visual field diagram judgment model and constructing a sample judgment model;
the construction of the abnormal cell detection model comprises the following steps:
the first step is as follows: detecting suspected diseased cells, wherein the detection training comprises candidate frame extraction, classification positioning and reward and punishment convergence;
the second step is as follows: optimizing the detection result, including optimization training and optimization testing, wherein the optimization training includes extracting visual field map features, making predictions and comparing convergence;
the construction of the view map judgment model comprises the judgment of a single view map; the judgment on the single visual field image comprises judgment training and judgment testing; the judgment training comprises sub-image feature extraction, comprehensive prediction and comparison convergence;
the construction of the sample judgment model comprises sample diagnosis based on a single-view image; the sample diagnosis based on the single-view map comprises sample diagnosis training and sample diagnosis testing; the sample diagnosis training comprises composition sequence, state synthesis and alignment convergence.
As a preferred technical solution, the step of extracting candidate frames in the detection training includes: a visual field map of the cell image is input, and the detection network extracts candidate frames according to the generation and modification principles.
As a preferred technical solution, the step of classification and positioning in the detection training includes: extracting features corresponding to the pathological changes in the candidate frame based on the learning degree of the current detection network, obtaining a classification result of the candidate frame through feature selection and feature analysis, and adjusting the position of the candidate frame to obtain final positioning.
As a preferred technical solution, the step of extracting the characteristics of the view map in the optimization training includes: and inputting a visual field diagram of the detection frame generated in the first step, and extracting features related to true positive and false positive in the visual field diagram based on the learning degree of the current detection network.
As a preferred technical solution, the step of extracting the sub-graph features in the judgment training is as follows: and inputting the detection result in the abnormal cell detection model into a detection network as a view map, extracting a detection frame in the detection network as a sub-view map, and extracting the feature corresponding to the lesion in each sub-view map based on the learning degree of the current detection network.
As a preferred technical solution, the step of comprehensively predicting in the judgment training comprises: and comprehensively summarizing the features in the sub-view map into feature information, taking the feature information as the feature information of the whole view map, performing convolution, pooling and activation operations, inputting the feature information into a full-connection classification network, mapping the original pixel information of the picture into corresponding feature information, further mapping the feature information into classification information, and obtaining the judgment result of the whole view map.
As a preferred technical solution, the step of judging and testing is: and inputting a detection result output from the abnormal cell detection model as a visual field diagram into the trained visual field diagram judgment network, extracting a detection frame therein as a sub-visual field diagram, acquiring and summarizing features in the sub-visual field diagram, taking the integrated features as the features of the whole visual field diagram, performing convolution, pooling and activation operations, and inputting the features into the full-connection classification network to obtain the final judgment of the visual field diagram.
As a preferred technical solution, the step of composing the sequence in the sample diagnosis training is: arranging the visual field images by positive confidence according to the judgment result of the visual field image judgment model, and selecting the first 10 visual field images as a group of sequences.
As a preferred technical solution, the state integration in the sample diagnosis training comprises the following steps: and taking the obtained characteristics of the whole view field image in the view field image judgment model as representative image characteristics of the view field image, sequentially inputting one sequence in a group of sequences, combining the output of the previous position and the input of the current position together, using the combined output as the input of the RNN model of the current position, obtaining the output of the current position through convolution, pooling and activation operations of the RNN, continuing to the last position, obtaining the output of the last position, obtaining classification information through a full connection layer, and outputting a sample judgment result.
As a preferred technical solution, the sample judgment test comprises the following steps: inputting the integrated features of the 10 highest-confidence visual field images in the sample into the trained sample judgment network, running them through the RNN model, obtaining the output of each position through the RNN's convolution, pooling and activation operations until the output of the last position is obtained, and passing it through a fully connected layer to obtain the judgment result of the current sample.
Beneficial effects: the construction of the detection classification model of pathological squamous epithelial cells comprises the construction of an abnormal cell detection model, a visual field image judgment model and a sample judgment model; the three models are interlocked and progress layer by layer, each performing optimized detection, detailed re-diagnosis, integrated re-diagnosis and the like on the basis of the previous model's detection, so that multiple controls are applied to the diagnosis result as a whole, the accuracy of the diagnosis result is ensured, and a complete sample diagnosis method is obtained.
Drawings
To further illustrate the beneficial effects of the construction of the detection classification model of pathological squamous epithelial cells provided by the present invention, the accompanying drawings are provided. It should be noted that the drawings provided in the present invention are only selected examples and are not intended to limit the claims; all other corresponding figures derived from the drawings provided in this application should be considered within the scope of this application.
FIG. 1 is a schematic flow chart of the abnormal cell detection model of the present invention.
FIG. 2 is a schematic flow chart of a visual field map determination model according to the present invention.
FIG. 3 is a schematic flow chart of a sample judgment model according to the present invention.
Detailed Description
The invention will be further illustrated with reference to the following specific examples. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Further, it should be understood that various changes or modifications of the present invention may be made by those skilled in the art after reading the teaching of the present invention, and such equivalents may fall within the scope of the present invention as defined in the appended claims.
Unless defined otherwise, all terms (including technical and scientific terms) used in disclosing the invention have the meaning commonly understood by one of ordinary skill in the art to which this invention belongs. By way of further guidance, definitions of terms are included to better understand the teachings of the present invention.
In order to solve the problems, the invention provides a method for constructing a detection classification model of pathological squamous epithelial cells, which comprises the steps of constructing an abnormal cell detection model, constructing a visual field diagram judgment model and constructing a sample judgment model;
the construction of the abnormal cell detection model comprises the following steps:
the first step is as follows: detecting suspected diseased cells;
the second step is that: and optimizing the detection result.
Construction of abnormal cell detection model
As shown in Fig. 1, in the abnormal cell detection model, a microscope image of cells is input into the detection network as a visual field image, and the visual field image is processed through abnormal cell detection and abnormal cell refinement to obtain the abnormal cell detection result. Abnormal cell detection is the first step of the model construction, "detecting suspected diseased cells", and is intended to locate and classify lesion-related features in the visual field image; abnormal cell refinement is the second step, "optimizing the detection result", which optimizes the first-step detection results, identifies true positives and reduces false positives.
The first step is as follows: detecting cells suspected of being diseased
The step of detecting suspected diseased cells is based on the Faster R-CNN deep learning framework, and abnormal cells are detected using the annotation frames marked by professional doctors as detection information.
In some embodiments, the step of detecting suspected diseased cells comprises detection training and detection testing.
In some embodiments, the detection training includes extracting candidate boxes, class positioning, and reward and punishment convergence.
In some embodiments, the step of extracting the candidate box is: a visual field map of the cell image is input, and the detection network extracts candidate frames according to the generation and modification principles.
In some embodiments, the field of view map is 1024 x 1024 in size.
In some embodiments, the generation and modification rules include the scale and size of the candidate boxes.
In some embodiments, the generating rule is to define an anchor as a pixel on the last layer view map of the pre-trained network convolution layer, and k candidate boxes can be generated, wherein each candidate box corresponds to a set of scaling and aspect ratio.
In some embodiments, the generation principle uses 3 scales, i.e., 128, 256 and 512, and 3 aspect ratios, i.e., 1:2, 1:1 and 2:1.
The location of each anchor therefore yields 9 candidate boxes according to the generation principle described above.
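A minimal sketch of this anchor generation scheme is given below (Python/NumPy). It is illustrative only: the function name and the convention that the aspect ratio is width divided by height, with the anchor area held near the square of the scale, are assumptions and not taken from the patent.

    import numpy as np

    def generate_anchor_shapes(scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
        """Return the k = 3 scales x 3 ratios = 9 anchor shapes (width, height)
        used at every anchor position on the feature map."""
        shapes = []
        for s in scales:
            for r in ratios:
                w = s * np.sqrt(r)   # keep the area close to s*s while varying w/h
                h = s / np.sqrt(r)
                shapes.append((w, h))
        return np.array(shapes)

    print(generate_anchor_shapes().shape)  # (9, 2) -> 9 candidate boxes per anchor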
In some embodiments, the modification principle is to use the mark frames labeled in advance by professional doctors to fine-tune and prune the candidate frames so that they meet the required size, and finally to merge candidate frames whose overlap exceeds a fixed threshold using an overlap-based merging method, completing the modification of the candidate frames.
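The overlap-based merging described above can be sketched as a standard non-maximum-suppression pass, as below. The 0.7 threshold and the greedy keep-highest-score strategy are assumptions, since the patent only specifies "a certain fixed threshold".

    import numpy as np

    def box_iou(box, boxes):
        """Intersection-over-union of one (x1, y1, x2, y2) box against many."""
        x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
        x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_a = (box[2] - box[0]) * (box[3] - box[1])
        area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        return inter / (area_a + area_b - inter)

    def merge_by_overlap(boxes, scores, threshold=0.7):
        """Keep the highest-scoring box and drop candidates whose overlap with it
        exceeds the fixed threshold; repeat for the remaining boxes."""
        order = np.argsort(scores)[::-1]
        keep = []
        while order.size > 0:
            best, rest = order[0], order[1:]
            keep.append(best)
            order = rest[box_iou(boxes[best], boxes[rest]) <= threshold]
        return boxes[keep], scores[keep]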
In some embodiments, the step of categorizing the location is: extracting features corresponding to the pathological changes in the candidate frame based on the learning degree of the current detection network, obtaining a classification result of the candidate frame through feature selection and feature analysis, and adjusting the position of the candidate frame to obtain final positioning.
In some embodiments, the feature selection and feature analysis comprises convolution, pooling, activation; the convolution parameters are 3 × 256, 3 × 512, 3 × 1024; the pooling adopts a maximum pooling method; the activation employs the Relu function.
The most important part of the convolutional neural network is called the filter or kernel. The filter can convert a sub-node matrix on the current layer of neural network into a unit node matrix on the next layer of neural network. The unit node matrix refers to a node matrix with length and width of 1, but without limitation to depth. The length and width of the node matrix processed by the filter are manually specified, the size of the node matrix is also called the size of the filter, and the common sizes of the filter are 3 × 3 and 5 × 5. Because the depth of the filter process is consistent with the depth of the current layer neural network node matrix, although the node matrix is three-dimensional, the size of the filter only needs to specify two dimensions. Another setting in the filter that needs to be manually specified is the depth of the resulting matrix of unit nodes, which is referred to as the depth of the filter. In summary, the size of a filter refers to the size of the input node matrix of a filter, and the depth refers to the depth of the output unit node matrix. In the convolutional neural network, the parameters in the filter used by each convolutional layer are the same, and the shared filter parameters can prevent the content on the image from being influenced by the position.
The pooling layer is added between the convolution layers, so that the size of the matrix can be effectively reduced, and further, the parameters in the final full-connection layer are reduced, and therefore, the pooling layer can not only increase the calculation speed, but also prevent overfitting. The computation in the pooling layer filter is not a weighted sum of nodes, but rather a simpler maximum or average computation. The pooling layer operating with the maximum value is referred to as the maximum pooling layer, and the pooling layer operating with the average value is referred to as the average pooling layer.
Each neuron node in the neural network receives the output value of the neuron at the previous layer as the input value of the neuron, and transmits the input value to the next layer, and the neuron node at the input layer can directly transmit the input attribute value to the next layer (hidden layer or output layer). In a multi-layer neural network, there is a functional relationship between the output of an upper node and the input of a lower node, and this function is called an activation function. At present, the mainstream neural network mainly adopts a sigmoid function or a tanh function, the output is bounded, and the output can be easily used as the input of the next layer. Relu functions and their modifications, such as Leaky-ReLU, P-ReLU, R-ReLU, etc., have been used in recent years.
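For concreteness, a minimal PyTorch sketch of such a convolution / max-pooling / ReLU feature extractor follows. Reading the quoted convolution parameters as 3x3 kernels with 256, 512 and 1024 output channels is an assumption, as are the layer ordering and the small dummy input size used here.

    import torch
    import torch.nn as nn

    class FeatureExtractor(nn.Module):
        """Stacked 3x3 convolutions with ReLU activations and max pooling,
        widening the channels 256 -> 512 -> 1024 as quoted in the text."""
        def __init__(self, in_channels=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 256, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
                nn.Conv2d(256, 512, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
                nn.Conv2d(512, 1024, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):
            return self.features(x)

    # A small dummy input stands in for a crop of the 1024 x 1024 visual field image.
    feats = FeatureExtractor()(torch.randn(1, 3, 256, 256))
    print(feats.shape)  # torch.Size([1, 1024, 64, 64])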
In some embodiments, the step of reward-and-punishment convergence is: comparing the classification result obtained by the detection network with the information marked by the doctor, and modifying the network parameters through reward and punishment until the network reaches the best convergence effect, at which point the detection training is completed.
In some embodiments, the best convergence effect is that the loss on the training set gradually converges after oscillating and then remains stable.
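One simple way to operationalize "the loss converges through oscillation and then remains stable" is to smooth the training loss and stop when the smoothed value stops improving, as in the sketch below; the window, patience and tolerance values are assumptions, not figures from the patent.

    from collections import deque

    class ConvergenceMonitor:
        """Report convergence once the moving-average training loss has failed to
        improve by more than `tol` for `patience` consecutive updates."""
        def __init__(self, window=50, patience=10, tol=1e-3):
            self.losses = deque(maxlen=window)
            self.best = float("inf")
            self.stale = 0
            self.patience, self.tol = patience, tol

        def update(self, loss):
            self.losses.append(float(loss))
            avg = sum(self.losses) / len(self.losses)
            if avg < self.best - self.tol:
                self.best, self.stale = avg, 0
            else:
                self.stale += 1
            return self.stale >= self.patience  # True once training looks converged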
In some embodiments, the step of detecting the test is: inputting a visual field diagram into a trained abnormal cell detection network, and obtaining detection and classification results and position information of a detection frame through convolution, pooling and activation operations; the convolution parameters are 3 × 256, 3 × 512, 3 × 1024; the pooling adopts a maximum pooling method; the activation employs the Relu function.
The second step is as follows: optimizing the detection result
The results of the suspected diseased cell detection in the first step contain a certain number of false positives; the detected abnormal cells are therefore re-examined with an optimized detection based on the DenseNet deep learning framework, so that true positives and false positives are distinguished and false-positive detections are reduced.
In some embodiments, the step of optimizing the detection result comprises optimization training and optimization testing.
In some embodiments, the optimization training includes extracting visual field map features, making predictions, and alignment convergence.
In some embodiments, the step of extracting the view map features is: and inputting a visual field diagram of the detection frame generated in the first step, and extracting features related to true positive and false positive in the visual field diagram based on the learning degree of the current detection network.
In some embodiments, the step of making a prediction is: inputting the extracted features into a detection network, and mapping the original pixel information of the picture into a classification result, namely prediction, through convolution, pooling and activation operations; the convolution parameters are 1 × 256, 1 × 512, 3 × 256, 3 × 512, 3 × 1024; the pooling adopts a maximum pooling method and an average pooling method; the activation employs the Relu function.
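A compact sketch of such a refinement classifier is shown below: mixed 1x1 and 3x3 convolutions, max and average pooling, ReLU activations, and a binary true-positive / false-positive output. The exact topology is an assumption; the patent names a DenseNet-based framework, which this minimal stand-in does not reproduce.

    import torch
    import torch.nn as nn

    class RefineClassifier(nn.Module):
        """True-positive vs. false-positive classifier over a detection crop,
        using the 1x1/3x3 convolution widths quoted above."""
        def __init__(self, in_channels=3, num_classes=2):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(in_channels, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
                nn.Conv2d(256, 256, kernel_size=1), nn.ReLU(inplace=True),
                nn.Conv2d(256, 512, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.AvgPool2d(2),
                nn.Conv2d(512, 512, kernel_size=1), nn.ReLU(inplace=True),
                nn.Conv2d(512, 1024, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(1024, num_classes)

        def forward(self, x):
            return self.classifier(self.body(x).flatten(1))

    logits = RefineClassifier()(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 2])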
In some embodiments, the step of alignment convergence is: comparing the prediction result obtained by the detection network with the result marked by the doctor; where they are inconsistent, the model automatically modifies the mapping relation until the network reaches the best convergence effect, at which point the optimization training is finished.
In some embodiments, the step of the optimization test is: inputting a visual field image containing suspected diseased cells from the detection network into the trained abnormal cell detection network, extracting the corresponding feature information in the visual field image, and obtaining the classification result of the optimized detection through convolution, pooling and activation operations.
Construction of visual field image judgment model
As shown in Fig. 2, in the visual field image judgment model, the detection results of the abnormal cell detection model are input into the detection network as a visual field image, the detection frames therein are extracted as sub-visual-field images, the features in the sub-visual-field images are acquired and summarized, the integrated features are used as the features of the whole visual field image, and whether the visual field image is abnormal is judged through the fully connected network.
In some embodiments, the constructing of the view map determination model includes determining for a single view map.
Determination of single view
On the basis of the detection model, each visual field image is examined again using the detection frames already obtained, to complete the final judgment of the visual field image.
In some embodiments, the judging of the single-view map includes judgment training and judgment testing.
In some embodiments, the decision training includes sub-graph feature extraction, comprehensive prediction, and alignment convergence.
In some embodiments, the step of extracting the sub-graph features is: and inputting the detection result in the abnormal cell detection model into a detection network as a view map, extracting a detection frame in the detection network as a sub-view map, and extracting the feature corresponding to the lesion in each sub-view map based on the learning degree of the current detection network.
In some embodiments, the number of the sub-view maps is not less than 5, and if the number of the sub-view maps is less than 5, the sub-view map with the highest confidence level is copied and supplemented to 5.
In some embodiments, the step of comprehensively predicting comprises: and comprehensively summarizing the features in the sub-view map into feature information, taking the feature information as the feature information of the whole view map, performing convolution, pooling and activation operations, inputting the feature information into a full-connection classification network, mapping the original pixel information of the picture into corresponding feature information, further mapping the feature information into classification information, and obtaining the judgment result of the whole view map.
Full connection in this application means that, in a fully connected neural network, the nodes of every two adjacent layers are connected by edges and are used to integrate the extracted features; the fully connected layer can integrate the locally class-discriminative information from the convolutional or pooling layers.
In some embodiments, the full connection is split into two layers, one layer being 256 nodes to 4096 nodes and a second layer being 4096 nodes to 2 nodes.
In some embodiments, the method used to synthesize the features in the sub-views is maximal pooling.
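Putting the last few paragraphs together, the sketch below pads the sub-visual-field images to at least five by copying the highest-confidence one, aggregates their feature vectors with element-wise max pooling, and classifies the result with the two fully connected layers (256 -> 4096 -> 2). The 256-dimensional sub-view feature size is an assumption inferred from the first fully connected layer.

    import torch
    import torch.nn as nn

    class ViewJudge(nn.Module):
        """Single visual-field-image judgment head over per-sub-view feature vectors."""
        def __init__(self, feat_dim=256):
            super().__init__()
            self.fc = nn.Sequential(nn.Linear(feat_dim, 4096), nn.ReLU(inplace=True),
                                    nn.Linear(4096, 2))

        @staticmethod
        def pad_subviews(feats, confidences, min_n=5):
            # Fewer than 5 sub-views: copy the highest-confidence one until there are 5.
            while feats.shape[0] < min_n:
                best = int(torch.argmax(confidences))
                feats = torch.cat([feats, feats[best:best + 1]], dim=0)
                confidences = torch.cat([confidences, confidences[best:best + 1]], dim=0)
            return feats

        def forward(self, subview_feats, confidences):
            feats = self.pad_subviews(subview_feats, confidences)  # (n, 256), n >= 5
            view_feat = feats.max(dim=0).values                    # max-pool across sub-views
            return self.fc(view_feat), view_feat                   # logits and the view feature

    judge = ViewJudge()
    logits, view_feat = judge(torch.randn(3, 256), torch.rand(3))
    print(logits.shape, view_feat.shape)  # torch.Size([2]) torch.Size([256])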
In some embodiments, the step of alignment convergence is the same as the step of alignment convergence in the abnormal cell detection model.
In some embodiments, the step of determining the test is: and inputting a detection result output from the abnormal cell detection model as a visual field diagram into the trained visual field diagram judgment network, extracting a detection frame therein as a sub-visual field diagram, acquiring and summarizing features in the sub-visual field diagram, taking the integrated features as the features of the whole visual field diagram, performing convolution, pooling and activation operations, and inputting the features into the full-connection classification network to obtain the final judgment of the visual field diagram.
Construction of sample judgment model
As shown in Fig. 3, in the sample judgment model, the 10 visual field images with the highest positive confidence are selected, the integrated feature information obtained for these visual field images in the visual field image judgment model is input into the network, and state synthesis is performed sequentially from high confidence to low confidence in an RNN-based deep learning framework to complete the diagnosis at the sample level.
In some embodiments, the sample assessment model comprises a sample diagnosis based on a single-field view map.
Sample diagnosis based on single view map
On the basis of the visual field image judgment model, the integrated feature information of the higher-confidence visual field images is synthesized sequentially to complete the sample diagnosis.
In some embodiments, the single-view based sample diagnosis includes sample diagnosis training and sample diagnosis testing.
In some embodiments, the sample diagnosis training comprises composition sequence, state synthesis, and alignment convergence.
In some embodiments, the step of composing the sequence is: arranging the visual field images by positive confidence according to the judgment result of the visual field image judgment model, and selecting the first 10 visual field images as a group of sequences.
In some embodiments, the step of state integration is: taking the features of the whole visual field image obtained in the visual field image judgment model as the representative image features of that visual field image, inputting one sequence from a group of sequences position by position, combining the output of the previous position with the input of the current position and using the combination as the input of the RNN model at the current position, obtaining the output of the current position through the RNN's convolution, pooling and activation operations, continuing in this way to the last position, obtaining the output of the last position, then obtaining the classification information through full connection, and outputting the sample judgment result; the convolution parameters are 1 × 256, 1 × 512, 3 × 256, 3 × 512, 3 × 1024; the pooling adopts a maximum pooling method; the activation adopts a sigmoid function; the full connection is divided into two layers, one layer from 256 nodes to 1024 nodes, and the second layer from 1024 nodes to 2 nodes.
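A minimal sketch of this sample-level state synthesis follows: the view features are ordered from high to low positive confidence, the top 10 are fed step by step into a recurrent cell whose input is the concatenation of the previous output and the current view feature, and the last output passes through the two fully connected layers (256 -> 1024 -> 2). The hand-rolled linear-plus-sigmoid cell stands in for the RNN's convolution and pooling operations, and the feature/hidden sizes are assumptions.

    import torch
    import torch.nn as nn

    class SampleRNN(nn.Module):
        """Sample-level diagnosis over the 10 highest-confidence view features."""
        def __init__(self, feat_dim=256, hidden_dim=256):
            super().__init__()
            self.cell = nn.Sequential(
                nn.Linear(feat_dim + hidden_dim, hidden_dim),
                nn.Sigmoid(),  # sigmoid activation, as described above
            )
            self.fc = nn.Sequential(nn.Linear(hidden_dim, 1024), nn.ReLU(inplace=True),
                                    nn.Linear(1024, 2))
            self.hidden_dim = hidden_dim

        def forward(self, view_feats, confidences, top_k=10):
            order = torch.argsort(confidences, descending=True)[:top_k]
            prev = torch.zeros(self.hidden_dim)          # state before the first position
            for i in order:                              # high -> low confidence
                prev = self.cell(torch.cat([prev, view_feats[i]]))
            return self.fc(prev)                         # sample-level classification logits

    model = SampleRNN()
    print(model(torch.randn(12, 256), torch.rand(12)).shape)  # torch.Size([2])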
In some embodiments, the step of alignment convergence is the same as the step of alignment convergence in the abnormal cell detection model.
In some embodiments, the step of the sample judgment test is: inputting the integrated features of the 10 highest-confidence visual field images in the sample into the trained sample judgment network, running them through the RNN model, obtaining the output of each position through the RNN's convolution, pooling and activation operations until the output of the last position is obtained, and passing it through a fully connected layer to obtain the judgment result of the current sample.
The detection classification model of pathological squamous epithelial cells comprises an abnormal cell detection model, a visual field image judgment model and a sample judgment model; the three models are interlocked and progress layer by layer, each performing optimized detection, detailed re-diagnosis, integrated re-diagnosis and the like on the basis of the previous model's detection, so that multiple controls are applied to the diagnosis result as a whole, the accuracy of the diagnosis result is ensured, and a complete sample diagnosis method is obtained.
Finally, it should be understood that the above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. The construction of a detection classification model of pathological squamous epithelial cells is characterized by comprising the construction of an abnormal cell detection model, the construction of a visual field image judgment model and the construction of a sample judgment model;
the construction of the abnormal cell detection model comprises the following steps:
the first step is as follows: detecting suspected diseased cells, wherein the detection training comprises candidate frame extraction, classification positioning and reward and punishment convergence;
the second step is that: optimizing the detection result, including optimizing training and optimizing testing, wherein the optimizing training includes extracting visual field map features, making prediction and comparing convergence;
in the visual field image judgment model, a detection result in the abnormal cell detection model is used as a visual field image and is input into a detection network, a detection frame in the visual field image is extracted to be used as a sub-visual field image, the characteristics in the sub-visual field image are acquired and collected, the integrated characteristics are used as the characteristics of the whole visual field image, and abnormal judgment is carried out on the visual field image through a full-connection network;
the construction of the view map judgment model comprises the judgment of a single view map; the judgment on the single visual field image comprises judgment training and judgment testing; the judgment training comprises sub-image feature extraction, comprehensive prediction and comparison convergence;
in the sample judgment model, 10 visual field images with high positive confidence are selected, the integrated feature information obtained for these visual field images in the visual field image judgment model is input into a network, and state synthesis is carried out sequentially from high confidence to low confidence on the basis of an RNN-based deep learning framework to finish the diagnosis at the sample level;
the construction of the sample judgment model comprises sample diagnosis based on a single-view image; the sample diagnosis based on the single-view map comprises sample diagnosis training and sample diagnosis testing; the sample diagnosis training comprises composition sequence, state synthesis and alignment convergence.
2. The method for constructing the detection classification model of the pathological squamous epithelial cell as claimed in claim 1, wherein the step of extracting the candidate box in the detection training is as follows: a visual field map of the cell image is input, and the detection network extracts candidate frames according to the generation and modification principles.
3. The method for constructing a classification model for detecting pathological squamous epithelial cells as claimed in claim 1, wherein said step of classification localization in detection training comprises: extracting features corresponding to the pathological changes in the candidate frame based on the learning degree of the current detection network, obtaining a classification result of the candidate frame through feature selection and feature analysis, and adjusting the position of the candidate frame to obtain final positioning.
4. The method for constructing the detection and classification model of the pathological squamous epithelial cells as claimed in claim 1, wherein the step of extracting the visual field map features in the optimization training is as follows: and inputting a visual field diagram of the detection frame generated in the first step, and extracting features related to true positive and false positive in the visual field diagram based on the learning degree of the current detection network.
5. The method for constructing the detection and classification model of the pathological squamous epithelial cell according to claim 1, wherein the step of extracting the sub-graph features in the judgment training comprises the following steps: and inputting the detection result in the abnormal cell detection model into a detection network as a view map, extracting a detection frame in the detection network as a sub-view map, and extracting the feature corresponding to the lesion in each sub-view map based on the learning degree of the current detection network.
6. The method for constructing a model for detecting and classifying pathological squamous epithelial cells according to claim 1, wherein the step of comprehensively predicting in the judgment training comprises: and comprehensively summarizing the features in the sub-view map into feature information, taking the feature information as the feature information of the whole view map, performing convolution, pooling and activation operations, inputting the feature information into a full-connection classification network, mapping the original pixel information of the picture into corresponding feature information, further mapping the feature information into classification information, and obtaining the judgment result of the whole view map.
7. Construction of a model for the detection and classification of pathological squamous epithelial cells according to claim 1, characterized in that said judgment test comprises the steps of: and inputting a detection result output from the abnormal cell detection model as a visual field diagram into the trained visual field diagram judgment network, extracting a detection frame therein as a sub-visual field diagram, acquiring and summarizing features in the sub-visual field diagram, taking the integrated features as the features of the whole visual field diagram, performing convolution, pooling and activation operations, and inputting the features into the full-connection classification network to obtain the final judgment of the visual field diagram.
8. The method for constructing a pathological squamous epithelial cell detection classification model according to claim 1, wherein the step of composing the sequence in the sample diagnosis training is: arranging the visual field images by positive confidence according to the judgment result of the visual field image judgment model, and selecting the first 10 visual field images as a group of sequences.
9. The method for constructing a pathological squamous epithelial cell detection classification model according to claim 1, wherein the step of state integration in the sample diagnosis training comprises: and taking the obtained characteristics of the whole view field image in the view field image judgment model as representative image characteristics of the view field image, sequentially inputting one sequence in a group of sequences, combining the output of the previous position and the input of the current position together, using the combined output as the input of the RNN model of the current position, obtaining the output of the current position through convolution, pooling and activation operations of the RNN, continuing to the last position, obtaining the output of the last position, obtaining classification information through a full connection layer, and outputting a sample judgment result.
10. Construction of a model for the detection and classification of pathological squamous epithelial cells according to claim 1, characterized in that said sample diagnostic test comprises the steps of: inputting the integrated features of the 10 highest-confidence visual field images in the sample into the trained sample judgment network, running them through the RNN model, obtaining the output of each position through the RNN's convolution, pooling and activation operations until the output of the last position is obtained, and passing it through a fully connected layer to obtain the judgment result of the current sample.
CN201911108184.2A 2019-11-13 2019-11-13 Construction of detection classification model of pathological squamous epithelial cells Active CN110853021B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911108184.2A CN110853021B (en) 2019-11-13 2019-11-13 Construction of detection classification model of pathological squamous epithelial cells

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911108184.2A CN110853021B (en) 2019-11-13 2019-11-13 Construction of detection classification model of pathological squamous epithelial cells

Publications (2)

Publication Number Publication Date
CN110853021A CN110853021A (en) 2020-02-28
CN110853021B 2020-11-24

Family

ID=69600197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911108184.2A Active CN110853021B (en) 2019-11-13 2019-11-13 Construction of detection classification model of pathological squamous epithelial cells

Country Status (1)

Country Link
CN (1) CN110853021B (en)


Also Published As

Publication number Publication date
CN110853021A (en) 2020-02-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Li Wenyong

Inventor after: Zhang Ligeng

Inventor after: Chen Wei

Inventor after: Shi Hong Hong

Inventor after: Wang Peng

Inventor after: Yin Yajuan

Inventor after: Tao Junzhi

Inventor before: Li Wenyong

Inventor before: Zhang Lichi

Inventor before: Chen Wei

Inventor before: Shi Hong Hong

Inventor before: Wang Peng

Inventor before: Yin Yajuan

Inventor before: Tao Junzhi