CN111134735A - Lung cell pathology rapid on-site evaluation system and method and computer readable storage medium - Google Patents

Lung cell pathology rapid on-site evaluation system and method and computer readable storage medium

Info

Publication number
CN111134735A
Authority
CN
China
Prior art keywords
neural network
network model
image
sample
microscopic image
Prior art date
Legal status
Pending
Application number
CN201911319501.5A
Other languages
Chinese (zh)
Inventor
张新
叶德贤
房劬
姜辰希
Current Assignee
Shanghai Xingmai Information Technology Co ltd
Zhongshan Hospital Fudan University
Original Assignee
Shanghai Xingmai Information Technology Co ltd
Zhongshan Hospital Fudan University
Priority date
Filing date
Publication date
Application filed by Shanghai Xingmai Information Technology Co ltd, Zhongshan Hospital Fudan University filed Critical Shanghai Xingmai Information Technology Co ltd
Priority to CN201911319501.5A
Publication of CN111134735A

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 10/00: Other methods or instruments for diagnosis, e.g. instruments for taking a cell sample, for biopsy, for vaccination diagnosis; Sex determination; Ovulation-period determination; Throat striking implements
    • A61B 10/02: Instruments for taking cell samples or for biopsy
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0059: Measuring for diagnostic purposes using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0082: Measuring using light adapted for particular medical purposes
    • A61B 5/0084: Measuring using light adapted for particular medical purposes for introduction into the body, e.g. by catheters
    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235: Details of waveform analysis
    • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267: Classification of physiological signals or data involving training the classification device
    • A61B 2010/009: Various features of diagnostic instruments

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a rapid on-site assessment system and method for lung cytopathology, and a computer-readable storage medium, used to rapidly assess cell samples at the surgical site. The rapid on-site assessment system for lung cytopathology comprises: a microscopic image acquisition device, which includes a stage for carrying the cell sample and a camera for photographing the cell sample to obtain a microscopic image of the sample; an image evaluation device configured with a trained neural network model, the trained neural network model being used to evaluate the microscopic image and obtain an evaluation result classified as negative or positive; and an output device connected to the image evaluation device for outputting the evaluation result to a user. According to the invention, a neural network classification model evaluates the microscopic images acquired by the microscopic image acquisition device and the evaluation result is obtained at the surgical site, which addresses the problems that current cytopathological diagnosis is complex and time-consuming and that the pathological diagnosis result cannot be obtained immediately, thereby effectively improving diagnostic efficiency.

Description

Lung cell pathology rapid on-site evaluation system and method and computer readable storage medium
Technical Field
The invention relates to image processing technology, in particular to a rapid on-site assessment technology for cellular pathology, and belongs to the technical field of computer-aided diagnosis in microscopic pathology.
Background
Currently, a common method of assisting lung diagnosis is needle biopsy under computed tomography (CT) guidance. The present invention is mainly concerned with collecting sample cells at a bronchial or pulmonary lesion site through fiberoptic bronchoscopic lung biopsy, so as to further evaluate and process the sample cells and thereby assist in diagnosing the disease. A technique commonly used in this field is rapid on-site evaluation (ROSE) of specimens, in which a cytopathologist rapidly examines specimen cells on site and evaluates the quality of fine needle aspiration smears and biopsy touch prints. Through ROSE, the examiner can learn whether the sample quantity is sufficient and judge whether more sample needs to be collected, so that repeated puncture of the patient is avoided and a sufficient sample can be collected in one pass; at the same time, the preliminary ROSE assessment provides the evaluation result needed for subsequent diagnosis and treatment of the disease.
Specifically, bronchoscopic biopsy touch prints and fine needle aspiration smears are samples obtained during fiberoptic bronchoscopic lung biopsy by forceps biopsy and brush biopsy of a bronchus or an intrapulmonary lesion, or by transbronchial needle aspiration of a hilar or mediastinal lymph node; a doctor performs ROSE on these samples by fixing and staining the slide and then observing the lung cells under a microscope. The prior art has the following drawbacks. The doctor who performs ROSE on bronchoscopic biopsy prints and fine needle aspiration smears is a pathologist, and a respiratory physician can hardly complete ROSE, so the prints and smears must be sent to a pathologist, which prolongs the ROSE time. Meanwhile the patient is on the operating table, usually being operated on by the respiratory physician, so the need to obtain the detection and evaluation result in real time is urgent and the patient's time on the operating table is precious. In addition, current microscope scanners generate a complete microscopic image only after scanning the whole bronchoscopic biopsy print or fine needle aspiration smear and cannot evaluate the pictures while scanning, which further prolongs the evaluation time and makes rapid diagnosis difficult. In summary, the problems currently faced are that sending the samples to the pathology department for manual examination takes time, and that the complete large image must be collected before the examination can even begin.
Disclosure of Invention
In view of the above problems in the prior art, the present invention provides a rapid on-site cellular pathology assessment system, method and computer-readable storage medium.
The invention provides a rapid on-site assessment system for lung cytopathology, which is used for rapidly assessing cell samples at a surgical site, and which comprises:
a microscopic image acquisition device, which includes: a stage for carrying the cell sample; and a camera for photographing the cell sample to obtain a microscopic image of the sample;
the image evaluation device is configured with a trained neural network model, and the trained neural network model is used for evaluating the microscopic image to obtain an evaluation result classified as negative or positive; the trained neural network model is obtained by training through the following steps: 1) obtaining training data, wherein the training data are microscopic images of a plurality of samples and corresponding marking information, and the marking information comprises negativity or positivity; 2) inputting the training data into a neural network model for training to obtain a trained neural network model;
and the output device is connected with the image evaluation device and used for outputting the evaluation result to a user.
In one embodiment, the neural network classification model is a trained convolutional neural network model, and the convolutional neural network model is configured with a convolutional layer, a pooling layer and a fully-connected layer; the convolutional layer is configured to extract image features; the pooling layer is configured to down-sample the feature map; the fully-connected layer is configured to map the down-sampled features to a sample label space.
In one embodiment, the loss function for training the convolutional neural network classification model is:
loss(x, class) = -log( exp(x[class]) / Σ_j exp(x[j]) ) = -x[class] + log Σ_j exp(x[j])
or
loss(x, class) = weight[class] · ( -x[class] + log Σ_j exp(x[j]) )
In one embodiment, the training of the convolutional neural network further comprises: testing the trained neural network model with test data.
In one embodiment, the cell sample is collected by endoscopic or puncture surgery. In one embodiment, the lung cell sample is obtained by bronchoscopy or puncture surgery.
In one embodiment, the image evaluation device is further configured with an image marking module for marking the position of the positive cell in the microscopic image if the evaluation result is positive.
The invention also provides a rapid on-site assessment method for lung cytopathology, which comprises the following steps: obtaining a microscopic image of a sample, wherein the microscopic image of the sample is obtained by photographing, with a microscopic image acquisition device arranged at the surgical site, a human cell sample extracted on site;
evaluating the microscopic image by using the trained neural network model to obtain an evaluation result classified as negative or positive; the trained neural network model is obtained by training through the following steps: 1) obtaining training data, wherein the training data are microscopic images of a plurality of samples and corresponding marking information, and the marking information comprises negativity or positivity; 2) inputting the training data into a neural network model for training to obtain a trained neural network model;
and outputting the evaluation result to a user.
In one embodiment, the neural network classification model is a trained convolutional neural network model, and the convolutional neural network model is configured with a convolutional layer, a pooling layer and a fully-connected layer; the convolutional layer is configured to extract image features; the pooling layer is configured to down-sample the feature map; the fully-connected layer is configured to map the down-sampled features to a sample label space.
In one embodiment, the loss function for training the convolutional neural network classification model is:
loss(x, class) = -log( exp(x[class]) / Σ_j exp(x[j]) ) = -x[class] + log Σ_j exp(x[j])
or
loss(x, class) = weight[class] · ( -x[class] + log Σ_j exp(x[j]) )
In one embodiment, the training of the convolutional neural network further comprises: testing the trained neural network model with test data.
In one embodiment, the cell sample is collected by endoscopic or puncture surgery. In one embodiment, the lung cell sample is obtained by bronchoscopy or puncture surgery.
In one embodiment, if the assessment is positive, the location of positive cells in the microscope image is marked.
The present invention also provides a computer-readable storage medium having stored thereon computer-executable instructions that, when executed, cause a computer to perform the rapid on-site assessment method of lung cytopathology recited in the present invention.
As described above, the invention uses a neural network classification model to evaluate the microscopic images acquired by the microscopic image acquisition device and obtains the evaluation result at the surgical site, which addresses the problems that current cytopathological diagnosis is complex and time-consuming and that the pathological diagnosis result cannot be obtained immediately, thereby effectively improving diagnostic efficiency.
Drawings
FIG. 1: embodiment of rapid on-site assessment system for cell pathology
FIG. 2: embodiment of image evaluation device
FIG. 3: embodiment of rapid on-site assessment method for cell pathology
FIG. 4: embodiment of neural network model training method for microscopic image evaluation
FIG. 5: embodiment of neural network model for microscopic image evaluation
FIG. 6: negative lung cytopathology image example
FIG. 7: positive lung cytopathology image example
Detailed Description
In order to better explain the objects of the invention, its implementation and its advantages over the prior art, the invention is further elaborated below with reference to the drawings and to examples of different embodiments. It should be understood that the specific embodiments described or illustrated in this section are merely illustrative of, or convenient for understanding, the overall inventive concept and are not intended to limit the scope of the claims. All equivalents and modifications based on the spirit and subject matter of the invention are intended to fall within its scope.
The method provided by the embodiments of the invention can be applied to the system shown in FIG. 1. The image evaluation device in the system may be a computer device comprising a processor and a memory connected via a system bus, the memory storing a computer program which, when executed by the processor, can perform the steps of the method embodiments described below. Optionally, the computer device may further comprise a network interface, a display screen and an input device. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium, which stores an operating system and a computer program, and an internal memory; the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with external terminals through a network connection. Optionally, the computer device may be a server, a personal computer, a personal digital assistant, another terminal device such as a tablet computer or a mobile phone, or a cloud or remote server; the specific form of the computer device is not limited in the embodiments of the present application.
The following describes the technical solution of the present invention and how to solve the above technical problems with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
The following embodiments provide a rapid on-site assessment system for cellular pathology. As shown in fig. 1, the rapid on-site cytopathology evaluation system includes a microscopic image acquisition device 101, an image evaluation device 102, and an output device 103.
The microscopic image acquisition device 101 comprises a stage and a camera. The stage is used to carry a sample, which may be cells extracted from a human body for pathological microscopic diagnosis. The human cell sample may be obtained by puncture surgery, by endoscopy, or by other medical means. The specimen is generally prepared as a microscope slide and placed on the stage; in some cases the specimen slide needs to be processed, for example stained, so that the cells can be distinguished more clearly. The camera is used to photograph the sample to obtain a microscopic image. In some embodiments, the camera is connected to an eyepiece of the microscope to capture an image of the sample after microscopic magnification.
The image evaluation device 102 is used to obtain the microscopic image of the sample and evaluate it to obtain an evaluation result. In one embodiment, as shown in fig. 2, the image evaluation device 102 includes an image acquisition module 1021, a trained neural network model 1022 and an image labeling module 1023. The image evaluation device 102 is connected to the microscopic image acquisition device 101 and can receive and evaluate the image data transmitted by the microscopic image acquisition device 101 in real time. The image acquisition module 1021 is used to acquire the microscopic image of the sample captured by the microscopic image acquisition device 101. The image evaluation device 102 is configured with a trained neural network model 1022, which is used to evaluate the microscopic image and obtain an evaluation result of negative or positive. If the evaluation result is positive, the image labeling module 1023 is used to label the positions of the positive cells in the microscopic image. It can be understood that the connection between the image evaluation device 102 and the microscopic image acquisition device 101 includes any connection capable of transmitting data in real time, such as a wired connection or a wireless connection (e.g., WIFI or Bluetooth), and may also be an internal connection between different modules integrated in one machine. The connection may also be a cloud connection: for example, the microscopic image acquisition device 101 is placed at a hospital's diagnosis and treatment site while the image evaluation device 102 is a cloud server, and the microscopic image acquisition device 101 transmits the acquired images to the image evaluation device 102 in real time over the ordinary Internet or a 5G network for evaluation and diagnosis.
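Where the cloud connection is used, the acquisition side only needs to upload each captured image to the evaluation service as soon as it is taken. The snippet below is a minimal sketch of such a client, assuming a Python environment with the requests library; the endpoint URL and the JSON response format are illustrative assumptions, not part of the embodiment.

```python
import requests

# Hypothetical endpoint of the cloud image evaluation device 102; not specified in the embodiment.
EVALUATION_URL = "https://example-hospital-cloud/api/evaluate"

def send_for_evaluation(image_path: str) -> str:
    """Upload one captured microscopic image and return the negative/positive result."""
    with open(image_path, "rb") as f:
        response = requests.post(EVALUATION_URL, files={"image": f}, timeout=30)
    response.raise_for_status()
    return response.json()["result"]   # assumed response format: {"result": "negative" | "positive"}

# Example: transmit a freshly captured image in real time.
print(send_for_evaluation("tile_003.png"))
```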
The output device 103 is used to output the evaluation result of the microscopic image to the user. The output device 103 may be a display screen, a monitor or a mobile terminal providing display output, or may use output modes other than display, such as playing voice or virtual reality/augmented reality output. The output device 103 may be integrated with the image evaluation device 102, such as a display screen integrated on the image evaluation device 102, or may be a device capable of signal transmission with the image evaluation device 102, such as a mobile terminal or a remotely connected display. It is understood that the output device 103 is connected to the image evaluation device 102 in a wired or wireless manner. In one embodiment, the output device 103 and the image evaluation device 102 may be connected through WIFI, Bluetooth, a cloud connection or the like.
The following example provides a rapid on-site assessment method for cytopathology. The specific steps of the method are described below with reference to fig. 3. A computer-readable storage medium is also provided, storing computer-executable instructions that, when executed, cause a computer to perform the real-time image evaluation method recited in the present invention.
As shown in fig. 3, in one embodiment, the basic steps of the rapid on-site assessment method for cytopathology images include: s301, obtaining a microscopic image of a sample; s302, evaluating the microscopic image by using the trained neural network model; and S303, outputting the evaluation result to the user. Specifically, the method comprises the following steps:
s301, obtaining a microscopic image of the sample. In one embodiment, a microscopic image of the sample is acquired by the microscopic image acquisition device 101 and transmitted to the image evaluation device 102.
S302, evaluating the microscopic image by using the trained neural network model to obtain an evaluation result classified as negative or positive. In one embodiment, the trained neural network model is configured with a convolutional layer, a pooling layer and a fully-connected layer; the convolutional layer is configured to extract image features; the pooling layer is configured to down-sample the feature map; the fully-connected layer is configured to map the down-sampled features to a sample label space. It is to be understood that the trained neural network model may be a deep learning classification network model, a deep learning object detection network model or a deep learning segmentation network model for image processing. The training process of the neural network model is shown in fig. 4 and includes: S401, obtaining training data, wherein the training data are microscopic images of a plurality of samples and corresponding labeling information, the labeling information including classification information classifying the microscopic image of each sample as negative or positive; S402, training the neural network model with the training data to obtain the trained neural network model; and S403, testing the trained neural network model with test data.
In one embodiment, the loss function for training the convolutional neural network model is:
loss(x, class) = -log( exp(x[class]) / Σ_j exp(x[j]) ) = -x[class] + log Σ_j exp(x[j])
or
loss(x, class) = weight[class] · ( -x[class] + log Σ_j exp(x[j]) )
In one embodiment, if the assessment is positive, the location of positive cells in the microscope image is marked.
S303 outputs the evaluation result to the user. The method of outputting the evaluation result to the user may be display output, such as through a display screen, a display, a mobile terminal; other output modes besides display output can also be adopted, such as a mode of playing voice, a virtual reality/augmented reality mode and other output modes.
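As an illustration of steps S301 to S303, a minimal inference sketch follows. It assumes a PyTorch/torchvision environment; acquire_microscopic_image() and the model file path are hypothetical placeholders for the acquisition device interface and the trained model, and the preprocessing shown is an assumption rather than a prescribed step.

```python
import torch
from torchvision import transforms

# Assumed preprocessing: resize the captured image to the network input size.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = torch.load("trained_rose_model.pt")   # hypothetical path to the trained neural network model
model.eval()

image = acquire_microscopic_image()           # S301: hypothetical call to the microscopic image acquisition device
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))      # S302: evaluate with the trained model
result = "positive" if logits.argmax(dim=1).item() == 1 else "negative"
print(f"Evaluation result: {result}")         # S303: output the evaluation result to the user
```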
The training method of the neural network model used in step S302 is illustrated below with reference to fig. 4. As shown in fig. 4, the trained neural network model used in step S302 is obtained by training through the following method:
s401, training data are obtained. The training data of the neural network model comprises microscopic images of the sample and corresponding labeling information. For example, the sample may be cells or tissues of the lung, thyroid, breast, etc. of a human body; the annotation information can be the annotation made by a pathologist or an expert with professional knowledge on the microscopic image.
S402, training the neural network model by using the training data to obtain the trained neural network model. The neural network model is trained according to the training data obtained in step S401 to obtain the trained neural network model. It will be appreciated that this step may be implemented with a variety of neural network models. For example, the microscopic images labeled with classification information may be used as training data for training based on a deep learning classification network (such as CNN, VGG, Inception, ResNet, WRN, SqueezeNet, etc.). As those skilled in the art will understand, the neural network model obtained by training a deep learning classification network can be used to classify the microscopic image to be evaluated, and the obtained classification result is the evaluation result.
In one embodiment, microscopic images labeled with classification information are input as training data into a convolutional neural network model for training to obtain the trained convolutional neural network model. The convolutional neural network model is composed of convolutional layers, pooling layers and normalization layers, and the loss function for training it is:
loss(x, class) = -log( exp(x[class]) / Σ_j exp(x[j]) ) = -x[class] + log Σ_j exp(x[j])
or
loss(x, class) = weight[class] · ( -x[class] + log Σ_j exp(x[j]) )
where loss denotes the loss function; weight denotes the weight; x denotes the predicted probability; class denotes the corresponding category; and j runs over the indices of all categories. Weighted cross entropy is used as the training loss function because the proportions of the training sets corresponding to different disease categories differ; with the weights, the classification accuracy of the trained neural network model is higher.
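The formulas above correspond to the standard cross entropy loss over raw class scores, as implemented for example by PyTorch's CrossEntropyLoss; the sketch below shows both the plain and the weighted variant. The class weight values are illustrative and are not given in the embodiment.

```python
import torch
import torch.nn as nn

# Plain cross entropy: loss(x, class) = -x[class] + log(sum_j exp(x[j]))
criterion = nn.CrossEntropyLoss()

# Weighted cross entropy: the per-sample term is scaled by weight[class] to compensate
# for the different proportions of negative and positive training samples.
class_weights = torch.tensor([0.3, 0.7])        # illustrative weights, not from the embodiment
weighted_criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(8, 2)                      # a batch of 8 predictions over 2 classes (negative/positive)
labels = torch.randint(0, 2, (8,))              # the corresponding annotations
loss = criterion(logits, labels)
weighted_loss = weighted_criterion(logits, labels)
```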
After step S402 is completed, the trained neural network model used in step S302 is obtained. Further, to ensure the quality of the trained neural network model, in one embodiment the model may be tested by the following steps before being put into use.
S403, testing the trained neural network model with the test data. The testing step specifically comprises: S4031, obtaining test data; the test data are microscopic images of samples that do not overlap with the training data, together with the corresponding labeling information, and the specific method of obtaining the test data may follow the method of step S401, which is not repeated here. S4032, evaluating the microscopic images in the test data with the trained neural network model to obtain microscopic image test evaluation results. S4033, comparing the microscopic image test evaluation results with the labeling information in the test data to obtain a test comparison result.
The above-described embodiments are exemplary embodiments of the present invention. In order to more fully describe the method for evaluating the microscopic image by using the trained Neural network in step S302 in this embodiment, a Convolutional Neural Network (CNN) is further exemplified in conjunction with fig. 5.
In one embodiment, a convolutional neural network model for evaluating microscopic images of the sample is shown in fig. 5 and comprises: an input layer 501, a convolutional layer 502, a pooling layer 503, a convolutional layer 504, a pooling layer 505, a fully-connected layer 506 and an output layer 507. The input layer 501 receives the image input, where the image is a microscopic image of the sample and can be regarded as a two-dimensional array of individual pixels, each with its own pixel value. The convolutional layers 502, 504 are configured to perform feature extraction on the input image; a convolutional neural network may have one or more convolutional layers. In one embodiment, a convolutional layer uses a convolution kernel to compute the feature map of its input. When the convolution is computed, the output feature map is smaller than the original; the larger the convolution kernel used, the smaller the resulting feature map. To reduce the amount of computation and increase the computation speed, a pooling layer may be configured after a convolutional layer to down-sample the feature map and generate a smaller feature map. The fully-connected layer 506 is configured to map the down-sampled features of the pooling layer 505 to the sample label space. Finally, the classification result is output through the output layer 507.
The following is an example with reference to a specific application scenario. A technique commonly used in this field is rapid on-site evaluation (ROSE) of specimens, in which a cytopathologist rapidly examines specimen cells on site and evaluates the quality of fine needle aspiration smears and biopsy touch prints. The typical application scenario of ROSE is a bronchoscopic biopsy print obtained by bronchoscopic biopsy or a fine needle aspiration smear obtained by fine needle puncture.
The invention uses a digital microscope to capture cytopathology microscopic images. A bronchoscopic biopsy print or a fine needle aspiration smear is scanned with a microscope whose magnification is as follows: the objective lens magnifies 10, 20 or 100 times, the eyepiece magnifies 10 times, and together the objective and eyepiece magnify 100 to 1000 times. Scanning and photographing: a lung cytology slide is typically several centimeters in size, while the field of view captured by the microscope in a single shot is very small, so a complete large image consists of hundreds or even thousands of microscopic images (depending on the size of the photographed range of the slide and the magnification), up to hundreds of millions of pixels. In this example, the slide size is 76 x 26 mm (not the whole slide is photographed; the photographed range of the slide is adjustable). Because the sample is generally smeared on the middle portion of the slide, the edges of the photographed range often contain no sample, but in practice the slide region at the edge of the sample is also photographed to make sure the entire sample is covered. The slide stage of the microscope is moved under the control of a control unit, and the slide is scanned and photographed row by row in a stepping mode, with 10 shots per row and 10 rows in total, so that photographing the entire sample is completed in 100 shots.
During scanning, each shot of the camera generates one microscopic image, and as soon as each microscopic image is generated, it is immediately transmitted to the computer device for evaluation. The size of one microscopic image is 1936 x 1216 pixels (when the sample is magnified by the microscope and photographed by the camera, the optical signal is converted into a digital signal, and each pixel corresponds to a width of 0.24 μm). After receiving the microscopic image, the microscopic image acquisition module of the computer device performs normalization and resizing and sends the image to the microscopic image evaluation module, where the trained convolutional neural network performs a binary classification of whether the cells are cancerous, yielding a negative or positive conclusion.
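A sketch of this per-tile scan-and-evaluate loop follows, using the 10 x 10 scanning grid and the immediate per-image evaluation described above. The stage and camera calls (move_stage, capture_tile) are hypothetical placeholders for the device interfaces, model is the trained classifier from the earlier sketch, and the normalization values are illustrative.

```python
import torch
from torchvision import transforms

# Each tile is 1936 x 1216 pixels; it is normalized, resized and classified as soon
# as it is captured, so evaluation runs in parallel with the remaining scanning.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),   # illustrative normalization
])

model.eval()
tile_results = []
for row in range(10):                        # 10 rows over the photographed range of the slide
    for col in range(10):                    # 10 shots per row, 100 shots in total
        move_stage(row, col)                 # hypothetical stage-control call
        tile = capture_tile()                # hypothetical camera call returning one 1936x1216 image
        with torch.no_grad():
            logits = model(preprocess(tile).unsqueeze(0))
        tile_results.append("positive" if logits.argmax(dim=1).item() == 1 else "negative")

sample_result = "positive" if "positive" in tile_results else "negative"
```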
The method and apparatus can perform ROSE in real time in the operating room, or assist a respiratory physician in performing ROSE, greatly speeding up ROSE and reducing the risk to a patient of waiting on the operating table for ROSE results.
The training process of the convolutional neural network used in the present invention to evaluate cytopathology is as follows:
(1) After the image information is acquired, the images obtained by scanning bronchoscopic biopsy prints and fine needle aspiration smears with a microscope are labeled and divided into negative and positive categories, where negative means no obvious pathological abnormality (fig. 6 is an example of a normal lung cell microscopic image) and positive means the presence of pathological abnormalities (fig. 7 is an example of a microscopic image of lung cells diagnosed with lung adenocarcinoma).
(2) 70% of the data is used for training and input into a convolutional neural network composed of convolutional layers, pooling layers and batch normalization layers, with cross entropy or weighted cross entropy as the loss function:
loss(x, class) = -log( exp(x[class]) / Σ_j exp(x[j]) ) = -x[class] + log Σ_j exp(x[j])
or
loss(x, class) = weight[class] · ( -x[class] + log Σ_j exp(x[j]) )
where loss denotes the loss function; weight denotes the weight; x denotes the predicted probability; class denotes the corresponding category; and j runs over the indices of all categories. Because the proportions of the training sets corresponding to different disease categories differ, weighted cross entropy is used as the training loss function, so that the classification accuracy of the trained neural network model is higher;
(3) 20% of the data is used to compute a loss value after each iterative parameter update during training in order to judge the quality of the model; training of the model is finished when the loss value has dropped to a small value and no longer decreases;
(4) 10% of the data is used to test the trained model; this data is not involved in model fitting or in computing the loss value, i.e., it is not involved in the training process at all, so the test result is more objective. The accuracy on the test set is the accuracy the model can be expected to achieve on labeled data; a sketch of steps (2) through (4) is given below.
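Steps (2) through (4) can be sketched as a standard 70/20/10 split with a weighted cross entropy loss, as below. Here dataset is a hypothetical collection of labeled microscopic images and model denotes the convolutional neural network being trained; the batch size, optimizer settings, epoch count and class weights are illustrative assumptions, while the split proportions and the loss choice come from the description.

```python
import torch
from torch.utils.data import DataLoader, random_split

# `dataset`: hypothetical collection of (microscopic image, negative/positive label) pairs.
n = len(dataset)
n_train, n_val = int(0.7 * n), int(0.2 * n)
train_set, val_set, test_set = random_split(dataset, [n_train, n_val, n - n_train - n_val])

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)
test_loader = DataLoader(test_set, batch_size=32)

criterion = torch.nn.CrossEntropyLoss(weight=torch.tensor([0.3, 0.7]))   # weighted cross entropy (illustrative weights)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)                # illustrative optimizer settings

for epoch in range(50):                                                   # illustrative epoch count
    model.train()
    for images, labels in train_loader:                                   # (2) 70%: fit the parameters
        optimizer.zero_grad()
        criterion(model(images), labels).backward()
        optimizer.step()

    model.eval()
    with torch.no_grad():                                                  # (3) 20%: monitor the loss after each update
        val_loss = sum(criterion(model(x), y).item() for x, y in val_loader) / len(val_loader)
    # Training stops once val_loss has dropped to a small value and no longer decreases (details omitted).

model.eval()
with torch.no_grad():                                                      # (4) 10%: held-out test accuracy
    correct = sum((model(x).argmax(dim=1) == y).sum().item() for x, y in test_loader)
    accuracy = correct / len(test_set)
```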
The convolutional neural network trained as above is used to evaluate the microscopic images of the sample in this implementation. The convolutional neural network model is shown in fig. 5 and includes: an input layer 501, a convolutional layer 502, a pooling layer 503, a convolutional layer 504, a pooling layer 505, a fully-connected layer 506 and an output layer 507. The parameter configuration of each layer is detailed in Table 1.
TABLE 1 convolutional neural network model parameter configuration
Layer | Operation | Kernel size | Output size
501 (input) | image input (RGB) | - | 3 x 224 x 224
502 (convolution) | convolution | 7 x 7 | 64 x 112 x 112
503 (pooling) | down-sampling | 3 x 3 | 64 x 56 x 56
504 (convolution) | 1 x 1 convolution followed by 3 x 3 convolution, repeated 6 times | 1 x 1, 3 x 3 | 128 x 56 x 56
505 (pooling) | down-sampling | 2 x 2 | 128 x 28 x 28
506 (fully connected) | mapping to the sample label space | - | 2 classes
507 (output) | classification result | - | negative or positive
The input layer 501 is used to feed the microscopic image acquired by the microscopic image acquisition device into the convolutional neural network. The original size of the microscopic image is 1936 x 1216 pixels; to match the input of the convolutional neural network, the original image is down-sampled to obtain a feature map of size 224 x 224. Typically, the microscopic image is a color image represented by three RGB (red, green, blue) color values, thus producing three 224 x 224 feature maps.
The convolutional layer 502 performs feature extraction on the input microscopic image. In one embodiment, the convolutional layer 502 uses 7 x 7 convolution kernels to compute 64 feature maps of size 112 x 112 from the 3 input feature maps of size 224 x 224.
To reduce the amount of computation and increase the computation speed, a pooling layer 503 may be configured for the convolutional layer 502 to down-sample the feature map to generate a feature map with a smaller size. In this example, the pooling layer 503 has a 3 × 3 kernel, and 64 maps of 56 × 56 are obtained by down-sampling the 64 maps of 112 × 112 generated by the convolutional layer 502.
The convolutional layer 504 comprises a concatenation of one convolutional layer with a 1 x 1 convolution kernel and another convolutional layer with a 3 x 3 convolution kernel, and this combination is repeated 6 times to obtain 128 feature maps of size 56 x 56.
A pooling layer 505 is connected to the convolutional layer 504. The pooling layer 505 has a 2 x 2 kernel and down-samples the 56 x 56 feature maps generated by the convolutional layer 504 to obtain 28 x 28 feature maps.
The fully-connected layer 506 is configured to map the features obtained by down-sampling in the pooling layer 505 to the sample label space, and the classification result is finally output through the output layer 507; the classification result is the determination of whether the microscopic image shows a pathological abnormality.
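A PyTorch sketch of the layer configuration just described follows. It reproduces the stated kernel sizes and feature-map counts (7 x 7 convolution to 64 x 112 x 112, 3 x 3 pooling to 56 x 56, six repeated 1 x 1 / 3 x 3 convolutions to 128 x 56 x 56, 2 x 2 pooling to 28 x 28, then the fully-connected mapping); the strides, padding, pooling type, and the use of batch normalization and ReLU inside the repeated block are assumptions filled in to make the sketch runnable.

```python
import torch.nn as nn

class RoseCnn(nn.Module):
    """Sketch of the network of fig. 5: input 501 -> conv 502 -> pool 503 -> conv 504 -> pool 505 -> FC 506 -> output 507."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.conv502 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3)   # 3x224x224 -> 64x112x112
        self.pool503 = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)       # 64x112x112 -> 64x56x56

        blocks, in_ch = [], 64
        for _ in range(6):                                                     # 1x1 conv + 3x3 conv, repeated 6 times
            blocks += [
                nn.Conv2d(in_ch, 128, kernel_size=1),
                nn.BatchNorm2d(128), nn.ReLU(inplace=True),                    # batch normalization and ReLU assumed
                nn.Conv2d(128, 128, kernel_size=3, padding=1),
                nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            ]
            in_ch = 128
        self.conv504 = nn.Sequential(*blocks)                                  # -> 128x56x56

        self.pool505 = nn.MaxPool2d(kernel_size=2, stride=2)                   # 128x56x56 -> 128x28x28
        self.fc506 = nn.Linear(128 * 28 * 28, num_classes)                     # map to the sample label space

    def forward(self, x):
        x = self.pool503(self.conv502(x))
        x = self.pool505(self.conv504(x))
        return self.fc506(x.flatten(1))                                        # output 507: negative/positive scores
```

Passing a batch of shape (N, 3, 224, 224) through this module yields (N, 2) class scores that can be fed to the cross entropy loss described earlier.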
The whole equipment system can be placed at the surgical site, so that after a cell sample of the patient is obtained by biopsy or puncture, the examination can be performed quickly, the result output, and the diagnosis made in time, which greatly improves efficiency. Another improvement of the invention concerns accuracy: cytopathological diagnosis generally has to be performed by experienced doctors, the diagnostic accuracy of less experienced doctors cannot be guaranteed, and because the microscopic image is very large, a doctor may easily miss findings. Diagnosing with the trained neural network model further ensures diagnostic accuracy. The advantages of the invention over the prior art are summarized as follows:
(1) under the condition of shortage of medical resources in the current society, the lung cell pathological image auxiliary diagnosis is carried out by adopting a deep learning neural network method, so that the diagnosis efficiency is greatly improved, and the medical resources are saved;
(2) because the neural network training method is adopted, the accuracy of the pathological evaluation of the invention is significantly improved; by comparison, the pathological result obtained by evaluation with the trained neural network is more accurate;
(3) the evaluation result can be quickly obtained at the operation site.
The above embodiments are illustrative rather than restrictive; all changes and equivalents that come within the spirit of the invention are intended to be protected.

Claims (10)

1. A rapid on-site assessment system for lung cytopathology, which is used for rapid assessment of cell samples at a surgical site, and is characterized by comprising:
microscopic image acquisition device, it includes: the objective table is used for bearing the cell sample; the camera is used for shooting the cell sample to obtain a microscopic image of the sample;
the image evaluation device is configured with a trained neural network model, and the trained neural network model is used for evaluating the microscopic image to obtain an evaluation result classified as negative or positive; the trained neural network model is obtained by training through the following steps:
1) obtaining training data, wherein the training data are microscopic images of a plurality of samples and corresponding marking information, and the marking information comprises negativity or positivity;
2) inputting the training data into a neural network model for training to obtain a trained neural network model;
and the output device is connected with the image evaluation device and used for outputting the evaluation result to a user.
2. The system of claim 1, wherein the neural network model is a convolutional neural network model configured with convolutional layers, pooling layers, fully-connected layers; the convolutional layer is configured to extract image features; the pooling layer is configured to down-sample the feature map; the fully-connected layer is configured to map the downsampled features to a sample label space.
3. The system of claim 2, wherein the loss function for training the convolutional neural network model is:
loss(x, class) = -log( exp(x[class]) / Σ_j exp(x[j]) ) = -x[class] + log Σ_j exp(x[j])
or
loss(x, class) = weight[class] · ( -x[class] + log Σ_j exp(x[j]) )
4. The system of claim 1, wherein the training of the convolutional neural network further comprises: testing the trained neural network with test data.
5. The system of claim 1, wherein the cell sample is collected by endoscopic or puncture surgery.
6. The system of claim 1, wherein the image evaluation device is further configured with an image labeling module for labeling the location of positive cells in the microscope image if the evaluation result is positive.
7. A method for rapid on-site assessment of lung cytopathology, comprising:
obtaining a microscopic image of a sample, wherein the microscopic image of the sample is obtained by shooting a human body cell sample extracted on site through a microscopic image acquisition device arranged on an operation site;
evaluating the microscopic image by using the trained neural network model to obtain an evaluation result classified as negative or positive; the trained neural network model is obtained by training through the following steps:
1) obtaining training data, wherein the training data are microscopic images of a plurality of samples and corresponding marking information, and the marking information comprises negativity or positivity;
2) inputting the training data into a neural network model for training to obtain a trained neural network model; and outputting the evaluation result to a user.
8. The method of claim 7, wherein the cell sample is collected by endoscopic or puncture surgery.
9. The method of claim 7, wherein if the assessment result is positive, the location of the positive cells in the microscope image is marked.
10. A computer-readable storage medium having stored thereon computer-executable instructions that, when executed, cause a computer to perform the method for rapid on-site assessment of lung cytopathology of any one of claims 7 to 9.
CN201911319501.5A 2019-12-19 2019-12-19 Lung cell pathology rapid on-site evaluation system and method and computer readable storage medium Pending CN111134735A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911319501.5A CN111134735A (en) 2019-12-19 2019-12-19 Lung cell pathology rapid on-site evaluation system and method and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911319501.5A CN111134735A (en) 2019-12-19 2019-12-19 Lung cell pathology rapid on-site evaluation system and method and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111134735A (en) 2020-05-12

Family

ID=70518997

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911319501.5A Pending CN111134735A (en) 2019-12-19 2019-12-19 Lung cell pathology rapid on-site evaluation system and method and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111134735A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564026A (en) * 2018-04-10 2018-09-21 复旦大学附属肿瘤医院 Network establishing method and system for Thyroid Neoplasms smear image classification
CN109376777A (en) * 2018-10-18 2019-02-22 四川木牛流马智能科技有限公司 Cervical cancer tissues pathological image analysis method and equipment based on deep learning
CN109815945A (en) * 2019-04-01 2019-05-28 上海徒数科技有限公司 A kind of respiratory tract inspection result interpreting system and method based on image recognition
CN110335668A (en) * 2019-05-22 2019-10-15 台州市中心医院(台州学院附属医院) Thyroid cancer cell pathological map auxiliary analysis method and system based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
枯叶蝶KYD: "torch CrossEntropyCriterion", https://blog.csdn.net/u013548568/article/details/79676091 *
JIAO Licheng et al.: "Frontiers of Artificial Intelligence, Brain-Inspired Computing and Image Interpretation" (《人工智能、类脑计算与图像解译前沿》) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111935399A (en) * 2020-07-31 2020-11-13 昆明市测绘研究院 Digitalization method of historical navigation films
CN111935399B (en) * 2020-07-31 2022-02-18 昆明市测绘研究院 Digitalization method of historical navigation films

Similar Documents

Publication Publication Date Title
CN110727097B (en) Pathological microscopic image real-time acquisition and analysis system, method, device and medium
CN111489833A (en) Lung cell pathology rapid on-site evaluation system and method and computer readable storage medium
CN109544526B (en) Image recognition system, device and method for chronic atrophic gastritis
CN111275016B (en) Slide scanning image acquisition and analysis method and device
CN109523535B (en) Pretreatment method of lesion image
Guo et al. Deep learning for assessing image focus for automated cervical cancer screening
CN110619318B (en) Image processing method, microscope, system and medium based on artificial intelligence
CN112380900A (en) Deep learning-based cervical fluid-based cell digital image classification method and system
KR102155381B1 (en) Method, apparatus and software program for cervical cancer decision using image analysis of artificial intelligence based technology
EP3998579A1 (en) Medical image processing method, apparatus and device, medium and endoscope
US20210090248A1 (en) Cervical cancer diagnosis method and apparatus using artificial intelligence-based medical image analysis and software program therefor
US10395091B2 (en) Image processing apparatus, image processing method, and storage medium identifying cell candidate area
CN110736748A (en) Immunohistochemical nuclear plasma staining section diagnosis method and system
WO2023155488A1 (en) Fundus image quality evaluation method and device based on multi-source multi-scale feature fusion
CN112801967B (en) Sperm morphology analysis method and device
JPWO2012102069A1 (en) Information processing system, information processing method, information processing apparatus, control method thereof, and control program thereof
CN110974306A (en) System for discernment and location pancreas neuroendocrine tumour under ultrasonic endoscope
CN111144271A (en) Method and system for automatically identifying biopsy parts and biopsy quantity under endoscope
JP4864709B2 (en) A system for determining the staining quality of slides using a scatter plot distribution
Pan et al. Breast tumor grading network based on adaptive fusion and microscopic imaging
CN111134735A (en) Lung cell pathology rapid on-site evaluation system and method and computer readable storage medium
KR20210033902A (en) Method, apparatus and software program for cervical cancer diagnosis using image analysis of artificial intelligence based technology
CN111047582A (en) Crohn's disease auxiliary diagnosis system under enteroscope based on degree of depth learning
JP6710853B2 (en) Probe-type confocal laser microscope endoscopic image diagnosis support device
US20210174147A1 (en) Operating method of image processing apparatus, image processing apparatus, and computer-readable recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200512