CN111341442A - Multifunctional integrated lung cancer auxiliary diagnosis system - Google Patents

Multifunctional integrated lung cancer auxiliary diagnosis system

Info

Publication number
CN111341442A
CN111341442A (application CN202010140224.8A)
Authority
CN
China
Prior art keywords
processor unit
lung cancer
processor
graphics processor
sections
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010140224.8A
Other languages
Chinese (zh)
Inventor
韩方剑
余莉
黄少冰
姜培
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Lanxi Biotechnology Co ltd
Original Assignee
Ningbo Lanxi Biotechnology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Lanxi Biotechnology Co ltd filed Critical Ningbo Lanxi Biotechnology Co ltd
Priority to CN202010140224.8A
Publication of CN111341442A
Legal status: Pending

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10056 Microscopic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30061 Lung
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a multifunctional integrated lung cancer auxiliary diagnosis system, which comprises: a hardware platform with a processor I, a processor II, a graphics processor unit I, a graphics processor unit II and an ARM Cortex-A57; a CNN algorithm framework consisting of a plurality of network layers that transform input data into output while learning progressively higher-level features; model training carried out on a computer equipped with the processor I, the processor II, the graphics processor unit I and the graphics processor unit II, with the model algorithm trained under the CNN framework deployed on the ARM Cortex-A57 for inference during actual scanning; and HE-stained lung tissue sections converted into whole-slide digital images by a scanner and accurately annotated by professional pathologists with the aid of a remote pathological diagnosis assistance system. The invention simplifies the routine operations of pathological diagnosis, improves the efficiency and accuracy of pathological diagnosis, and reduces the workload of pathologists.

Description

Multifunctional integrated lung cancer auxiliary diagnosis system
Technical Field
The invention relates to the technical field of lung cancer auxiliary diagnosis, in particular to a multifunctional integrated lung cancer auxiliary diagnosis system.
Background
Lung cancer, the malignant tumor with the highest incidence, seriously affects the health of Chinese residents, and both its incidence and mortality are rising year by year. Diagnostic analysis of cancer by pathologists is a complex and time-consuming task: the prevailing diagnostic method is to examine and analyze the cell morphology and tissue structure of tissue sections under a microscope at high magnification. The large number of lung cancer cases means a rapid increase in the workload of diagnostic analysis and prognostic assessment, while the current shortage of pathologists, their long training period, the long turnaround of manual pathological diagnosis, and the risk of missed diagnosis and misdiagnosis have together driven the rapid development of digital pathology in support of practical pathological analysis and clinical diagnosis.
Compared with real-time microscope imaging, digital pathology whole-slide images (WSI) offer stronger interactivity, shareability, permanence and ease of preservation, and are well suited to scenarios such as remote pathology consultation and teaching research. The rapid improvement of computer processing power, progress in image analysis algorithm research, and the rise of big data research over the last decade have driven the development of computer-aided analysis of medical pathology data. Digital pathology, which covers the whole process of digitizing histopathological sections and analyzing the whole-slide digital images (WSI), originated in the 1960s, when Prewitt and Mendelsohn converted simple scanned images obtained in the microscope field of view into matrices of optical density values and identified morphological features of different cell types from the scanned image information. Arthur Samuel first proposed the concept of machine learning in 1959, namely the "ability to learn without being explicitly programmed". Over the following decades, the term deep learning was introduced by Rina Dechter for a branch of machine learning, and Yann LeCun invented the convolutional neural network model. The first whole-slide scanner was introduced in 1990, and digital pathology subsequently developed rapidly. The digital pathology whole-slide scanning solution (IntelliSite) developed by Philips was approved in 2017, and in 2018 the FDA cleared the first medical device using AI to detect adult diabetic retinopathy (IDx-DR).
Machine learning is the discipline of building learning algorithm models by computational means: empirical data are fed into a model for training, after which the model can predict outcomes for data of the same type. Image recognition is the most common and typical application of machine learning; an image is converted into a two-dimensional matrix and fed into the algorithm model to obtain a recognition result. Likewise, the accumulation of large amounts of WSI data provides the basis and conditions for applying machine learning algorithms to whole-slide image (WSI) recognition. As a branch of machine learning, deep learning uses complex deep neural networks characterized by many hidden layers, many parameters and large capacity. The deep learning method proposed by Alex Krizhevsky, Ilya Sutskever and Geoffrey E. Hinton won the 2012 ImageNet Large Scale Visual Recognition Challenge (ILSVRC) by a clear margin, after which deep learning became much more widely used in the image recognition field.
Histopathological image recognition for lung cancer has advanced in recent years. Using deep learning convolutional neural network models, Nicolas Coudray, Andre L. Moreira et al. accurately classified WSIs into adenocarcinoma, squamous cell carcinoma and normal lung tissue, achieving good sensitivity and specificity with an average area under the curve (AUC) of about 0.97 on the held-out population. Xin Luo, Xiao Zang, Lin Yang et al. developed a pathological image analysis pipeline to automatically extract morphological features and built a statistical model on those features to predict the prognosis and survival of lung cancer patients; their results show that objective, quantitative algorithmic analysis of HE-stained pathological images successfully predicted the prognosis of lung ADC and SCC patients. Kun-Hsing Yu, Ce Zhang and Gerald J. Berry used regularized machine learning methods to select key features and separated stage I adenocarcinoma or squamous cell carcinoma cases in the TCGA dataset into short-term and long-term survivors; their results show that automatically derived image features can predict the prognosis of lung cancer patients and thus contribute to precision oncology.
Disclosure of Invention
The invention aims to solve the defects in the prior art and provides a multifunctional integrated lung cancer auxiliary diagnosis system.
In order to achieve the purpose, the invention provides the following technical scheme:
a multifunctional integrated lung cancer auxiliary diagnosis system comprises:
the hardware platform comprises a processor I, a processor II, a graphics processor unit I, a graphics processor unit II and ARM Cortex-A57;
a CNN algorithm framework consisting of a plurality of network layers that transform input data into output while learning progressively higher-level features;
model training is carried out on a computer equipped with the processor I, the processor II, the graphics processor unit I and the graphics processor unit II, and the model algorithm trained through the CNN algorithm framework is deployed on the ARM Cortex-A57 for inference during actual scanning;
a training set drawn from HE-stained lung tissue sections that are converted by a scanner into whole-slide digital images and accurately annotated by professional pathologists with the aid of a remote pathological diagnosis assistance system;
a test set, whose data are not shuffled and mixed across slides: all patches of each HE-stained lung tissue section are fed into the model together to test whether the results match expectations.
Preferably, the processor I and the processor II are of the same model, both 16-core 3.2 GHz, and the graphics processor unit I and the graphics processor unit II are of the same model, both RTX 2080 Ti.
Preferably, the CNN algorithm framework is a convolutional neural network framework, and the HE-stained lung tissue sections come from large Grade-A tertiary hospitals such as Xiangya Hospital in Changsha and the First Hospital of Changsha.
Preferably, the scanner can automatically scan pathological sections and convert them into whole-slide digital images, cutting the images into small patches during conversion; the patches are directly suitable for the convolutional neural network without additional cutting using an openslide tool, which greatly reduces the time needed to prepare the dataset.
Preferably, after the scanner scans an HE-stained lung tissue section, the section is automatically cut into 256×256 patches for model training, saving a large amount of image processing time; blank patches containing only background area are removed from all patches of the training and validation sets;
the patch images of all slides in the training set are shuffled, and each patch is labeled as cancerous or not according to whether it contains a lesion area: a patch is labeled 1 if it contains a lesion area and 0 otherwise.
Compared with the prior art, the invention has the following beneficial effects: it simplifies the routine operations of pathological diagnosis, improves the efficiency and accuracy of pathological diagnosis, and reduces the workload of pathologists. The auxiliary diagnosis system combining digital pathology and AI recognition can assist pathologists in diagnosis and analysis, quantify specific features in tissue section images and highlight highly suspicious lesion areas, automating the time-consuming tasks in pathology so that pathologists can spend more time on higher-level decision-making.
The residual network model is effective and stable in lung cancer recognition and learns better than an ordinary convolutional network. The trained model is deployed on an ARM-based scanner and combined with the scanning workflow to provide an integrated scan-and-diagnose function; users can remotely log in to the remote auxiliary pathological diagnosis system to access the whole-slide digital images. Extended practice has shown that the lung cancer auxiliary diagnosis system has practical value and accuracy in clinical diagnosis.
Drawings
FIG. 1 is a schematic diagram of the system of the present invention.
FIG. 2 is a schematic diagram of the building-block structure of the residual network model of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-2, the present invention provides a technical solution:
a multifunctional integrated lung cancer auxiliary diagnosis system comprises:
the hardware platform comprises a processor I, a processor II, a graphics processor unit I, a graphics processor unit II and ARM Cortex-A57;
a CNN algorithm framework consisting of a plurality of network layers that transform input data into output while learning progressively higher-level features;
model training is carried out on a computer equipped with the processor I, the processor II, the graphics processor unit I and the graphics processor unit II, and the model algorithm trained through the CNN algorithm framework is deployed on the ARM Cortex-A57 for inference during actual scanning;
a training set drawn from HE-stained lung tissue sections that are converted by a scanner into whole-slide digital images and accurately annotated by professional pathologists with the aid of a remote pathological diagnosis assistance system;
a test set, whose data are not shuffled and mixed across slides: all patches of each HE-stained lung tissue section are fed into the model together to test whether the results match expectations.
Specifically, the processor I and the processor II are of the same model, both 16-core 3.2 GHz, and the graphics processor unit I and the graphics processor unit II are of the same model, both RTX 2080 Ti.
Specifically, the CNN algorithm framework is a convolutional neural network framework, and the HE-stained lung tissue sections come from large Grade-A tertiary hospitals such as Xiangya Hospital in Changsha and the First Hospital of Changsha.
Specifically, the scanner can automatically scan pathological sections and convert them into whole-slide digital images, cutting the images into small patches during conversion; the patches are directly suitable for the convolutional neural network without additional cutting using an openslide tool, which greatly reduces the time needed to prepare the dataset.
Specifically, after the scanner scans an HE-stained lung tissue section, the section is automatically cut into 256×256 patches for model training, saving a large amount of image processing time; blank patches containing only background area are removed from all patches of the training and validation sets;
the patch images of all slides in the training set are shuffled, and each patch is labeled as cancerous or not according to whether it contains a lesion area: a patch is labeled 1 if it contains a lesion area and 0 otherwise.
A whole-slide digital image (WSI) is huge: a complete WSI at 10× magnification typically occupies 2-3 GB of memory and contains billions of pixels, which is inconvenient for model analysis. The whole image is therefore cut into 256×256 patch images that serve as input data for the deep learning model. This work requires no additional image-cutting step: after the scanner scans a slide, it automatically cuts it into 256×256 patches for model training, saving a large amount of image processing time; a complete WSI is generally divided into upwards of 10,000 patches. Among all patches of the training and validation sets, we removed the blank patches containing only background area. To improve the effect of model training, the patch images of all slides in the training set are shuffled, and each patch is labeled as cancerous or not according to whether it contains a lesion area: 1 if it contains a lesion area, 0 otherwise. For the data in the test set, we do not mix the patches of different WSIs together; all patches of each slide are fed into the model together to test whether the results match expectations.
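As an illustration of the patch-extraction and labeling logic described above, the sketch below tiles a slide image into 256×256 patches, discards background-only patches, and labels each remaining patch 1 or 0 against a lesion mask. In the invention this cutting is performed inside the scanner; the file handling, the near-white background threshold, and the helper names here are illustrative assumptions only.

```python
# Illustrative sketch of the 256x256 patch extraction and labeling step; the scanner
# in the invention performs this tiling internally. Paths, the background threshold,
# and the function names are assumptions, not part of the patent.
import numpy as np
from PIL import Image

PATCH = 256

def is_blank(patch: np.ndarray, frac: float = 0.9) -> bool:
    """Treat a patch as background if most pixels are near-white."""
    gray = patch[..., :3].mean(axis=2)      # rough luminance over RGB channels
    return (gray > 220).mean() > frac

def tile_slide(slide_path: str, lesion_mask: np.ndarray):
    """Yield (patch, label) pairs; label is 1 if the patch overlaps the lesion mask."""
    img = np.asarray(Image.open(slide_path))
    h, w = img.shape[:2]
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            patch = img[y:y + PATCH, x:x + PATCH]
            if is_blank(patch):
                continue                     # discard background-only patches
            label = int(lesion_mask[y:y + PATCH, x:x + PATCH].any())
            yield patch, label
```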
The digital slides are obtained by scanning at 10× magnification, so the trained model is suited to digital images acquired with a low-power lens and can display prediction results quickly and accurately. Compared with results obtained by training on 40× digital slide images, accuracy and sensitivity are not significantly reduced, making the model suitable for initial lung cancer auxiliary diagnosis scenarios.
Publicly available digital slides of lung cancer tissue are scarce (there are few lung tissue data in TCGA), while deep learning network training requires large datasets to reduce the risk of overfitting. To address the shortage of data, HE-stained lung tissue sections collected from the cooperating research institutions, including large Grade-A tertiary hospitals such as Xiangya Hospital in Changsha, the First Hospital of Changsha and Hunan Cancer Hospital, were scanned at 10× magnification with the independently developed multifunctional integrated scanner. The selected section samples were chosen to be representative, covering as many lung cancer types as possible, and a proportion of sections containing medical artifacts, such as bubbles, small scratches, uneven cover glass, fixation problems, burning, folds and cracks, was included so that the training samples cover as many conditions as possible and the generalization ability of the trained model is improved. These samples were annotated at pixel level by pathologists with the aid of our self-developed remote auxiliary pathological diagnosis system, and the collected dataset was divided into normal lung tissue sections and lung cancer tissue sections: 100 normal lung tissue sections and 234 lung cancer tissue sections in total. Before model training, 70% of the two classes of data (normal tissue and lung cancer tissue) were selected as the training set, 15% as the validation set and 15% as the test set; the training and validation sets were used for hyper-parameter tuning and model selection, and the test set was used to estimate the generalization performance of the final model.
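The 70%/15%/15% split described above can be sketched as follows. Splitting per slide (rather than per patch) keeps all patches of one section in a single subset; the random seed and function name are assumptions for illustration.

```python
# Minimal sketch of the slide-level 70/15/15 split (training / validation / test).
import random

def split_slides(slide_ids, seed=0):
    ids = list(slide_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train, n_val = int(0.70 * n), int(0.15 * n)
    return (ids[:n_train],                   # training set (~70%)
            ids[n_train:n_train + n_val],    # validation set (~15%)
            ids[n_train + n_val:])           # test set (~15%)

# Example with the 334 collected slides (100 normal + 234 lung cancer)
train_ids, val_ids, test_ids = split_slides(range(334))
```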
Since the training speed and efficiency of a convolutional neural network (CNN) model depend largely on the computing power and memory capacity of the computer, our work requires a highly configured machine. In the experiments, model training was performed on a computer configured with two 16-core 3.2 GHz processors and two RTX 2080 Ti graphics processor units, and the trained model algorithm was deployed on an ARM Cortex-A57 (quad-core) for inference during actual scanning.
The advantages of a CNN are local receptive fields, i.e. each filter performs its convolution over a local data window, and shared weights, i.e. a filter applies the same parameters across different inputs, so that similar structures can be detected at different positions in the image. This greatly reduces the number of parameters to be learned (the number of weights no longer depends on the size of the input image) and makes the network equivariant to translations of the input.
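A short sketch of the weight-sharing point made above: a convolutional layer's parameter count depends only on its kernel and channel sizes, not on the input image size, and the same filters are applied at every position.

```python
# Demonstrates weight sharing: the layer has 3*3*3*64 + 64 = 1792 parameters
# whether the input is a 256x256 patch or a much larger image.
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)
print(sum(p.numel() for p in conv.parameters()))    # 1792, independent of input size

small = conv(torch.randn(1, 3, 256, 256))      # works on a 256x256 patch
large = conv(torch.randn(1, 3, 1024, 1024))    # and on a larger image, same weights
print(small.shape, large.shape)
```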
The CNN architecture used for training in this invention is the ResNet structure: residual learning explicitly reformulates a layer as learning a residual function with reference to the layer's input, instead of learning an unreferenced function.
Research shows that residual networks are easier to optimize and can gain accuracy from increased depth: the later layers of a deep network can at least realize an identity mapping, which alleviates the network degradation problem.
Compared with a VGG network of the same depth, a residual network has lower complexity and a simpler network structure. Each stacked building block of a residual network follows the mapping
y = F(x, {W_i}) + x    (1)
where x and y are the input and output vectors of the layers considered, and the function F(x, {W_i}) is the residual mapping to be learned. For the two-layer building block shown in FIG. 2, F = W_2 σ(W_1 x), where σ denotes ReLU. The identity-mapping shortcut introduces neither extra parameters nor extra computational complexity, and in equation (1) the dimensions of x and F must be equal. If the dimensions of x and F are not equal, a linear projection W_s can be applied on the shortcut to match the dimensions:
y = F(x, {W_i}) + W_s x    (2)
The form of the residual function F is flexible and can represent a single layer or multiple layers: F(x, {W_i}) may represent several convolutional layers, with the element-wise addition performed channel by channel on the two feature maps. The experimental results are compared with other algorithm models to obtain the result list.
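A minimal PyTorch sketch of the building block defined by equations (1) and (2) follows: the block computes the residual F(x, {W_i}) and adds the identity shortcut, or a 1×1 projection W_s when the dimensions differ. This mirrors the standard ResNet basic block rather than any implementation detail specific to this patent.

```python
# Basic residual block: y = F(x, {W_i}) + x, with a 1x1 projection W_s when needed.
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # W_s of equation (2): a 1x1 projection used only when dimensions differ
        self.shortcut = nn.Identity()
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch))

    def forward(self, x):
        f = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))  # F(x, {W_i})
        return self.relu(f + self.shortcut(x))                        # y = F(x) + x
```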
The training process of the invention was completed on a computer platform equipped with two 16-core 3.2 GHz processors and two RTX 2080 Ti graphics processor units; the training tool is PyTorch and the network architecture is ResNet-18. Since the training size of each patch is 224×224, the original 256×256 images must be cropped to the specified size (usually only the edges are cut away, keeping the central part) before being fed to the network input. Initialization of the network weights is important, because poor initialization can stall learning due to gradient instability in deep networks; to address this, a random initialization method is used to configure the weight parameters. Likewise, the input images are augmented by adjusting brightness, contrast, saturation and hue with random jitter, which increases the randomness of the training set. The training set contains about 300,000 patch samples; a single training run on our deep learning hardware platform takes less than 30 minutes, the whole training process is iterated about 100 times through repeated training and back-propagation, and the network is trained for 1-2 weeks.
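The training configuration described above might be sketched as follows in PyTorch: ResNet-18 with two output classes, 256×256 patches randomly cropped to 224×224, random jitter of brightness, contrast, saturation and hue, and plain back-propagation. The learning rate, optimizer settings and jitter magnitudes are assumptions for illustration, not values stated in the patent.

```python
# Sketch of the training setup: ResNet-18, 256->224 crops, color jitter, SGD + backprop.
# Hyper-parameter values below are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms

train_tf = transforms.Compose([
    transforms.RandomCrop(224),                               # 256x256 -> 224x224
    transforms.ColorJitter(brightness=0.1, contrast=0.1,
                           saturation=0.1, hue=0.05),         # random jitter
    transforms.ToTensor(),
])

model = models.resnet18(weights=None)          # random initialisation (pretrained=False on older torchvision)
model.fc = nn.Linear(model.fc.in_features, 2)  # cancerous vs. normal patch

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_one_epoch(loader):
    model.train()
    for patches, labels in loader:             # DataLoader over labeled patches
        optimizer.zero_grad()
        loss = criterion(model(patches), labels)
        loss.backward()                        # back-propagation
        optimizer.step()
```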
The test set is used to evaluate the trained algorithm model. As mentioned in the previous section, the order of the patch images in the test set is not shuffled, so that the recognition accuracy of each slide can be conveniently counted and the algorithm's recognition result can be directly compared and evaluated against the expert annotation of the same slide image.
Meanwhile, an ordinary convolutional network was trained on the same training set, and the performance of the models trained at 10× magnification was measured on the same test dataset. According to the learning curves obtained under the two algorithm models, the AUC of the ResNet network is slightly larger than that of the ordinary convolutional network, and its sensitivity and accuracy are also better.
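A sketch of the per-slide evaluation and AUC comparison described above, assuming per-patch predicted probabilities and expert labels are available for each test slide; the 0.5 decision threshold and the use of scikit-learn's roc_auc_score are illustrative choices, not part of the patent.

```python
# Per-slide patch accuracy plus an overall ROC AUC over all test patches.
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(probs_by_slide, labels_by_slide):
    """Both arguments are dicts mapping slide id -> per-patch arrays (same order)."""
    all_probs, all_labels = [], []
    for slide_id, probs in probs_by_slide.items():
        labels = labels_by_slide[slide_id]
        acc = ((probs > 0.5).astype(int) == labels).mean()
        print(f"slide {slide_id}: patch accuracy {acc:.3f}")
        all_probs.append(probs)
        all_labels.append(labels)
    auc = roc_auc_score(np.concatenate(all_labels), np.concatenate(all_probs))
    print(f"overall AUC: {auc:.3f}")
    return auc
```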
The multifunctional integrated lung cancer auxiliary diagnosis system provided by the invention can improve diagnostic efficiency and reduce the workload of pathologists. The deep learning framework is ResNet; given the current shortage of publicly available lung cancer tissue section data, 334 lung pathological sections covering as many disease types as possible were selected from the cooperating hospitals for training and testing. The final test results show that the residual network model is effective and stable in lung cancer recognition and learns better than an ordinary convolutional network.
Finally, the trained model is deployed on the ARM-based scanner and combined with the scanning workflow to provide an integrated scan-and-diagnose function; users can remotely log in to the remote auxiliary pathological diagnosis system to access the whole-slide digital images (WSI). To test the generalization ability of the model, the system was deployed in a hospital, and extended practice has shown that the lung cancer auxiliary diagnosis system has practical value and accuracy in clinical diagnosis.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (5)

1. A multifunctional integrated lung cancer auxiliary diagnosis system is characterized by comprising:
a hardware platform comprising a processor I, a processor II, a graphics processor unit I, a graphics processor unit II and an ARM Cortex-A57;
a CNN algorithm framework consisting of a plurality of network layers that transform input data into output while learning progressively higher-level features;
wherein model training is carried out on a computer equipped with the processor I, the processor II, the graphics processor unit I and the graphics processor unit II, and the model algorithm trained through the CNN algorithm framework is deployed on the ARM Cortex-A57 for inference during actual scanning;
a training set drawn from HE-stained lung tissue sections that are converted by a scanner into whole-slide digital images and accurately annotated by professional pathologists with the aid of a remote pathological diagnosis assistance system;
a test set, whose data are not shuffled and mixed across slides: all patches of each HE-stained lung tissue section are fed into the model together to test whether the results match expectations.
2. The multifunctional integrated lung cancer auxiliary diagnosis system according to claim 1, wherein: the processor I and the processor II are of the same model, both 16-core 3.2 GHz, and the graphics processor unit I and the graphics processor unit II are of the same model, both RTX 2080 Ti.
3. The multifunctional integrated lung cancer auxiliary diagnosis system according to claim 1, wherein: the CNN algorithm framework is a convolutional neural network framework, and the HE-stained lung tissue sections come from large Grade-A tertiary hospitals such as Xiangya Hospital in Changsha and the First Hospital of Changsha.
4. The multifunctional integrated lung cancer auxiliary diagnosis system according to claim 1, wherein: the scanner can automatically scan pathological sections and convert them into whole-slide digital images, cutting the images into small patches during conversion; the patches are directly suitable for the convolutional neural network without additional cutting using an openslide tool, which greatly reduces the time needed to prepare the dataset.
5. The multifunctional integrated lung cancer auxiliary diagnosis system according to claim 1, wherein: after the scanner scans an HE-stained lung tissue section, the section is automatically cut into 256×256 patches for model training, saving a large amount of image processing time; blank patches containing only background area are removed from all patches of the training and validation sets;
the patch images of all slides in the training set are shuffled, and each patch is labeled as cancerous or not according to whether it contains a lesion area: a patch is labeled 1 if it contains a lesion area and 0 otherwise.
CN202010140224.8A 2020-03-03 2020-03-03 Multifunctional integrated lung cancer auxiliary diagnosis system Pending CN111341442A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010140224.8A CN111341442A (en) 2020-03-03 2020-03-03 Multifunctional integrated lung cancer auxiliary diagnosis system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010140224.8A CN111341442A (en) 2020-03-03 2020-03-03 Multifunctional integrated lung cancer auxiliary diagnosis system

Publications (1)

Publication Number Publication Date
CN111341442A true CN111341442A (en) 2020-06-26

Family

ID=71185804

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010140224.8A Pending CN111341442A (en) 2020-03-03 2020-03-03 Multifunctional integrated lung cancer auxiliary diagnosis system

Country Status (1)

Country Link
CN (1) CN111341442A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115954100A (en) * 2022-12-15 2023-04-11 东北林业大学 Intelligent auxiliary diagnosis system for gastric cancer pathological images
CN117152509A (en) * 2023-08-28 2023-12-01 北京透彻未来科技有限公司 Stomach pathological diagnosis and typing system based on cascade deep learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105975793A (en) * 2016-05-23 2016-09-28 麦克奥迪(厦门)医疗诊断系统有限公司 Auxiliary cancer diagnosis method based on digital pathological images
CN106250939A (en) * 2016-07-30 2016-12-21 复旦大学 System for Handwritten Character Recognition method based on FPGA+ARM multilamellar convolutional neural networks
CN109754879A (en) * 2019-01-04 2019-05-14 湖南兰茜生物科技有限公司 A kind of lung cancer computer aided detection method and system based on deep learning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105975793A (en) * 2016-05-23 2016-09-28 麦克奥迪(厦门)医疗诊断系统有限公司 Auxiliary cancer diagnosis method based on digital pathological images
CN106250939A (en) * 2016-07-30 2016-12-21 复旦大学 System for Handwritten Character Recognition method based on FPGA+ARM multilamellar convolutional neural networks
CN109754879A (en) * 2019-01-04 2019-05-14 湖南兰茜生物科技有限公司 A kind of lung cancer computer aided detection method and system based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wang Ying et al., "Automatic auxiliary diagnosis of clinical colorectal pathological section images", Chinese Journal of Clinical and Experimental Pathology, No. 10, 21 October 2018 (2018-10-21) *
Ma Yanjun et al., "PaddlePaddle (飞桨): an open-source deep learning platform from industrial practice", Frontiers of Data and Computing, No. 05, 15 October 2019 (2019-10-15), pages 105-115 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115954100A (en) * 2022-12-15 2023-04-11 东北林业大学 Intelligent auxiliary diagnosis system for gastric cancer pathological images
CN115954100B (en) * 2022-12-15 2023-11-03 东北林业大学 Intelligent auxiliary diagnosis system for gastric cancer pathology image
CN117152509A (en) * 2023-08-28 2023-12-01 北京透彻未来科技有限公司 Stomach pathological diagnosis and typing system based on cascade deep learning
CN117152509B (en) * 2023-08-28 2024-04-30 北京透彻未来科技有限公司 Stomach pathological diagnosis and typing system based on cascade deep learning

Similar Documents

Publication Publication Date Title
Jannesari et al. Breast cancer histopathological image classification: a deep learning approach
EP3839885A1 (en) Real-time pathological microscopic image collection and analysis system, method and device and medium
CN109670510B (en) Deep learning-based gastroscope biopsy pathological data screening system
Rana et al. Computational histological staining and destaining of prostate core biopsy RGB images with generative adversarial neural networks
CN112768072B (en) Cancer clinical index evaluation system constructed based on imaging omics qualitative algorithm
Veta et al. Detecting mitotic figures in breast cancer histopathology images
AU2020411972A1 (en) Pathological diagnosis assisting method using AI, and assisting device
CN112088394A (en) Computerized classification of biological tissue
CN111986150A (en) Interactive marking refinement method for digital pathological image
CN113570619B (en) Computer-aided pancreas pathology image diagnosis system based on artificial intelligence
Mandache et al. Basal cell carcinoma detection in full field OCT images using convolutional neural networks
CN113724842B (en) Cervical tissue pathology auxiliary diagnosis method based on attention mechanism
CN111341442A (en) Multifunctional integrated lung cancer auxiliary diagnosis system
CN115063592B (en) Multi-scale-based full-scanning pathological feature fusion extraction method and system
Feng et al. Supervoxel based weakly-supervised multi-level 3D CNNs for lung nodule detection and segmentation
CN114648663A (en) Lung cancer CT image subtype classification method based on deep learning
Cai et al. Identifying architectural distortion in mammogram images via a se-densenet model and twice transfer learning
Pradhan et al. Lung cancer detection using 3D convolutional neural networks
CN117095815A (en) System for predicting prostate cancer patient with homologous recombination defect based on magnetic resonance image and pathological panoramic scanning slice
CN113538422B (en) Pathological image automatic classification method based on dyeing intensity matrix
Wen et al. Pulmonary nodule detection based on convolutional block attention module
CN113450305A (en) Medical image processing method, system, equipment and readable storage medium
Zhang et al. Evaluation of a new dataset for visual detection of cervical precancerous lesions
CN115880245A (en) Self-supervision-based breast cancer disease classification method
Liu et al. An optimal method for melanoma detection from dermoscopy images using reinforcement learning and support vector machine optimized by enhanced fish migration optimization algorithm

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200626

WD01 Invention patent application deemed withdrawn after publication