CN117373695A - Cancer pathology diagnosis system based on an extremely deep convolutional neural network - Google Patents


Info

Publication number
CN117373695A
CN117373695A (application number CN202311317823.2A)
Authority
CN
China
Prior art keywords: diagnosis, cancer, pixel, image block, training
Prior art date
Legal status: Pending (an assumption, not a legal conclusion)
Application number
CN202311317823.2A
Other languages
Chinese (zh)
Inventor
王书浩 (Wang Shuhao)
Current Assignee
Beijing Thorough Future Technology Co ltd
Original Assignee
Beijing Thorough Future Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Thorough Future Technology Co ltd
Priority to CN202311317823.2A
Publication of CN117373695A


Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 70/00 ICT specially adapted for the handling or processing of medical references
    • G16H 70/60 ICT specially adapted for the handling or processing of medical references relating to pathologies
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Epidemiology (AREA)
  • Data Mining & Analysis (AREA)
  • Primary Health Care (AREA)
  • Image Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Pathology (AREA)

Abstract

The invention provides a cancer pathology diagnosis system based on an extremely deep convolutional neural network. The system comprises a sample acquisition module for acquiring various slice samples from a pathology sample database according to the type of cancerous organ and inputting them into the extremely deep convolutional neural network; an image labeling module for preprocessing the slice samples into massive image blocks and applying pixel-level cancer labels to them based on historical diagnosis results, yielding training image blocks; a model training module for training the extremely deep convolutional neural network on the training image blocks to establish a cancer pathology diagnosis model; and an auxiliary diagnosis module for diagnosing an input slice to be diagnosed with the cancer pathology diagnosis model and producing a pixel-level diagnosis result.

Description

Cancer pathology diagnosis system based on an extremely deep convolutional neural network
Technical Field
The invention relates to the technical field of auxiliary diagnosis, and in particular to a cancer pathology diagnosis system based on an extremely deep convolutional neural network.
Background
Pathology is a highly specialized, experience-driven field: training a qualified pathologist requires years of learning and practice, and tumor grading and prognosis are important bases for formulating treatment regimens and predicting patient survival. Conventional pathology diagnosis, classification, and prognosis are often based on limited samples and empirical judgment, requiring highly experienced experts to analyze the morphological features of tissue sections, with limitations such as subjectivity, time consumption, and human error.
Disclosure of Invention
The invention provides a cancer pathology diagnosis system based on an extremely deep convolutional neural network. The system can rapidly and accurately identify and locate cancer cells and classify the degree of tumor malignancy; it reduces the influence of subjective judgment, provides consistent and repeatable diagnosis results, gives doctors a reliable diagnostic reference, greatly improves diagnostic efficiency and speed, facilitates prognosis and treatment of patients, and assists doctors in pathology training, improving learners' diagnostic skill and pathological knowledge.
The invention provides a cancer pathology diagnosis system based on an extremely deep convolutional neural network, comprising:
a sample acquisition module, which is used for acquiring various slice samples from a pathology sample database according to the type of cancerous organ and inputting them into the extremely deep convolutional neural network;
an image labeling module, which is used for preprocessing the slice samples into massive image blocks and applying pixel-level cancer labels to them based on historical diagnosis results, yielding training image blocks;
a model training module, which is used for training the extremely deep convolutional neural network on the training image blocks to establish a cancer pathology diagnosis model;
and an auxiliary diagnosis module, which is used for diagnosing an input slice to be diagnosed with the cancer pathology diagnosis model and obtaining a pixel-level diagnosis result.
Preferably, in the cancer pathology diagnosis system based on an extremely deep convolutional neural network, the sample acquisition module includes:
a catalog generation unit, which is used for acquiring the storage catalog of the pathology sample database, determining the coverage categories of cancerous organs and the number of cancerous organ types, and generating a sample acquisition catalog;
a number determination unit, which is used for allocating sample acquisitions based on the preset sample number and the number of cancerous organ types, determining the number of slice samples to acquire for each cancerous organ;
and a sample acquisition unit, which is used for acquiring digital pathological slice images based on the sample acquisition catalog and the acquisition number for each cancerous organ type, evaluating the image quality of the digital pathological slice images, and taking those whose image quality exceeds a threshold as slice samples.
Preferably, in the cancer pathology diagnosis system based on an extremely deep convolutional neural network, the image labeling module comprises:
a preprocessing unit, which is used for filtering the background of each slice sample with the Otsu algorithm to obtain clean slice samples, magnifying the clean slice samples at a preset magnification, and taking non-overlapping screenshots to obtain massive image blocks;
and an image labeling unit, which is used for acquiring the historical diagnosis result corresponding to each image block, determining and confirming the canceration type of each image block based on that result, and adding pixel-level labels to the image blocks according to the confirmation, generating the training image blocks.
Preferably, in the cancer pathology diagnosis system based on an extremely deep convolutional neural network, the canceration types include cancer, non-cancer, and ignore, where the pixel-level label of a cancer image block is 1, that of a non-cancer image block is 0, and that of an ignore image block is 255.
Preferably, in the cancer pathology diagnosis system based on an extremely deep convolutional neural network, the preprocessing unit includes:
a start determination subunit, which is used for acquiring the magnified clean slice sample, detecting the neighbors of its edge pixel points, taking edge pixel points with only two adjacent pixels as target pixel points, and selecting any one target pixel point as the interception starting point;
a positioning and intercepting subunit, which is used for establishing a pixel positioning coordinate system with the interception starting point as the coordinate origin, and acquiring a first image block of preset pixel size referenced to the starting point in that coordinate system;
it then takes the non-overlapping edge line of the first image block as the datum line of the next screenshot for multidirectional interception to obtain second image blocks, and repeats with the non-overlapping edge lines of the second image blocks until the magnified clean slice sample has been fully intercepted;
and an image sorting subunit, which is used for acquiring the category label of the magnified clean slice sample and storing the first and second image blocks in the corresponding storage units based on that label.
Preferably, in the cancer pathology diagnosis system based on an extremely deep convolutional neural network, the positioning and intercepting subunit comprises:
a judging subunit, which is used for monitoring the interception of second image blocks and judging whether each one is an edge screenshot; if so, it acquires the actual pixel size of the edge screenshot and, when that size is smaller than the preset pixel size, treats the screenshot as an incomplete screenshot;
if not, non-overlapping screenshots continue;
and an image expansion subunit, which is used for acquiring the missing-pixel characteristics of an incomplete screenshot, supplementing it with virtual pixels based on those characteristics to obtain a complete image block, and marking the supplemented pixel region as an ignore region.
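The judging and image-expansion subunits above can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name, the edge-replication fill, the 320-pixel patch size (taken from embodiment 3), and the use of label 255 for supplemented pixels are assumptions combined from the surrounding text.

```python
import numpy as np

PATCH, IGNORE = 320, 255  # preset patch size; label for supplemented pixels

def complete_edge_patch(img, label):
    """Pad an incomplete edge screenshot up to PATCH x PATCH.
    Virtual pixels are filled by edge replication (one plausible reading of
    'incomplete pixel characteristics'), and their labels are set to IGNORE
    so the supplemented region never contributes to training."""
    h, w = img.shape[:2]
    pad = ((0, PATCH - h), (0, PATCH - w)) + ((0, 0),) * (img.ndim - 2)
    img_full = np.pad(img, pad, mode="edge")
    label_full = np.pad(label, ((0, PATCH - h), (0, PATCH - w)),
                        constant_values=IGNORE)
    return img_full, label_full
```

A 300x280 edge screenshot, for example, comes back as a full 320x320 block whose bottom and right margins are ignore-labeled.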
Preferably, in the cancer pathology diagnosis system based on an extremely deep convolutional neural network, the model training module comprises:
a training unit, which is used for training the extremely deep convolutional neural network on the massive training image blocks to obtain the cancer pathology diagnosis model;
a first interference unit, which is used for randomly rotating or flipping the training image blocks during training;
and a second interference unit, which is used for randomly adjusting the color parameters of the training image blocks during training to apply color interference, where the color parameters include brightness, saturation, contrast, and hue.
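A hedged sketch of the two interference units. The parameter ranges are assumptions, and simple brightness/contrast jitter stands in here for the full brightness/saturation/contrast/hue adjustment; the key property shown is that geometric interference transforms image and label together, while color interference touches the image only.

```python
import random
import numpy as np

def augment(patch, label, rng=random):
    """Random rotation/flip interference plus brightness/contrast jitter."""
    k = rng.randint(0, 3)                    # rotate by k * 90 degrees
    patch, label = np.rot90(patch, k).copy(), np.rot90(label, k).copy()
    if rng.random() < 0.5:                   # random horizontal flip
        patch, label = patch[:, ::-1].copy(), label[:, ::-1].copy()
    scale = rng.uniform(0.9, 1.1)            # contrast interference
    shift = rng.uniform(-10.0, 10.0)         # brightness interference
    patch = np.clip(patch.astype(float) * scale + shift, 0, 255).astype(np.uint8)
    return patch, label                      # labels are never color-jittered
```

Because the patches are square (320x320), the 90-degree rotations preserve their shape.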
Preferably, in the cancer pathology diagnosis system based on an extremely deep convolutional neural network, the training unit includes:
an input subunit, which is used for sequentially inputting the training image blocks into the extremely deep convolutional neural network;
a feature extraction unit, which is used for extracting low-level and high-level features from each training image block with the extremely deep convolutional neural network and fusing them into fusion features;
an error comparison unit, which is used for determining the cancer diagnosis result of each training image block from the fusion features, outputting a pixel-level prediction, and comparing the prediction with the labels to obtain the overall training error over the massive training image blocks;
and a model building unit, which is used for judging training complete when the overall training error is smaller than a preset value, establishing the cancer pathology diagnosis model.
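The error-comparison and model-building logic reduces to a loop that trains until the overall training error falls below the preset value. This is an illustrative sketch only: `train_step`, `batches`, and `max_epochs` are hypothetical names, and the patent does not specify the stopping schedule beyond the error threshold.

```python
def train_until_converged(train_step, batches, preset_error, max_epochs=100):
    """Run training epochs until the overall training error over all training
    image blocks drops below the preset value; returns the finishing epoch.
    `train_step(batch)` is assumed to update the model and return its error."""
    for epoch in range(max_epochs):
        errors = [train_step(batch) for batch in batches]
        overall = sum(errors) / len(errors)   # overall training error
        if overall < preset_error:
            return epoch                      # model is considered established
    return max_epochs
```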
Preferably, in the cancer pathology diagnosis system based on an extremely deep convolutional neural network, the auxiliary diagnosis module comprises:
a mode selection unit, which is used for selecting the system mode, where the system mode includes an auxiliary teaching mode and an auxiliary diagnosis mode;
and a diagnosis control unit, which, in the auxiliary diagnosis mode, controls the cancer pathology diagnosis model to identify and diagnose the input slice to be diagnosed and directly outputs the pixel-level diagnosis result;
in the auxiliary teaching mode, it controls the cancer pathology diagnosis model to identify and diagnose the input slice, obtains a hidden auxiliary diagnosis result, and, after a manual diagnosis result is received, displays the hidden auxiliary diagnosis result and prominently marks the lesion cells in the slice.
Preferably, in the cancer pathology diagnosis system based on an extremely deep convolutional neural network, the diagnosis control unit includes:
a result comparison subunit, which, in the auxiliary teaching mode, compares the manual diagnosis result with the hidden auxiliary diagnosis result once the manual result is received, obtaining a number of diagnosis error positions;
and a color-separation display subunit, which is used for identifying the manual diagnosis result with a preset diagnosis semantic segmentation model, determining the error degree at each diagnosis error position, grading the errors by degree, and displaying the manual diagnosis result in separate colors according to the grading.
Compared with the prior art, the invention offers at least the following beneficial effects:
The sample acquisition module acquires various slice samples from the pathology sample database according to the types of cancerous organs and inputs them into the extremely deep convolutional neural network; acquiring multiple kinds of pathological slice samples helps broaden the range of cancers the model can diagnose and improves its diagnostic capability. The image labeling module preprocesses the slice samples into massive image blocks and applies pixel-level cancer labels based on historical diagnosis results to obtain training image blocks; the massive training image blocks help improve the model's diagnostic accuracy. The model training module trains the extremely deep convolutional neural network on the training image blocks to establish the cancer pathology diagnosis model, improving its generalization so that, in practical application, a fairly accurate diagnosis can be obtained even from an incomplete slice image. The auxiliary diagnosis module diagnoses the input slice with the cancer pathology diagnosis model and obtains a pixel-level diagnosis result, rapidly and accurately identifying and locating tumor cells and classifying the degree of tumor malignancy; this reduces the influence of subjective judgment, provides consistent and repeatable results, gives doctors a reliable diagnostic reference, greatly improves diagnostic efficiency and speed, and facilitates prognosis and treatment of patients.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a schematic diagram of the cancer pathology diagnosis system based on an extremely deep convolutional neural network according to the present invention;
FIG. 2 is a schematic diagram of the sample acquisition module of the cancer pathology diagnosis system based on an extremely deep convolutional neural network according to the present invention;
FIG. 3 is a schematic diagram of the image labeling module of the cancer pathology diagnosis system based on an extremely deep convolutional neural network;
FIG. 4 is a schematic diagram of the model training module of the cancer pathology diagnosis system based on an extremely deep convolutional neural network;
FIG. 5 is a schematic diagram of the auxiliary diagnosis module of the cancer pathology diagnosis system based on an extremely deep convolutional neural network.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
Example 1:
the invention provides a cancer pathology diagnosis system based on an extremely deep convolutional neural network, as shown in FIG. 1, comprising:
a sample acquisition module, which is used for acquiring various slice samples from a pathology sample database according to the type of cancerous organ and inputting them into the extremely deep convolutional neural network;
an image labeling module, which is used for preprocessing the slice samples into massive image blocks and applying pixel-level cancer labels to them based on historical diagnosis results, yielding training image blocks;
a model training module, which is used for training the extremely deep convolutional neural network on the training image blocks to establish a cancer pathology diagnosis model;
and an auxiliary diagnosis module, which is used for diagnosing an input slice to be diagnosed with the cancer pathology diagnosis model and obtaining a pixel-level diagnosis result.
In this embodiment, cancerous organs include the stomach, intestine, lung, prostate, endometrium, cervix, esophagus, lymph nodes, etc.
In this embodiment, the slice samples refer to a plurality of digital pathological slice images obtained from a pathological sample database.
In this embodiment, an image block refers to a local region cut from a slice sample.
In this embodiment, the training image block refers to an image block with pixel-level cancer labeling.
The beneficial effects of this embodiment are as follows: the sample acquisition module acquires various slice samples from the pathology sample database according to the types of cancerous organs and inputs them into the extremely deep convolutional neural network; acquiring multiple kinds of pathological slice samples helps broaden the range of cancers the model can diagnose and improves its diagnostic capability. The image labeling module preprocesses the slice samples into massive image blocks and applies pixel-level cancer labels based on historical diagnosis results to obtain training image blocks; the massive training image blocks help improve the model's diagnostic accuracy. The model training module trains the extremely deep convolutional neural network on the training image blocks to establish the cancer pathology diagnosis model, improving its generalization so that, in practical application, a fairly accurate diagnosis can be obtained even from an incomplete slice image. The auxiliary diagnosis module diagnoses the input slice with the cancer pathology diagnosis model and obtains a pixel-level diagnosis result, rapidly and accurately identifying and locating tumor cells and classifying the degree of tumor malignancy; this reduces the influence of subjective judgment, provides consistent and repeatable results, gives doctors a reliable diagnostic reference, greatly improves diagnostic efficiency and speed, and facilitates prognosis and treatment of patients.
Example 2:
on the basis of embodiment 1, the sample acquisition module, as shown in FIG. 2, includes:
a catalog generation unit, which is used for acquiring the storage catalog of the pathology sample database, determining the coverage categories of cancerous organs and the number of cancerous organ types, and generating a sample acquisition catalog;
a number determination unit, which is used for allocating sample acquisitions based on the preset sample number and the number of cancerous organ types, determining the number of slice samples to acquire for each cancerous organ;
and a sample acquisition unit, which is used for acquiring digital pathological slice images based on the sample acquisition catalog and the acquisition number for each cancerous organ type, evaluating the image quality of the digital pathological slice images, and taking those whose image quality exceeds a threshold as slice samples.
In this embodiment, the storage directory refers to a storage item directory of the pathology sample database.
In this embodiment, the cancerous organ coverage category refers to an organ coverage category corresponding to a cancer sample in the pathology sample database.
In this embodiment, the number of cancerous organ types refers to the number of organ types corresponding to the cancerous sample.
In this embodiment, the sample acquisition directory refers to a directory for acquiring slice samples.
In this embodiment, the preset number of samples refers to a preset total number of acquired slice samples.
The beneficial effects of this embodiment are as follows: acquiring the storage catalog of the pathology sample database, determining the coverage categories and number of cancerous organ types, and generating the sample acquisition catalog ensures that every organ in the pathology database has corresponding slice samples, which helps the diagnosis model cover a wide range of cancer pathologies. Allocating acquisitions based on the preset sample number and the number of organ types determines how many slice samples to acquire per cancerous organ, guaranteeing each organ its samples while keeping acquisition orderly. Acquiring digital pathological slice images per the sample acquisition catalog and per-organ quota, evaluating their image quality, and keeping only slices whose quality exceeds a threshold ensures the overall image quality of the slice samples, providing the best possible training images for the extremely deep convolutional neural network.
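The quota allocation above (a preset total spread over the cancerous organ types) might look like the following minimal sketch. The even split with remainder is an assumption, since the patent does not specify the distribution rule.

```python
def allocate_samples(preset_total, organ_types):
    """Spread the preset total number of slice samples as evenly as possible
    over the cancerous organ types, so every organ gets samples; any
    remainder goes to the first few organs in the catalog."""
    base, extra = divmod(preset_total, len(organ_types))
    return {organ: base + (1 if i < extra else 0)
            for i, organ in enumerate(organ_types)}
```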
Example 3:
on the basis of embodiment 1, the image labeling module, as shown in fig. 3, includes:
the preprocessing unit, which is used for filtering the background of each slice sample with the Otsu algorithm to obtain clean slice samples, magnifying the clean slice samples at a preset magnification, and taking non-overlapping screenshots to obtain massive image blocks;
and the image labeling unit, which is used for acquiring the historical diagnosis result corresponding to each image block, determining and confirming the canceration type of each image block based on that result, and adding pixel-level labels to the image blocks according to the confirmation, generating the training image blocks.
The canceration types include cancer, non-cancer, and ignore; the pixel-level label of a cancer image block is 1, that of a non-cancer image block is 0, and that of an ignore image block is 255.
In this embodiment, the Otsu algorithm (Otsu's method) is an algorithm for determining the threshold used to binarize and segment an image.
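For illustration, here is a self-contained NumPy version of Otsu's method applied to background filtering; production code would more likely call an imaging library's built-in Otsu thresholding. Treating tissue as darker than the bright glass background is an assumption typical of pathology slides, not something the patent states.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the grayscale threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = gray.size
    omega = np.cumsum(hist) / total                 # class-0 probability
    mu = np.cumsum(hist * np.arange(256)) / total   # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))

def filter_background(gray):
    """Keep the darker tissue pixels; paint brighter background white."""
    t = otsu_threshold(gray)
    clean = gray.copy()
    clean[gray > t] = 255
    return clean
```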
In this embodiment, a clean slice sample refers to a slice image whose background has been filtered out.
In this embodiment, the cancer class includes carcinomas, high-grade precancerous lesions, and other malignant tumors; the non-cancer class includes low-grade precancerous lesions, inflammation, and other benign conditions; the ignore class covers poor-quality image blocks and pixels annotated as ignore regions, and pixels labeled 255 contribute nothing to the cost function.
In this embodiment, each training image block is a 320x320-pixel image block, and there are approximately 100 million of them.
The beneficial effects of this embodiment are as follows: filtering the background of each slice sample with the Otsu algorithm yields clean slice samples and removes background interference; magnifying the clean samples at a preset magnification and taking non-overlapping screenshots yields massive pixel-level image blocks. Acquiring the historical diagnosis result of each image block, determining and confirming its canceration type, and adding pixel-level labels accordingly generates the training image blocks, completing the image labeling task intelligently, producing pixel-level training images, and finishing the preparation for training the extremely deep convolutional neural network.
Embodiment 4:
on the basis of embodiment 3, the preprocessing unit includes:
the starting determination subunit is used for acquiring an amplified clean slice sample, detecting the adjacent pixel points of the edge pixel points of the amplified clean slice sample, taking edge pixel points having only two adjacent pixel points as target pixel points, and acquiring any one target pixel point as an interception starting point;
the positioning and intercepting subunit is used for establishing a pixel positioning coordinate system with the interception starting point as the coordinate origin, and acquiring a first image block of a preset pixel size based on the pixel positioning coordinate system with the interception starting point as the reference;
taking the non-overlapping edge lines of the first image block as datum lines for the next screenshot to carry out multidirectional interception to obtain second image blocks, and taking the non-overlapping edge lines of the second image blocks as datum lines for subsequent screenshots, repeating the multidirectional interception until the interception of the entire amplified clean slice sample is completed;
and the image sorting subunit is used for acquiring the category label corresponding to the amplified clean slice sample, and storing the first image block and the second image block into the corresponding storage unit based on the category label.
In this embodiment, the amplified clean slice sample is a clean slice sample magnified at a preset magnification of 200 times (i.e., a fixed 10× eyepiece with a 20× objective lens).
In this embodiment, an edge pixel point refers to a pixel point on the image edge of the amplified clean slice sample.
In this embodiment, a target pixel point is an edge pixel point having only two adjacent pixel points, i.e., a corner pixel point of the amplified clean slice sample.
In this embodiment, the interception start point refers to any one of the target pixel points.
In this embodiment, the pixel positioning coordinate system refers to a coordinate system with the interception starting point as the origin and the image edge lines passing through that point as the coordinate axes.
In this embodiment, the first image block refers to the first screenshot of the amplified clean slice sample.
In this embodiment, the second image blocks refer to the other screenshots of the amplified clean slice sample apart from the first screenshot.
In this embodiment, the non-overlapping edge line of the first image block refers to an image edge line of the first image block that does not coincide with an edge of the amplified clean slice sample; the non-overlapping edge line of a second image block refers to an image edge line of the current second image block that does not coincide with the previous second image block.
The beneficial effects of this embodiment are that: the invention acquires the amplified clean slice sample, detects the adjacent pixel points of its edge pixel points, takes edge pixel points having only two adjacent pixel points as target pixel points, and acquires any one target pixel point as the interception starting point, which minimizes the proportion of pixels that do not conform to the pixel size of the training image blocks, guarantees the integrity of the slice sample screenshots, and improves the generalization capability of the model; a pixel positioning coordinate system is established with the interception starting point as the coordinate origin, and a first image block of a preset pixel size is acquired based on the pixel positioning coordinate system with the interception starting point as the reference; the non-overlapping edge lines of the first image block are taken as datum lines for the next screenshot to perform multidirectional interception to obtain second image blocks, and the non-overlapping edge lines of the second image blocks are in turn taken as datum lines for subsequent screenshots until the interception of the entire amplified clean slice sample is completed, ensuring that the screenshots do not overlap and that the image blocks are independent; the category label corresponding to the amplified clean slice sample is acquired, and the first image block and the second image blocks are stored in the corresponding storage units based on the category label, realizing the classified storage of the training image blocks and helping the system adjust the model training strategy according to actual application requirements: for example, continuously inputting training image blocks of the same category helps improve the single-cancer diagnosis accuracy of the finally obtained cancer pathology diagnosis model, while inputting training image blocks of different categories at intervals improves the multi-cancer adaptability of the finally obtained cancer pathology diagnosis model.
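The non-overlapping interception described above can be sketched as row-by-row tiling from the interception starting point; a minimal numpy sketch, assuming a top-left starting corner and the 320-pixel block size of embodiment 3 (the function name is illustrative):

```python
import numpy as np

TILE = 320  # preset pixel size of a training image block (from the embodiment)

def non_overlapping_tiles(image, tile=TILE):
    """Cut an H×W image into non-overlapping tile×tile blocks, row by row,
    starting from the top-left corner (the interception starting point)."""
    h, w = image.shape[:2]
    blocks = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            blocks.append(image[y:y + tile, x:x + tile])
    return blocks
```

Tiles along the right and bottom edges may come out smaller than tile×tile; embodiment 5 handles such incomplete screenshots by virtual pixel supplementation.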
Embodiment 5:
on the basis of embodiment 4, the positioning and intercepting subunit includes:
the judging subunit is used for monitoring during the interception of the second image blocks and judging whether a second image block is an edge screenshot; if so, the actual pixel size corresponding to the edge screenshot is acquired, and when the actual pixel size in either dimension is smaller than the preset pixel size, the edge screenshot is taken as an incomplete screenshot;
if not, continuing to perform non-overlapping screenshot;
and the image expansion subunit is used for acquiring the incomplete pixel feature of the incomplete screenshot, carrying out virtual pixel supplementation on the incomplete screenshot based on the incomplete pixel feature to obtain a complete image block, and marking the supplemented pixel area as an ignore region.
In this embodiment, an edge screenshot refers to a second image block whose pixels include edge pixel points.
In this embodiment, the incomplete screenshot refers to an edge screenshot with an actual pixel size smaller than a preset pixel size.
In this embodiment, the incomplete pixel feature refers to the positional feature of the insufficient size of the incomplete screenshot; for example, when the interception starting point is an upper corner pixel point, the incomplete screenshot may lack an actual image in its lower half or right half.
In this embodiment, virtual pixel supplementation refers to supplementing the missing pixels of the incomplete screenshot with virtual pixels according to the incomplete pixel feature, and each virtual pixel value is 255.
The beneficial effects of this embodiment are that: the invention monitors during the interception of the second image blocks, judges whether a second image block is an edge screenshot, determines whether the edge screenshot is an incomplete screenshot by size comparison, supplements the incomplete screenshot with virtual pixels based on its incomplete pixel feature to obtain a complete image block, and marks the supplemented pixel area as an ignore region; this ensures the consistency of the training image blocks while guaranteeing that the entire slice sample can be intercepted, which helps improve the generalization capability of the model.
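A minimal numpy sketch of the virtual pixel supplementation for an incomplete screenshot, assuming the 320-pixel preset size and the 255 virtual pixel value stated in this embodiment; the returned boolean mask marks the supplemented (ignored) region, and the names are illustrative:

```python
import numpy as np

def pad_incomplete(block, tile=320, fill=255):
    """Pad an incomplete edge screenshot to tile×tile with virtual pixels (255)
    and return the padded block plus an ignore mask over the supplemented area."""
    h, w = block.shape[:2]
    padded = np.full((tile, tile) + block.shape[2:], fill, dtype=block.dtype)
    padded[:h, :w] = block
    ignore = np.ones((tile, tile), dtype=bool)
    ignore[:h, :w] = False  # True marks supplemented (ignored) pixels
    return padded, ignore
```

The ignore mask plays the same role as the 255 pixel-level label of embodiment 3: supplemented pixels are excluded from the cost function during training.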
Embodiment 6:
on the basis of embodiment 1, the model training module, as shown in fig. 4, includes:
the training unit is used for training the extremely deep convolutional neural network based on the massive training image blocks to obtain a cancer pathology diagnosis model;
the first interference unit is used for randomly rotating or turning over the training image block in the training process;
and the second interference unit is used for randomly adjusting the color parameters of the training image block in the training process to finish color interference, wherein the color parameters comprise brightness, saturation, contrast and hue.
In this embodiment, the random rotation angle includes 90, 180 and 270 degrees, and the flipping refers to flipping in the horizontal and vertical directions.
In this embodiment, the adjustment range of the brightness is (0.0, 0.5); the adjustment range of the saturation is (0.0, 0.5); the adjustment range of the contrast is (0.0, 0.5); the adjustment range of the hue is (0.0, 0.2).
The beneficial effects of this embodiment are that: the invention randomly rotates or flips the training image blocks during training and randomly adjusts the color parameters of the training image blocks to complete the color interference, thereby improving the generalization capability of the cancer pathology diagnosis model.
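A minimal numpy sketch of the two interference units, assuming uint8 image blocks; only the brightness jitter is shown for the color parameters, with saturation, contrast and hue following the same pattern (the sampling details are illustrative assumptions, not specified by the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_geometric(img):
    """Randomly rotate by 0/90/180/270 degrees, then randomly flip
    horizontally and/or vertically (the first interference unit)."""
    img = np.rot90(img, k=rng.integers(0, 4))
    if rng.random() < 0.5:
        img = np.fliplr(img)
    if rng.random() < 0.5:
        img = np.flipud(img)
    return img

def random_brightness(img):
    """Randomly increase brightness within the (0.0, 0.5) range of this
    embodiment (the second interference unit, brightness channel only)."""
    delta = rng.uniform(0.0, 0.5)
    out = img.astype(np.float32) * (1.0 + delta)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because the training image blocks are square (320×320), the 90-degree rotations preserve their shape.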
Embodiment 7:
on the basis of embodiment 6, the training unit includes:
an input subunit, configured to sequentially input the training image blocks into the extremely deep convolutional neural network;
the feature extraction unit is used for extracting low-level features and high-level features in the training image blocks based on the extremely deep convolutional neural network, and fusing the low-level features and the high-level features to obtain fusion features;
the error comparison unit is used for determining the cancer diagnosis result corresponding to each training image block based on the fusion features, outputting a pixel-level prediction result, and comparing the pixel-level prediction result with the labeling result to obtain the overall training error over the massive training image blocks;
and the model building unit is used for judging that the training is completed when the overall training error is smaller than a preset value, and establishing the cancer pathology diagnosis model.
In this embodiment, the low-level features and the high-level features are fused by using the ASPP (Atrous Spatial Pyramid Pooling) network module of the DeepLab semantic segmentation model family.
The beneficial effects of this embodiment are that: the invention establishes the cancer pathology diagnosis model through training of the extremely deep convolutional neural network, which helps provide doctors with an objective and accurate diagnosis reference result, provides more accurate tumor grading and prognosis prediction, and improves diagnosis efficiency and speed.
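The error comparison against the pixel-level labels, with the 255 ignore label excluded from the cost function (per embodiment 3), can be sketched as a masked binary cross-entropy; a minimal numpy sketch under the assumption of a single cancer-probability map (function and variable names are illustrative):

```python
import numpy as np

IGNORE = 255  # pixel-level label for ignored regions (embodiment 3)

def masked_pixel_loss(pred_cancer_prob, labels):
    """Mean binary cross-entropy over pixels whose label is 0 or 1;
    pixels labeled 255 (ignore) do not contribute to the cost."""
    mask = labels != IGNORE
    y = labels[mask].astype(np.float64)
    p = np.clip(pred_cancer_prob[mask].astype(np.float64), 1e-7, 1 - 1e-7)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())
```

In a deep learning framework the same effect is usually obtained with a built-in ignore-index option of the cross-entropy loss.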
Embodiment 8:
on the basis of embodiment 1, the auxiliary diagnosis module, as shown in fig. 5, includes:
the mode selection unit is used for selecting a system mode, wherein the system mode comprises an auxiliary teaching mode and an auxiliary diagnosis mode;
the diagnosis control unit is used for, when the cancer pathology diagnosis model is in the auxiliary diagnosis mode, controlling the cancer pathology diagnosis model to perform identification diagnosis on the input slice to be diagnosed and directly outputting a pixel-level diagnosis result;
when the cancer pathology diagnosis model is in the auxiliary teaching mode, controlling the cancer pathology diagnosis model to perform identification diagnosis on the input slice to be diagnosed to obtain a hidden auxiliary diagnosis result, and after a manual diagnosis result is received, displaying the hidden auxiliary diagnosis result and prominently marking the lesion cells in the slice to be diagnosed.
In this embodiment, the auxiliary teaching mode serves doctors in completing pathology learning, and the auxiliary diagnosis mode provides reliable diagnosis results for doctors or pathologists in clinical pathology research.
In this embodiment, the hidden auxiliary diagnostic result refers to a pixel-level diagnostic result that is not displayed directly.
In this embodiment, the manual diagnosis result refers to a result of diagnosing the slice to be diagnosed by the learner in the auxiliary teaching mode.
The beneficial effects of this embodiment are that: the invention completes system mode selection through the mode selection unit and adjusts the output mode of the system's diagnosis results according to the selected mode, so that the cancer pathology diagnosis system can be applied to doctors' pathology learning while providing reliable diagnosis results for doctors or pathologists; this breaks the time limits of pathology learning and helps improve learners' diagnosis level and pathology knowledge reserve.
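The mode-dependent output behaviour of the diagnosis control unit can be sketched as a small controller; all names here are illustrative assumptions, and `model` stands for any callable wrapping the cancer pathology diagnosis model:

```python
class DiagnosisController:
    """Sketch of the two system modes: 'diagnosis' outputs the result
    directly, 'teaching' hides it until a manual result is submitted."""

    def __init__(self, model, mode="diagnosis"):  # "diagnosis" or "teaching"
        self.model = model
        self.mode = mode
        self._hidden = None

    def diagnose(self, slide):
        result = self.model(slide)
        if self.mode == "diagnosis":
            return result          # auxiliary diagnosis mode: output directly
        self._hidden = result      # auxiliary teaching mode: hold the result
        return None

    def submit_manual_result(self, manual):
        """Reveal the hidden auxiliary result once the learner has answered."""
        revealed, self._hidden = self._hidden, None
        return {"manual": manual, "model": revealed}
```

The reveal-after-answer flow mirrors the teaching mode of this embodiment: the learner commits to a diagnosis before seeing the model's hidden result.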
Embodiment 9:
on the basis of embodiment 8, the diagnostic control unit includes:
the result comparison subunit is used for, when the cancer pathology diagnosis model is in the auxiliary teaching mode, comparing the manual diagnosis result with the hidden auxiliary diagnosis result after acquiring the manual diagnosis result, so as to acquire a plurality of diagnosis error positions;
and the color separation display subunit is used for identifying the manual diagnosis result based on a preset diagnosis semantic segmentation model, determining the error degree of each diagnosis error position, grading errors based on the error degree, and performing color separation display on the manual diagnosis result according to the grading result.
In this embodiment, the diagnosis error position refers to a position where the learner diagnoses the error.
The beneficial effects of this embodiment are that: when the cancer pathology diagnosis model is in the auxiliary teaching mode, after the manual diagnosis result is acquired, it is compared with the hidden auxiliary diagnosis result to acquire a plurality of diagnosis error positions; the manual diagnosis result is identified based on a preset diagnosis semantic segmentation model, the error degree of each diagnosis error position is determined, errors are graded based on the error degree, and the manual diagnosis result is displayed in separated colors according to the grading result; the learner's diagnosis result is thus automatically checked for errors, which helps learners quickly find their own problems, while the color-separated display by error grade makes it convenient for learners to evaluate their pathology learning and adjust their learning strategy.
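The comparison of the manual diagnosis result with the hidden auxiliary result, and the error grading for color-separated display, might be sketched as follows; the binary cancer/non-cancer masks and the grading thresholds are illustrative assumptions, since the patent does not specify them:

```python
import numpy as np

def diagnosis_errors(manual_mask, model_mask):
    """Return the pixel positions where the manual diagnosis disagrees
    with the hidden auxiliary diagnosis (both binary masks)."""
    return np.argwhere(manual_mask != model_mask)

def grade_error(n_errors, n_pixels):
    """Grade the error degree into three levels for color-separated display;
    the thresholds are illustrative, not taken from the patent."""
    ratio = n_errors / n_pixels
    if ratio < 0.05:
        return "minor"      # e.g. shown in yellow
    if ratio < 0.20:
        return "moderate"   # e.g. shown in orange
    return "severe"         # e.g. shown in red
```

Each error position could then be rendered in the color of its grade over the slice to be diagnosed.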
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. An extremely deep convolutional neural network-based cancer pathology diagnosis system, comprising:
the sample acquisition module is used for acquiring a plurality of slice samples from a pathological sample database according to the types of cancerous organs and inputting the plurality of slice samples into the extremely deep convolutional neural network;
the image labeling module is used for preprocessing the plurality of slice samples to obtain massive image blocks, and respectively performing pixel-level canceration labeling on the massive image blocks based on historical diagnosis results to obtain training image blocks;
the model training module is used for training the extremely deep convolutional neural network based on the training image blocks and establishing a cancer pathology diagnosis model;
and the auxiliary diagnosis module is used for diagnosing an input slice to be diagnosed based on the cancer pathology diagnosis model to obtain a pixel-level diagnosis result.
2. The system of claim 1, wherein the sample acquisition module comprises:
the catalog generation unit is used for acquiring a storage catalog of the pathological sample database, determining the coverage category of the cancerous organ, determining the number of the cancerous organ categories and generating a sample acquisition catalog;
the number determining unit is used for distributing the sample acquisition number based on the preset sample number and the cancerous organ type number and determining the acquisition number of slice samples corresponding to each cancerous organ;
the sample acquisition unit is used for acquiring digital pathological slice images based on the sample acquisition catalog and the acquisition quantity corresponding to each cancerous organ type, carrying out image quality evaluation on the digital pathological slice images, and taking digital pathological slices with image quality greater than a threshold value as slice samples.
3. The system of claim 1, wherein the image labeling module comprises:
the preprocessing unit is used for filtering the backgrounds of the plurality of slice samples based on the Otsu algorithm to obtain clean slice samples, amplifying the clean slice samples according to a preset magnification and carrying out non-overlapping screenshots to obtain massive image blocks;
the image labeling unit is used for respectively acquiring the historical diagnosis results corresponding to the image blocks, determining the canceration type corresponding to each image block based on the historical diagnosis results, confirming the determined type, and adding pixel-level labels to the image blocks according to the confirmation results to generate the training image blocks.
4. The extremely deep convolutional neural network-based cancer pathology diagnosis system according to claim 3, characterized in that:
the canceration types comprise cancer, non-cancer and ignore, wherein the pixel-level label corresponding to a cancer image block is 1, the pixel-level label corresponding to a non-cancer image block is 0, and the pixel-level label corresponding to an ignored image block is 255.
5. The extremely deep convolutional neural network-based cancer pathology diagnosis system according to claim 3, wherein the preprocessing unit comprises:
the starting determination subunit is used for acquiring an amplified clean slice sample, detecting the adjacent pixel points of the edge pixel points of the amplified clean slice sample, taking edge pixel points having only two adjacent pixel points as target pixel points, and acquiring any one target pixel point as an interception starting point;
the positioning and intercepting subunit is used for establishing a pixel positioning coordinate system with the interception starting point as the coordinate origin, and acquiring a first image block of a preset pixel size based on the pixel positioning coordinate system with the interception starting point as the reference;
taking the non-overlapping edge lines of the first image block as datum lines for the next screenshot to carry out multidirectional interception to obtain second image blocks, and taking the non-overlapping edge lines of the second image blocks as datum lines for subsequent screenshots, repeating the multidirectional interception until the interception of the entire amplified clean slice sample is completed;
and the image sorting subunit is used for acquiring the category label corresponding to the amplified clean slice sample, and storing the first image block and the second image block into the corresponding storage unit based on the category label.
6. The extremely deep convolutional neural network-based cancer pathology diagnosis system according to claim 5, characterized in that the positioning and intercepting subunit comprises:
the judging subunit is used for monitoring during the interception of the second image blocks and judging whether a second image block is an edge screenshot; if so, the actual pixel size corresponding to the edge screenshot is acquired, and when the actual pixel size in either dimension is smaller than the preset pixel size, the edge screenshot is taken as an incomplete screenshot;
if not, continuing to perform non-overlapping screenshot;
and the image expansion subunit is used for acquiring the incomplete pixel feature of the incomplete screenshot, carrying out virtual pixel supplementation on the incomplete screenshot based on the incomplete pixel feature to obtain a complete image block, and marking the supplemented pixel area as an ignore region.
7. The system of claim 1, wherein the model training module comprises:
the training unit is used for training the extremely deep convolutional neural network based on the massive training image blocks to obtain a cancer pathology diagnosis model;
the first interference unit is used for randomly rotating or turning over the training image block in the training process;
and the second interference unit is used for randomly adjusting the color parameters of the training image block in the training process to finish color interference, wherein the color parameters comprise brightness, saturation, contrast and hue.
8. The extremely deep convolutional neural network-based cancer pathology diagnosis system according to claim 7, wherein the training unit comprises:
an input subunit, configured to sequentially input the training image blocks into the extremely deep convolutional neural network;
the feature extraction unit is used for extracting low-level features and high-level features in the training image blocks based on the extremely deep convolutional neural network, and fusing the low-level features and the high-level features to obtain fusion features;
the error comparison unit is used for determining the cancer diagnosis result corresponding to each training image block based on the fusion features, outputting a pixel-level prediction result, and comparing the pixel-level prediction result with the labeling result to obtain the overall training error over the massive training image blocks;
and the model building unit is used for judging that the training is completed when the overall training error is smaller than a preset value, and establishing the cancer pathology diagnosis model.
9. The system of claim 1, wherein the auxiliary diagnosis module comprises:
the mode selection unit is used for selecting a system mode, wherein the system mode comprises an auxiliary teaching mode and an auxiliary diagnosis mode;
the diagnosis control unit is used for, when the cancer pathology diagnosis model is in the auxiliary diagnosis mode, controlling the cancer pathology diagnosis model to perform identification diagnosis on the input slice to be diagnosed and directly outputting a pixel-level diagnosis result;
when the cancer pathology diagnosis model is in the auxiliary teaching mode, controlling the cancer pathology diagnosis model to perform identification diagnosis on the input slice to be diagnosed to obtain a hidden auxiliary diagnosis result, and after a manual diagnosis result is received, displaying the hidden auxiliary diagnosis result and prominently marking the lesion cells in the slice to be diagnosed.
10. The extremely deep convolutional neural network-based cancer pathology diagnosis system according to claim 9, wherein the diagnosis control unit comprises:
the result comparison subunit is used for, when the cancer pathology diagnosis model is in the auxiliary teaching mode, comparing the manual diagnosis result with the hidden auxiliary diagnosis result after acquiring the manual diagnosis result, so as to acquire a plurality of diagnosis error positions;
and the color separation display subunit is used for identifying the manual diagnosis result based on a preset diagnosis semantic segmentation model, determining the error degree of each diagnosis error position, grading errors based on the error degree, and performing color separation display on the manual diagnosis result according to the grading result.
CN202311317823.2A 2023-10-12 2023-10-12 Extreme deep convolutional neural network-based diagnosis system for diagnosis of cancer disease Pending CN117373695A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311317823.2A CN117373695A (en) 2023-10-12 2023-10-12 Extreme deep convolutional neural network-based diagnosis system for diagnosis of cancer disease


Publications (1)

Publication Number Publication Date
CN117373695A true CN117373695A (en) 2024-01-09

Family

ID=89399707

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311317823.2A Pending CN117373695A (en) 2023-10-12 2023-10-12 Extreme deep convolutional neural network-based diagnosis system for diagnosis of cancer disease

Country Status (1)

Country Link
CN (1) CN117373695A (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1888949A (en) * 2006-07-12 2007-01-03 张华升 Hidden image identifying system, products, identifying device and producing method
CN105975793A (en) * 2016-05-23 2016-09-28 麦克奥迪(厦门)医疗诊断系统有限公司 Auxiliary cancer diagnosis method based on digital pathological images
CN108288506A (en) * 2018-01-23 2018-07-17 雨声智能科技(上海)有限公司 A kind of cancer pathology aided diagnosis method based on artificial intelligence technology
CN109791693A (en) * 2016-10-07 2019-05-21 文塔纳医疗系统公司 For providing the digital pathology system and related work process of visualization full slice image analysis
CN209486920U (en) * 2018-11-27 2019-10-11 李世学 A kind of national capital answer learning device of geographical teaching
CN110335668A (en) * 2019-05-22 2019-10-15 台州市中心医院(台州学院附属医院) Thyroid cancer cell pathological map auxiliary analysis method and system based on deep learning
CN111179227A (en) * 2019-12-16 2020-05-19 西北工业大学 Mammary gland ultrasonic image quality evaluation method based on auxiliary diagnosis and subjective aesthetics
CN111402642A (en) * 2020-06-05 2020-07-10 成都泰盟软件有限公司 Clinical thinking ability training and checking system
CN112215790A (en) * 2019-06-24 2021-01-12 杭州迪英加科技有限公司 KI67 index analysis method based on deep learning
CN112263217A (en) * 2020-08-27 2021-01-26 上海大学 Non-melanoma skin cancer pathological image lesion area detection method based on improved convolutional neural network
CN113222933A (en) * 2021-05-13 2021-08-06 西安交通大学 Image recognition system applied to renal cell carcinoma full-chain diagnosis
CN113946217A (en) * 2021-10-20 2022-01-18 北京科技大学 Intelligent auxiliary evaluation system for enteroscope operation skills
CN114332144A (en) * 2021-12-28 2022-04-12 中国联合网络通信集团有限公司 Sample granularity detection method and system, electronic equipment and storage medium
CN114898862A (en) * 2022-03-23 2022-08-12 刘倩 Cervical cancer computer-aided diagnosis method based on convolutional neural network and pathological section image
CN115206146A (en) * 2021-04-14 2022-10-18 北京医智影科技有限公司 Intelligent teaching method, system, equipment and medium for delineating radiotherapy target area
CN116137659A (en) * 2021-11-18 2023-05-19 华为技术有限公司 Inter-coded block partitioning method and apparatus
CN116152185A (en) * 2023-01-30 2023-05-23 北京透彻未来科技有限公司 Gastric cancer pathological diagnosis system based on deep learning
CN116309368A (en) * 2023-02-21 2023-06-23 北京透彻未来科技有限公司 Lung cancer pathological diagnosis system based on deep migration learning
CN116386902A (en) * 2023-04-24 2023-07-04 北京透彻未来科技有限公司 Artificial intelligent auxiliary pathological diagnosis system for colorectal cancer based on deep learning



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination