CN114612482A - Method and system for positioning and classifying gastric cancer neuroinfiltration digital pathological section images - Google Patents


Info

Publication number
CN114612482A
CN114612482A (Application CN202210217663.3A)
Authority
CN
China
Prior art keywords
gastric cancer
network
region
nerve
neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210217663.3A
Other languages
Chinese (zh)
Inventor
高钦泉
胡紫薇
兰俊林
陈刚
张和军
王健超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Tumour Hospital (fujian Tumour Institute Fujian Cancer Control And Prevention Center)
Fuzhou University
Original Assignee
Fujian Tumour Hospital (fujian Tumour Institute Fujian Cancer Control And Prevention Center)
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Tumour Hospital (Fujian Tumour Institute, Fujian Cancer Control And Prevention Center) and Fuzhou University
Priority to CN202210217663.3A
Publication of CN114612482A
Legal status: Pending

Classifications

    • G06T 7/11 Region-based segmentation
    • G06F 18/24 Classification techniques (Pattern recognition)
    • G06N 3/08 Learning methods (Neural networks)
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20132 Image cropping
    • G06T 2207/30092 Stomach; Gastric
    • G06T 2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method and a system for positioning and classifying nerve infiltration in gastric cancer digital pathological section images. First, the whole pathological image is cut into small patch training image blocks in a sliding-window manner. Second, the training image blocks are fed into a segmentation model for training, which segments the gastric cancer cell regions and outputs a gastric cancer cell mask; in parallel, the training image blocks are fed into a DETR model, which detects nerve regions and positions and classifies the nerves. Then, based on the outputs of the gastric cancer segmentation model and the DETR network, the gastric cancer segmentation mask and the nerve detection result map are fed into a nerve infiltration discrimination network, which performs pixel-level overlap checking between nerve regions and gastric cancer regions: if a nerve region has no overlapping region, it is judged a normal nerve region and no positioning or category is presented for it; if an overlapping region exists, nerve infiltration is judged to be present and its positioning and classification result are output. Finally, the small patch image blocks are stitched back into an image of the original size for output.

Description

Method and system for positioning and classifying gastric cancer neuroinfiltration digital pathological section images
Technical Field
The invention belongs to the technical field of machine learning and medical image processing, and particularly relates to a method and a system for positioning and classifying gastric cancer neuro-infiltration digital pathological section images.
Background
According to the 2020 global cancer burden data, gastric cancer ranks fifth in incidence and fourth in mortality among malignant tumors worldwide. In China the early diagnosis rate of gastric cancer is low and the proportion of advanced-stage cases is large, so the 5-year survival rate of gastric cancer has remained low, at roughly 35% overall, mainly because advanced gastric cancer recurs and metastasizes easily.
TNM staging has long been an important reference for prognosis assessment of gastric cancer, but clinical practice finds that it increasingly falls short of clinical needs: patients with the same TNM stage sometimes have very different prognoses, while patients at different stages may have similar ones, which implies that factors other than TNM staging affect the prognosis of gastric cancer. In recent years it has been found that perineural invasion (PNI) may be another important pathway of gastric cancer metastasis and spread, after lymphatic metastasis, hematogenous metastasis and intraperitoneal implantation metastasis, and this neurotropic invasive behavior is closely related to prognosis.
In recent years, as GPU computing power has steadily grown, artificial intelligence has entered many industries, and the medical field, so closely tied to human life and health, offers application scenarios of huge potential. Image analysis methods based on intelligent computing are widely applied to data analysis in medicine. Against the broader background of deep learning, computing power and data have become the hard currency of the algorithm field, and national departments and social institutions have established medical image databases such as the liver tumor database LiTS, the lung nodule database LIDC-IDRI and the Alzheimer's disease database ADNI, prompting many researchers to concentrate on medical image processing methods and yielding numerous results.
However, there is as yet no in-depth research on, or mature application of, machine learning for the identification and localization of gastric cancer nerve infiltration. In current clinical practice, a human reader must search HE-stained digital slide images by eye for evidence of nerve infiltration, a process that is time-consuming and labor-intensive and whose accuracy is hard to guarantee.
Disclosure of Invention
To fill this gap and remedy the defects of the prior art, the invention provides a method and a system for positioning and classifying nerve infiltration in gastric cancer digital pathological section images. Given an input pathological section, the system outputs an image in which suspected nerve infiltration is positioned, classified and marked, automatically producing a preliminary interpretation of nerve invasion. This greatly reduces physicians' slide-reading time and improves their working efficiency, and is therefore of great significance and broad prospect.
To achieve this purpose, the scheme of the invention first applies a sliding window to the whole pathological image, cutting it into 2048 × 2048 small patch training image blocks at a 20× objective field of view. Second, the training image blocks are fed into a segmentation model for training, which segments the gastric cancer cell regions and outputs a gastric cancer cell mask; the training image blocks are also fed into an End-to-End Object Detection with Transformers (DETR) model, which detects nerve regions in order to position and classify nerves. Then, based on the outputs of the gastric cancer segmentation model and the DETR network, the gastric cancer segmentation mask and the nerve detection result map are fed into a nerve infiltration discrimination network, which performs pixel-level overlap checking between nerve regions and gastric cancer regions: if a nerve region has no overlapping region, it is judged a normal nerve region and no positioning or category is presented for it; if an overlapping region exists, nerve infiltration is judged to be present and its positioning and classification result are output. Finally, the small patch image blocks are stitched back into an image of the original size for output.
The invention specifically adopts the following technical scheme:
a method for positioning and classifying gastric cancer neuroinfiltration digital pathological section images is characterized by comprising the following steps:
step S1: cutting the whole pathological image into small patch training image blocks in a sliding-window manner;
step S2: constructing a segmentation model with EfficientNet-B3 as the encoder and UNet++ as the decoder, training it on the input training image blocks, segmenting the gastric cancer cell regions and outputting a gastric cancer cell mask; constructing a DETR model, feeding the training image blocks into it, detecting nerve regions, and positioning and classifying the nerves;
step S3: based on the output of the gastric cancer segmentation model and the output of the DETR network, feeding the gastric cancer segmentation mask and the nerve detection result map into a nerve infiltration discrimination network, performing pixel-level overlap checking between nerve regions and gastric cancer regions, and outputting the positioning and classification result of nerve infiltration;
step S4: stitching the small patch image blocks back into the original-size image for output.
Further, in step S1, the large-size WSI is cut into smaller 2048 × 2048 blocks using a sliding window based sampling strategy.
Further, in step S2, the EfficientNet-B3 encoder part is pre-trained on the ImageNet data set. In the UNet++ decoder part, parallel squeeze-and-excitation (SE) blocks are used in the decoder to improve performance, enabling the segmentation network to separate the structures of the various basic pathological sections of gastric cancer. The SE blocks assign different weights to different channels so as to enhance useful features and suppress useless ones; each intermediate layer is connected to the starting node of its level, and all related feature maps are merged at the final node of the level. Finally, the connection layer in the segmentation network combines the transposed convolution layer of the previous layer with all the feature maps of the matching layer of the encoding path.
Further, in step S2, the training image blocks are input into the DETR model, which detects nerve regions in order to position and classify nerves. When training the network, a number of normal nerve images are added to the training data set, so that the network learns the characteristics of nerves at different positions of the gastric cancer pathology and the recognition accuracy of the detection network improves. ResNet-101 is used as the backbone to extract 2D features from the image. The position encoding module encodes spatial information with sine and cosine functions of different frequencies, converts the two-dimensional feature map into a one-dimensional feature sequence, and passes it, as the input of the target detection network, to a 6-layer encoder followed by a 6-layer decoder. The output of the decoder is connected to two feed-forward networks that predict the nerve regions and their categories, respectively.
Further, in step S3, the nerve infiltration discrimination network obtains the segmentation mask of the cancerous region from the segmentation model and the nerve detection result map from the DETR nerve detection network. With the two models working together, a pixel-level overlap check is performed on the two networks' output maps through a discrimination statement: if there is no overlapping region, the nerve region is judged to be a normal nerve region and no positioning or category is presented; if an overlapping region exists, it is judged to be nerve infiltration, and finally the positioning and category information of the nerve infiltration is taken as the output of the nerve infiltration discrimination network.
And, a gastric cancer nerve infiltration digital pathological section image positioning and classification system, characterized in that, based on a computer system, it comprises:
the image cutting module is used for cutting the whole pathological image into small patch training image blocks in a sliding window mode;
the segmentation module, obtained by feeding training image blocks into a model with EfficientNet-B3 as the encoder and UNet++ as the decoder for training, and used for segmenting the gastric cancer cell regions and outputting a gastric cancer cell mask;
the detection module is obtained by inputting the training image block into the DETR model for training and is used for detecting a neural area and positioning and classifying nerves;
the nerve infiltration discrimination network, which receives the gastric cancer segmentation mask and the nerve detection result map based on the output of the segmentation module and the output of the detection module, performs pixel-level overlap checking between nerve regions and gastric cancer regions, judges a nerve region with no overlapping region to be a normal nerve region, for which no positioning or category is presented, and, if an overlapping region exists, judges nerve infiltration to be present and outputs its positioning and classification result;
and the image splicing module is used for splicing the small patch image blocks output by the neural infiltration judging network back to the original size image for output.
Further, the EfficientNet-B3 encoder part of the segmentation module is pre-trained on the ImageNet data set. In the UNet++ decoder part, parallel squeeze-and-excitation (SE) blocks are used in the decoder to improve performance, enabling the segmentation network to separate the structures of the various basic pathological sections of gastric cancer. The SE blocks assign different weights to different channels so as to enhance useful features and suppress useless ones; each intermediate layer is connected to the starting node of its level, and all related feature maps are merged at the final node of the level. Finally, the connection layer in the segmentation network combines the transposed convolution layer of the previous layer with all the feature maps of the matching layer of the encoding path.
Furthermore, when the detection module trains the network, a number of normal nerve images are added to the training data set, so that the network learns the characteristics of nerves at different positions of the gastric cancer pathology and the recognition accuracy of the detection network improves. ResNet-101 is used as the backbone to extract 2D features from the image. The position encoding module encodes spatial information with sine and cosine functions of different frequencies, converts the two-dimensional feature map into a one-dimensional feature sequence, and passes it, as the input of the target detection network, to a 6-layer encoder followed by a decoder; the output of the decoder is connected to two feed-forward networks that predict the nerve regions and their categories, respectively.
Compared with the prior art, the invention and its preferred scheme combine a gastric cancer region segmentation algorithm to assist the interpretation of the final nerve invasion detection result and quickly find suspected nerve infiltration. This greatly reduces the time and effort physicians spend finding nerve infiltration on HE digital pathological sections, fills the gap left by current deep learning algorithms in nerve infiltration interpretation in the field of gastric cancer digital pathological section image processing, and lays a foundation for applying artificial intelligence in computer-assisted gastric cancer pathology. The method also has broad application value in fields such as prognostic observation of targeted drug treatment response and the training of pathologists.
Drawings
FIG. 1 is a schematic overall flow chart of an embodiment of the present invention;
FIG. 2 is a schematic diagram of an overall algorithm framework according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a segmentation model algorithm according to an embodiment of the present invention.
Detailed Description
In order to make the features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail as follows:
it should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
As shown in figs. 1-2, in the scheme provided by this embodiment, the whole pathological image is first cut into 2048 × 2048 small patch training image blocks in a sliding-window manner. Second, the training image blocks are fed into a segmentation model for training, which segments the gastric cancer cell regions and outputs a gastric cancer cell mask; the training image blocks are also fed into an End-to-End Object Detection with Transformers (DETR) model, which detects nerve regions in order to position and classify nerves. Then, based on the output of the gastric cancer segmentation model and the output of the DETR network, the gastric cancer segmentation mask and the nerve detection result map are fed into a nerve infiltration discrimination network, which performs pixel-level overlap checking between nerve regions and gastric cancer regions: if a nerve region has no overlapping region, it is judged a normal nerve region and no positioning or category is presented for it; if an overlapping region exists, nerve infiltration is judged to be present and its positioning and classification result are output. Finally, the small patch image blocks are stitched back into the original-size image for output.
The specific implementation process is as follows:
1. sampling method based on sliding window
Because a gastric cancer digital pathological section is very large, it is usually impractical to input it directly into a deep learning model for segmentation and target detection. Reducing the resolution of the whole slide would solve this problem, but would very likely lose a large amount of detail. Therefore, a sampling strategy based on a sliding window is used to crop the large-size WSI into smaller 2048 × 2048 blocks. To ensure that every pixel in the section is predicted with little or no extra computational overhead, this embodiment crops the section into image blocks at equal intervals, obtaining sub-images that contain no repeated regions.
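The equal-interval tiling described above can be sketched in a few lines of numpy. The 2048-pixel tile size follows the patent, while the function name, the zero-padding of the right and bottom edges, and the returned coordinate list are illustrative choices, not details taken from the patent:

```python
import numpy as np

def tile_image(image, tile=2048):
    """Crop a large slide image into non-overlapping tile x tile blocks.

    The right/bottom edges are zero-padded up to a multiple of `tile` so
    every pixel is covered exactly once, matching the equal-interval
    strategy described above. Returns the tiles, each tile's (y, x)
    top-left corner, and the original (H, W) for later stitching.
    """
    h, w = image.shape[:2]
    pad_h = (-h) % tile              # padding needed to reach a multiple of tile
    pad_w = (-w) % tile
    padded = np.pad(image, ((0, pad_h), (0, pad_w)) + ((0, 0),) * (image.ndim - 2))
    tiles, coords = [], []
    for y in range(0, padded.shape[0], tile):
        for x in range(0, padded.shape[1], tile):
            tiles.append(padded[y:y + tile, x:x + tile])
            coords.append((y, x))
    return tiles, coords, (h, w)
```

In practice the WSI would be read region by region through a slide-reading library rather than held in memory whole; the numpy version only shows the indexing logic.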
2. Segmenting gastric cancer digital pathological section gastric cancer cells
A segmentation model with EfficientNet-B3 as the encoder and UNet++ as the decoder is constructed and trained, and the segmentation result for the gastric cancer regions on the slide is obtained from the trained model. The EfficientNet-B3 encoder part is pre-trained on the ImageNet data set. In the UNet++ decoder part, parallel squeeze-and-excitation (SE) blocks are used in the decoder to improve performance, allowing the segmentation network to separate the structures of the various basic pathological sections of gastric cancer. The SE block assigns different weights to different channels to enhance useful features and suppress useless ones; each intermediate layer is connected to the starting node of its level, and all related feature maps are merged at the final node of the level. Finally, the connection layer in the segmentation network combines the transposed convolution layer of the previous layer with all the feature maps of the matching layer of the encoding path. The segmentation network model is shown in fig. 3.
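The channel reweighting performed by an SE block can be illustrated with a minimal numpy sketch. The global-average-pool / two-layer gating MLP / sigmoid structure follows the standard squeeze-and-excitation design; the function and the toy weights below are assumptions for illustration, not the patent's trained decoder:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feat, w1, w2):
    """Squeeze-and-Excitation channel reweighting (numpy sketch).

    feat: (C, H, W) feature map. w1: (C//r, C) and w2: (C, C//r) are the
    weights of the two-layer gating MLP with reduction ratio r; in the
    real decoder they are learned, here they are supplied for illustration.
    """
    squeeze = feat.mean(axis=(1, 2))         # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)   # FC + ReLU, C -> C//r
    gate = sigmoid(w2 @ hidden)              # FC + sigmoid, C//r -> C, in (0, 1)
    return feat * gate[:, None, None]        # per-channel reweighting
```

Channels whose gate value approaches 1 pass through almost unchanged (useful features enhanced), while channels gated toward 0 are suppressed, which is exactly the weighting behavior the text attributes to the SE block.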
3. Detecting gastric cancer digital pathological section nerve region
This embodiment needs a target detection method with positioning and classification capability to handle the uncertainty of the nerve infiltration position within the large image. Given the shortage of training data, normal nerve images can be added to the training set during network training, so that the network learns the characteristics of nerves at different positions of the gastric cancer pathology and recognition accuracy improves. This embodiment uses ResNet-101 as the backbone to extract 2D features from images. To fully exploit the positional information of the image, the position encoding module encodes spatial information with sine and cosine functions of different frequencies, converts the two-dimensional feature map into a one-dimensional feature sequence, and passes it as the input of the target detection network to a 6-layer encoder and decoder; the output of the decoder is connected to two feed-forward networks that predict the nerve regions and their categories, respectively.
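The sine/cosine position encoding step can be sketched as follows. This follows the standard DETR-style 2D encoding, where half of the channels encode the y coordinate and half the x coordinate at geometrically spaced frequencies, and the (H, W) grid is then flattened into a 1D sequence as the text describes; the dimension sizes are chosen for illustration rather than taken from the patent:

```python
import numpy as np

def sine_position_encoding(h, w, d_model=256):
    """DETR-style 2D sine/cosine positional encoding (numpy sketch).

    Returns an (h*w, d_model) sequence: the first d_model//2 channels
    encode the row index, the rest the column index, each channel pair
    using sin/cos at one frequency.
    """
    d_half = d_model // 2
    freq = 10000.0 ** (-np.arange(0, d_half, 2) / d_half)     # (d_half/2,) frequencies
    ys = np.arange(h)[:, None] * freq                          # (h, d_half/2)
    xs = np.arange(w)[:, None] * freq
    y_enc = np.concatenate([np.sin(ys), np.cos(ys)], axis=1)   # (h, d_half)
    x_enc = np.concatenate([np.sin(xs), np.cos(xs)], axis=1)   # (w, d_half)
    pos = np.zeros((h, w, d_model))
    pos[:, :, :d_half] = y_enc[:, None, :]                     # broadcast over columns
    pos[:, :, d_half:] = x_enc[None, :, :]                     # broadcast over rows
    return pos.reshape(h * w, d_model)                         # flatten 2D grid to 1D
```

Each grid cell thus receives a unique, bounded code, which is added to the backbone features before they enter the transformer encoder.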
4. Neural infiltration discrimination by neural infiltration discrimination network
The nerve infiltration discrimination network combines the gastric cancer segmentation mask with the positioning and classification of nerves to recognize nerve infiltration. The segmentation model produces the segmentation mask of the cancerous region, and the trained DETR nerve detection network produces the nerve positioning and classification result image. A pixel-level overlap check is performed on the two networks' output images: if there is no overlapping region, the nerve region is judged to be a normal nerve region and no positioning or classification is presented for it; if an overlapping region exists, it is judged to be nerve infiltration, and finally the positioning and category information of the nerve infiltration is taken as the output of the nerve infiltration discrimination network.
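The pixel-level overlap check itself is simple to express. A minimal sketch, assuming the segmentation output is a boolean mask and the DETR detections are axis-aligned boxes in pixel coordinates (the function name and box format are illustrative, not from the patent):

```python
import numpy as np

def judge_nerve_invasion(cancer_mask, nerve_boxes):
    """Pixel-level overlap check between a cancer mask and nerve boxes.

    cancer_mask: (H, W) boolean array from the segmentation model.
    nerve_boxes: list of (y0, x0, y1, x1) nerve detections from the
    detection model. Returns only the boxes whose region shares at
    least one pixel with the cancerous area, i.e. the suspected
    perineural-invasion sites; normal nerves are dropped from the
    output, as described above.
    """
    invaded = []
    for (y0, x0, y1, x1) in nerve_boxes:
        if cancer_mask[y0:y1, x0:x1].any():   # any shared pixel => infiltration
            invaded.append((y0, x0, y1, x1))
    return invaded
```

A production version might additionally threshold on the overlap area or ratio instead of a single shared pixel; that refinement is not specified in the patent.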
One specific example is provided below to further illustrate the present solution:
First, the whole pathological image is cut, at a 20× objective field of view, into 2048 × 2048 small patch training image blocks in a sliding-window manner. Second, the training image blocks are fed into the segmentation model for training, which segments the gastric cancer cell regions and outputs a gastric cancer cell mask; at the same time the training image blocks are fed into the detection model, which detects nerve regions and positions and classifies nerves. Then, based on the output of the gastric cancer segmentation model and the output of the End-to-End Object Detection with Transformers (DETR) network, the gastric cancer segmentation mask and the nerve detection result map are fed into the nerve infiltration discrimination network, which performs pixel-level overlap checking between nerve regions and gastric cancer regions and outputs the positioning and classification results most likely to be nerve infiltration. Finally, the output small patch image blocks are stitched back into the original-size image, automatically assisting the physician with a preliminary interpretation of nerve invasion and greatly reducing slide-reading time and diagnostic difficulty. The specific process is as follows:
(1) Full-digital pathological section images of gastric cancer are acquired with the hospital pathology department's professional digital scanner for gastric cancer pathological sections. Patient information is integrated and a gastric cancer digital pathological section database is built.
(2) The acquired gastric cancer digital pathological sections are preprocessed. Because staining processes differ, the images vary considerably, and sections suffer from artifacts, noise, blank slides, tissue loss, tearing and similar problems. The method removes irrelevant information from the image by thresholding, and applies normalization, geometric transformation and image enhancement to strengthen the detectability of the relevant information.
(3) Gastric cancer cells in the HE image are segmented with the model that uses EfficientNet-B3 as the encoder and UNet++ as the decoder. Pre-training is adopted and scale information is added, so the structures of the various basic pathological sections of gastric cancer can be separated; parallel squeeze-and-excitation blocks are used in the decoder to improve performance and segmentation accuracy, and the region where the cancerous cells lie is output.
(4) Because the DETR target detection network has positioning and classification capability, it can handle the uncertainty of the nerve infiltration position in the large image. As nerve infiltration training data are scarce, some normal nerve images are added to the training set so that the network learns a variety of nerve structure characteristics from the gastric cancer pathology and recognition accuracy improves. The positioning and classification information of the detected nerves is output.
(5) The segmentation mask of the cancerous region is obtained from the segmentation model, and the nerve detection result map from the DETR nerve detection network. With the two models working together, the nerve infiltration recognition network performs a pixel-level check on the two networks' output maps: if there is no overlapping region, the nerve region is judged to be a normal nerve region and no positioning or classification is presented for it; if an overlapping region exists, it is judged to be nerve infiltration, and finally the positioning and category information of the nerve infiltration is taken as the output of the nerve infiltration discrimination network.
(6) The image blocks are stitched back to the size of the original digital pathological section image, and the result of the preliminary interpretation of nerve invasion can be displayed in the image.
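Step (6), the reverse of the tiling step, can be sketched as follows. It assumes each tile carries its (y, x) top-left coordinate from the cropping stage and that any edge padding added during tiling is simply cropped away; function and argument names are illustrative:

```python
import numpy as np

def stitch_tiles(tiles, coords, full_shape):
    """Reassemble processed tiles into the original-size image.

    tiles: equally sized square arrays; coords: each tile's (y, x)
    top-left corner; full_shape: the original (H, W). A padded canvas
    covering all tiles is filled in, then cropped back to full_shape.
    """
    tile = tiles[0].shape[0]
    h, w = full_shape
    out_h = max(y for y, _ in coords) + tile
    out_w = max(x for _, x in coords) + tile
    canvas = np.zeros((out_h, out_w) + tiles[0].shape[2:], dtype=tiles[0].dtype)
    for t, (y, x) in zip(tiles, coords):
        canvas[y:y + tile, x:x + tile] = t     # place each tile at its origin
    return canvas[:h, :w]                      # drop the tiling padding
```

Because the tiles were cut at equal intervals with no overlap, no blending is needed at the seams; each output pixel comes from exactly one tile.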
The programming scheme provided by this embodiment can be stored in coded form on a computer-readable storage medium and implemented as a computer program; the basic parameter information required for the computation is input through computer hardware, and the computation result is output.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations of methods, apparatus (devices), and computer program products according to embodiments of the invention. It will be understood that each flow of the flowcharts, and combinations of flows in the flowcharts, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows.
The foregoing is directed to preferred embodiments of the present invention; other and further embodiments may be devised without departing from its basic scope, which is determined by the claims that follow. Any simple modification, equivalent change, or variation made to the above embodiments according to the technical essence of the present invention remains within the protection scope of the technical solution of the present invention.
The present invention is not limited to the above preferred embodiments; with the benefit of this disclosure, various other methods and systems for locating and classifying digital pathological section images of gastric cancer nerve infiltration may be derived.

Claims (8)

1. A method for positioning and classifying gastric cancer neuroinfiltration digital pathological section images is characterized by comprising the following steps:
step S1: cutting the whole pathological image into small patch training image blocks in a sliding-window manner;
step S2: constructing a segmentation model with EfficientNet-B3 as the encoder and UNet++ as the decoder, training it on the input training image blocks, segmenting the gastric cancer cell region, and outputting a generated gastric cancer cell mask; constructing a DETR model, inputting the training image blocks into it for training, detecting nerve regions, and locating and classifying the nerves;
step S3: based on the output result of the gastric cancer segmentation model and the output result of the DETR network, inputting the gastric cancer segmentation mask and the nerve detection result map into a nerve infiltration judging network, performing pixel-level overlap judgment between the nerve region and the gastric cancer region, and outputting the localization and classification result of the nerve infiltration;
step S4: stitching the small patch image blocks back into the original-size image for output.
2. The method for positioning and classifying gastric cancer neuroinfiltration digital pathological section images according to claim 1, which is characterized in that: in step S1, the large-size WSI is cropped into smaller 2048 × 2048 blocks using a sliding-window based sampling strategy.
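The sliding-window sampling strategy of claim 2 can be sketched as follows; a simplified NumPy illustration (the in-memory array, zero padding, and function name are assumptions — in practice a gigapixel WSI is usually read region-by-region through a slide-reading library rather than loaded whole).

```python
import numpy as np

def crop_wsi(wsi, patch=2048):
    """Crop a large WSI array into patch x patch blocks with a
    sliding window, zero-padding the right/bottom edges so every
    block has the same size. Returns the blocks (row-major) and
    the grid shape needed to stitch them back later."""
    h, w = wsi.shape[:2]
    rows = -(-h // patch)                      # ceiling division
    cols = -(-w // patch)
    padded = np.zeros((rows * patch, cols * patch) + wsi.shape[2:],
                      dtype=wsi.dtype)
    padded[:h, :w] = wsi
    blocks = [padded[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
              for r in range(rows) for c in range(cols)]
    return blocks, (rows, cols)
```

The grid shape returned here is exactly what a stitching step needs to reassemble the blocks into the original-size image.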
3. The method for positioning and classifying gastric cancer neuroinfiltration digital pathological section images according to claim 1, which is characterized in that: in step S2, the EfficientNet-B3 encoder part is pre-trained on the ImageNet data set; in the UNet++ decoder part, performance is improved by using parallel squeeze-and-excitation (SE) blocks in the decoder, and the segmentation network separates the various basic pathological structures of the gastric cancer section; the SE block assigns different weights to different channels so as to enhance useful features and suppress useless ones; each intermediate layer is connected to the initial node of its layer, and all related feature maps are merged at the final node of the layer; finally, the connection layer in the segmentation network combines the transposed convolution layer of the previous layer with all the feature maps of the matching layer of the encoding path.
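The channel reweighting performed by the squeeze-and-excitation block of claim 3 can be illustrated with a minimal NumPy sketch; the two fully connected layers and their shapes are assumptions about a standard SE block, and a real implementation would use a deep learning framework with learned weights.

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """Squeeze-and-excitation on a feature map x of shape (C, H, W):
    global average pooling per channel (squeeze), two fully connected
    layers with ReLU then sigmoid (excitation), and channel-wise
    rescaling that enhances useful channels and suppresses others."""
    s = x.mean(axis=(1, 2))                     # squeeze: (C,)
    z = np.maximum(0.0, w1 @ s + b1)            # FC1 + ReLU, reduced dim
    g = 1.0 / (1.0 + np.exp(-(w2 @ z + b2)))    # FC2 + sigmoid: (C,) gates in (0, 1)
    return x * g[:, None, None]                 # per-channel reweighting
```

Because the gates `g` lie in (0, 1), each channel of the input is scaled rather than replaced, which is the "enhance useful features and suppress useless ones" behavior the claim describes.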
4. The method for positioning and classifying gastric cancer neuroinfiltration digital pathological section images according to claim 1, which is characterized in that: in step S2, in the process of inputting the training image blocks into the DETR model and detecting nerve regions to locate and classify the nerves, a number of normal nerve images are added to the training data set when training the network, so that the network learns the features of nerves at different positions according to the pathological site of the gastric cancer, improving the recognition accuracy of the detection network; 2D features are extracted from the image using ResNet-101 as the backbone; the position encoding module encodes spatial information using sine and cosine functions of different frequencies, flattens the two-dimensional feature map into a one-dimensional sequence, and passes the flattened sequence as input to the 6-layer encoder and decoder of the object detection network; and two feed-forward networks are connected after the decoder output to predict the nerve region and its category, respectively.
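The sine/cosine position encoding and 2-D-to-1-D flattening described in claim 4 can be sketched as follows; this is a simplified variant of the DETR-style encoding (splitting the embedding dimension evenly between the y and x axes and the exact frequency schedule are assumptions, not details taken from the patent).

```python
import numpy as np

def sine_position_encoding(h, w, d):
    """Encode each (y, x) position of an h x w feature grid with sines
    and cosines of different frequencies, d//2 dimensions per axis,
    then flatten the grid into a 1-D sequence of length h*w."""
    def axis_enc(n, dim):
        pos = np.arange(n)[:, None]                          # (n, 1)
        freq = 1.0 / (10000 ** (2 * (np.arange(dim) // 2) / dim))
        ang = pos * freq[None, :]                            # (n, dim)
        # even dims get sine, odd dims get cosine
        return np.where(np.arange(dim) % 2 == 0, np.sin(ang), np.cos(ang))
    y = axis_enc(h, d // 2)                                  # (h, d/2)
    x = axis_enc(w, d // 2)                                  # (w, d/2)
    grid = np.concatenate(
        [np.repeat(y[:, None, :], w, axis=1),                # y code along rows
         np.repeat(x[None, :, :], h, axis=0)],               # x code along cols
        axis=-1)                                             # (h, w, d)
    return grid.reshape(h * w, d)                            # flatten to sequence
```

The flattened `(h*w, d)` sequence is the form a transformer encoder consumes: spatial structure is no longer implicit in the array layout, so it must be carried by the encoding itself.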
5. The method for positioning and classifying gastric cancer neuroinfiltration digital pathological section images according to claim 1, which is characterized in that: in step S3, the nerve infiltration identification network obtains a segmentation mask of the cancerous region through the segmentation model and a nerve detection result map through the DETR detection network; with the two models working in coordination, it identifies pixel-level overlapping regions in the output result maps of the two networks; if there is no overlapping region, the nerve region is judged to be a normal nerve region, and no localization or category presentation is made for it; if an overlapping region exists, it is judged to be nerve infiltration, and finally the localization and category information of the nerve infiltration is taken as the output of the nerve infiltration judging network.
6. A system for locating and classifying gastric cancer neuro-infiltration digital pathological section images is characterized by comprising, based on a computer system:
the image cutting module is used for cutting the whole pathological image into small patch training image blocks in a sliding window mode;
the segmentation module is obtained by inputting a training image block into training by taking EfficientnNet-B3 as an encoder and Unet + + as a decoder, and is used for segmenting a gastric cancer cell region and outputting a gastric cancer cell generation mask;
the detection module is obtained by inputting the training image block into the DETR model for training and is used for detecting a neural area and positioning and classifying nerves;
the nerve infiltration judging network is used for inputting the stomach cancer segmentation mask and the nerve detection result graph into the nerve infiltration judging network based on the output result of the segmentation module and the output result of the detection module, performing pixel-level overlapping judgment on a nerve region and a stomach cancer region, judging the nerve region as a normal nerve region if no overlapping region exists, and not performing positioning and category presentation on the nerve region; if the overlapped area exists, judging that the nerve infiltration exists, and outputting the positioning and classification result of the nerve infiltration;
and the image splicing module is used for splicing the small patch image blocks output by the neural infiltration judgment network back to the original size image for output.
7. The gastric cancer neuroinfiltration digital pathological section image positioning and classifying system according to claim 6, wherein: the EfficientNet-B3 encoder part of the segmentation module is pre-trained on the ImageNet data set; in the UNet++ decoder part, performance is improved by using parallel squeeze-and-excitation (SE) blocks in the decoder, and the segmentation network separates the various basic pathological structures of the gastric cancer section; the SE block assigns different weights to different channels so as to enhance useful features and suppress useless ones; each intermediate layer is connected to the initial node of its layer, and all related feature maps are merged at the final node of the layer; finally, the connection layer in the segmentation network combines the transposed convolution layer of the previous layer with all the feature maps of the matching layer of the encoding path.
8. The gastric cancer neuroinfiltration digital pathological section image positioning and classifying system according to claim 6, wherein: when the detection module trains the network, a number of normal nerve images are added to the training data set, so that the network learns the features of nerves at different positions according to the pathological site of the gastric cancer, improving the recognition accuracy of the detection network; 2D features are extracted from the image using ResNet-101 as the backbone; the position encoding module encodes spatial information using sine and cosine functions of different frequencies, flattens the two-dimensional feature map into a one-dimensional sequence, and passes the flattened sequence as input to the 6-layer encoder and decoder of the object detection network; and two feed-forward networks are connected after the decoder output to predict the nerve region and its category, respectively.
CN202210217663.3A 2022-03-08 2022-03-08 Method and system for positioning and classifying gastric cancer neuroinfiltration digital pathological section images Pending CN114612482A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210217663.3A CN114612482A (en) 2022-03-08 2022-03-08 Method and system for positioning and classifying gastric cancer neuroinfiltration digital pathological section images

Publications (1)

Publication Number Publication Date
CN114612482A true CN114612482A (en) 2022-06-10

Family

ID=81861937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210217663.3A Pending CN114612482A (en) 2022-03-08 2022-03-08 Method and system for positioning and classifying gastric cancer neuroinfiltration digital pathological section images

Country Status (1)

Country Link
CN (1) CN114612482A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190065817A1 (en) * 2017-08-29 2019-02-28 Konica Minolta Laboratory U.S.A., Inc. Method and system for detection and classification of cells using convolutional neural networks
WO2021003821A1 (en) * 2019-07-11 2021-01-14 平安科技(深圳)有限公司 Cell detection method and apparatus for a glomerular pathological section image, and device
CN113034462A (en) * 2021-03-22 2021-06-25 福州大学 Method and system for processing gastric cancer pathological section image based on graph convolution
CN113674288A (en) * 2021-07-05 2021-11-19 华南理工大学 Automatic segmentation method for non-small cell lung cancer digital pathological image tissues

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
胡紫薇 (Hu Ziwei): "A multi-task deep learning framework for perineural invasion recognition in gastric cancer whole slide images", Biomedical Signal Processing and Control, 9 November 2022 (2022-11-09) *
高钦泉 (Gao Qinquan): "Design and application of a traffic flow detection system based on fuzzy control", Internet of Things Technology (物联网技术), 31 October 2021 (2021-10-31) *

Similar Documents

Publication Publication Date Title
US11704808B1 (en) Segmentation method for tumor regions in pathological images of clear cell renal cell carcinoma based on deep learning
CN113256641B (en) Skin lesion image segmentation method based on deep learning
Masoud Abdulhamid et al. New auxiliary function with properties in nonsmooth global optimization for melanoma skin cancer segmentation
CN113674253A (en) Rectal cancer CT image automatic segmentation method based on U-transducer
CN111179237A (en) Image segmentation method and device for liver and liver tumor
CN112215844A (en) MRI (magnetic resonance imaging) multi-mode image segmentation method and system based on ACU-Net
Li et al. PSENet: Psoriasis severity evaluation network
Ansari et al. Multiple sclerosis lesion segmentation in brain MRI using inception modules embedded in a convolutional neural network
Wang et al. SC-dynamic R-CNN: A self-calibrated dynamic R-CNN model for lung cancer lesion detection
Dipu et al. Brain tumor detection using various deep learning algorithms
CN115861181A (en) Tumor segmentation method and system for CT image
Lai et al. Toward accurate polyp segmentation with cascade boundary-guided attention
CN116739992B (en) Intelligent auxiliary interpretation method for thyroid capsule invasion
Khattar et al. Computer assisted diagnosis of skin cancer: a survey and future recommendations
CN116596890A (en) Dynamic image thyroid cancer risk layering prediction method based on graph convolution network
CN111754530A (en) Prostate ultrasonic image segmentation and classification method
CN114612482A (en) Method and system for positioning and classifying gastric cancer neuroinfiltration digital pathological section images
Wang Predicting Colorectal Cancer Using Residual Deep Learning with Nursing Care
Tyagi et al. An amalgamation of vision transformer with convolutional neural network for automatic lung tumor segmentation
CN114529554A (en) Intelligent auxiliary interpretation method for gastric cancer HER2 digital pathological section
Kalaivani et al. A Deep Ensemble Model for Automated Multiclass Classification Using Dermoscopy Images
Yang et al. Tracking cancer lesions on surgical samples of gastric cancer by artificial intelligent algorithms
Bhardwaj et al. Modeling of An CNN Architecture for Kidney Stone Detection Using Image Processing
Li et al. SG-MIAN: Self-guided multiple information aggregation network for image-level weakly supervised skin lesion segmentation
Jenisha et al. Automated Liver Tumor Segmentation Using Deep Transfer Learning and Attention Mechanisms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination