CN112967287A - Gastric cancer lesion identification method, apparatus, device and storage medium based on image processing


Info

Publication number
CN112967287A
Authority
CN
China
Prior art keywords
focus
gastric cancer
convolution
lesion
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110139586.XA
Other languages
Chinese (zh)
Inventor
王佳平
谢春梅
李风仪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN202110139586.XA
Publication of CN112967287A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30092 Stomach; Gastric
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion


Abstract

The invention relates to the technical field of image processing, and in particular to a gastric cancer lesion identification method, apparatus, device and storage medium based on image processing. On a digital image of a pathological section, the regions containing lesions are annotated with lesion grades. A pre-trained improved U-net semantic segmentation model produces a segmented lesion result, giving the identification of the lesions, and a segmented contour result, giving the identification of the glands and the extraction of lesion positions. Overlaying the two results yields gland instances kept separate by the contour result together with lesion grades preserved by the annotations in the lesion result. By then unifying the lesion grades within each connected domain of the contour, every gland outlined by a contour carries a single, determined lesion grade; the lesion regions are thus graded, different types of diseased glands are well distinguished, and a gastric cancer lesion identification result with separated glands and labelled lesion grades is obtained with high identification accuracy.

Description

Gastric cancer lesion identification method, apparatus, device and storage medium based on image processing
Technical Field
The invention relates to the technical field of image processing, and in particular to a gastric cancer lesion identification method, apparatus, device and storage medium based on image processing.
Background
In recent years, with the increasing digitalization of pathology and advances in deep learning and related technologies, computer-aided diagnosis and screening of pathological images have matured considerably. Some existing segmentation algorithms can identify cancerous regions, but because histopathological images are complex, the accuracy of these algorithms is difficult to guarantee, and they cannot further assess the degree of malignancy of a cancerous region.
In gastric cancer pathology, the morphological structure of the glands is crucial for evaluating the malignancy of a canceration. Because some glandular ducts are densely packed in the tissue, a general semantic segmentation algorithm tends to connect several independent glands together when segmenting the ducts, which greatly affects the quantitative analysis of the glands and thereby lowers the final accuracy of the algorithm.
Disclosure of Invention
Aiming at the problems in the prior art of low pathological image identification accuracy and inability to judge the degree of malignancy, the invention provides a gastric cancer lesion identification method, apparatus, device and storage medium based on image processing.
The invention is realized by the following technical solution:
the gastric cancer lesion identification method based on image processing comprises the following steps,
annotating, on a digital image of a pathological section to be identified, the regions containing lesions with lesion grades according to gastric cancer typing, to obtain an annotated image;
identifying the annotated image through a pre-trained improved U-net semantic segmentation model to obtain a segmented lesion result and a segmented contour result, respectively;
overlaying the contour result on the lesion result and unifying the lesion grades within each connected domain of the contour, to obtain a gastric cancer lesion identification result with separated glands and labelled lesion grades;
the improved U-net semantic segmentation model comprises a downsampling convolution receiving the input and two upsampling convolutions producing parallel outputs; the downsampling convolution and the first upsampling convolution extract features for segmenting glandular lesions, yielding the segmented lesion result; and the downsampling convolution and the second upsampling convolution extract features for segmenting gland contours, yielding the segmented contour result.
Preferably, the improved U-net semantic segmentation model further comprises residual network modules and attention modules; a residual network module and an attention module are arranged in sequence in each convolution layer of the downsampling convolution and the upsampling convolutions.
Further, when the downsampling convolution and the first upsampling convolution extract features for segmenting glandular lesions, the residual network modules in the downsampling convolution are connected by skip connections to the corresponding residual network modules in the first upsampling convolution;
when the downsampling convolution and the second upsampling convolution extract features for segmenting gland contours, the residual network modules in the downsampling convolution are connected by skip connections to the corresponding residual network modules in the second upsampling convolution;
the layers joined by corresponding skip connections process images of the same size.
Preferably, before the regions containing lesions are annotated with lesion grades according to gastric cancer typing on the digital image of the pathological section to be identified, the digital image of the pathological section to be identified is obtained by scanning; the digital image is in any one of the svs, kfb, ndpi and tif formats.
Preferably, the regions containing lesions are annotated with lesion grades according to gastric cancer typing, specifically according to the WHO classification of gastric cancer pathology.
Preferably, after the annotated image is obtained and before it is identified through the pre-trained improved U-net semantic segmentation model, the method further comprises organizing the data of the annotated image:
segmenting the annotated image and retaining only the images containing tissue regions, to obtain valid images;
masking the valid images and dividing them into a training set and a test set, completing the data organization of the annotated image; the training set and the test set are used for training and testing the improved U-net semantic segmentation model.
Preferably, pre-training the improved U-net semantic segmentation model comprises,
freezing each convolution layer in one of the two upsampling convolutions, and training feature extraction and segmentation on each convolution layer of the other upsampling convolution until the improved U-net semantic segmentation model converges;
then freezing the trained upsampling convolution, and training feature extraction and segmentation on each convolution layer of the untrained upsampling convolution until the improved U-net semantic segmentation model converges again, completing the pre-training.
Preferably, unifying the lesion grades within the same connected domain of the contour comprises,
counting the pixels occupied by each lesion grade within the same connected domain of the contour;
and setting all pixel values in the same connected domain to the most frequent class, so that the lesion grade corresponding to the largest share of pixels becomes the lesion grade of that connected domain of the contour.
Preferably, after the gastric cancer lesion identification result with separated glands and labelled lesion grades is obtained, the method further comprises,
identifying the remaining digital images of the pathological section to be identified by the same operations, to obtain the corresponding gastric cancer lesion identification results respectively;
and counting the number and area of the lesions of every grade in the digital images of the pathological section to be identified, to obtain the gastric cancer lesion type of the pathological section to be identified.
The gastric cancer lesion identification apparatus based on image processing comprises,
an annotation module, used for annotating, on the digital image of the pathological section to be identified, the regions containing lesions with lesion grades according to gastric cancer typing, to obtain an annotated image;
a segmentation module, used for identifying the annotated image through a pre-trained improved U-net semantic segmentation model, to obtain a segmented lesion result and a segmented contour result respectively;
an identification module, used for overlaying the contour result on the lesion result and unifying the lesion grades within each connected domain of the contour, to obtain a gastric cancer lesion identification result with separated glands and labelled lesion grades;
the improved U-net semantic segmentation model in the segmentation module is configured to comprise a downsampling convolution receiving the input and two upsampling convolutions producing parallel outputs; the downsampling convolution and the first upsampling convolution extract features for segmenting glandular lesions, yielding the segmented lesion result; and the downsampling convolution and the second upsampling convolution extract features for segmenting gland contours, yielding the segmented contour result.
A computer device, comprising:
a memory for storing a computer program;
a processor for implementing the gastric cancer lesion identification method based on image processing as described in any one of the above when executing the computer program.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the gastric cancer lesion identification method based on image processing as described in any one of the above.
Compared with the prior art, the invention has the following beneficial technical effects:
The invention provides a gastric cancer lesion identification method based on image processing. Working from a digital image of a pathological section, it first annotates the regions containing lesions with lesion grades, so that this grade information can be carried through the subsequent identification and result output. A pre-trained improved U-net semantic segmentation model produces a segmented lesion result, giving the identification of the lesions, and a segmented contour result, giving the identification of the glands and the extraction of lesion positions. Overlaying the two results yields gland instances kept separate by the contour result together with lesion grades preserved by the annotations in the lesion result. By then unifying the lesion grades within each connected domain of the contour, every gland outlined by a contour carries a single, determined lesion grade; the lesion regions are thus graded, different types of diseased glands are well distinguished, a gastric cancer lesion identification result with separated glands and labelled lesion grades is obtained, and the overall identification accuracy is improved. Meanwhile, because the method rests on an innovation in the structure of the improved U-net semantic segmentation model, essentially no extra operations are added, so the method runs efficiently and the amount of computation remains essentially unchanged.
Drawings
Fig. 1 is a flowchart of the gastric cancer lesion identification method according to an embodiment of the present invention.
Fig. 2a shows an annotated image according to an embodiment of the present invention.
Fig. 2b shows a segmented lesion result image according to an embodiment of the present invention.
Fig. 2c shows a segmented contour result image according to an embodiment of the present invention.
Fig. 2d shows a gastric cancer lesion identification result image according to an embodiment of the present invention.
Fig. 3 shows the structure of the improved U-net semantic segmentation model according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of the resnet module according to an embodiment of the present invention.
Fig. 5 is a schematic structural diagram of the se module according to an embodiment of the present invention.
Fig. 6 is a block diagram of the gastric cancer lesion identification apparatus according to an embodiment of the present invention.
Detailed Description
The present invention will now be described in further detail with reference to specific examples, which are intended to be illustrative, but not limiting, of the invention.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
As used in this disclosure, "module," "device," "system," and the like are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, or software in execution. In particular, for example, an element may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. Also, an application or script running on a server, or a server, may be an element. One or more elements may be in a process and/or thread of execution and an element may be localized on one computer and/or distributed between two or more computers and may be operated by various computer-readable media. The elements may also communicate by way of local and/or remote processes based on a signal having one or more data packets, e.g., from a data packet interacting with another element in a local system, distributed system, and/or across a network in the internet with other systems by way of the signal.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The gastric cancer lesion identification method provided by the invention can effectively identify the degree of malignancy of gastric cancer from a digital pathological image. It not only extracts the lesion positions but also grades the lesion regions, and it distinguishes different types of diseased glands well, so the overall accuracy of gastric cancer lesion identification can be greatly improved with essentially no increase in the amount of computation.
Specifically, as shown in fig. 1, the gastric cancer lesion identification method based on image processing of the present invention comprises,
step 101, annotating, on a digital image of a pathological section to be identified, the regions containing lesions with lesion grades according to gastric cancer typing, to obtain an annotated image, as shown in fig. 2a;
the digital image of the pathological section to be identified in fact contains both lesion regions and normal, lesion-free tissue regions, and the two are clearly distinguishable.
Step 102, identifying the annotated image through a pre-trained improved U-net semantic segmentation model, to obtain a segmented lesion result and a segmented contour result respectively;
the improved U-net semantic segmentation model comprises a downsampling convolution receiving the input and two upsampling convolutions producing parallel outputs; the downsampling convolution and the first upsampling convolution extract features for segmenting glandular lesions, giving the segmented lesion result shown in fig. 2b; the downsampling convolution and the second upsampling convolution extract features for segmenting gland contours, giving the segmented contour result shown in fig. 2c.
The existing U-net semantic segmentation model is a high-precision medical-image semantic segmentation model for small samples. As an image semantic segmentation network, it is mainly used to process and analyse medical images and can deliver good results even when few samples are available. The overall U-Net pipeline is U-shaped, comprising a downsampling convolution process and an upsampling convolution process, with the output aimed at the identification of cells. In practice, however, because some glandular ducts are densely packed in the tissue, a U-Net semantic segmentation of the glands connects several independent glands together in its single output segmentation result, which greatly affects the quantitative analysis of the glands.
In the invention, two upsampling convolution processes are used: one path identifies and outputs the lesions without distinguishing contours, and the other identifies and outputs the gland contours without distinguishing lesions. The features output along each path can therefore be processed efficiently and in a focused way throughout the identification pipeline, yielding accurate identification results.
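For illustration only (this sketch is not part of the patent disclosure), the shared-encoder, dual-decoder layout described above might look as follows in PyTorch. The class name, channel widths, depth of four levels and plain double-conv blocks are all assumptions; the patented model additionally places resnet and se modules in each convolution layer, which are omitted here for brevity.

```python
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    # two 3x3 convolutions with ReLU, the basic U-Net building block
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class DualBranchUNet(nn.Module):
    """One downsampling (encoder) path shared by two upsampling (decoder) paths."""
    def __init__(self, in_ch=3, lesion_classes=4, contour_classes=2):
        super().__init__()
        chs = [64, 128, 256, 512]                       # illustrative widths
        self.encs = nn.ModuleList()
        c = in_ch
        for ch in chs:
            self.encs.append(double_conv(c, ch))
            c = ch
        self.pool = nn.MaxPool2d(2)
        self.dec_lesion = self._make_decoder(chs)       # branch 1: lesions
        self.dec_contour = self._make_decoder(chs)      # branch 2: gland contours
        self.head_lesion = nn.Conv2d(chs[0], lesion_classes, 1)
        self.head_contour = nn.Conv2d(chs[0], contour_classes, 1)

    def _make_decoder(self, chs):
        ups, convs = nn.ModuleList(), nn.ModuleList()
        rev = chs[::-1]
        for hi, lo in zip(rev[:-1], rev[1:]):
            ups.append(nn.ConvTranspose2d(hi, lo, 2, stride=2))
            convs.append(double_conv(2 * lo, lo))       # after concat with the skip
        return nn.ModuleDict({"ups": ups, "convs": convs})

    def _decode(self, dec, feats):
        x = feats[-1]
        for up, conv, skip in zip(dec["ups"], dec["convs"], feats[-2::-1]):
            x = conv(torch.cat([up(x), skip], dim=1))   # encoder-to-decoder skip
        return x

    def forward(self, x):
        feats = []
        for i, enc in enumerate(self.encs):
            x = enc(x if i == 0 else self.pool(x))
            feats.append(x)
        # the same encoder features feed both parallel outputs
        lesion = self.head_lesion(self._decode(self.dec_lesion, feats))
        contour = self.head_contour(self._decode(self.dec_contour, feats))
        return lesion, contour
```

Both outputs keep the spatial size of the input, so the lesion map and the contour map can later be combined pixel by pixel.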
Step 103, overlaying the contour result on the lesion result and unifying the lesion grades within each connected domain of the contour, to obtain a gastric cancer lesion identification result with separated glands and labelled lesion grades, as shown in fig. 2d.
The glands identified by the contour result are framed onto the lesion result to obtain the lesion regions inside each gland. Within one framed connected region, i.e. one gland, the several lesion grades are unified into the dominant grade, which becomes the lesion grade of that gland; the glands are thus identified and their lesion grades distinguished. When the contour result is overlaid on the lesion result, the two are aligned by image coordinates and the pixels at the same coordinates are multiplied, producing an overlaid image in which the glands are separated and the lesion grades labelled. Because a branch predicting the glandular-duct contours is added, the contours of the glandular ducts in gastric cancer pathology can be predicted and the connected glands in the original gland segmentation result separated, which improves the gland segmentation effect and the segmentation accuracy.
In an optional example of the present invention, the improved U-net semantic segmentation model is further improved on the basis of the above steps: a residual network module and an attention module are arranged in sequence in each convolution layer of the downsampling convolution and the upsampling convolutions. The resulting optimized model is shown in fig. 3. The annotated image is the input of the improved U-net semantic segmentation model; features are localised by convolution through four convolution layers in the downsampling convolution, classes are restored through four convolution layers in each of the two upsampling convolutions, and finally the lesion result and the contour result are output.
That is, the convolution operation of each layer is improved through a residual network module (resnet module) so as to fit the classification function better, obtaining higher classification accuracy and improving the feature extraction capability. The attention module (se module) improves the relationships between feature channels, raising the accuracy of the whole model and the segmentation precision. An se module is added after each resnet module; the se module learns the importance of each feature channel autonomously, then enhances the useful features and suppresses the less useful ones according to that importance, improving the accuracy of the whole model. In addition, during the downsampling and upsampling convolution processes, skip connections are applied to the feature maps, fusing shallow localisation information with high-level pixel-classification information to obtain better results.
Specifically, as shown in fig. 4, the resnet module is a conventional residual block (plain residual block); a network formed by stacking residual blocks of two 3×3 convolution layers is called a residual network. The shortcut connection in a residual network is completed directly through a simple identity mapping, without the complex transform and carry gates, so no extra parameters need to be introduced and the computational burden is reduced. The expression is as follows,
y = F(x) + x;
where y is the output, x is the input, and F(x) is the residual function; the final output is the sum of x, carried in by the shortcut connection, and F(x) obtained from the residual operation. An activation function layer (ReLU layer) is placed after the output and between the two 3×3 convolution layers.
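As a hedged illustration of this plain residual block of fig. 4 (the BatchNorm placement is a common convention assumed here, not stated in the text), a PyTorch sketch:

```python
import torch.nn as nn

class PlainResidualBlock(nn.Module):
    """Two 3x3 convolutions with a ReLU between them and an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        f = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(f + x)   # y = F(x) + x: identity shortcut, no extra parameters
```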
The se module is shown in fig. 5. Each convolution operation is in effect a multiply-add over the spatial dimensions (width and height, H×W) and the channel dimension. The se module operates on the channel dimension of the features, forming an attention mechanism: by learning the importance of each channel, it raises the weight of the channels that improve the result, which aids the computation and brings the output closer to the label. The feature maps correspond to the channels, so raising the weight of the useful feature maps lets these weights influence the result, and hence the overall accuracy of the model, through the subsequent convolutions and global sampling.
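A minimal sketch of such an se (squeeze-and-excitation) module, assuming the usual reduction ratio of 16, which the text does not specify:

```python
import torch.nn as nn

class SEModule(nn.Module):
    """Learn per-channel importance weights and rescale the feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: HxW -> 1x1 per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                     # excitation: reweight channels
```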
As to the specific skip connections: when the downsampling convolution and the first upsampling convolution extract features for segmenting glandular lesions, the residual network modules in the downsampling convolution are connected by skip connections to the corresponding residual network modules in the first upsampling convolution; when the downsampling convolution and the second upsampling convolution extract features for segmenting gland contours, the residual network modules in the downsampling convolution are connected by skip connections to the corresponding residual network modules in the second upsampling convolution; the layers joined by corresponding skip connections process images of the same size. Through the skip connections around the residual network modules, information lost after several convolution layers can be supplemented on the one hand, and on the other hand the vanishing-gradient problem is reduced while the network is deepened, improving the feature extraction capability.
In an optional embodiment of the invention, before the regions containing lesions are annotated with lesion grades according to gastric cancer typing on the digital image of the pathological section to be identified, the digital image of the pathological section to be identified is obtained by scanning; the digital image is in any one of the svs, kfb, ndpi and tif formats. This preferred embodiment adds a data acquisition step: a scanner or other scanning device scans the pathological section to obtain digital images of the whole slide, and these digital images may be in different scanning formats or the same one, for example any of the four formats above.
In an optional embodiment of the present invention, the regions containing lesions are annotated with lesion grades according to gastric cancer typing, specifically according to the WHO classification of gastric cancer pathology. The data annotation step finely annotates the regions containing lesions on the whole digital image of the pathological section to be identified according to three classes: low grade, high grade and cancer. During annotation, the image region corresponding to each pixel is first delineated by a pixel-level segmentation algorithm, and adjacent regions of the same grade are annotated together, producing low-grade and high-grade region annotations for the glandular ducts and region annotations for cancer.
In an optional example of the present invention, after the annotated image is obtained and before it is identified by the pre-trained improved U-net semantic segmentation model, the method further comprises organizing the data of the annotated image: segmenting the annotated image and keeping only the images containing tissue regions, to obtain valid images; masking the valid images and dividing them into a training set and a test set, which completes the data organization; the training set and the test set are used for training and testing the improved U-net semantic segmentation model. Because the digital image of a scanned pathological section is very large, with more than one billion pixels, it cannot be fed directly into the training of the model; the original digital image of the pathological section must therefore be segmented, only the patches containing tissue regions are retained, corresponding masks are generated, and all the data are divided into a training set and a test set for subsequent model training and testing.
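A hypothetical sketch of this tiling step, assuming OpenSlide can read the slide and using an illustrative 512-pixel patch size, brightness-based tissue filter and file name (none of these are given in the text):

```python
import random
import numpy as np
import openslide

def tile_slide(path, patch=512, tissue_frac=0.2):
    """Cut a whole-slide image into patches and keep those containing tissue."""
    slide = openslide.OpenSlide(path)
    w, h = slide.dimensions
    kept = []
    for y in range(0, h - patch, patch):
        for x in range(0, w - patch, patch):
            img = np.array(slide.read_region((x, y), 0, (patch, patch)).convert("RGB"))
            # crude tissue filter: stained tissue is darker than the near-white background
            if (img.mean(axis=2) < 220).mean() > tissue_frac:
                kept.append(((x, y), img))
    return kept

patches = tile_slide("example_slide.svs")   # hypothetical file name
random.shuffle(patches)
split = int(0.8 * len(patches))
train_set, test_set = patches[:split], patches[split:]
```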
In an optional embodiment of the invention, pre-training the improved U-net semantic segmentation model comprises freezing each convolution layer in one of the two upsampling convolutions, training feature extraction and segmentation on each convolution layer of the other upsampling convolution, and waiting for the improved U-net semantic segmentation model to converge; then freezing the trained upsampling convolution and training feature extraction and segmentation on each convolution layer of the untrained upsampling convolution until the improved U-net semantic segmentation model converges again, completing the pre-training.
The pre-training in this preferred embodiment is the model training: the images of the training set are input into the model; during training, each layer of the contour-segmentation branch is first frozen and only the feature extraction and lesion-segmentation branch, i.e. the upper half of fig. 3, is trained until the model converges; then the trained feature extraction and lesion-segmentation branch is frozen, and only the contour-segmentation branch, i.e. the lower half of fig. 3, is trained until the final model converges. Two outputs are thus obtained, one segmenting the lesions and the other segmenting the contours; once the tests on the test set meet the requirements, the pre-training of the model is complete and formal gastric cancer lesion identification can proceed.
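A sketch of this two-stage schedule, reusing the DualBranchUNet sketched earlier and assuming a train_loader that yields (image, lesion mask, contour mask) batches; the loss functions and the convergence test are simplified assumptions (a fixed epoch count stands in for convergence):

```python
import torch

def set_trainable(modules, flag):
    for m in modules:
        for p in m.parameters():
            p.requires_grad = flag

def train_stage(model, loader, branch, freeze_encoder=False, epochs=10):
    # freeze one decoder branch, train the other (and optionally the encoder)
    set_trainable([model.dec_lesion, model.head_lesion], branch == "lesion")
    set_trainable([model.dec_contour, model.head_contour], branch == "contour")
    set_trainable([model.encs], not freeze_encoder)
    opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
    ce = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for img, lesion_gt, contour_gt in loader:
            lesion_out, contour_out = model(img)
            loss = ce(lesion_out, lesion_gt) if branch == "lesion" else ce(contour_out, contour_gt)
            opt.zero_grad(); loss.backward(); opt.step()

model = DualBranchUNet()
train_stage(model, train_loader, "lesion")                        # stage 1: upper half of fig. 3
train_stage(model, train_loader, "contour", freeze_encoder=True)  # stage 2: lower half, encoder frozen
```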
In an optional embodiment of the present invention, unifying the lesion grades within the same connected domain of the contour comprises counting the pixels occupied by each lesion grade within that connected domain, then setting all pixel values in the connected domain to the most frequent class, so that the lesion grade corresponding to the largest share of pixels becomes the lesion grade of that connected domain.
Specifically, the two outputs, the segmented lesion result and the segmented contour result, are combined: the contour result is overlaid on the lesion result and the pixel values within each connected domain are changed to the most frequent one, so that the several labels, i.e. classes, present within one contour are unified into a single class. The connected glands in the original lesion result can thus be separated, each gland corresponding to one class: low grade, high grade or cancer. When the two results are overlaid, they are superimposed by image coordinates and combined by pixel-wise multiplication.
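A sketch of this fusion step under one assumption the text does not fix, namely a label encoding of 0 = background and 1/2/3 = low grade/high grade/cancer:

```python
import cv2
import numpy as np

def fuse(lesion_map, contour_mask):
    """Multiply the contour mask into the lesion map, then majority-vote each gland."""
    # contour_mask is 1 inside a gland and 0 on the separating outline,
    # so the multiplication cuts touching glands apart along the contours
    fused = (lesion_map * contour_mask).astype(np.uint8)
    n, comp = cv2.connectedComponents(contour_mask.astype(np.uint8))
    for i in range(1, n):                 # component 0 is the background
        region = comp == i
        grades = fused[region]
        grades = grades[grades > 0]
        if grades.size:                   # the most frequent grade wins
            fused[region] = np.bincount(grades).argmax()
    return fused
```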
In an optional embodiment of the invention, after the gastric cancer lesion identification result with separated glands and labelled lesion grades is obtained, the method further comprises identifying the remaining digital images of the pathological section to be identified by the same operations, obtaining the corresponding gastric cancer lesion identification results respectively; and counting the number and area of the lesions of every grade in the digital images of the pathological section to be identified, to obtain the gastric cancer lesion type of the pathological section.
Through this statistical analysis step, the number, area and other information of all lesions identified by the model across all digital images of one pathological section are counted, giving the final type of the pathological section, whether gastric cancer lesions are present, and the corresponding severity grade, providing a reference for physicians.
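For illustration, a slide-level tally over the fused patch results, using the same assumed 1/2/3 encoding of low grade/high grade/cancer as above:

```python
import cv2
import numpy as np

GRADES = {1: "low grade", 2: "high grade", 3: "cancer"}

def summarize(fused_patches):
    """Count the lesions and their total pixel area per grade across a slide."""
    stats = {name: {"count": 0, "area_px": 0} for name in GRADES.values()}
    for fused in fused_patches:
        for value, name in GRADES.items():
            mask = (fused == value).astype(np.uint8)
            n, _, comp_stats, _ = cv2.connectedComponentsWithStats(mask)
            stats[name]["count"] += n - 1     # component 0 is the background
            stats[name]["area_px"] += int(comp_stats[1:, cv2.CC_STAT_AREA].sum())
    return stats
```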
The method of the invention not only automatically locates lesion positions in gastric cancer tissue pathology images and accurately identifies and counts the lesion regions, but also judges the degree of malignancy of the lesions, assisting physicians in routine slide reading. Adding the resnet modules, the se modules and the contour-segmentation branch effectively improves the feature extraction capability of the model, separates adjacent glands, improves the identification result and raises the overall accuracy of the algorithm. Since all the changes are changes to the model structure, essentially no extra operations are added, so the algorithm still runs efficiently and the amount of computation remains essentially unchanged: accuracy and identification quality improve while the identification speed is preserved.
The present invention also provides an apparatus for gastric cancer lesion identification based on image processing, which performs the identification in accordance with the above method; as shown in fig. 6, it comprises,
an annotation module 601, used for annotating, on the digital image of the pathological section to be identified, the regions containing lesions with lesion grades according to gastric cancer typing, to obtain an annotated image;
a segmentation module 602, used for identifying the annotated image through a pre-trained improved U-net semantic segmentation model, to obtain a segmented lesion result and a segmented contour result respectively;
an identification module 603, used for overlaying the contour result on the lesion result and unifying the lesion grades within each connected domain of the contour, to obtain a gastric cancer lesion identification result with separated glands and labelled lesion grades;
the improved U-net semantic segmentation model in the segmentation module 602 is configured to comprise a downsampling convolution receiving the input and two upsampling convolutions producing parallel outputs; the downsampling convolution and the first upsampling convolution extract features for segmenting glandular lesions, yielding the segmented lesion result; and the downsampling convolution and the second upsampling convolution extract features for segmenting gland contours, yielding the segmented contour result.
The present invention also provides a computer device comprising a memory for storing a computer program and a processor for implementing the steps of the gastric cancer lesion identification method based on image processing as described above when executing the computer program.
The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the gastric cancer lesion identification method based on image processing as described above.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: although the present invention has been described in detail with reference to the above embodiments, those skilled in the art can make modifications and equivalents to the embodiments of the present invention without departing from the spirit and scope of the present invention, which is set forth in the claims of the present application.
It will be appreciated by those skilled in the art that the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The embodiments disclosed above are therefore to be considered in all respects as illustrative and not restrictive. All changes which come within the scope of or equivalence to the invention are intended to be embraced therein.

Claims (12)

1. A gastric cancer lesion identification method based on image processing, characterized by comprising the following steps,
annotating, on a digital image of a pathological section to be identified, the regions containing lesions with lesion grades according to gastric cancer typing, to obtain an annotated image;
identifying the annotated image through a pre-trained improved U-net semantic segmentation model to obtain a segmented lesion result and a segmented contour result, respectively;
overlaying the contour result on the lesion result and unifying the lesion grades within each connected domain of the contour, to obtain a gastric cancer lesion identification result with separated glands and labelled lesion grades;
the improved U-net semantic segmentation model comprises a downsampling convolution receiving the input and two upsampling convolutions producing parallel outputs; the downsampling convolution and the first upsampling convolution extract features for segmenting glandular lesions, yielding the segmented lesion result; and the downsampling convolution and the second upsampling convolution extract features for segmenting gland contours, yielding the segmented contour result.
2. The gastric cancer lesion identification method based on image processing according to claim 1, wherein the improved U-net semantic segmentation model further comprises residual network modules and attention modules; a residual network module and an attention module are arranged in sequence in each convolution layer of the downsampling convolution and the upsampling convolutions.
3. The gastric cancer lesion identification method based on image processing according to claim 2, wherein,
when the downsampling convolution and the first upsampling convolution extract features for segmenting glandular lesions, the residual network modules in the downsampling convolution are connected by skip connections to the corresponding residual network modules in the first upsampling convolution;
when the downsampling convolution and the second upsampling convolution extract features for segmenting gland contours, the residual network modules in the downsampling convolution are connected by skip connections to the corresponding residual network modules in the second upsampling convolution;
the layers joined by corresponding skip connections process images of the same size.
4. The gastric cancer lesion identification method based on image processing according to claim 1, wherein before the regions containing lesions are annotated with lesion grades according to gastric cancer typing on the digital image of the pathological section to be identified, the method further comprises obtaining the digital image of the pathological section to be identified by scanning; the digital image is in any one of the svs, kfb, ndpi and tif formats.
5. The gastric cancer lesion identification method based on image processing according to claim 1, wherein the regions containing lesions are annotated with lesion grades according to gastric cancer typing, specifically according to the WHO classification of gastric cancer pathology.
6. The gastric cancer lesion identification method based on image processing according to claim 1, wherein after the annotated image is obtained and before it is identified through the pre-trained improved U-net semantic segmentation model, the method further comprises organizing the data of the annotated image:
segmenting the annotated image and retaining only the images containing tissue regions, to obtain valid images;
masking the valid images and dividing them into a training set and a test set, completing the data organization of the annotated image; the training set and the test set are used for training and testing the improved U-net semantic segmentation model.
7. The gastric cancer lesion identification method based on image processing according to claim 1, wherein pre-training the improved U-net semantic segmentation model comprises,
freezing each convolution layer in one of the two upsampling convolutions, and training feature extraction and segmentation on each convolution layer of the other upsampling convolution until the improved U-net semantic segmentation model converges;
then freezing the trained upsampling convolution, and training feature extraction and segmentation on each convolution layer of the untrained upsampling convolution until the improved U-net semantic segmentation model converges again, completing the pre-training.
8. The gastric cancer lesion identification method based on image processing according to claim 1, wherein unifying the lesion grades within the same connected domain of the contour comprises,
counting the pixels occupied by each lesion grade within the same connected domain of the contour;
and setting all pixel values in the same connected domain to the most frequent class, so that the lesion grade corresponding to the largest share of pixels becomes the lesion grade of that connected domain of the contour.
9. The gastric cancer lesion identification method based on image processing according to claim 1, wherein after the gastric cancer lesion identification result with separated glands and labelled lesion grades is obtained, the method further comprises,
identifying the remaining digital images of the pathological section to be identified by the same operations, to obtain the corresponding gastric cancer lesion identification results respectively;
and counting the number and area of the lesions of every grade in the digital images of the pathological section to be identified, to obtain the gastric cancer lesion type of the pathological section to be identified.
10. A gastric cancer lesion identification apparatus based on image processing, characterized by comprising,
an annotation module, used for annotating, on the digital image of the pathological section to be identified, the regions containing lesions with lesion grades according to gastric cancer typing, to obtain an annotated image;
a segmentation module, used for identifying the annotated image through a pre-trained improved U-net semantic segmentation model, to obtain a segmented lesion result and a segmented contour result respectively;
an identification module, used for overlaying the contour result on the lesion result and unifying the lesion grades within each connected domain of the contour, to obtain a gastric cancer lesion identification result with separated glands and labelled lesion grades;
the improved U-net semantic segmentation model in the segmentation module is configured to comprise a downsampling convolution receiving the input and two upsampling convolutions producing parallel outputs; the downsampling convolution and the first upsampling convolution extract features for segmenting glandular lesions, yielding the segmented lesion result; and the downsampling convolution and the second upsampling convolution extract features for segmenting gland contours, yielding the segmented contour result.
11. A computer device, comprising:
a memory for storing a computer program;
a processor for implementing the gastric cancer lesion identification method based on image processing according to any one of claims 1 to 9 when executing the computer program.
12. A computer-readable storage medium having a computer program stored thereon which, when executed by a processor, implements the gastric cancer lesion identification method based on image processing according to any one of claims 1 to 9.
CN202110139586.XA 2021-01-29 2021-01-29 Gastric cancer lesion identification method, apparatus, device and storage medium based on image processing Pending CN112967287A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110139586.XA CN112967287A (en) 2021-01-29 2021-01-29 Gastric cancer lesion identification method, apparatus, device and storage medium based on image processing

Publications (1)

Publication Number Publication Date
CN112967287A 2021-06-15

Family

ID=76273065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110139586.XA Pending CN112967287A (en) Gastric cancer lesion identification method, apparatus, device and storage medium based on image processing

Country Status (1)

Country Link
CN (1) CN112967287A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111091527A (en) * 2018-10-24 2020-05-01 华中科技大学 Method and system for automatically detecting pathological change area in pathological tissue section image
CN110599476A (en) * 2019-09-12 2019-12-20 腾讯科技(深圳)有限公司 Disease grading method, device, equipment and medium based on machine learning
CN111145206A (en) * 2019-12-27 2020-05-12 联想(北京)有限公司 Liver image segmentation quality evaluation method and device and computer equipment
CN111968127A (en) * 2020-07-06 2020-11-20 中国科学院计算技术研究所 Cancer focus area identification method and system based on full-section pathological image
CN112084930A (en) * 2020-09-04 2020-12-15 厦门大学 Focus region classification method and system for full-view digital pathological section

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113706514B (en) * 2021-08-31 2023-08-11 平安科技(深圳)有限公司 Focus positioning method, device, equipment and storage medium based on template image
CN113706514A (en) * 2021-08-31 2021-11-26 平安科技(深圳)有限公司 Focus positioning method, device and equipment based on template image and storage medium
CN113888567A (en) * 2021-10-21 2022-01-04 中国科学院上海微系统与信息技术研究所 Training method of image segmentation model, image segmentation method and device
CN113888567B (en) * 2021-10-21 2024-05-14 中国科学院上海微系统与信息技术研究所 Training method of image segmentation model, image segmentation method and device
CN114078234A (en) * 2022-01-07 2022-02-22 泰豪软件股份有限公司 Detection method, system, storage medium and equipment for power supply area construction process
CN114078234B (en) * 2022-01-07 2022-05-31 泰豪软件股份有限公司 Detection method, system, storage medium and equipment for power supply area construction process
CN114693692A (en) * 2022-03-30 2022-07-01 上海交通大学医学院附属第九人民医院 Pathological image segmentation and classification method, device, equipment and medium
CN114862763A (en) * 2022-04-13 2022-08-05 华南理工大学 Gastric cancer pathological section image segmentation prediction method based on EfficientNet
CN114511581B (en) * 2022-04-20 2022-07-08 四川大学华西医院 Multi-task multi-resolution collaborative esophageal cancer lesion segmentation method and device
CN114511581A (en) * 2022-04-20 2022-05-17 四川大学华西医院 Multi-task multi-resolution collaborative esophageal cancer lesion segmentation method and device
CN115829980A (en) * 2022-12-13 2023-03-21 深圳核韬科技有限公司 Image recognition method, device, equipment and storage medium for fundus picture
CN115619810A (en) * 2022-12-19 2023-01-17 中国医学科学院北京协和医院 Prostate partition method, system and equipment
CN115619810B (en) * 2022-12-19 2023-10-03 中国医学科学院北京协和医院 Prostate partition segmentation method, system and equipment
CN116071622A (en) * 2023-04-06 2023-05-05 广州思德医疗科技有限公司 Stomach image recognition model construction method and system based on deep learning
CN116071622B (en) * 2023-04-06 2024-01-12 广州思德医疗科技有限公司 Stomach image recognition model construction method and system based on deep learning


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination