CN116596869A - Method, device and storage medium for detecting infiltration depth of stomach marker - Google Patents

Method, device and storage medium for detecting infiltration depth of stomach marker

Info

Publication number
CN116596869A
CN116596869A (application CN202310503578.8A)
Authority
CN
China
Prior art keywords
stomach
infiltration depth
marker
detection state
endoscope image
Prior art date
Legal status
Granted
Application number
CN202310503578.8A
Other languages
Chinese (zh)
Other versions
CN116596869B (en)
Inventor
郑碧清
胡珊
Current Assignee
Wuhan Endoangel Medical Technology Co Ltd
Original Assignee
Wuhan Endoangel Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Endoangel Medical Technology Co Ltd
Publication of CN116596869A
Application granted
Publication of CN116596869B
Legal status: Active

Classifications

    • G06T 7/0012: Biomedical image inspection (G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • A61B 1/00009: Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B 1/2736: Gastroscopes (A61B 1/273 Instruments for the upper alimentary canal, e.g. oesophagoscopes, gastroscopes)
    • G06T 7/11: Region-based segmentation
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/10068: Endoscopic image (image acquisition modality)
    • G06T 2207/20081: Training; Learning
    • G06T 2207/30092: Stomach; Gastric
    • G06T 2207/30096: Tumor; Lesion
    • G06T 2207/30204: Marker
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention discloses a method, a device and a storage medium for detecting the infiltration depth of a stomach marker. The method comprises: acquiring stomach endoscope images of a target user bearing a stomach marker in a plurality of different detection states, wherein the stomach endoscope image in each detection state corresponds to a different stomach feature; identifying the stomach endoscope image in each detection state separately to determine the infiltration depth information of the target user's stomach marker in each detection state, thereby obtaining a plurality of pieces of infiltration depth information; acquiring the total area of the stomach endoscope image and the area of the lesion region in a preset detection state; and determining the infiltration depth of the target user's stomach marker according to the total area, the lesion area and the plurality of pieces of infiltration depth information. With the embodiments of the invention, the infiltration depth of the target user's stomach marker can be determined comprehensively and accurately, solving the technical problem that the infiltration depth of gastric cancer is difficult to detect accurately.

Description

Method, device and storage medium for detecting infiltration depth of stomach marker
Technical Field
The invention relates to the technical field of medical assistance, in particular to a method, a device and a storage medium for detecting the infiltration depth of a stomach marker.
Background
Gastric cancer is the third most lethal malignancy worldwide, with more than 1 million new cases and more than 780,000 deaths in 2018, accounting for 8.2% of cancer-related deaths globally. Delayed diagnosis and treatment are important causes of the high mortality of gastric cancer, and early detection and timely treatment are critical to reducing mortality and improving patient outcomes.
The choice of treatment for gastric cancer depends on judging the severity of the gastric cancer lesion. Because canceration arises at or below the mucosal layer, judging the infiltration depth of gastric cancer is an important basis for deciding between surgical treatment and endoscopic treatment, and the prognosis, economic cost and the like associated with the different treatments differ greatly. Accurately judging the infiltration depth of a patient's gastric cancer is therefore a difficult problem facing clinical digestive endoscopists.
Disclosure of Invention
The embodiments of the invention aim to provide a method, a device and a storage medium for detecting the infiltration depth of a stomach marker, so as to solve the technical problem that the infiltration depth of gastric cancer is difficult to detect accurately.
In a first aspect, to achieve the above object, an embodiment of the present invention provides a method for detecting the infiltration depth of a stomach marker, comprising:
acquiring stomach endoscope images of a target user bearing a stomach marker in a plurality of different detection states, wherein the stomach endoscope image in each detection state corresponds to a different stomach feature;
identifying the stomach endoscope image in each detection state separately to determine the infiltration depth information of the target user's stomach marker in each detection state, thereby obtaining a plurality of pieces of infiltration depth information;
acquiring the total area of the stomach endoscope image and the area of the lesion region in a preset detection state;
and determining the infiltration depth of the target user's stomach marker according to the total area, the lesion area and the plurality of pieces of infiltration depth information.
Further, the stomach endoscope images in the plurality of different detection states comprise a first stomach endoscope image in a first detection state, a second stomach endoscope image in a second detection state and a third stomach endoscope image in a third detection state; the stomach features comprise morphological features, color features and structural features; and the infiltration depth information comprises confidence information that the infiltration depth of the stomach marker corresponding to the stomach endoscope image is the first infiltration depth and confidence information that it is the second infiltration depth;
The identifying of the stomach endoscope image in each detection state separately to determine the infiltration depth information of the target user's stomach marker in each detection state and obtain a plurality of pieces of infiltration depth information comprises the following steps:
identifying the first stomach endoscope image according to its morphological features to obtain first confidence information that the infiltration depth of the stomach marker corresponding to the first stomach endoscope image is the first infiltration depth and second confidence information that it is the second infiltration depth;
identifying the second stomach endoscope image according to its color features to obtain third confidence information that the infiltration depth of the stomach marker corresponding to the second stomach endoscope image is the first infiltration depth and fourth confidence information that it is the second infiltration depth;
and identifying the third stomach endoscope image according to its structural features to obtain fifth confidence information that the infiltration depth of the stomach marker corresponding to the third stomach endoscope image is the first infiltration depth and sixth confidence information that it is the second infiltration depth.
Further, the identifying of the third stomach endoscope image according to its structural features to obtain the fifth confidence information that the infiltration depth of the stomach marker corresponding to the third stomach endoscope image is the first infiltration depth and the sixth confidence information that it is the second infiltration depth comprises:
detecting the third stomach endoscope image and determining the lesion region in the third stomach endoscope image;
layering the stomach structure in the third stomach endoscope image to obtain the multiple membrane layers forming the stomach structure;
and identifying the third stomach endoscope image according to the positional relationship between the lesion region and the membrane layers to obtain the fifth confidence information and the sixth confidence information.
Further, the first infiltration depth comprises the infiltration depth of intramucosal cancer, the second infiltration depth comprises the infiltration depth of submucosal cancer, and the membrane layers comprise a mucosal layer, a muscularis mucosae, a submucosa, a muscularis propria and a serosa;
The detecting of the third stomach endoscope image to determine the lesion region in the third stomach endoscope image comprises:
inputting the third stomach endoscope image into a trained target detection model, so that the trained target detection model detects and determines the lesion region in the third stomach endoscope image;
the trained target detection model is obtained by training on a training set formed by stomach endoscope images and corresponding annotation images in which the lesion regions are labeled;
the layering of the stomach structure in the third stomach endoscope image to obtain the membrane layers forming the stomach structure comprises:
inputting the third stomach endoscope image into a trained image segmentation model, so that the trained image segmentation model segments the third stomach endoscope image according to its structural features to obtain the mucosal layer, muscularis mucosae, submucosa, muscularis propria and serosa forming the stomach structure in the third stomach endoscope image;
the trained image segmentation model is obtained by training on a training set formed by stomach endoscope images in the third detection state and corresponding annotation images in which the stomach membrane layers are labeled;
the identifying of the third stomach endoscope image according to the positional relationship between the lesion region and the membrane layers to obtain the fifth confidence information and the sixth confidence information comprises:
inputting the third stomach endoscope image into a trained third stomach marker infiltration depth recognition model, so that the model identifies the third stomach endoscope image according to the positional relationship to obtain fifth confidence information that the infiltration depth of the stomach marker corresponding to the third stomach endoscope image is intramucosal cancer and sixth confidence information that it is submucosal cancer;
the trained third stomach marker infiltration depth recognition model is obtained by training on a training set formed by stomach endoscope images in the third detection state and the corresponding labeled infiltration depths.
Further, the identifying of the first stomach endoscope image according to its morphological features to obtain the first confidence information that the infiltration depth of the stomach marker corresponding to the first stomach endoscope image is the first infiltration depth and the second confidence information that it is the second infiltration depth comprises:
inputting the first stomach endoscope image into a trained first stomach marker infiltration depth recognition model, so that the model identifies the first stomach endoscope image according to its morphological features to obtain first confidence information that the infiltration depth of the corresponding stomach marker is intramucosal cancer and second confidence information that it is submucosal cancer;
the trained first stomach marker infiltration depth recognition model is obtained by training on a training set formed by stomach endoscope images in the first detection state and the corresponding labeled infiltration depths;
the identifying of the second stomach endoscope image according to its color features to obtain the third confidence information that the infiltration depth of the stomach marker corresponding to the second stomach endoscope image is the first infiltration depth and the fourth confidence information that it is the second infiltration depth comprises:
inputting the second stomach endoscope image into a trained second stomach marker infiltration depth recognition model, so that the model identifies the second stomach endoscope image according to its color features to obtain third confidence information that the infiltration depth of the corresponding stomach marker is intramucosal cancer and fourth confidence information that it is submucosal cancer;
the trained second stomach marker infiltration depth recognition model is obtained by training on a training set formed by stomach endoscope images in the second detection state and the corresponding labeled infiltration depths.
Further, the first detection state is a white-light detection state, the second detection state is a light-change detection state, the third detection state is an ultrasonic detection state, and the preset detection state is any one of the white-light detection state, the light-change detection state and the ultrasonic detection state;
the acquiring of the total area of the stomach endoscope image and the area of the lesion region in the preset detection state comprises:
determining the total area of the stomach endoscope image in the preset detection state according to the length and width of the stomach endoscope image in the preset detection state;
invoking the trained target detection model to detect the stomach endoscope image in the preset detection state and determine the lesion region in it;
and calculating the area of the lesion region from the lesion region.
Further, the determining of the infiltration depth of the target user's stomach marker according to the total area, the lesion area and the plurality of pieces of infiltration depth information comprises:
determining the ratio of the total area to the lesion area;
inputting the total area, the ratio, the first, third and fifth confidence information for intramucosal cancer, and the second, fourth and sixth confidence information for submucosal cancer into a trained stomach marker infiltration depth fitting model, so that the trained stomach marker infiltration depth fitting model fits these inputs to obtain the infiltration depth of the target user's stomach marker;
the trained stomach marker infiltration depth fitting model comprises a random forest and a decision tree.
In a second aspect, to solve the same technical problem, an embodiment of the present invention provides a device for detecting the infiltration depth of a stomach marker, comprising:
a first acquisition module, configured to acquire stomach endoscope images of a target user bearing a stomach marker in a plurality of different detection states, wherein the stomach endoscope image in each detection state corresponds to a different stomach feature;
an identification module, configured to identify the stomach endoscope image in each detection state separately, so as to determine the infiltration depth information of the target user's stomach marker in each detection state and obtain a plurality of pieces of infiltration depth information;
a second acquisition module, configured to acquire the total area of the stomach endoscope image and the area of the lesion region in the preset detection state;
and a determination module, configured to determine the infiltration depth of the target user's stomach marker according to the total area, the lesion area and the plurality of pieces of infiltration depth information.
In a third aspect, to solve the same technical problem, an embodiment of the present invention provides an electronic device comprising a processor and a memory coupled to the processor, the memory storing a computer program configured to be executed by the processor, wherein the processor, when executing the computer program, implements the steps of the method for detecting the infiltration depth of a stomach marker according to any one of the above.
In a fourth aspect, to solve the same technical problem, an embodiment of the present invention provides a computer-readable storage medium storing a computer program, wherein, when the computer program runs, the device on which the computer-readable storage medium is located is controlled to execute the steps of the method for detecting the infiltration depth of a stomach marker according to any one of the above.
The embodiments of the present invention provide a method, a device and a storage medium for detecting the infiltration depth of a stomach marker. By acquiring stomach endoscope images of a target user in a plurality of different detection states, a plurality of pieces of infiltration depth information of the target user's stomach marker can be determined, one for each detection state; then, according to the total area of the stomach endoscope image in the preset detection state, the area of the lesion region in that image, and the plurality of pieces of infiltration depth information from the plurality of detection states, the infiltration depth of the target user's stomach marker can be determined comprehensively and accurately, which solves the technical problem that the infiltration depth of gastric cancer is difficult to detect accurately.
Drawings
FIG. 1 is a schematic flow chart of a method for detecting the infiltration depth of a stomach marker according to an embodiment of the present invention;
FIGS. 2a-2b are schematic diagrams of a first stomach endoscope image in a first detection state according to an embodiment of the present invention;
FIGS. 3a-3b are schematic diagrams of a second stomach endoscope image in a second detection state according to an embodiment of the present invention;
FIGS. 4a-4e are schematic diagrams of a third stomach endoscope image in a third detection state according to an embodiment of the present invention;
FIG. 5 is another schematic diagram of a third stomach endoscope image in a third detection state according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a device for detecting the infiltration depth of a stomach marker according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of another structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
In the related art, the treatment of gastric cancer depends on judging the severity of the gastric cancer lesion. Whether the canceration is confined to the mucosal layer or extends below it, that is, the infiltration depth of the gastric cancer, is an important basis for choosing between surgical treatment and endoscopic treatment, and the prognosis, economic cost and the like of the different treatments differ greatly. However, determining the infiltration depth of gastric cancer is a difficult task for clinicians, even senior physicians. How to improve the accuracy of determining the infiltration depth of gastric cancer is therefore a difficult problem facing digestive endoscopists.
To solve the technical problems in the related art, please refer to fig. 1. Fig. 1 is a schematic flow chart of a method for detecting the infiltration depth of a stomach marker according to an embodiment of the present invention. As shown in fig. 1, the method comprises steps 101 to 104.
Step 101, acquiring stomach endoscope images of a target user bearing a stomach marker in a plurality of different detection states, wherein the stomach endoscope image in each detection state corresponds to a different stomach feature.
In this embodiment, the stomach marker may be gastric cancer, or may be a foreign body swallowed into the stomach. This embodiment determines the infiltration depth of the target user's stomach marker by detecting stomach endoscope images of a target user bearing the marker.
Different detection states correspond to different detection environments, so the stomach structure of the target user exhibits different stomach features in different detection environments. By combining these different stomach features, this embodiment can comprehensively detect and analyze the infiltration depth of the target user's stomach marker.
It should be noted that the embodiments of the present invention are mainly intended to improve the accuracy of determining the infiltration depth of gastric cancer, so as to assist endoscopists in accurately detecting the infiltration depth in clinical practice; therefore, the stomach marker in the embodiments of the present invention is described mainly by taking gastric cancer as an example.
Step 102, identifying the stomach endoscope image in each detection state separately to determine the infiltration depth information of the target user's stomach marker in each detection state, thereby obtaining a plurality of pieces of infiltration depth information.
In this embodiment, the stomach endoscope images in the plurality of different detection states include a first stomach endoscope image in the first detection state, a second stomach endoscope image in the second detection state, and a third stomach endoscope image in the third detection state; the stomach features include morphological features, color features and structural features; and the infiltration depth information includes confidence information that the infiltration depth of the stomach marker corresponding to the stomach endoscope image is the first infiltration depth and confidence information that it is the second infiltration depth.
The step of identifying the stomach endoscope image in each detection state to determine the infiltration depth information of the target user's stomach marker in each detection state and obtain a plurality of pieces of infiltration depth information comprises: identifying the first stomach endoscope image according to its morphological features to obtain first confidence information that the infiltration depth of the corresponding stomach marker is the first infiltration depth and second confidence information that it is the second infiltration depth; identifying the second stomach endoscope image according to its color features to obtain third confidence information that the infiltration depth of the corresponding stomach marker is the first infiltration depth and fourth confidence information that it is the second infiltration depth; and identifying the third stomach endoscope image according to its structural features to obtain fifth confidence information that the infiltration depth of the corresponding stomach marker is the first infiltration depth and sixth confidence information that it is the second infiltration depth.
In this embodiment, the morphological features of the target user's stomach can be obtained in the first detection state, the color features in the second detection state, and the structural features in the third detection state.
It should be noted that the stomach has a plurality of features in each detection state; the feature named for a given detection state, for example the morphological feature in the first detection state, is the most significant of the plurality of stomach features in that state, and according to this most significant feature the confidence information that the infiltration depth of the stomach marker is the first infiltration depth or the second infiltration depth can be determined effectively.
Specifically, the step of identifying the first stomach endoscope image according to its morphological features to obtain the first confidence information that the infiltration depth of the stomach marker corresponding to the first stomach endoscope image is the first infiltration depth and the second confidence information that it is the second infiltration depth comprises: inputting the first stomach endoscope image into a trained first stomach marker infiltration depth recognition model, so that the model identifies the first stomach endoscope image according to its morphological features to obtain first confidence information that the infiltration depth of the corresponding stomach marker is intramucosal cancer and second confidence information that it is submucosal cancer.
The trained first stomach marker infiltration depth recognition model is obtained by training on a training set formed by stomach endoscope images in the first detection state and the corresponding labeled infiltration depths. It may be a two-class model of intramucosal cancer versus submucosal cancer in the first detection state, which identifies the first stomach endoscope image according to the morphological features of the stomach in the image and outputs a first probability value that the image belongs to intramucosal cancer and a second probability value that it belongs to submucosal cancer; the first probability value (i.e., the first confidence information) and the second probability value (i.e., the second confidence information) sum to 1, and each probability value characterizes the probability that the first stomach endoscope image belongs to intramucosal or submucosal cancer.
Therefore, through the trained first stomach marker infiltration depth recognition model, the confidence information that the first stomach endoscope image belongs to intramucosal cancer or submucosal cancer can be determined in the first detection state.
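To make this inference step concrete, the following is a minimal sketch in Python, assuming a generic two-class CNN with a softmax head; the ResNet-18 stand-in, the 224x224 input size and the function name are illustrative assumptions, since the patent does not name an architecture or preprocessing:

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Illustrative preprocessing for a single white-light endoscope frame.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def classify_infiltration(model: torch.nn.Module, frame: Image.Image):
    """Return (p_intramucosal, p_submucosal); softmax makes them sum to 1."""
    x = preprocess(frame.convert("RGB")).unsqueeze(0)  # (1, 3, 224, 224)
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)[0]          # (2,)
    return probs[0].item(), probs[1].item()

# A ResNet-18 with a two-way head stands in for the trained first model.
model = models.resnet18(num_classes=2).eval()
```

The same two-output pattern applies unchanged to the second recognition model described below, with ME-NBI color features in place of white-light morphology.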
Specifically, referring to figs. 2a-2b, figs. 2a-2b are schematic diagrams of the first stomach endoscope image in the first detection state according to an embodiment of the present invention, where fig. 2a is a schematic diagram of a first stomach endoscope image showing intramucosal cancer in the first detection state, and fig. 2b is a schematic diagram of a first stomach endoscope image showing submucosal cancer in the first detection state. Through the trained first stomach marker infiltration depth recognition model provided in this embodiment, the confidence information that the first stomach endoscope image shows intramucosal cancer or submucosal cancer can be determined according to the morphological features of the first stomach endoscope image in the first detection state, as shown in figs. 2a and 2b.
In some embodiments, the step of identifying the second stomach endoscope image according to its color features to obtain the third confidence information that the infiltration depth of the stomach marker corresponding to the second stomach endoscope image is the first infiltration depth and the fourth confidence information that it is the second infiltration depth comprises: inputting the second stomach endoscope image into a trained second stomach marker infiltration depth recognition model, so that the model identifies the second stomach endoscope image according to its color features to obtain third confidence information that the infiltration depth of the corresponding stomach marker is intramucosal cancer and fourth confidence information that it is submucosal cancer.
The trained second stomach marker infiltration depth recognition model is obtained by training on a training set formed by stomach endoscope images in the second detection state and the corresponding labeled infiltration depths. It may be a two-class model of intramucosal cancer versus submucosal cancer in the second detection state, which identifies the second stomach endoscope image according to the color features of the stomach in the image and outputs a first probability value that the image belongs to intramucosal cancer and a second probability value that it belongs to submucosal cancer; the first probability value (i.e., the third confidence information) and the second probability value (i.e., the fourth confidence information) sum to 1, and each probability value characterizes the probability that the second stomach endoscope image belongs to intramucosal or submucosal cancer.
Therefore, through the trained second stomach marker infiltration depth recognition model, the confidence information that the second stomach endoscope image belongs to intramucosal cancer or submucosal cancer can be determined in the second detection state.
Specifically, referring to figs. 3a-3b, figs. 3a-3b are schematic diagrams of the second stomach endoscope image in the second detection state according to an embodiment of the present invention, where fig. 3a is a schematic diagram of a second stomach endoscope image showing intramucosal cancer in the second detection state, and fig. 3b is a schematic diagram of a second stomach endoscope image showing submucosal cancer in the second detection state. Through the trained second stomach marker infiltration depth recognition model provided in this embodiment, the confidence information that the second stomach endoscope image shows intramucosal cancer or submucosal cancer can be determined according to the color features of the second stomach endoscope image in the second detection state (the color features obtained after staining and magnification), as shown in figs. 3a and 3b.
As an optional embodiment, the step of identifying the third stomach endoscope image according to its structural features to obtain the fifth confidence information that the infiltration depth of the stomach marker corresponding to the third stomach endoscope image is the first infiltration depth and the sixth confidence information that it is the second infiltration depth specifically includes: detecting the third stomach endoscope image and determining the lesion region in the third stomach endoscope image; layering the stomach structure in the third stomach endoscope image to obtain the multiple membrane layers forming the stomach structure; and identifying the third stomach endoscope image according to the positional relationship between the lesion region and the membrane layers to obtain the fifth confidence information and the sixth confidence information.
The first infiltration depth comprises the infiltration depth of intramucosal cancer, the second infiltration depth comprises the infiltration depth of submucosal cancer, and the membrane layers comprise a mucosal layer, a muscularis mucosae, a submucosa, a muscularis propria and a serosa.
In one embodiment, the step of detecting the third stomach endoscope image and determining the lesion region in the third stomach endoscope image specifically includes: inputting the third stomach endoscope image into a trained target detection model, so that the trained target detection model detects and determines the lesion region in the third stomach endoscope image.
The trained target detection model is obtained by training on a training set formed by stomach endoscope images and corresponding annotation images in which the lesion regions are labeled.
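For concreteness, a hedged sketch of such a lesion detector follows, with a torchvision Faster R-CNN standing in for the trained target detection model; the patent does not name a detection architecture, and the score threshold is an assumption:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Two classes: background plus lesion. The architecture is an assumption.
detector = fasterrcnn_resnet50_fpn(num_classes=2).eval()

def detect_lesions(frame: torch.Tensor, score_thresh: float = 0.5):
    """frame: (3, H, W) float tensor in [0, 1]; returns kept lesion boxes."""
    with torch.no_grad():
        out = detector([frame])[0]
    keep = out["scores"] >= score_thresh
    return out["boxes"][keep]  # each row: (x1, y1, x2, y2)
```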
In another embodiment, the step of layering the stomach structure in the third stomach endoscope image to obtain the membrane layers forming the stomach structure specifically includes: inputting the third stomach endoscope image into a trained image segmentation model, so that the trained image segmentation model segments the third stomach endoscope image according to its structural features to obtain the mucosal layer, muscularis mucosae, submucosa, muscularis propria and serosa forming the stomach structure in the third stomach endoscope image.
The trained image segmentation model is obtained by training on a training set formed by stomach endoscope images in the third detection state and corresponding annotation images in which the stomach membrane layers are labeled. The trained image segmentation model may comprise five image segmentation models: a mucosal layer segmentation model, a muscularis mucosae segmentation model, a submucosa segmentation model, a muscularis propria segmentation model and a serosa segmentation model. In this way, the third stomach endoscope image can be identified by the five segmentation models respectively, to determine the mucosal layer, muscularis mucosae, submucosa, muscularis propria and serosa constituting the stomach structure in the third stomach endoscope image.
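A minimal sketch of this per-layer segmentation step follows, assuming five independent binary segmentation networks, one per membrane layer, passed in as a dict; the layer names, the 0.5 threshold and the single-channel output shape are illustrative assumptions:

```python
import torch

# The five membrane layers, ordered from the lumen outward.
LAYERS = ("mucosa", "muscularis_mucosae", "submucosa",
          "muscularis_propria", "serosa")

def segment_layers(layer_models: dict, eus_frame: torch.Tensor) -> dict:
    """eus_frame: (3, H, W); returns {layer name: boolean (H, W) mask}."""
    masks = {}
    with torch.no_grad():
        for name in LAYERS:
            logits = layer_models[name](eus_frame.unsqueeze(0))  # (1, 1, H, W)
            masks[name] = torch.sigmoid(logits)[0, 0] > 0.5
    return masks
```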
Specifically, referring to figs. 4a-4e, figs. 4a-4e are schematic diagrams of the third stomach endoscope image in the third detection state according to an embodiment of the present invention, where region a in fig. 4a shows the mucosal layer, region b in fig. 4b shows the muscularis mucosae, region c in fig. 4c shows the submucosa, region d in fig. 4d shows the muscularis propria, and region e in fig. 4e shows the serosa in the third stomach endoscope image in the third detection state. Through the five trained image segmentation models provided in the above embodiment, each membrane layer constituting the stomach structure in the third stomach endoscope image in the third detection state can be identified, so that the structural features of the stomach in the image can be determined, and the positional relationships between the stomach layers in the image can be determined according to these structural features.
Specifically, referring to fig. 5, fig. 5 is another schematic diagram of the third stomach endoscope image in the third detection state according to an embodiment of the present invention. As shown in fig. 5, the figure shows the positional relationships between the stomach layers in the third stomach endoscope image in the third detection state, where region a is the mucosal layer, region b the muscularis mucosae, region c the submucosa, region d the muscularis propria and region e the serosa of the stomach structure, and region f is the lesion region in the third stomach endoscope image in the third detection state.
In some embodiments, the step of identifying the third stomach endoscope image according to the positional relationship between the lesion region and the membrane layers to obtain the fifth confidence information and the sixth confidence information specifically includes: inputting the third stomach endoscope image into a trained third stomach marker infiltration depth recognition model, so that the model identifies the third stomach endoscope image according to the positional relationship to obtain fifth confidence information that the infiltration depth of the corresponding stomach marker is intramucosal cancer and sixth confidence information that it is submucosal cancer.
The trained third stomach marker infiltration depth recognition model is obtained by training on a training set formed by stomach endoscope images in the third detection state and the corresponding labeled infiltration depths.
Specifically, the trained third stomach marker infiltration depth recognition model can accurately determine the infiltration depth of the target user's stomach marker according to the positional relationship between the lesion region and the membrane layers.
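One simple way to make this positional relationship concrete is an overlap test between the lesion mask and each layer mask from the sketch above; the heuristic below, including the 5% overlap threshold, is an illustrative assumption rather than the patent's stated model internals:

```python
import torch

# Layers ordered from shallow to deep; a lesion overlapping the submucosa
# or deeper would point toward submucosal invasion under this heuristic.
ORDERED_LAYERS = ("mucosa", "muscularis_mucosae", "submucosa",
                  "muscularis_propria", "serosa")

def deepest_invaded_layer(lesion_mask: torch.Tensor, layer_masks: dict,
                          min_overlap: float = 0.05) -> str:
    """Return the deepest layer whose overlap with the lesion exceeds
    min_overlap, measured as a fraction of the lesion's pixel area."""
    lesion_area = lesion_mask.sum().float().clamp(min=1.0)
    deepest = ORDERED_LAYERS[0]
    for name in ORDERED_LAYERS:
        overlap = (lesion_mask & layer_masks[name]).sum().float() / lesion_area
        if overlap >= min_overlap:
            deepest = name
    return deepest
```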
Step 103, acquiring the total area of the stomach endoscope image and the area of the lesion region in the preset detection state.
In this embodiment, the first detection state is a white-light detection state, the second detection state is a light-change detection state, the third detection state is an ultrasonic detection state, and the preset detection state is any one of the white-light detection state, the light-change detection state and the ultrasonic detection state.
Specifically, the white-light detection state is the detection state during white-light endoscopy (white light imaging, WLI); in this state, infiltration depth analysis can be performed according to the morphology of the lesion in the acquired white-light endoscope image. The light-change detection state is the detection state during magnifying endoscopy with narrow-band imaging (ME-NBI); in this state, the different light sources under the endoscope cause a light-change reaction in the endoscope image, so that infiltration depth analysis can be performed according to the color change of the endoscope image after the light-change reaction. The ultrasonic detection state is the detection state during endoscopic ultrasonography (EUS); in this state, the layered structure of the stomach can be determined, so that infiltration depth analysis can be performed through the positional relationship between the layered structure of the stomach and the lesion region.
The step of acquiring the total area of the stomach endoscope image and the area of the lesion region in the preset detection state specifically includes: determining the total area of the stomach endoscope image in the preset detection state according to the length and width of the image; invoking the trained target detection model to detect the stomach endoscope image in the preset detection state and determine the lesion region in it; and calculating the area of the lesion region from the lesion region.
Specifically, the total area of the stomach endoscope image can be calculated directly from the length and width of the image, or from its pixel area; similarly, the area of the lesion region in the stomach endoscope image can be calculated from the size information of the region, or from the pixel area of the region.
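A minimal sketch of these two area computations in pixel terms (the helper names are illustrative):

```python
import numpy as np

def total_area(frame: np.ndarray) -> int:
    """Total area of the endoscope frame as length x width, in pixels."""
    h, w = frame.shape[:2]
    return h * w

def lesion_area(lesion_mask: np.ndarray) -> int:
    """Pixel area of the detected lesion region."""
    return int(np.count_nonzero(lesion_mask))
```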
Step 104, determining the infiltration depth of the target user's stomach marker according to the total area, the lesion area and the plurality of pieces of infiltration depth information.
In this embodiment, the step of determining the infiltration depth of the target user's stomach marker according to the total area, the lesion area and the plurality of pieces of infiltration depth information specifically includes: determining the ratio of the total area to the lesion area; and inputting the total area, the ratio, the first, third and fifth confidence information for intramucosal cancer, and the second, fourth and sixth confidence information for submucosal cancer into a trained stomach marker infiltration depth fitting model, so that the trained stomach marker infiltration depth fitting model fits these inputs to obtain the infiltration depth of the target user's stomach marker.
The trained stomach marker infiltration depth fitting model comprises a random forest and a decision tree. It is obtained by training a to-be-trained fitting model to convergence, taking as training sample data, for each user, the total area of the stomach endoscope image in the preset detection state, the ratio between the lesion area and the total area, and the intramucosal and submucosal cancer confidence information produced by the gastric cancer infiltration depth analyses in the three different detection states, with the user's actual gastric cancer infiltration depth information as the labeled data.
To eliminate the adverse effect of abnormal data, this embodiment also normalizes each area before calculating the ratio of the total area to the lesion area. In this way, the accuracy of the infiltration depth analysis can be improved.
Specifically, the total area and lesion area information in the training data of the trained stomach marker infiltration depth fitting model are obtained from the stomach endoscope image in the preset detection state. Therefore, when the fitting model is trained on area information from stomach endoscope images in the first detection state, the preset detection state must also be the first detection state; similarly, when it is trained on area information from stomach endoscope images in the second or third detection state, the preset detection state must also be the second or third detection state, respectively.
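The following hedged sketch illustrates the fitting step, with a scikit-learn random forest standing in for the random forest and decision tree fitting model; the feature order, the direction of the area ratio (the text states it both ways), the min-max normalization and the two toy training rows are all illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import MinMaxScaler

# Per-case features: [total area, lesion/total ratio, c1, c2, c3, c4, c5, c6],
# where c1/c3/c5 are the intramucosal confidences and c2/c4/c6 the
# submucosal ones. The two rows are toy placeholders, not real data.
X = np.array([
    [921600.0, 0.12, 0.8, 0.2, 0.7, 0.3, 0.9, 0.1],
    [518400.0, 0.41, 0.3, 0.7, 0.2, 0.8, 0.1, 0.9],
])
y = np.array([0, 1])  # 0 = intramucosal cancer, 1 = submucosal cancer

# Normalize the area-derived features first, mirroring the normalization
# step described above.
X[:, :2] = MinMaxScaler().fit_transform(X[:, :2])

fitter = RandomForestClassifier(n_estimators=100).fit(X, y)
print(fitter.predict_proba(X[:1]))  # confidence for each infiltration depth
```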
In summary, the method for detecting the infiltration depth of a stomach marker provided by the embodiments of the present invention includes: acquiring stomach endoscope images of a target user bearing a stomach marker in a plurality of different detection states, wherein the stomach endoscope image in each detection state corresponds to a different stomach feature; identifying the stomach endoscope image in each detection state separately to determine the infiltration depth information of the target user's stomach marker in each detection state, thereby obtaining a plurality of pieces of infiltration depth information; acquiring the total area of the stomach endoscope image and the area of the lesion region in the preset detection state; and determining the infiltration depth of the target user's stomach marker according to the total area, the lesion area and the plurality of pieces of infiltration depth information. With the embodiments of the present invention, the infiltration depth of the target user's stomach marker can be determined comprehensively and accurately, solving the technical problem that the infiltration depth of gastric cancer is difficult to detect accurately.
According to the method described in the above embodiments, this embodiment will be further described from the perspective of a device for detecting the infiltration depth of a stomach marker. The device may be implemented as a separate entity, or may be integrated in an electronic device such as a terminal, which may include a mobile phone, a tablet computer and the like.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a device for detecting the infiltration depth of a stomach marker according to an embodiment of the present invention. As shown in fig. 6, the device 600 for detecting the infiltration depth of a stomach marker according to an embodiment of the present invention includes:
a first acquisition module 601, configured to acquire stomach endoscope images of a target user bearing a stomach marker in a plurality of different detection states, wherein the stomach endoscope image in each detection state corresponds to a different stomach feature.
An identification module 602, configured to identify the stomach endoscope image in each detection state separately, so as to determine the infiltration depth information of the target user's stomach marker in each detection state and obtain a plurality of pieces of infiltration depth information.
In this embodiment, the stomach endoscope images in the plurality of different detection states include a first stomach endoscope image in the first detection state, a second stomach endoscope image in the second detection state, and a third stomach endoscope image in the third detection state; the stomach features include morphological features, color features and structural features; and the infiltration depth information includes confidence information that the infiltration depth of the stomach marker corresponding to the stomach endoscope image is the first infiltration depth and confidence information that it is the second infiltration depth.
Specifically, the identification module 602 is configured to: identify the first stomach endoscope image according to its morphological features to obtain first confidence information that the infiltration depth of the corresponding stomach marker is the first infiltration depth and second confidence information that it is the second infiltration depth; identify the second stomach endoscope image according to its color features to obtain third confidence information that the infiltration depth of the corresponding stomach marker is the first infiltration depth and fourth confidence information that it is the second infiltration depth; and identify the third stomach endoscope image according to its structural features to obtain fifth confidence information that the infiltration depth of the corresponding stomach marker is the first infiltration depth and sixth confidence information that it is the second infiltration depth.
In this embodiment, the identification module 602 is further configured to: detect the third stomach endoscope image and determine the lesion area in the third stomach endoscope image; layer the stomach structure in the third stomach endoscope image to obtain the multiple membrane layers forming the stomach structure; and identify the third stomach endoscope image according to the positional relationship between the lesion area and the membrane layers, to obtain fifth confidence information that the infiltration depth of the stomach marker corresponding to the third stomach endoscope image is the first infiltration depth and sixth confidence information that it is the second infiltration depth.
The first infiltration depth comprises the infiltration depth of intramucosal cancer, the second infiltration depth comprises the infiltration depth of submucosal cancer, and the multiple membrane layers comprise a mucosal layer, a mucosal muscle layer, a submucosal layer, an intrinsic muscle layer and a serosal layer.
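For concreteness, the two infiltration depths and the five membrane layers can be written down as plain enumerations. The following is a minimal Python sketch; the names and integer codes are illustrative assumptions, since the patent does not prescribe any particular encoding.

```python
# Illustrative vocabulary only; the patent does not prescribe an encoding.
from enum import Enum

class InfiltrationDepth(Enum):
    INTRAMUCOSAL = 1   # first infiltration depth: intramucosal cancer
    SUBMUCOSAL = 2     # second infiltration depth: submucosal cancer

class MembraneLayer(Enum):
    MUCOSAL = 1            # mucosal layer
    MUCOSAL_MUSCLE = 2     # mucosal muscle layer
    SUBMUCOSAL = 3         # submucosal layer
    INTRINSIC_MUSCLE = 4   # intrinsic muscle layer
    SEROSAL = 5            # serosal layer
```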
In one embodiment, the identification module 602 is further configured to: input the third stomach endoscope image into a trained target detection model, so that the trained target detection model detects the third stomach endoscope image and determines the lesion area therein.
The trained target detection model is obtained by training on a training set consisting of stomach endoscope images and corresponding annotated images in which the lesion areas are labeled.
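As an illustration of this step, the following Python sketch assumes a torchvision Faster R-CNN stands in for the trained target detection model; the patent does not name a detector architecture, so the model choice, the two-class setup and the helper name are assumptions.

```python
# A minimal sketch of the lesion-detection step; illustrative, not the
# patent's actual model.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

def detect_lesion(image: torch.Tensor, score_threshold: float = 0.5):
    """Return the highest-scoring lesion box in a stomach endoscope image.

    image: FloatTensor[3, H, W] with values in [0, 1].
    """
    # Two classes: background and lesion. In practice the weights would be
    # loaded from a checkpoint trained on the annotated gastroscopy images.
    model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
    model.eval()
    with torch.no_grad():
        pred = model([image])[0]  # dict with "boxes", "labels", "scores"
    keep = pred["scores"] >= score_threshold
    boxes = pred["boxes"][keep]
    return boxes[0] if len(boxes) > 0 else None  # (x1, y1, x2, y2) or None
```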
In another embodiment, the identification module 602 is further configured to: input the third stomach endoscope image into a trained image segmentation model, so that the trained image segmentation model segments the third stomach endoscope image according to the structural features of the third stomach endoscope image, to obtain the mucosal layer, the mucosal muscle layer, the submucosal layer, the intrinsic muscle layer and the serosal layer forming the stomach structure in the third stomach endoscope image.
The trained image segmentation model is obtained by training on a training set consisting of stomach endoscope images in the third detection state and corresponding annotated images in which the membrane layers of the stomach structure are labeled.
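A corresponding sketch of the layering step follows, assuming a torchvision FCN-ResNet50 as the image segmentation model; the architecture and the class ordering are assumptions, since the patent only specifies that the model is trained on images with labeled membrane layers.

```python
# A minimal sketch of membrane-layer segmentation; illustrative only.
import torch
from torchvision.models.segmentation import fcn_resnet50

LAYER_NAMES = ["background", "mucosal", "mucosal_muscle",
               "submucosal", "intrinsic_muscle", "serosal"]

def segment_membrane_layers(image: torch.Tensor) -> torch.Tensor:
    """Return a per-pixel label map over the five membrane layers.

    image: FloatTensor[3, H, W], normalized as during training.
    """
    model = fcn_resnet50(weights=None, num_classes=len(LAYER_NAMES))
    model.eval()
    with torch.no_grad():
        logits = model(image.unsqueeze(0))["out"]  # [1, 6, H, W]
    return logits.argmax(dim=1)[0]  # LongTensor[H, W] of layer indices
```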
In a third embodiment, the identification module 602 is further configured to: input the third stomach endoscope image into a trained third stomach marker infiltration depth recognition model, so that the trained third stomach marker infiltration depth recognition model identifies the third stomach endoscope image according to the positional relationship in the third stomach endoscope image, to obtain fifth confidence information that the infiltration depth of the stomach marker corresponding to the third stomach endoscope image corresponds to intramucosal cancer and sixth confidence information that it corresponds to submucosal cancer.
The trained third stomach marker infiltration depth recognition model is obtained by training on a training set consisting of stomach endoscope images in the third detection state and the corresponding labeled infiltration depths.
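One way the positional relationship could be reduced to the (fifth, sixth) confidence pair is sketched below; the patent trains this model end to end and does not disclose its internals, so the layer-overlap heuristic here is purely an illustrative assumption built on the two sketches above.

```python
# Illustrative positional-relation features, not the patent's actual model.
import torch

def positional_depth_confidences(layer_map: torch.Tensor,
                                 lesion_box: torch.Tensor):
    """layer_map: LongTensor[H, W] from segment_membrane_layers;
    lesion_box: (x1, y1, x2, y2). Returns (fifth, sixth) confidences."""
    x1, y1, x2, y2 = [int(v) for v in lesion_box]
    region = layer_map[y1:y2, x1:x2]
    # Indices 1-2 (mucosal, mucosal muscle) count as intramucosal;
    # indices 3-5 (submucosal and deeper) count as submucosal involvement.
    shallow = ((region >= 1) & (region <= 2)).sum().item()
    deep = (region >= 3).sum().item()
    total = max(shallow + deep, 1)
    return shallow / total, deep / total  # (intramucosal, submucosal)
```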
As an alternative embodiment, the identification module 602 is further configured to: input the first stomach endoscope image into a trained first stomach marker infiltration depth recognition model, so that the trained first stomach marker infiltration depth recognition model identifies the first stomach endoscope image according to the morphological features of the first stomach endoscope image, to obtain first confidence information that the infiltration depth of the stomach marker corresponding to the first stomach endoscope image corresponds to intramucosal cancer and second confidence information that it corresponds to submucosal cancer.
The trained first stomach marker infiltration depth recognition model is obtained by training on a training set consisting of stomach endoscope images in the first detection state and the corresponding labeled infiltration depths.
As another alternative embodiment, the identification module 602 is further configured to: input the second stomach endoscope image into a trained second stomach marker infiltration depth recognition model, so that the trained second stomach marker infiltration depth recognition model identifies the second stomach endoscope image according to the color features of the second stomach endoscope image, to obtain third confidence information that the infiltration depth of the stomach marker corresponding to the second stomach endoscope image corresponds to intramucosal cancer and fourth confidence information that it corresponds to submucosal cancer.
The trained second stomach marker infiltration depth recognition model is obtained by training on a training set consisting of stomach endoscope images in the second detection state and the corresponding labeled infiltration depths.
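The first and second recognition models share the same input/output contract: one endoscope image in, one confidence pair out. A minimal sketch follows, assuming each is an ordinary two-class CNN whose softmax output serves as the confidence pair; ResNet-18 is an illustrative backbone, not specified by the patent.

```python
# A minimal per-detection-state classifier; illustrative only.
import torch
from torchvision.models import resnet18

def depth_confidences(image: torch.Tensor) -> tuple[float, float]:
    """image: FloatTensor[3, H, W]. Returns (intramucosal, submucosal)
    confidence information as softmax probabilities."""
    # In practice the weights come from training on white-light images
    # (first model) or light-variation images (second model).
    model = resnet18(weights=None, num_classes=2)
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(image.unsqueeze(0)), dim=1)[0]
    return float(probs[0]), float(probs[1])
```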
The second acquisition module 603 is configured to acquire the total area of the stomach endoscope image and the area of the lesion region under a preset detection state.
In this embodiment, the first detection state is a white light detection state, the second detection state is a light-variation detection state, the third detection state is an ultrasonic detection state, and the preset detection state is any one of the white light detection state, the light-variation detection state and the ultrasonic detection state.
Specifically, the second acquisition module 603 is configured to: determine the total area of the stomach endoscope image under the preset detection state according to the length and width of the stomach endoscope image under the preset detection state; invoke the trained target detection model to detect the stomach endoscope image under the preset detection state and determine the lesion area in the stomach endoscope image under the preset detection state; and calculate the area of the lesion region according to the detected lesion region.
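The area computation itself is simple; a sketch follows, assuming the total area is taken as length times width in pixels and the lesion area is taken from the detected bounding box (the patent does not fix the exact lesion geometry, so the box-area choice is an assumption).

```python
# Illustrative area computation in pixel units.
import torch

def areas(image: torch.Tensor, lesion_box: torch.Tensor):
    """image: FloatTensor[3, H, W]; lesion_box: (x1, y1, x2, y2).
    Returns (total_area, lesion_area)."""
    _, height, width = image.shape
    total_area = height * width  # length x width of the endoscope image
    x1, y1, x2, y2 = lesion_box.tolist()
    lesion_area = max(x2 - x1, 0.0) * max(y2 - y1, 0.0)
    return float(total_area), float(lesion_area)
```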
A determining module 604, configured to determine the infiltration depth of the stomach marker of the target user according to the total area, the lesion area and the plurality of pieces of infiltration depth information.
In this embodiment, the determining module 604 is specifically configured to: determine the ratio of the total area to the lesion area;
and input the total area, the ratio, the first, third and fifth confidence information for intramucosal cancer, and the second, fourth and sixth confidence information for submucosal cancer into a trained stomach marker infiltration depth fitting model, so that the trained stomach marker infiltration depth fitting model fits the total area, the ratio and the six pieces of confidence information to obtain the infiltration depth of the stomach marker of the target user.
The trained stomach marker infiltration depth fitting model comprises a random forest and a decision tree.
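Since the fitting model is stated to comprise a random forest and a decision tree, a scikit-learn sketch is natural; the eight-feature layout and the binary label encoding below are illustrative assumptions.

```python
# A minimal fitting-step sketch with scikit-learn; illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# X rows: [total_area, total/lesion ratio,
#          conf1, conf3, conf5 (intramucosal),
#          conf2, conf4, conf6 (submucosal)]
# y labels: 0 = intramucosal cancer, 1 = submucosal cancer
def fit_depth_model(X_train: np.ndarray, y_train: np.ndarray):
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    return model

def predict_depth(model, features: np.ndarray) -> int:
    return int(model.predict(features.reshape(1, -1))[0])
```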
In the implementation, each module and/or unit may be implemented as an independent entity, or may be combined arbitrarily and implemented as the same entity or a plurality of entities, where the implementation of each module and/or unit may refer to the foregoing method embodiment, and the specific beneficial effects that may be achieved may refer to the beneficial effects in the foregoing method embodiment, which are not described herein again.
In addition, referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, where the electronic device may be a mobile terminal, such as a smart phone, a tablet computer, or the like. As shown in fig. 7, the electronic device 700 includes a processor 701, a memory 702. The processor 701 is electrically connected to the memory 702.
The processor 701 is a control center of the electronic device 700, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device 700 and processes data by running or loading application programs stored in the memory 702, and calling data stored in the memory 702, thereby performing overall monitoring of the electronic device 700.
In this embodiment, the processor 701 in the electronic device 700 loads the instructions corresponding to the processes of one or more application programs into the memory 702, and the processor 701 runs the application programs stored in the memory 702, so as to implement various functions:
obtaining a stomach endoscope image of a target user with stomach markers under a plurality of different detection states, wherein the stomach endoscope image under each detection state corresponds to different stomach characteristics;
respectively identifying stomach endoscope images under each detection state to determine the infiltration depth information of the stomach markers of the target user under each detection state, so as to obtain a plurality of pieces of infiltration depth information;
acquiring the total area of the stomach endoscope image and the area of the lesion area under a preset detection state;
and determining the infiltration depth of the stomach marker of the target user according to the total area, the lesion area and the plurality of pieces of infiltration depth information.
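Read together, these four functions amount to the following end-to-end flow; every helper used here (detect_lesion, segment_membrane_layers, depth_confidences, positional_depth_confidences, areas, predict_depth) comes from the illustrative sketches earlier in this description, not from the patent itself, and the ultrasonic state is assumed as the preset detection state.

```python
# Illustrative end-to-end flow over the three detection states.
import numpy as np
import torch

def detect_infiltration_depth(white_light: torch.Tensor,
                              light_variation: torch.Tensor,
                              ultrasonic: torch.Tensor,
                              depth_model) -> int:
    # Steps 1-2: per-state recognition yielding three confidence pairs.
    c1, c2 = depth_confidences(white_light)      # morphological features
    c3, c4 = depth_confidences(light_variation)  # color features
    box = detect_lesion(ultrasonic)              # assumes a lesion is found
    layers = segment_membrane_layers(ultrasonic)
    c5, c6 = positional_depth_confidences(layers, box)  # structural features
    # Step 3: total area and lesion area in the preset detection state.
    total, lesion = areas(ultrasonic, box)
    # Step 4: fit the features to the final infiltration depth.
    features = np.array([total, total / max(lesion, 1.0),
                         c1, c3, c5, c2, c4, c6])
    return predict_depth(depth_model, features)
```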
The electronic device 700 may implement the steps in any embodiment of the method for detecting the infiltration depth of the stomach marker provided by the embodiment of the present invention, and can therefore achieve the beneficial effects of any embodiment of the method for detecting the infiltration depth of the stomach marker provided by the embodiment of the present invention, which are detailed in the previous embodiments and are not described herein again.
Referring to fig. 8, fig. 8 is a specific structural block diagram of an electronic device provided by an embodiment of the present invention, where the electronic device may be used to implement the method for detecting the infiltration depth of the stomach marker provided in the above embodiments. The electronic device 800 may be a mobile terminal such as a smart phone or a notebook computer.
The RF circuit 810 is configured to receive and transmit electromagnetic waves and to perform mutual conversion between electromagnetic waves and electrical signals, thereby communicating with a communication network or other devices. The RF circuitry 810 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The RF circuitry 810 may communicate with various networks, such as the internet, an intranet or a wireless network, or with other devices via a wireless network. The wireless network may include a cellular telephone network, a wireless local area network or a metropolitan area network. The wireless network may use various communication standards, protocols and technologies, including, but not limited to, Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (WCDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (WiMAX), other protocols for mail, instant messaging and short messaging, as well as any other suitable communication protocol, including protocols not yet developed.
The memory 820 may be used to store software programs and modules, such as the program instructions/modules corresponding to the method for detecting the infiltration depth of the stomach marker in the above embodiments. The processor 880 executes the software programs and modules stored in the memory 820, thereby executing various functional applications and detecting the infiltration depth of the stomach marker, namely implementing the following functions:
obtaining a stomach endoscope image of a target user with stomach markers under a plurality of different detection states, wherein the stomach endoscope image under each detection state corresponds to different stomach characteristics;
respectively identifying stomach endoscope images under each detection state to determine the infiltration depth information of the stomach markers of the target user under each detection state, so as to obtain a plurality of pieces of infiltration depth information;
acquiring the total area of the stomach endoscope image and the area of the lesion area under a preset detection state;
and determining the infiltration depth of the stomach marker of the target user according to the total area, the lesion area and the plurality of pieces of infiltration depth information.
Memory 820 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 820 may further include memory located remotely from processor 880, which may be connected to electronic device 800 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input unit 830 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 830 may include a touch-sensitive surface 831 as well as other input devices 832. The touch-sensitive surface 831, also referred to as a touch screen or touch pad, may collect touch operations by a user on or near it (e.g., operations by the user on or near the touch-sensitive surface 831 using a finger, a stylus or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch-sensitive surface 831 may include a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates and sends them to the processor 880, and can receive and execute commands sent by the processor 880. In addition, the touch-sensitive surface 831 may be implemented in various types, such as resistive, capacitive, infrared and surface acoustic wave types. Besides the touch-sensitive surface 831, the input unit 830 may also include other input devices 832. In particular, the other input devices 832 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 840 may be used to display information entered by the user or provided to the user, as well as various graphical user interfaces of the electronic device 800, which may be composed of graphics, text, icons, video and any combination thereof. The display unit 840 may include a display panel 841; optionally, the display panel 841 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) or the like. Further, the touch-sensitive surface 831 may overlay the display panel 841; upon detecting a touch operation on or near it, the touch-sensitive surface 831 communicates the operation to the processor 880 to determine the type of the touch event, and the processor 880 then provides a corresponding visual output on the display panel 841 according to the type of the touch event. Although in the figures the touch-sensitive surface 831 and the display panel 841 are implemented as two separate components, in some embodiments the touch-sensitive surface 831 may be integrated with the display panel 841 to implement the input and output functions.
The electronic device 800 may also include at least one sensor 850, such as a light sensor, a motion sensor and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor may adjust the brightness of the display panel 841 according to the brightness of the ambient light, and the proximity sensor may generate an interrupt when the flip cover is closed or opened. As one type of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games and magnetometer posture calibration), vibration-recognition-related functions (such as a pedometer and tapping), and the like; other sensors that may also be configured in the electronic device 800, such as a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor, are not described in detail herein.
The audio circuit 860, the speaker 861 and the microphone 862 may provide an audio interface between the user and the electronic device 800. The audio circuit 860 may transmit an electrical signal converted from received audio data to the speaker 861, where it is converted into a sound signal for output; on the other hand, the microphone 862 converts a collected sound signal into an electrical signal, which is received by the audio circuit 860 and converted into audio data; the audio data is then processed by the processor 880 and transmitted via the RF circuit 810 to, for example, another terminal, or output to the memory 820 for further processing. The audio circuit 860 may also include an earphone jack to provide communication between a peripheral headset and the electronic device 800.
Through the transmission module 870 (e.g., a Wi-Fi module), the electronic device 800 may help the user receive requests, send information and so on; it provides the user with wireless broadband internet access. Although the transmission module 870 is shown in the figures, it is understood that it is not an essential component of the electronic device 800 and may be omitted as needed without changing the essence of the invention.
The processor 880 is the control center of the electronic device 800; it connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device 800 and processes data by running or executing software programs and/or modules stored in the memory 820 and calling data stored in the memory 820, thereby monitoring the electronic device as a whole. Optionally, the processor 880 may include one or more processing cores; in some embodiments, the processor 880 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 880.
The electronic device 800 also includes a power supply 890 (e.g., a battery) that supplies power to the various components. In some embodiments, the power supply may be logically connected to the processor 880 via a power management system, so that functions such as charging management, discharging management and power consumption management are performed through the power management system. The power supply 890 may also include any one or more of a DC or AC power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other components.
Although not shown, the electronic device 800 may further include a camera (e.g., a front camera and a rear camera), a Bluetooth module and the like, which are not described herein. In particular, in this embodiment, the display unit of the electronic device is a touch screen display, and the electronic device further includes a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
obtaining a stomach endoscope image of a target user with stomach markers under a plurality of different detection states, wherein the stomach endoscope image under each detection state corresponds to different stomach characteristics;
respectively identifying stomach endoscope images under each detection state to determine the infiltration depth information of the stomach markers of the target user under each detection state, so as to obtain a plurality of pieces of infiltration depth information;
acquiring the total area of the stomach endoscope image and the area of the lesion area under a preset detection state;
and determining the infiltration depth of the stomach marker of the target user according to the total area, the lesion area and the plurality of pieces of infiltration depth information.
In the implementation, each module may be implemented as an independent entity, or may be combined arbitrarily and implemented as the same entity or several entities; for the implementation of each module, reference may be made to the foregoing method embodiment, which is not described herein again.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor. To this end, an embodiment of the present invention provides a storage medium having stored therein a plurality of instructions that can be loaded by a processor to perform the steps of any one of the embodiments of the method for detecting the infiltration depth of a stomach marker provided by the embodiment of the present invention.
The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
The instructions stored in the storage medium can execute the steps in any embodiment of the method for detecting the infiltration depth of the stomach marker provided by the embodiment of the present application, and can therefore achieve the beneficial effects of any embodiment of the method for detecting the infiltration depth of the stomach marker provided by the embodiment of the present application, which are detailed in the previous embodiments and are not described herein again.
The method, apparatus, electronic device and storage medium for detecting the infiltration depth of a stomach marker provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope according to the idea of the present application, so the contents of this specification should not be construed as limiting the present application. Moreover, it will be apparent to those skilled in the art that various modifications and variations can be made without departing from the principles of the present application, and such modifications and variations are also considered to be within the scope of the present application.

Claims (10)

1. A method for detecting the infiltration depth of a stomach marker, comprising:
obtaining a stomach endoscope image of a target user with stomach markers under a plurality of different detection states, wherein the stomach endoscope image under each detection state corresponds to different stomach characteristics;
respectively identifying stomach endoscope images under each detection state to determine the infiltration depth information of the stomach markers of the target user under each detection state, so as to obtain a plurality of pieces of infiltration depth information;
acquiring the total area of the stomach endoscope image and the area of the lesion area under a preset detection state;
and determining the infiltration depth of the stomach marker of the target user according to the total area, the lesion area and the plurality of pieces of infiltration depth information.
2. The method for detecting the infiltration depth of the stomach marker according to claim 1, wherein the stomach endoscope images under the plurality of different detection states comprise a first stomach endoscope image under a first detection state, a second stomach endoscope image under a second detection state and a third stomach endoscope image under a third detection state, the stomach features comprise morphological features, color features and structural features, and the infiltration depth information comprises confidence information that the infiltration depth of the stomach marker corresponding to the stomach endoscope image is a first infiltration depth and a second infiltration depth, respectively;
the identifying the stomach endoscope images under each detection state respectively to determine the infiltration depth information of the stomach marker of the target user under each detection state, so as to obtain a plurality of pieces of infiltration depth information, comprises:
identifying the first stomach endoscope image according to the morphological features of the first stomach endoscope image, to obtain first confidence information that the infiltration depth of the stomach marker corresponding to the first stomach endoscope image is the first infiltration depth and second confidence information that it is the second infiltration depth;
identifying the second stomach endoscope image according to the color features of the second stomach endoscope image, to obtain third confidence information that the infiltration depth of the stomach marker corresponding to the second stomach endoscope image is the first infiltration depth and fourth confidence information that it is the second infiltration depth;
and identifying the third stomach endoscope image according to the structural features of the third stomach endoscope image, to obtain fifth confidence information that the infiltration depth of the stomach marker corresponding to the third stomach endoscope image is the first infiltration depth and sixth confidence information that it is the second infiltration depth.
3. The method for detecting the infiltration depth of the stomach marker according to claim 2, wherein the identifying the third stomach endoscope image according to the structural features of the third stomach endoscope image, to obtain fifth confidence information that the infiltration depth of the stomach marker corresponding to the third stomach endoscope image is the first infiltration depth and sixth confidence information that it is the second infiltration depth, comprises:
detecting the third stomach endoscope image and determining the lesion area in the third stomach endoscope image;
layering the stomach structure in the third stomach endoscope image to obtain the multiple membrane layers forming the stomach structure;
and identifying the third stomach endoscope image according to the positional relationship between the lesion area and the membrane layers, to obtain fifth confidence information that the infiltration depth of the stomach marker corresponding to the third stomach endoscope image is the first infiltration depth and sixth confidence information that it is the second infiltration depth.
4. The method of claim 3, wherein the first infiltration depth comprises the infiltration depth of intramucosal cancer, the second infiltration depth comprises the infiltration depth of submucosal cancer, and the multiple membrane layers comprise a mucosal layer, a mucosal muscle layer, a submucosal layer, an intrinsic muscle layer and a serosal layer;
the detecting the third stomach endoscope image to determine the lesion area in the third stomach endoscope image comprises:
inputting the third stomach endoscope image into a trained target detection model, so that the trained target detection model detects the third stomach endoscope image and determines the lesion area therein;
the trained target detection model is obtained by training on a training set consisting of stomach endoscope images and corresponding annotated images in which the lesion areas are labeled;
the layering the stomach structure in the third stomach endoscope image to obtain the multiple membrane layers forming the stomach structure comprises:
inputting the third stomach endoscope image into a trained image segmentation model, so that the trained image segmentation model segments the third stomach endoscope image according to the structural features of the third stomach endoscope image, to obtain the mucosal layer, the mucosal muscle layer, the submucosal layer, the intrinsic muscle layer and the serosal layer forming the stomach structure in the third stomach endoscope image;
the trained image segmentation model is obtained by training on a training set consisting of stomach endoscope images in the third detection state and corresponding annotated images in which the membrane layers of the stomach structure are labeled;
the identifying the third stomach endoscope image according to the positional relationship between the lesion area and the membrane layers, to obtain fifth confidence information that the infiltration depth of the stomach marker corresponding to the third stomach endoscope image is the first infiltration depth and sixth confidence information that it is the second infiltration depth, comprises:
inputting the third stomach endoscope image into a trained third stomach marker infiltration depth recognition model, so that the trained third stomach marker infiltration depth recognition model identifies the third stomach endoscope image according to the positional relationship in the third stomach endoscope image, to obtain fifth confidence information that the infiltration depth of the stomach marker corresponding to the third stomach endoscope image corresponds to intramucosal cancer and sixth confidence information that it corresponds to submucosal cancer;
the trained third stomach marker infiltration depth recognition model is obtained by training on a training set consisting of stomach endoscope images in the third detection state and the corresponding labeled infiltration depths.
5. The method for detecting the infiltration depth of the stomach marker according to claim 4, wherein the identifying the first stomach endoscope image according to the morphological features of the first stomach endoscope image, to obtain first confidence information that the infiltration depth of the stomach marker corresponding to the first stomach endoscope image is the first infiltration depth and second confidence information that it is the second infiltration depth, comprises:
inputting the first stomach endoscope image into a trained first stomach marker infiltration depth recognition model, so that the trained first stomach marker infiltration depth recognition model identifies the first stomach endoscope image according to the morphological features of the first stomach endoscope image, to obtain first confidence information that the infiltration depth of the stomach marker corresponding to the first stomach endoscope image corresponds to intramucosal cancer and second confidence information that it corresponds to submucosal cancer;
the trained first stomach marker infiltration depth recognition model is obtained by training on a training set consisting of stomach endoscope images in the first detection state and the corresponding labeled infiltration depths;
the identifying the second stomach endoscope image according to the color features of the second stomach endoscope image, to obtain third confidence information that the infiltration depth of the stomach marker corresponding to the second stomach endoscope image is the first infiltration depth and fourth confidence information that it is the second infiltration depth, comprises:
inputting the second stomach endoscope image into a trained second stomach marker infiltration depth recognition model, so that the trained second stomach marker infiltration depth recognition model identifies the second stomach endoscope image according to the color features of the second stomach endoscope image, to obtain third confidence information that the infiltration depth of the stomach marker corresponding to the second stomach endoscope image corresponds to intramucosal cancer and fourth confidence information that it corresponds to submucosal cancer;
the trained second stomach marker infiltration depth recognition model is obtained by training on a training set consisting of stomach endoscope images in the second detection state and the corresponding labeled infiltration depths.
6. The method according to claim 4, wherein the first detection state is a white light detection state, the second detection state is a light-variation detection state, the third detection state is an ultrasonic detection state, and the preset detection state is any one of the white light detection state, the light-variation detection state and the ultrasonic detection state;
the obtaining the total area of the stomach endoscope image and the lesion area under the preset detection state comprises the following steps:
determining the total area of the stomach endoscope image in the preset detection state according to the length and the width of the stomach endoscope image in the preset detection state;
invoking the trained target detection model, and detecting the stomach endoscope image in a preset detection state to determine a lesion area in the stomach endoscope image in the preset detection state;
and calculating the area of the lesion region according to the detected lesion region.
7. The method of claim 6, wherein determining the depth of infiltration of the stomach marker of the target user based on the total area, the lesion area, and the plurality of pieces of infiltration depth information comprises:
determining a ratio of the total area to the area of the lesion;
inputting the total area, the ratio, the first, third and fifth confidence information for intramucosal cancer, and the second, fourth and sixth confidence information for submucosal cancer into a trained stomach marker infiltration depth fitting model, so that the trained stomach marker infiltration depth fitting model fits the total area, the ratio and the six pieces of confidence information to obtain the infiltration depth of the stomach marker of the target user;
the trained stomach marker infiltration depth fitting model comprises a random forest and a decision tree.
8. An immersion depth detection device for a stomach marker, comprising:
a first acquisition module for acquiring a stomach endoscope image of a target user with stomach markers in a plurality of different detection states, wherein the stomach endoscope image in each detection state corresponds to different stomach characteristics;
the identification module is used for respectively identifying the stomach endoscope images under each detection state, so as to determine the infiltration depth information of the stomach marker of the target user under each detection state and obtain a plurality of pieces of infiltration depth information;
the second acquisition module is used for acquiring the total area of the stomach endoscope image and the lesion area under the preset detection state;
and the determining module is used for determining the infiltration depth of the stomach marker of the target user according to the total area, the lesion area and the plurality of pieces of infiltration depth information.
9. An electronic device, comprising a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, wherein the memory is coupled to the processor, and the processor, when executing the computer program, implements the steps in the method for detecting the infiltration depth of a stomach marker according to any one of claims 1 to 7.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program, wherein the computer program when run controls a device in which the computer readable storage medium is located to perform the steps of the method for detecting the infiltration depth of a stomach marker according to any one of claims 1 to 7.
CN202310503578.8A 2022-11-22 2023-04-28 Method, device and storage medium for detecting infiltration depth of stomach marker Active CN116596869B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211470623.6
CN202211470623.6A CN116109559A (en) 2022-11-22 2022-11-22 Method, device and storage medium for detecting infiltration depth of stomach marker

Publications (2)

Publication Number Publication Date
CN116596869A true CN116596869A (en) 2023-08-15
CN116596869B CN116596869B (en) 2024-03-05

Family

ID=86255085

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202211470623.6A Withdrawn CN116109559A (en) 2022-11-22 2022-11-22 Method, device and storage medium for detecting infiltration depth of stomach marker
CN202310503578.8A Active CN116596869B (en) 2022-11-22 2023-04-28 Method, device and storage medium for detecting infiltration depth of stomach marker

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202211470623.6A Withdrawn CN116109559A (en) 2022-11-22 2022-11-22 Method, device and storage medium for detecting infiltration depth of stomach marker

Country Status (1)

Country Link
CN (2) CN116109559A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019245009A1 (en) * 2018-06-22 2019-12-26 株式会社Aiメディカルサービス Method of assisting disease diagnosis based on endoscope image of digestive organ, diagnosis assistance system, diagnosis assistance program, and computer-readable recording medium having said diagnosis assistance program stored thereon
US20210023083A1 (en) * 2019-07-11 2021-01-28 The Board Of Trustees Of The Leland Stanford Junior University Diagnosis and regulation of epidermal differentiation and cancer cell activity
CN112270676A (en) * 2020-11-13 2021-01-26 上海理工大学 Computer-aided judgment method for endometrial cancer muscle layer infiltration depth of MRI (magnetic resonance imaging) image
CN112614128A (en) * 2020-12-31 2021-04-06 山东大学齐鲁医院 System and method for assisting biopsy under endoscope based on machine learning
CN113421272A (en) * 2021-06-22 2021-09-21 厦门理工学院 Method, device and equipment for monitoring tumor infiltration depth and storage medium
CN113610847A (en) * 2021-10-08 2021-11-05 武汉楚精灵医疗科技有限公司 Method and system for evaluating stomach markers in white light mode
CN113643291A (en) * 2021-10-14 2021-11-12 武汉大学 Method and device for determining esophagus marker infiltration depth grade and readable storage medium
CN113706533A (en) * 2021-10-28 2021-11-26 武汉大学 Image processing method, image processing device, computer equipment and storage medium
CN114078128A (en) * 2022-01-20 2022-02-22 武汉大学 Medical image processing method, device, terminal and storage medium
CN114998348A (en) * 2022-08-03 2022-09-02 南方医科大学南方医院 Computer-readable storage medium and colorectal cancer prognosis prediction model construction system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZIXIAO LU et al.: "Deep-Learning-Based Characterization of Tumor-Infiltrating Lymphocytes in Breast Cancers From Histopathology Images and Multiomics Data", JCO Clinical Cancer Informatics, vol. 4, pages 480-490, XP055904390, DOI: 10.1200/CCI.19.00126 *

Also Published As

Publication number Publication date
CN116109559A (en) 2023-05-12
CN116596869B (en) 2024-03-05

Similar Documents

Publication Publication Date Title
CN109886243B (en) Image processing method, device, storage medium, equipment and system
US10031218B2 (en) Method and apparatus for sensing fingerprints
CN110059744B (en) Method for training neural network, method and equipment for processing image and storage medium
WO2021135601A1 (en) Auxiliary photographing method and apparatus, terminal device, and storage medium
EP3876188A1 (en) Colon polyp image processing method and apparatus, and system
TWI679552B (en) Unlocking control method and mobile terminal
US20180314874A1 (en) Method For Displaying Fingerprint Identification Area And Mobile Terminal
CN107122760B (en) Fingerprint identification method and Related product
CN111462036A (en) Pathological image processing method based on deep learning, model training method and device
CN107480496A (en) Solve lock control method and Related product
JP2022546453A (en) FITNESS AID METHOD AND ELECTRONIC DEVICE
CN113919390A (en) Method for identifying touch operation and electronic equipment
CN111027490B (en) Face attribute identification method and device and storage medium
WO2019024718A1 (en) Anti-counterfeiting processing method, anti-counterfeiting processing apparatus and electronic device
WO2019001254A1 (en) Method for iris liveness detection and related product
CN111078108A (en) Screen display method and device, storage medium and mobile terminal
CN112288843A (en) Three-dimensional construction method and device of focus, terminal device and storage medium
CN107193474A (en) Solve lock control method and Related product
CN110517771B (en) Medical image processing method, medical image identification method and device
CN107194223A (en) Fingerprint recognition region display methods and Related product
CN105513098B (en) Image processing method and device
CN116596869B (en) Method, device and storage medium for detecting infiltration depth of stomach marker
CN115984228A (en) Gastroscope image processing method and device, electronic equipment and storage medium
CN113343195A (en) Electronic equipment and fingerprint unlocking method and fingerprint unlocking device thereof
CN115393323B (en) Target area obtaining method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant