CN117671573B - Helicobacter pylori infection state identification method and device based on gastroscope image - Google Patents
- Publication number: CN117671573B (application CN202410145987.XA)
- Authority: CN (China)
- Legal status: Active (the listed status is an assumption, not a legal conclusion)
Abstract
The invention belongs to the technical field of computer-aided medicine and relates to a method and device for identifying the Helicobacter pylori infection state based on gastroscope images. A gastroscope video stream is acquired and split into frames; a part recognition model obtains the part identification information of the current frame from the split video frames; when the part identification information meets a preset part condition, the gastric mucosa state of the current frame is judged through a range-sign recognition model and a focal-sign detection model; the key sign category of the current frame is determined based on the part identification information of the current frame and the gastric mucosa state; and the key sign categories of all gastroscope video frames of the video stream are aggregated to obtain the Helicobacter pylori infection state. The invention can determine whether the stomach is in the post-eradication state, accurately distinguish the different states of Hp infection, and assist physicians with endoscopic diagnosis.
Description
Technical Field
The invention belongs to the technical field of computer-aided medicine, and particularly relates to a method and a device for identifying the Helicobacter pylori infection state based on gastroscope images.
Background
Current methods for detecting Helicobacter pylori (Hp) fall into two main classes: noninvasive detection and invasive detection.
Among the invasive methods, the rapid urease test is prone to false negatives when gastrointestinal bleeding is present or gastric acid secretion is suppressed by a proton pump inhibitor; meanwhile, other urease-producing bacteria in the stomach, such as Klebsiella pneumoniae and Staphylococcus aureus, readily cause false positives. Histopathology and bacterial culture are likewise susceptible to false positives or false negatives depending on the operator's experience, the gastric biopsy site, and concomitant medication.
Because traditional methods identify the Hp infection state inaccurately and cannot determine whether the stomach is in the post-eradication state, how to accurately distinguish the different states of Hp infection and assist physicians with endoscopic diagnosis is an urgent problem. Moreover, no existing patent distinguishes the Helicobacter pylori infection state based on deep learning.
Disclosure of Invention
According to a first aspect of the present invention, the invention claims a method for identifying the Helicobacter pylori infection state based on gastroscope images, comprising:
acquiring a gastroscope video stream, splitting it into frames, and obtaining the part identification information of the current frame from the split gastroscope video frames through a part recognition model;
when the part identification information meets a preset part condition, judging the gastric mucosa state of the current frame through a range-sign recognition model and a focal-sign detection model;
determining the key sign category of the current frame based on the part identification information of the current frame and the gastric mucosa state;
and aggregating the key sign categories of all gastroscope video frames of the video stream to obtain the Helicobacter pylori infection state.
Further, the acquiring of a gastroscope video stream, splitting it into frames, and obtaining the part identification information of the current frame through the part recognition model further includes:
splitting the gastroscope video stream into a number of candidate gastroscope video frames;
scaling and normalizing the candidate frames and feeding them into a part classification network to obtain the part prediction result of the current frame;
and adding the part prediction result to a voting window: when the part category occurring most often in the voting window agrees with the part prediction result, the prediction is output as the part identification information of the current frame; otherwise the part identification information of the current frame is invalid.
Further, when the part identification information meets the preset part condition, judging the gastric mucosa state of the current frame through the range-sign recognition model and the focal-sign detection model further includes:
removing invalid and non-stomach images according to the part identification information, and starting the range-sign recognition model and the focal-sign detection model;
inputting the qualifying current frame into the range-sign recognition model and the focal-sign detection model to obtain the range-sign and focal-sign recognition results of the current frame;
and integrating the range-sign and focal-sign recognition results to obtain the gastric mucosa state of the current frame.
Further, determining the key sign category of the current frame based on the part identification information of the current frame and the gastric mucosa state further includes:
determining the key sign category of the current frame when the gastric mucosa state of the current frame meets a preset sign condition and the part identification information meets a preset part condition.
Further, aggregating the key sign categories of all gastroscope video frames of the video stream to obtain the Helicobacter pylori infection state further includes:
ending the current gastroscopic mucosa state recognition when the part recognition model predicts the outside-the-body (in vitro) category for a first number of consecutive video frames, and then aggregating the key sign categories of the gastroscope video frames of the video stream.
Further, inputting the qualifying current frame into the range-sign recognition model to obtain the range-sign recognition result of the current frame further includes:
the recognition categories of the range-sign recognition model comprise regular arrangement of collecting venules (RAC), diffuse redness, mucosal swelling, intestinal metaplasia, map-like redness, punctate redness, chicken-skin-like mucosa, coarse serpentine folds, white turbid mucus, barnacle-like sign, atrophy, and an "other" category;
the range-sign recognition model adopts the MobileOne classification model, structurally comprising a first to sixth MobileOne module, an average pooling layer, a linear layer, and a softmax layer;
the output features of the first, second, and third MobileOne modules progressively extract low-level features such as edges, color changes, and texture;
the output feature map of the fourth MobileOne module associates neighboring-region features into combinations of texture and shape, i.e., mid-level features;
the output features of the fifth and sixth MobileOne modules increasingly focus on high-level abstract features, including semantic features such as visible gastric mucosal vessels and the chicken-skin-like swollen mucosa state;
each MobileOne module has a training-stage structure and an inference-stage structure;
the training stage consists of several re-parameterizable branches, which are converted into a single-branch structure by an equivalent re-parameterization transformation;
in the training stage, the MobileOne module consists of a depthwise convolution module and a pointwise convolution module;
the depthwise convolution module consists of three kinds of branches: a 1×1 depthwise convolution, k 3×3 depthwise convolutions, and a BN layer; the pointwise convolution module consists of k 1×1 convolution branches and a BN branch;
in the inference stage, each MobileOne module is re-parameterized into a single-branch 3×3 depthwise convolution and a 1×1 convolution;
and inputting the qualifying current frame into the range-sign recognition model and outputting the confidence of each of the n range signs yields the range-sign recognition result of the current frame.
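For illustration, the branch fusion described above can be verified numerically on a single depthwise channel. The sketch below is a minimal, hypothetical numpy reconstruction (branch count k = 2 and all BN parameters are arbitrary): each training-time branch's BN is folded into its kernel, the 1×1 and BN-only (identity) branches are embedded into 3×3 kernels, and the kernels and biases are summed into one equivalent single-branch convolution.

```python
import numpy as np

def conv2d_same(x, k, b=0.0):
    """Single-channel 'same' convolution, stride 1 (one depthwise channel)."""
    r = k.shape[0] // 2
    xp = np.pad(x, r)
    out = np.empty(x.shape)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k.shape[0], j:j + k.shape[1]] * k) + b
    return out

def bn(y, p, eps=1e-5):
    """Batch norm with fixed statistics; p = (gamma, beta, mean, var)."""
    g, beta, mu, var = p
    return g * (y - mu) / np.sqrt(var + eps) + beta

def fuse(k, p, eps=1e-5):
    """Fold a BN into its conv kernel and embed the result in a 3x3 kernel."""
    g, beta, mu, var = p
    scale = g / np.sqrt(var + eps)
    k3 = np.zeros((3, 3))
    off = (3 - k.shape[0]) // 2
    k3[off:off + k.shape[0], off:off + k.shape[1]] = k * scale
    return k3, beta - mu * scale

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 6))
eye = np.zeros((3, 3)); eye[1, 1] = 1.0   # identity kernel: the BN-only branch

# k = 2 over-parameterised 3x3 branches, one 1x1 branch, one BN-only branch,
# each followed by its own BN (all parameters arbitrary for the demo)
branches = [(rng.standard_normal((3, 3)), rng.uniform(0.5, 1.5, 4)) for _ in range(2)]
branches += [(rng.standard_normal((1, 1)), rng.uniform(0.5, 1.5, 4)),
             (eye, rng.uniform(0.5, 1.5, 4))]

y_train = sum(bn(conv2d_same(x, k), p) for k, p in branches)   # multi-branch forward

fused = [fuse(k, p) for k, p in branches]
K = sum(k for k, _ in fused)          # one equivalent 3x3 kernel
B = sum(b for _, b in fused)          # one equivalent bias
y_infer = conv2d_same(x, K, B)        # single-branch forward

assert np.allclose(y_train, y_infer)  # training and inference paths agree
```

Because convolution and BN (with fixed statistics) are both affine in the input, the sum of branches collapses exactly into one kernel and bias, which is why the inference-stage structure loses no accuracy.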
Further, inputting the qualifying current frame into the focal-sign detection model obtains the focal-sign recognition result of the current frame;
the recognition categories of the focal-sign detection model comprise scratch marks, fundic gland polyps, hyperplastic polyps, and xanthomas;
the focal-sign detection model adopts the RTMDet instance segmentation model, structurally comprising: the first to fifth MobileOne modules of a backbone shared with the range-sign recognition model, a neck with a PAFPN structure, and a detection head;
the feature map generated by the fifth MobileOne module is upsampled and concatenated (Concat) with the output of the fourth MobileOne module to obtain a second spliced feature P';
the Concat operation is channel-wise concatenation of feature maps;
the second spliced feature P' is upsampled and concatenated with the feature map generated by the third MobileOne module to obtain a third spliced feature P3;
the third spliced feature P3 is convolved with stride 2 and concatenated with the second spliced feature P' to obtain a fourth spliced feature P4;
the fourth spliced feature P4 is convolved with stride 2 and concatenated with the feature map generated by the fifth MobileOne module to obtain a fifth spliced feature P5;
and the third, fourth, and fifth spliced features P3, P4, and P5 are input into the detection head to predict target positions and categories.
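For illustration, the shape flow of the spliced features can be traced with placeholder arrays. The sketch below is a deliberate simplification: nearest-neighbour upsampling stands in for the real upsampling, strided slicing stands in for the stride-2 convolution, and the channel counts of the MobileOne stages are invented; in the actual RTMDet neck, convolution blocks after each Concat would also change the channel counts.

```python
import numpy as np

def up2(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def down2(x):
    """Stand-in for the stride-2 convolution: keeps channels, halves H and W."""
    return x[:, ::2, ::2]

def concat(a, b):
    """The Concat operation: channel-wise concatenation of feature maps."""
    return np.concatenate([a, b], axis=0)

# Backbone outputs of the third to fifth MobileOne modules
# (channel counts and resolutions are illustrative only)
C3 = np.zeros((64, 80, 80))
C4 = np.zeros((128, 40, 40))
C5 = np.zeros((256, 20, 20))

P_prime = concat(up2(C5), C4)     # second spliced feature P'
P3 = concat(up2(P_prime), C3)     # third spliced feature
P4 = concat(down2(P3), P_prime)   # fourth spliced feature
P5 = concat(down2(P4), C5)        # fifth spliced feature
# P3, P4, P5 would then be fed to the detection head
```

Running this traces the top-down then bottom-up fusion: P' lands at stride 16, P3 at stride 8, P4 at stride 16, and P5 at stride 32, matching the multi-scale inputs the detection head expects.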
Further, aggregating the key sign categories of the gastroscope video frames of the video stream to obtain the Helicobacter pylori infection state further includes:
counting the key gastroscopic signs; when a sign unique to Helicobacter pylori infection appears, the current Helicobacter pylori state is judged as infected; otherwise it is further judged whether signs shared by infection and the post-eradication state are present.
Further, the method further comprises:
if signs shared by Helicobacter pylori infection and the post-eradication state appear, accumulating a Helicobacter pylori infection score and a post-eradication score, with each sign counted only once, and finally deciding between Helicobacter pylori infection and post-eradication by the scores;
the key sign scores are dynamically adjusted according to how often the corresponding state occurs in actual gastroscopy; if no shared sign exists, it is judged whether map-like redness, the sign unique to the post-eradication state, is present: if so, the state is post-eradication, otherwise Helicobacter pylori infection.
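For illustration, the decision flow of this and the preceding paragraph can be sketched as follows. The sign sets, score weights, and tie handling below are placeholders rather than the patent's values, the dynamic score adjustment is omitted, and the final default branch follows the text as translated.

```python
# Illustrative sign sets and weights; not the patent's actual values.
UNIQUE_INFECTED = {"diffuse redness", "mucosal swelling", "chicken-skin-like mucosa"}
SHARED = {                      # sign -> (infection score, post-eradication score)
    "intestinal metaplasia": (1, 1),
    "atrophy": (1, 2),
    "xanthoma": (1, 2),
}
ERADICATION_SIGN = "map-like redness"   # sign unique to the post-eradication state

def classify_hp_state(signs):
    """signs: set of key sign categories seen anywhere in the video.
    Using a set means each sign is scored only once, as required."""
    if signs & UNIQUE_INFECTED:
        return "infected"
    shared = signs & SHARED.keys()
    if shared:
        infect = sum(SHARED[s][0] for s in shared)
        erad = sum(SHARED[s][1] for s in shared)
        # tie rule assumed: a tie is resolved toward infection
        return "infected" if infect >= erad else "post-eradication"
    # no shared sign: fall back to the unique post-eradication sign;
    # the final default follows the translated text (an Hp-uninfected
    # outcome, e.g. on RAC, is outside this excerpt)
    return "post-eradication" if ERADICATION_SIGN in signs else "infected"
```

A set-based accumulator makes the "each sign counted only once" rule automatic, no matter how many frames show the same sign.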
According to a second aspect of the present invention, the invention claims a gastroscope-image-based Helicobacter pylori infection state identification device, comprising:
a part recognition module for acquiring a gastroscope video stream, splitting it into frames, and obtaining the part identification information of the current frame from the split video frames through the part recognition model;
a state recognition module for judging the gastric mucosa state of the current frame through the range-sign recognition model and the focal-sign detection model when the part identification information meets the preset part condition;
a sign recognition module for determining the key sign category of the current frame based on the part identification information of the current frame and the gastric mucosa state;
an infection state recognition module for aggregating the key sign categories of the gastroscope video frames of the video stream to obtain the Helicobacter pylori infection state;
the device is used for executing the above gastroscope-image-based Helicobacter pylori infection state identification method.
Drawings
FIG. 1 is a flowchart of the gastroscope-image-based Helicobacter pylori infection state identification method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of the sign recognition model categories of the method according to an embodiment of the invention;
FIG. 3 is a schematic diagram of the MobileOne module of the method according to an embodiment of the invention;
FIG. 4 is a schematic diagram of feature extraction in the range-sign recognition model of the method according to an embodiment of the invention;
FIG. 5 is a schematic diagram of the focal-sign detection model of the method according to an embodiment of the invention;
FIG. 6 is a schematic diagram of feature extraction in the focal-sign detection model of the method according to an embodiment of the invention;
FIG. 7 is a schematic diagram of the detection head structure of the method according to an embodiment of the invention;
FIG. 8 is a flowchart of the Hp state discrimination of the method according to an embodiment of the invention;
FIG. 9 is a block diagram of the gastroscope-image-based Helicobacter pylori infection state identification device according to an embodiment of the invention.
Detailed Description
Current methods for detecting Helicobacter pylori (Hp) fall into two main classes: noninvasive detection and invasive detection. Noninvasive methods do not require obtaining gastric mucosal tissue through an endoscope or otherwise, and mainly include the following three:
Urea breath test: exploiting the urease secreted by Hp, infection is judged by ingesting isotope-labelled urea and measuring isotope-labelled carbon dioxide in the exhaled breath.
Serological testing: the serum antibody level against Hp is measured from a blood draw to judge infection.
Stool antigen test: infection is judged by detecting Hp antigen in feces.
Invasive detection refers to methods that require gastric mucosal tissue, obtained endoscopically or otherwise, for direct or indirect testing. It mainly includes the following three:
Rapid urease test: exploiting the urease secreted by Hp, gastric mucosal tissue is placed in a reagent containing urea and a pH indicator, and infection is judged from the color change.
Histopathological examination: gastric mucosal tissue is stained and examined under the microscope to find the morphology and distribution of Hp directly.
Bacterial culture: bacteria from gastric mucosal tissue are cultured on a specific medium, and Hp is directly isolated and identified to judge infection.
According to the Kyoto Classification of Gastritis (Kato Mototsugu et al.; Chinese edition: Liaoning Science and Technology Press, June 2018), the Hp status is divided into three states: Hp-uninfected, Hp-infected, and post-eradication. In the four-factor theory of gastric cancer, the Hp status is one of the important factors affecting the occurrence and development of gastric cancer; it interacts differently with the background mucosa, gross morphology, and histological type, producing different gastric cancer presentations. With traditional detection methods, however, it is difficult to determine whether the stomach is in the post-eradication state.
According to a first embodiment of the present invention, referring to FIG. 1, the invention claims a method for identifying the Helicobacter pylori infection state based on gastroscope images, comprising:
acquiring a gastroscope video stream, splitting it into frames, and obtaining the part identification information of the current frame from the split gastroscope video frames through a part recognition model;
when the part identification information meets a preset part condition, judging the gastric mucosa state of the current frame through a range-sign recognition model and a focal-sign detection model;
determining the key sign category of the current frame based on the part identification information of the current frame and the gastric mucosa state;
and aggregating the key sign categories of all gastroscope video frames of the video stream to obtain the Helicobacter pylori infection state.
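For illustration, the four steps can be sketched end to end as follows; the models, class indices, and rules are stand-ins rather than the trained networks of the embodiment.

```python
# Assumed per Table 1: categories 4-10 are gastric parts; other indices are
# invalid, non-stomach, or outside-the-body frames.
STOMACH_PARTS = set(range(4, 11))

def identify_hp_state(frames, part_model, range_model, focal_model,
                      is_key_sign, decide):
    """End-to-end flow: part recognition -> gating -> mucosa state ->
    key sign accumulation -> final decision over the whole video."""
    key_signs = set()
    for frame in frames:
        part = part_model(frame)                          # step 1: part recognition
        if part not in STOMACH_PARTS:                     # step 2 gate: drop invalid/non-stomach
            continue
        mucosa_state = range_model(frame) | focal_model(frame)   # step 2: mucosa state
        key_signs |= {s for s in mucosa_state if is_key_sign(part, s)}  # step 3
    return decide(key_signs)                              # step 4: aggregate

# Toy run with dummy components standing in for the trained models
frames = ["f1", "f2", "f3"]
state = identify_hp_state(
    frames,
    part_model=lambda f: 5,                 # every frame classified as a gastric part
    range_model=lambda f: {"diffuse redness"},
    focal_model=lambda f: set(),
    is_key_sign=lambda part, sign: True,    # placeholder for the Table 3 rules
    decide=lambda signs: "infected" if "diffuse redness" in signs else "unknown",
)
```

The structure mirrors the four claimed steps while leaving every model and rule pluggable.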
In this embodiment, the part recognition model is an image classification network with the 12 categories shown in Table 1, in which categories 4-10 cover the whole stomach, categories 4-8 the gastric body, and the invalid category denotes frames whose gastroscopic detail cannot be recognized because of flushing, rapid motion, overexposure, or similar causes.
Table 1 part class table
Further, acquiring a gastroscope video stream, splitting it into frames, and obtaining the part identification information of the current frame through the part recognition model further includes:
splitting the gastroscope video stream into a number of candidate gastroscope video frames;
scaling and normalizing the candidate frames and feeding them into the part classification network to obtain the part prediction result of the current frame;
and adding the part prediction result to a voting window: when the part category occurring most often in the voting window agrees with the part prediction result, the prediction is output as the part identification information of the current frame; otherwise the part identification information of the current frame is invalid.
In this embodiment, the part classification network uses MobileNetV2. Image preprocessing, i.e., cropping, scaling, and normalization, produces an input of dimension 224×224×3. The detailed MobileNetV2 network structure is shown in Table 2. The network outputs a vector of length n, where n is the 12 part categories, giving the confidence of each part category; the node with the largest output is the category recognized by the part classification network.
TABLE 2 MobileNet V2 network architecture
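For illustration, the preprocessing and classification-output handling can be sketched as follows; the crop and resize method, the normalization statistics (ImageNet values), and the random logits standing in for MobileNetV2's output are assumptions.

```python
import numpy as np

PART_CLASSES = 12  # Table 1: 12 part categories

def preprocess(frame):
    """Center-crop to square, nearest-neighbour resize to 224x224, scale to
    [0, 1] and normalise per channel (mean/std here are ImageNet values,
    assumed rather than taken from the patent)."""
    h, w, _ = frame.shape
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    crop = frame[top:top + s, left:left + s]
    idx = np.arange(224) * s // 224                 # nearest-neighbour sampling
    resized = crop[idx][:, idx].astype(np.float32) / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    return (resized - mean) / std                   # shape (224, 224, 3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # dummy frame
x = preprocess(frame)                     # ready for the part classification network
logits = np.random.randn(PART_CLASSES)    # stand-in for the network's output vector
part = int(np.argmax(softmax(logits)))    # largest output node = predicted category
```

The softmax turns the length-12 output vector into per-category confidences, and the argmax picks the recognized part, exactly as described for the real network.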
The voting window is a queue of size 10 that stores the last 10 results of the part classification network. Its input is the part result of each of the last 10 frames; its output is the part with the highest count among those 10. Taking the voting window Q = [1,2,5,3,4,5,5,5,5,5] as an example, Q stores the part categories of the last 10 frames, and category 5, the cardia, occurs most often in the window. When the category occurring most often in the voting window agrees with the part classification network's prediction for the current frame, that prediction is output as the current frame's predicted part; otherwise the current frame's predicted part is the invalid category. This further ensures the accuracy of part recognition.
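For illustration, the voting window can be sketched directly from this description; the invalid class index is assumed to be 0.

```python
from collections import Counter, deque

INVALID = 0   # assumed index of the invalid category

def make_voting_window(size=10):
    """Fixed-size queue of the most recent part classification results."""
    return deque(maxlen=size)

def vote(window, prediction):
    """Add the current frame's predicted part; accept it only when it matches
    the most frequent part in the window, otherwise report the invalid class."""
    window.append(prediction)
    majority, _ = Counter(window).most_common(1)[0]
    return prediction if prediction == majority else INVALID

w = make_voting_window()
for p in [1, 2, 5, 3, 4, 5, 5, 5, 5]:   # earlier frames of the example window
    vote(w, p)
accepted = vote(w, 5)   # agrees with the majority class (5, the cardia)
rejected = vote(w, 2)   # disagrees with the majority, so reported invalid
```

The `deque(maxlen=10)` automatically evicts the oldest result, so the window always reflects exactly the last 10 frames.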
Further, when the part identification information meets the preset part condition, judging the gastric mucosa state of the current frame through the range-sign recognition model and the focal-sign detection model further includes:
removing invalid and non-stomach images according to the part identification information, and starting the range-sign recognition model and the focal-sign detection model;
inputting the qualifying current frame into the range-sign recognition model and the focal-sign detection model to obtain the range-sign and focal-sign recognition results of the current frame;
and integrating the range-sign and focal-sign recognition results to obtain the gastric mucosa state of the current frame.
Further, determining the key sign category of the current frame based on the part identification information of the current frame and the gastric mucosa state further includes:
determining the key sign category of the current frame when the gastric mucosa state of the current frame meets the preset sign condition and the part identification information meets the preset part condition.
In this embodiment, the key signs under the gastroscope are selected according to the gastroscopic manifestations under different Hp states, as shown in Table 3.
Table 3. Key gastroscopic sign table
Further, counting the key symptom categories of each gastroscope video frame of the gastroscope video stream image, and when the infection state of helicobacter pylori is obtained, further comprising:
and when the predicted parts of a first number of consecutive video frames from the part recognition model are outside the body, ending the current gastroscope mucosa state recognition and counting the key symptom categories of each gastroscope video frame of the gastroscope video stream image.
Referring to fig. 2, the categories of the range feature recognition model comprise 12 gastric mucosa states: RAC (regular arrangement of collecting venules), diffuse redness, mucosal swelling, intestinal metaplasia, map-like redness, spotty redness, chicken-skin-like appearance, enlarged tortuous folds, white turbid mucus, barnacle-like sign, atrophy, and an "other" category (comprising normal gastric mucosa and gastric images other than the above gastroscopic manifestations). The focal feature detection model is used to detect 4 categories, namely scratch sign, fundic gland polyps, hyperplastic polyps and xanthomas, where a box marks the target position of the focal feature.
Further, inputting the current frame meeting the conditions into the range feature recognition model to obtain the range feature recognition result of the current frame further comprises the following steps:
the identification categories of the range feature recognition model include RAC, diffuse redness, mucosal swelling, intestinal metaplasia, map-like redness, spotty redness, chicken-skin-like appearance, enlarged tortuous folds, white turbid mucus, barnacle-like sign, atrophy, and other categories;
the range feature recognition model adopts a MobileOne classification model and structurally comprises a first MobileOne module, a second MobileOne module, a third MobileOne module, a fourth MobileOne module, a fifth MobileOne module, a sixth MobileOne module, an average pooling layer, a linear layer and a softmax layer;
the output features of the first, second and third MobileOne modules gradually extract bottom-layer features such as edges, color changes and textures;
adjacent-region features are associated in the output feature map of the fourth MobileOne module to form combinations of texture and shape, namely middle-layer features;
the output features of the fifth and sixth MobileOne modules gradually focus on higher-level abstract features, such as high-level semantic features of gastric mucosa states like see-through mucosal vessels and chicken-skin-like bulges;
each MobileOne module has a training-phase structure and an inference-phase structure;
the training phase consists of a plurality of reparameterizable branches, which are converted into a single-branch structure by reparameterization equivalent transformation in the inference phase;
in the training phase, the MobileOne module consists of a depthwise convolution module and a pointwise convolution module;
the depthwise convolution module consists of three branches, namely a 1×1 depthwise convolution, k 3×3 depthwise convolution branches and a BN layer, and the pointwise convolution module consists of k 1×1 convolution branches and a BN branch;
in the inference phase, each MobileOne module is reparameterized into a single-branch 3×3 depthwise convolution and a single-branch 1×1 convolution;
and inputting the current frame meeting the conditions into the range feature recognition model and outputting the respective confidences of the n range features to obtain the range feature recognition result of the current frame.
In this embodiment, the range feature recognition model adopts a MobileOne classification model composed of a plurality of MobileOne modules; the network structure is shown in Table 4. Specifically, after image preprocessing, namely cropping, scaling and normalization, the input to the range feature recognition model has dimensions 448×448×3. The MobileOne classification model consists of 6 stages of MobileOne modules and finally outputs the respective confidences of the n range features through an average pooling layer, a linear layer and a softmax operation, where n is 12, i.e. the 12 range feature categories.
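A sketch of the preprocessing step, assuming a centre crop, nearest-neighbour resizing, and ImageNet-style normalization constants (the description only states that cropping, scaling and normalization are applied, so these specific choices are assumptions):

```python
import numpy as np

def preprocess(frame, size=448,
               mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)):
    """Crop, scale and normalize a video frame to the 448x448x3 model input."""
    h, w = frame.shape[:2]
    side = min(h, w)                              # centre crop to a square
    top, left = (h - side) // 2, (w - side) // 2
    crop = frame[top:top + side, left:left + side]
    idx = np.arange(size) * side // size          # nearest-neighbour resize
    resized = crop[idx][:, idx].astype(np.float32) / 255.0
    return (resized - np.array(mean)) / np.array(std)
```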
Table 4 MobileOne model network structure table
As shown in fig. 3, the MobileOne module has two structures, one for the training phase and one for the inference phase. The training phase consists of a plurality of reparameterizable branches, which are equivalently transformed into a single-branch structure in the inference phase, improving inference speed and reducing memory access cost; dw convolution denotes depthwise convolution, and relu is the ReLU activation function. Specifically, in the training phase the MobileOne module consists of a depthwise convolution module and a pointwise convolution module, where the depthwise convolution module consists of three branches (a 1×1 depthwise convolution, k 3×3 depthwise convolution branches, and a BN layer) and the pointwise convolution module consists of k 1×1 convolution branches and a BN branch. In the inference phase, the MobileOne module can be reparameterized into a single-branch 3×3 depthwise convolution and a single-branch 1×1 convolution.
The brief derivation is as follows. First, each convolution operation is fused with its BN layer. The convolution operation is expressed as Conv(x) = W∗x + b, where W, b and x are the convolution weights, bias and input, respectively. The BN layer operation is expressed as BN(x) = γ·(x − μ)/σ + β, where μ and σ are the mean and standard deviation and γ and β are two learnable parameters. Substituting the convolution into the BN formula gives

BN(Conv(x)) = (γW/σ)∗x + γ(b − μ)/σ + β.

This can again be regarded as a convolution operation, and the fused convolution weights W′ and bias b′ can be respectively expressed as

W′ = (γ/σ)·W,  b′ = γ(b − μ)/σ + β.

Similarly, the 1×1 depthwise convolution branch and the BN-layer branch in the MobileOne module can each be converted into an equivalent 3×3 depthwise convolution, by zero-padding the 1×1 kernel and by inserting an identity convolution, respectively. The combined output y of the three branches of the depthwise convolution module in the training phase can then be expressed as

y = W′₁∗x + b′₁ + W′₂∗x + b′₂ + W′₃∗x + b′₃ = (W′₁ + W′₂ + W′₃)∗x + (b′₁ + b′₂ + b′₃),

where W′ᵢ and b′ᵢ (i = 1, 2, 3) are the fused convolution weights and biases of the three branches. Therefore, the multi-branch structure of the training phase can be equivalently transformed into a single-branch 3×3 depthwise convolution, whose inference-phase convolution weight W_rep and bias b_rep can be respectively expressed as

W_rep = W′₁ + W′₂ + W′₃,  b_rep = b′₁ + b′₂ + b′₃.
Similarly, it can be shown that the multi-branch structure of the pointwise convolution module can be equivalently reparameterized into a single-branch 1×1 convolution.
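The conv-BN fusion and branch summation above can be sketched numerically as follows (the out-channels × in-channels × kH × kW tensor layout is an assumption):

```python
import numpy as np

def fuse_conv_bn(W, b, gamma, beta, mu, sigma):
    """Fold a BN layer (gamma, beta, mu, sigma) into the preceding conv (W, b)."""
    scale = gamma / sigma
    W_fused = W * scale[:, None, None, None]      # scale each output channel
    b_fused = (b - mu) * scale + beta
    return W_fused, b_fused

def pad_1x1_to_3x3(W):
    """Zero-pad a 1x1 kernel into the centre of an equivalent 3x3 kernel."""
    return np.pad(W, ((0, 0), (0, 0), (1, 1), (1, 1)))

def reparameterize(branches):
    """Sum the fused (W, b) of all branches into a single-branch convolution."""
    W_rep = sum(W for W, _ in branches)
    b_rep = sum(b for _, b in branches)
    return W_rep, b_rep
```

The branch summation is valid because convolution is linear in its weights and bias, which is exactly what the derivation above exploits.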
The feature extraction flow of the range feature recognition model is shown in fig. 4, which visualizes the feature maps of stages C2, C4 and C6, taking the chicken-skin-like category as an example. As can be seen from fig. 4, in the shallow features output by C2 the model mainly extracts bottom-layer features such as edges and textures; in the feature map output by C4, adjacent-region features are gradually associated into middle-layer features combining texture and shape; and in the output features of C6 the model focuses on high-level semantic features, attending to the positions of the chicken-skin-like protrusions, i.e. the dense, uniform, small granular protrusion features of the chicken-skin-like category, so as to output the chicken-skin-like class.
Further, inputting the current frame meeting the conditions into the focal feature detection model to obtain the focal feature recognition result of the current frame;
the identification categories of the focal feature detection model comprise scratch sign, fundic gland polyps, hyperplastic polyps and xanthomas;
the focal feature detection model adopts an RTMDet instance segmentation model and structurally comprises: the first, second, third, fourth and fifth MobileOne modules of a backbone network shared with the range feature recognition model, a neck layer adopting the PAFPN structure, and a detection head;
the feature map generated by the fifth MobileOne module is upsampled and concatenated (Concat) with the feature map of the fourth MobileOne module to obtain a second splicing feature P';
the Concat operation is a channel-wise concatenation of feature maps;
the second splicing feature P' is upsampled and then concatenated with the feature map generated by the third MobileOne module to obtain a third splicing feature P3;
the third splicing feature P3 is convolved with stride 2 and then concatenated with the second splicing feature P' to obtain a fourth splicing feature P4;
the fourth splicing feature P4 is convolved with stride 2 and then concatenated with the feature map generated by the fifth MobileOne module to obtain a fifth splicing feature P5;
and the third splicing feature P3, the fourth splicing feature P4 and the fifth splicing feature P5 are input into the detection head to predict the target position and category.
In this embodiment, the structure and weights of C1-C5 in Table 4 are shared with the range feature recognition model, and C3-C5 in fig. 5 are the same as C3-C5 in Table 4. That is, the RTMDet model takes 448×448 images as input, extracts multi-scale feature maps through the MobileOne network, and feeds the feature maps obtained by C3-C5 into the PAFPN of RTMDet.
The feature extraction process of the focal feature detection model is shown in fig. 6, taking the fundic gland polyp category as an example. In the shallow network stage C3, the model mainly extracts bottom-layer features of the focal sign such as edges and textures; in stage C5, pixel information of adjacent areas is further associated to preliminarily form high-level semantic features, and the model gradually focuses on the lesion area; multi-scale feature fusion is performed in the neck layer, where the boundaries of the focal sign begin to stand out; and finally, the fundic gland polyp region of the focal sign is accurately localized at the output layer.
The neck layer of RTMDet adopts the PAFPN structure. Compared with FPN, PAFPN adds a bottom-up path to fuse high-level features with bottom-level features, which strengthens feature expression. Specifically, as shown in fig. 5, in PAFPN the feature map P' is obtained by upsampling the feature map generated by C5 and concatenating (Concat) it with the feature map generated by C4, where the Concat operation is channel-wise concatenation; P3 is obtained by upsampling P' and concatenating it with the feature map of C3; P4 is obtained by convolving P3 with stride 2 and concatenating the result with P'; and P5 is obtained by convolving P4 with stride 2 and concatenating the result with the feature map generated by C5. The dimensions of the output feature maps P3, P4 and P5 are then 56×56×256, 28×28×512 and 14×14×1024, respectively.
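The fusion path can be sketched at shape level as follows. The 1×1 channel-adjusting convolutions and the stride-2 convolutions of the real PAFPN are replaced here by plain subsampling and concatenation, so the channel counts will not match the 256/512/1024 values stated above; the sketch only illustrates the top-down/bottom-up spatial flow:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of an HxWxC feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def downsample2x(x):
    """Stride-2 subsampling, standing in for the stride-2 convolution."""
    return x[::2, ::2]

def pafpn(c3, c4, c5):
    """Shape-level sketch of the PAFPN fusion of backbone features C3-C5."""
    p_prime = np.concatenate([upsample2x(c5), c4], axis=-1)    # P'
    p3 = np.concatenate([upsample2x(p_prime), c3], axis=-1)    # top-down path
    p4 = np.concatenate([downsample2x(p3), p_prime], axis=-1)  # bottom-up path
    p5 = np.concatenate([downsample2x(p4), c5], axis=-1)
    return p3, p4, p5
```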
The detection head receives P3, P4 and P5 and predicts the target position and category; its detailed structure is shown in fig. 7. The detection head mainly comprises a classification branch and a boundary-regression branch, each of which applies two 3×3 convolutions followed by one 1×1 convolution. The classification branch outputs a tensor of size H×W×C, where C = 4 is the number of focal feature categories and H and W are the height and width of the feature map at that scale. The boundary-regression branch outputs a tensor of size H×W×4, where 4 is the number of target bounding-box parameters. In particular, in the SepBN module the convolution weights are shared between the different layers, but the BN layers are computed independently. Finally, the outputs of the three detection heads are integrated to obtain the target category and position.
Further, statistics of the key symptom categories of each gastroscope video frame of the gastroscope video stream image is carried out to obtain the infection status of helicobacter pylori, and the method further comprises the following steps:
counting the key gastroscopic signs; when a sign unique to helicobacter pylori infection appears, the current helicobacter pylori state is considered infected; otherwise, continuing to judge whether signs shared by helicobacter pylori infection and the post-eradication state appear.
Further, the method further comprises the following steps:
if signs shared by helicobacter pylori infection and the post-eradication state appear, counting the helicobacter pylori infection score and the post-eradication score, accumulating each sign only once, and finally determining from the higher score whether the state is helicobacter pylori infection or post-eradication;
and dynamically adjusting the key sign scores according to the frequency of occurrence of the corresponding state in actual gastroscopy; if no shared sign exists, judging whether map-like redness, the sign unique to the post-eradication state, exists; if it exists, the state is post-eradication, otherwise there is no helicobacter pylori infection.
In this embodiment, when the part recognition model predicts the "outside the body" class for 60 consecutive frames, the current gastroscope mucosa state recognition ends and the key sign data of the current gastroscopy are counted. The Hp state discrimination flow is shown in fig. 8. Specifically, the key gastroscopic signs are counted; when a sign unique to Hp infection appears, the current Hp state is considered Hp-infected; otherwise, it is judged whether signs shared by Hp infection and the post-eradication state appear. The classification of signs unique to Hp infection and signs shared between Hp infection and post-eradication is shown in Table 5 (Yes/No). If shared signs appear, the Hp infection score and the post-eradication score are counted according to Table 5, each sign being accumulated only once, and the higher score finally determines whether the state is Hp infection or post-eradication. The scores in Table 5 may be dynamically adjusted according to the frequency of occurrence of the corresponding state in actual gastroscopy. If no shared sign exists, it is judged whether map-like redness, the sign unique to the post-eradication state, exists; if so, the state is post-eradication, otherwise there is no Hp infection.
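The decision flow of fig. 8 can be sketched as follows; the sign names, score values, and the tie-break toward infection are illustrative assumptions standing in for Table 5, whose real assignments are dynamically adjusted:

```python
# Illustrative sign sets and scores; the real values come from Table 5.
INFECTION_UNIQUE = {"diffuse redness", "mucosal swelling", "sticky mucus"}
POST_ERADICATION_UNIQUE = {"map-like redness"}
SHARED_SCORES = {  # sign -> (Hp-infection score, post-eradication score)
    "atrophy": (2, 1),
    "intestinal metaplasia": (1, 2),
    "RAC": (0, 1),
}

def hp_state(observed_signs):
    """Classify the Hp state from the set of key signs counted over one video."""
    if observed_signs & INFECTION_UNIQUE:
        return "Hp infected"
    shared = observed_signs & SHARED_SCORES.keys()
    if shared:
        # Each shared sign is accumulated only once; the higher total wins
        # (breaking ties toward infection is an assumption).
        infection = sum(SHARED_SCORES[s][0] for s in shared)
        eradicated = sum(SHARED_SCORES[s][1] for s in shared)
        return "Hp infected" if infection >= eradicated else "Hp eradicated"
    if observed_signs & POST_ERADICATION_UNIQUE:
        return "Hp eradicated"
    return "No Hp infection"
```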
Table 5. Key signs and score assignments for Hp infection and post-eradication states
According to a second embodiment of the present invention, referring to fig. 9, the present invention provides a device for identifying the helicobacter pylori infection state based on a gastroscopic image, comprising:
the part identification module is used for acquiring a gastroscope video stream image, carrying out frame segmentation on the gastroscope video stream image, and obtaining part identification information of a current frame from the gastroscope video frame after the frame segmentation through the part identification model;
the state identification module is used for judging the gastric mucosa state of the current frame through the range characteristic image identification model and the focal characteristic image detection model when the position identification information meets the preset position condition;
the sign recognition module is used for determining the key sign category of the current frame based on the position recognition information of the current frame and the gastric mucosa state;
the infection state identification module is used for counting the key symptom categories of the gastroscope video frames of the gastroscope video stream image to obtain the infection state of helicobacter pylori;
the device for identifying the helicobacter pylori infection state based on a gastroscopic image applies and executes the above method for identifying the helicobacter pylori infection state based on a gastroscopic image.
Those skilled in the art will appreciate that various modifications and improvements can be made to the disclosure. For example, the various devices or components described above may be implemented in hardware, or may be implemented in software, firmware, or a combination of some or all of the three.
A flowchart is used in this disclosure to describe the steps of a method according to an embodiment of the present disclosure. It should be understood that the steps are not necessarily performed in the exact order shown; rather, various steps may be processed in reverse order or simultaneously, and other operations may be added to these processes.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the methods described above may be implemented by a computer program to instruct related hardware, and the program may be stored in a computer readable storage medium, such as a read only memory, a magnetic disk, or an optical disk. Alternatively, all or part of the steps of the above embodiments may be implemented using one or more integrated circuits. Accordingly, each module/unit in the above embodiment may be implemented in the form of hardware, or may be implemented in the form of a software functional module. The present disclosure is not limited to any specific form of combination of hardware and software.
Unless defined otherwise, all terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The foregoing is illustrative of the present disclosure and is not to be construed as limiting thereof. Although a few exemplary embodiments of this disclosure have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this disclosure. Accordingly, all such modifications are intended to be included within the scope of this disclosure as defined in the claims. It is to be understood that the foregoing is illustrative of the present disclosure and is not to be construed as limited to the specific embodiments disclosed, and that modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. The disclosure is defined by the claims and their equivalents.
In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.
Claims (7)
1. A method for identifying the infection status of helicobacter pylori based on gastroscopy, comprising:
obtaining a gastroscope video stream image, carrying out frame segmentation on the gastroscope video stream image, and obtaining the part identification information of the current frame from the gastroscope video frame after the frame segmentation through a part identification model;
when the position identification information meets the preset position condition, judging the gastric mucosa state of the current frame through a range character image identification model and a focal character image detection model;
determining the key symptom category of the current frame based on the part identification information of the current frame and the gastric mucosa state;
counting the key symptom categories of each gastroscope video frame of the gastroscope video stream image to obtain the infection state of helicobacter pylori;
when the position identification information meets the preset position condition, the gastric mucosa state of the current frame is judged through the range characteristic image identification model and the focal characteristic image detection model, and the method further comprises the following steps:
removing invalid images and non-stomach images according to the part identification information, and starting the range feature recognition model and the focal feature detection model;
inputting the current frame meeting the conditions into the range feature recognition model and the focal feature detection model to obtain the range feature recognition result and the focal feature recognition result of the current frame;
integrating the range feature recognition result and the focal feature recognition result to obtain the gastric mucosa state of the current frame;
the current frame meeting the conditions is input into the range feature recognition model to obtain the range feature recognition result of the current frame, and the method further comprises the following steps:
the identification categories of the range feature recognition model comprise regular arrangement of collecting venules, diffuse redness, mucosal swelling, intestinal metaplasia, map-like redness, spotty redness, chicken-skin-like appearance, enlarged tortuous folds, white turbid mucus, barnacle-like sign, atrophy and other categories;
the range feature recognition model adopts a MobileOne classification model and structurally comprises a first MobileOne module, a second MobileOne module, a third MobileOne module, a fourth MobileOne module, a fifth MobileOne module, a sixth MobileOne module, an average pooling layer, a linear layer and a softmax layer;
gradually extracting bottom-layer features of edges, color changes and textures from the output features of the first, second and third MobileOne modules;
associating adjacent-region features in the output feature map of the fourth MobileOne module to form combinations of texture and shape, namely middle-layer features;
the output features of the fifth and sixth MobileOne modules gradually focus on high-level abstract features, including high-level semantic features of gastric mucosa states such as see-through mucosal vessels and chicken-skin-like bulges;
each MobileOne module has a training-phase structure and an inference-phase structure;
the training phase consists of a plurality of reparameterizable branches, which are converted into a single-branch structure by reparameterization equivalent transformation in the inference phase;
in the training phase, the MobileOne module consists of a depthwise convolution module and a pointwise convolution module;
the depthwise convolution module consists of three branches, namely a 1×1 depthwise convolution, k 3×3 depthwise convolution branches and a BN layer, and the pointwise convolution module consists of k 1×1 convolution branches and a BN branch;
in the inference phase, each MobileOne module is reparameterized into a single-branch 3×3 depthwise convolution and a single-branch 1×1 convolution;
inputting the current frame meeting the conditions into the range feature recognition model and outputting the respective confidences of the n range features to obtain the range feature recognition result of the current frame;
inputting the current frame meeting the conditions into the focal feature detection model to obtain the focal feature recognition result of the current frame;
the identification categories of the focal feature detection model comprise scratch sign, fundic gland polyps, hyperplastic polyps and xanthomas;
the focal feature detection model adopts an RTMDet instance segmentation model and structurally comprises: the first, second, third, fourth and fifth MobileOne modules of a backbone network shared with the range feature recognition model, a neck layer adopting the PAFPN structure, and a detection head;
the feature map generated by the fifth MobileOne module is upsampled and concatenated (Concat) with the feature map of the fourth MobileOne module to obtain a second splicing feature P';
the Concat operation is a channel-wise concatenation of feature maps;
the second splicing feature P' is upsampled and then concatenated with the feature map generated by the third MobileOne module to obtain a third splicing feature P3;
the third splicing feature P3 is convolved with stride 2 and then concatenated with the second splicing feature P' to obtain a fourth splicing feature P4;
the fourth splicing feature P4 is convolved with stride 2 and then concatenated with the feature map generated by the fifth MobileOne module to obtain a fifth splicing feature P5;
and the third splicing feature P3, the fourth splicing feature P4 and the fifth splicing feature P5 are input into the detection head to predict the target position and category.
2. The method for identifying the infection state of helicobacter pylori based on a gastroscope image according to claim 1, wherein the steps of obtaining a gastroscope video stream image, performing frame segmentation on the gastroscope video stream image, and obtaining the part identification information of the current frame from the gastroscope video frame after the frame segmentation through a part identification model, further comprise:
performing frame segmentation on the gastroscope video stream image to obtain a plurality of candidate gastroscope video frames;
scaling and normalizing the plurality of candidate gastroscope video frames, and then inputting the frames into a part classification network to obtain a part prediction result of the current frame;
and adding the part prediction result into a voting window, and outputting the part prediction result of the current frame as part identification information of the current frame when the part category with the largest occurrence number in the voting window is consistent with the part prediction result, otherwise, the part identification information of the current frame is invalid.
3. The method for identifying the infection state of helicobacter pylori based on a gastroscopic image according to claim 1, wherein the determining the key feature category of the current frame based on the part identification information of the current frame and the gastric mucosa state further comprises:
and when the gastric mucosa state of the current frame meets the preset symptom condition and the part identification information meets the preset part condition, determining the key symptom type of the current frame.
4. The method for identifying the infection state of helicobacter pylori based on gastroscopy according to claim 1, wherein the step of counting key symptom categories of each gastroscopy video frame of the gastroscopy video stream image to obtain the infection state of helicobacter pylori further comprises:
and when the predicted parts of a first number of consecutive video frames from the part recognition model are outside the body, ending the current gastroscope mucous membrane state recognition, and counting the key symptom categories of each gastroscope video frame of the gastroscope video stream image.
5. The method for identifying the infection status of helicobacter pylori based on gastroscopy according to claim 1, wherein the counting the key feature categories of each gastroscopy video frame of the gastroscopy video stream image to obtain the infection status of helicobacter pylori further comprises:
counting the key gastroscopic signs; when a sign unique to helicobacter pylori infection appears, the current helicobacter pylori state is considered infected; otherwise, continuing to judge whether signs shared by helicobacter pylori infection and the post-eradication state appear.
6. The gastroscopic image-based infection status recognition method of helicobacter pylori according to claim 1, further comprising:
if signs shared by helicobacter pylori infection and the post-eradication state appear, counting the helicobacter pylori infection score and the post-eradication score, accumulating each sign only once, and finally determining from the higher score whether the state is helicobacter pylori infection or post-eradication;
and dynamically adjusting the key sign scores according to the frequency of occurrence of the corresponding state in actual gastroscopy; if no shared sign exists, judging whether map-like redness, the sign unique to the post-eradication state, exists; if it exists, the state is post-eradication, otherwise there is no helicobacter pylori infection.
7. A device for identifying the infection state of helicobacter pylori based on a gastroscopic image, comprising:
the position identification module is used for acquiring a gastroscope video stream image, carrying out frame segmentation on the gastroscope video stream image, and obtaining position identification information of a current frame from the gastroscope video frame after the frame segmentation through the position identification model;
the state identification module is used for judging the gastric mucosa state of the current frame through the range characteristic image identification model and the focal characteristic image detection model when the position identification information meets the preset position condition;
the sign recognition module is used for determining the key sign category of the current frame based on the position recognition information of the current frame and the gastric mucosa state;
the infection state identification module is used for counting the key symptom categories of the gastroscope video frames of the gastroscope video stream image to obtain the infection state of helicobacter pylori;
wherein the device applies and executes the method for identifying the infection state of helicobacter pylori based on a gastroscopic image according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410145987.XA CN117671573B (en) | 2024-02-01 | 2024-02-01 | Helicobacter pylori infection state identification method and device based on gastroscope image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117671573A (en) | 2024-03-08
CN117671573B (en) | 2024-04-12
Family
ID=90075434
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410145987.XA Active CN117671573B (en) | 2024-02-01 | 2024-02-01 | Helicobacter pylori infection state identification method and device based on gastroscope image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117671573B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114359131A (en) * | 2021-11-12 | 2022-04-15 | 浙江大学 | Helicobacter pylori stomach video full-automatic intelligent analysis system and marking method thereof |
CN116051961A (en) * | 2023-02-16 | 2023-05-02 | 山东浪潮科学研究院有限公司 | Target detection model training method, target detection method, device and medium |
CN116090517A (en) * | 2022-12-30 | 2023-05-09 | 杭州华橙软件技术有限公司 | Model training method, object detection device, and readable storage medium |
2024-02-01: CN application CN202410145987.XA granted as patent CN117671573B (en), status Active
Non-Patent Citations (1)
Title |
---|
Pavan Kumar Anasosalu Vasu et al., "MobileOne: An Improved One Millisecond Mobile Backbone", arXiv, 2023-03-28, pp. 1-16 *
Also Published As
Publication number | Publication date |
---|---|
CN117671573A (en) | 2024-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zhang et al. | Diagnosing chronic atrophic gastritis by gastroscopy using artificial intelligence | |
Yogapriya et al. | Gastrointestinal tract disease classification from wireless endoscopy images using pretrained deep learning model | |
WO2019245009A1 (en) | Method of assisting disease diagnosis based on endoscope image of digestive organ, diagnosis assistance system, diagnosis assistance program, and computer-readable recording medium having said diagnosis assistance program stored thereon | |
US20220172828A1 (en) | Endoscopic image display method, apparatus, computer device, and storage medium | |
CN111144271B (en) | Method and system for automatically identifying biopsy parts and biopsy quantity under endoscope | |
EP4198819A1 (en) | Method for detecting and classifying lesion area in clinical image | |
CN110974306B (en) | System for discernment and location pancreas neuroendocrine tumour under ultrasonic endoscope | |
US11244450B2 (en) | Systems and methods utilizing artificial intelligence for placental assessment and examination | |
CN114782760B (en) | Stomach disease picture classification system based on multitask learning | |
Sun et al. | A novel gastric ulcer differentiation system using convolutional neural networks | |
CN115115897B (en) | Multi-modal pre-trained gastric tumor classification system | |
Adewole et al. | Deep learning methods for anatomical landmark detection in video capsule endoscopy images | |
CN109460717A (en) | Alimentary canal Laser scanning confocal microscope lesion image-recognizing method and device | |
CN113222957A (en) | Multi-class focus high-speed detection method and system based on capsule lens image | |
CN112651375A (en) | Helicobacter pylori stomach image recognition and classification system based on deep learning model | |
CN116664929A (en) | Laryngoscope image multi-attribute classification method based on multi-modal information fusion | |
CN117671573B (en) | Helicobacter pylori infection state identification method and device based on gastroscope image | |
Yue et al. | Benchmarking polyp segmentation methods in narrow-band imaging colonoscopy images | |
CN116563216B (en) | Endoscope ultrasonic scanning control optimization system and method based on standard site intelligent recognition | |
CN117350979A (en) | Arbitrary focus segmentation and tracking system based on medical ultrasonic image | |
JP6710853B2 (en) | Probe-type confocal laser microscope endoscopic image diagnosis support device | |
Gatoula et al. | Enhanced CNN-Based Gaze Estimation on Wireless Capsule Endoscopy Images | |
Kwon et al. | Weakly supervised attention map training for histological localization of colonoscopy images | |
CN116385814B (en) | Ultrasonic screening method, system, device and medium for detection target | |
KR102564443B1 (en) | Gastroscopy system with improved reliability of gastroscopy using deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||