CN113052843A - Method, apparatus, system, storage medium and computing device for assisting endoscopy


Info

Publication number: CN113052843A; granted as CN113052843B (application CN202110603794.0A)
Authority: China (CN)
Prior art keywords: inspection, examination, actual, curve, image
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 曾凡, 乔元风
Current Assignee: Henan Xuanwei Digital Medical Technology Co ltd; Xuanwei Beijing Biotechnology Co ltd
Original Assignee: Henan Xuan Yongtang Medical Information Technology Co ltd; Xuanwei Beijing Biotechnology Co ltd
Application filed by Henan Xuan Yongtang Medical Information Technology Co ltd and Xuanwei Beijing Biotechnology Co ltd
Priority to CN202110603794.0A

Classifications

    • G06T 7/0012 Biomedical image inspection (G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches (G06F 18/00 Pattern recognition)
    • G06N 3/045 Combinations of networks (G06N 3/02 Neural networks; G06N 3/04 Architecture)
    • G06N 3/08 Learning methods (neural networks)
    • G06T 2207/10068 Endoscopic image (image acquisition modality)
    • G06T 2207/20081 Training; Learning (special algorithmic details)
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30028 Colon; Small intestine (biomedical image processing)
    • G06T 2207/30092 Stomach; Gastric (biomedical image processing)


Abstract

A method, apparatus, system, storage medium, and computing device for assisting an endoscopic examination are provided. The method comprises the following steps: acquiring images captured while the endoscopic device is operated for an examination; recognizing each image through at least one model constructed based on a deep neural network to obtain a recognition result at least capable of indicating the part to which the image corresponds; determining at least an examination progress based on the recognition result; after it is determined, based on the examination progress, that the examination of a certain part or of each part is finished, determining at least the examination time information and/or the number of acquired images of that part or of each part based on the recognition result; drawing an actual examination curve based on that examination time information and/or number of acquired images; and comparing the actual examination curve with the corresponding standard examination curve to determine the examination quality. By performing image recognition on the video input from the endoscope, the examination quality of a single part or of every part is monitored, which effectively improves the quality-control level of endoscopic examinations.

Description

Method, apparatus, system, storage medium and computing device for assisting endoscopy
Technical Field
Embodiments of the present invention relate to the field of artificial intelligence technology, and more particularly, to a method, an apparatus, a system, a storage medium, and a computing device for assisting an endoscopy.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
At present, endoscopes are often used to survey spaces that are cramped, narrow and difficult to observe directly with the naked eye (for example, tunnels, culverts, the esophagus, the alimentary canal, the gastrointestinal tract, and so on). Because such spaces are so confined and hard to observe directly, the current progress of an examination can only be judged by the operator from the images the endoscope captures, which is very unfriendly to inexperienced operators.
Disclosure of Invention
In this context, embodiments of the present invention are intended to provide a method, apparatus, system, storage medium, and computing device for assisting endoscopy.
In a first aspect of embodiments of the present invention, there is provided a method of assisting endoscopy, comprising:
acquiring images captured while an endoscopic device is operated for an examination;
recognizing each image through at least one model constructed based on a deep neural network to obtain a recognition result at least capable of indicating the part to which the image corresponds;
determining at least an examination progress based on the recognition result;
after it is determined, based on the examination progress, that the examination of a certain part or of each part is finished, determining at least the examination time information and/or the number of acquired images of that part or of each part based on the recognition result;
drawing an actual examination curve based on the examination time information and/or number of acquired images of that part or of each part;
comparing the actual examination curve with a corresponding standard examination curve to determine the examination quality.
In one implementation of this embodiment, the at least one model constructed based on a deep neural network comprises:
one or more background judgment models for identifying the part to which the current image corresponds; or
one or more background judgment models and one or more target judgment models, wherein the target judgment models are used for identifying whether the current image includes a specific target.
In an embodiment of the present invention, the background judgment model is obtained by training based on an image recognition model and a first training data set, where the first training data set includes a plurality of images of each part to be recognized;
the target judgment model is obtained by training based on a target detection model and a second training data set, where the second training data set includes a plurality of images of each target to be detected.
In an embodiment of the present invention, the background judgment model uses ShuffleNetV2 as the backbone network and adds a dropout (random deactivation) layer before the last linear transformation layer; and/or
the target judgment model uses the target detection model YOLOv3 with Darknet-53 as the backbone network.
In an embodiment of the present invention, after at least determining the examination progress based on the recognition result, the method further includes:
providing prompts for assisting the operation, the prompts comprising:
displaying the part under examination, the examination start time and the examination duration according to preset rules;
displaying the examination progress in three-dimensional form.
In one embodiment of the present embodiment, the inspection progress displayed in three-dimensional form can indicate at least one of a portion where inspection has been completed, a portion being inspected, a portion to be inspected, and a portion where inspection is missed;
wherein the site being examined is displayed at an angle convenient for viewing.
In one embodiment of the present embodiment, the portions of different inspection progresses are presented with different display effects; and
different parts of the same inspection progress are presented with different display effects; or
Different parts of the same inspection progress are presented with the same display effect.
In one implementation of this embodiment, determining at least an examination progress based on the recognition result includes:
determining, from the recognition result, whether the part corresponding to the image is the same as the part corresponding to the previous image;
if it is the same, leaving the examination progress unchanged and accumulating the examination duration of the current part;
if it is not, updating the examination progress and determining the examination end time of the previous part and the examination start time of the current part.
In one embodiment of the present invention, after the examination of a certain part is completed, drawing an actual examination curve based on the examination time information and number of acquired images of that part includes:
determining the number of images acquired at each moment within the actual examination period of the part, based on its examination time information and number of acquired images;
and drawing the actual examination curve, in time order, from the number of images acquired at each moment within the actual examination period of the part.
In an embodiment of this embodiment, after determining the inspection quality of each site, the method further comprises:
based on the inspection quality of each part, the overall inspection quality is determined.
In one embodiment of the present invention, the drawing of the actual examination curve based on the examination time information and the number of image acquisitions of each part includes:
determining the actual examination time period of each part;
determining the image acquisition quantity of each moment in the actual inspection time period of each part;
and drawing a first overall actual examination curve based on the image acquisition quantity of each moment in the actual examination time period of each part according to the time sequence.
In one embodiment of the present invention, the drawing of an actual examination curve based on examination time information of each part includes:
determining the actual examination duration of each part;
and drawing a second overall actual examination curve based on the actual examination time length of each part.
In one embodiment of the present invention, the drawing of an actual examination curve based on examination time information of each part includes:
determining actual examination start time of each part;
and drawing a third overall actual examination curve based on the actual examination starting time of each part.
In one implementation of this embodiment, comparing the actual examination curve with the corresponding standard examination curve to determine the examination quality comprises:
acquiring the vertical-axis arrays of the actual examination curve and the standard examination curve respectively;
performing linear fitting on the two vertical-axis arrays respectively to obtain an actual examination fitting function and a standard examination fitting function;
calculating the slopes of the two fitting functions respectively;
calculating the corresponding radians from the two slopes;
calculating the two angles from the respective radians;
if the difference between the two angles is within a preset range, judging the actual examination to be standard;
otherwise, judging it to be non-standard.
In one implementation of this embodiment, comparing the actual examination curve with a corresponding standard examination curve to determine the examination quality comprises:
acquiring an examination centerline based on the actual examination curve and the standard examination curve;
the horizontal-axis coordinates of the examination centerline corresponding one-to-one with those of the actual examination curve and the standard examination curve, and each vertical-axis coordinate of the examination centerline being:
at the corresponding horizontal-axis coordinate, half of the difference between the larger and the smaller of the vertical-axis coordinates of the actual examination curve and the standard examination curve, added to the smaller coordinate (i.e., the mean centerline of the two curves);
calculating an actual examination score based on the areas of the enclosed and semi-enclosed figures formed where the examination centerline intersects the standard examination curve;
the actual examination score being inversely proportional to the value of the area.
In one implementation of this embodiment, determining the actual examination duration or the actual examination period of each part includes:
determining a first examination duration or a first examination period of each part based on the time sequence of the acquired images and their recognition results;
filtering the first examination duration or the first examination period based on a preset filtering rule;
and taking the filtered first examination duration or first examination period as the actual examination duration or actual examination period of the corresponding part.
In an embodiment of the present invention, filtering the first examination duration or the first examination period based on a preset filtering rule includes:
aligning the start of the first examination duration or first examination period of a certain part with that of a preset standard examination;
judging whether the first examination duration or first examination period of the part exceeds a preset threshold;
if it does, additionally retaining a preset proportion or amount of the excess beyond the aligned portion and discarding the remainder;
if it does not, discarding no data.
In one example of this embodiment, a missed part is determined at least in the following manner:
when, based on the image recognition results, the proportion of the detected backgrounds within the whole background of a certain part reaches a preset threshold, the part is determined to have been missed.
In a second aspect of an embodiment of the present invention, there is provided an apparatus for assisting endoscopy, including:
an image acquisition module configured to acquire an image acquired while operating an endoscopic device for examination;
the recognition module is configured to recognize the image through at least one model constructed based on a deep neural network, and a recognition result at least indicating that the image corresponds to a certain part is obtained;
a progress determination module configured to determine at least an inspection progress based on the recognition result;
the quality monitoring module is configured to determine at least inspection time information and/or image acquisition quantity of a certain part or each part based on the identification result after the inspection of the certain part or each part is determined to be completed based on the inspection progress; and
drawing an actual examination curve based on examination time information and/or image acquisition quantity of a certain part or each part;
the actual inspection curve is compared with a corresponding standard inspection curve to determine inspection quality.
In a third aspect of embodiments of the present invention, there is provided a system for assisting endoscopy, comprising:
the client equipment comprises a display module, a communication module and a processing module, wherein the display module is used for displaying prompts and videos of auxiliary operations, and the prompts at least comprise the display of inspection progress in a three-dimensional form;
the communication module is used for receiving prompts sent by the quality control center and/or the monitoring center, receiving videos collected by the endoscope equipment, decompressing the videos through the processing module, converting formats of the videos and sending the videos to the quality control center;
the quality control center is used for identifying the video through at least one model constructed based on the deep neural network to obtain an identification result which at least can indicate a certain part corresponding to a current frame image of the video; and
determining at least an inspection progress based on the recognition result; and
determining at least inspection time information and/or image acquisition quantity of a certain part or each part based on the identification result after the inspection of the certain part or each part is determined to be finished based on the inspection progress; and
drawing an actual examination curve based on examination time information and/or image acquisition quantity of a certain part or each part;
comparing the actual inspection curve with a corresponding standard inspection curve to determine inspection quality;
and the supervision center is used for analyzing and counting the quality information of each inspection so as to carry out unified management on the quality information.
In a fourth aspect of the embodiments of the present invention, there is provided a storage medium storing a computer program which, when executed by a processor, implements the method for assisting endoscopy.
In a fifth aspect of embodiments of the present invention, there is provided a computing device comprising: a processor; a memory for storing the processor-executable instructions; the processor is used for executing the method for assisting endoscopy.
According to the method, apparatus, system, storage medium and computing device for assisting endoscopy, images captured while the endoscopic device is operated for an examination are acquired; each image is recognized through at least one model constructed based on a deep neural network to obtain a recognition result at least capable of indicating the part to which the image corresponds; at least an examination progress is determined based on the recognition result; after it is determined, based on the examination progress, that the examination of a certain part or of each part is finished, at least the examination time information and/or number of acquired images of that part or of each part is determined based on the recognition result; an actual examination curve is drawn based on that examination time information and/or number of acquired images; and the actual examination curve is compared with the corresponding standard examination curve to determine the examination quality. By performing image recognition on the video input from the endoscope, the examination quality of a single part or of every part is monitored, and the quality-control level of endoscopic examinations can be effectively improved.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 schematically illustrates a flow chart of a method of assisting an endoscopy according to an embodiment of the present invention;
FIG. 2 schematically illustrates an interface diagram displaying prompts for an auxiliary operation of a method of assisting an endoscopy according to another embodiment of the present invention;
FIG. 3 schematically illustrates an interface diagram displaying prompts for an auxiliary operation of a method of assisting an endoscopy according to another embodiment of the present invention;
FIG. 4 schematically illustrates an interface diagram displaying prompts for an auxiliary operation of a method of assisting an endoscopy according to another embodiment of the present invention;
FIG. 5 shows the legend of FIGS. 6 and 7;
FIG. 6 is a schematic diagram illustrating an effect of an actual examination curve of each portion obtained by linear fitting according to the number of images acquired at each time of each portion according to a method for assisting endoscopy according to another embodiment of the present invention;
FIG. 7 is a schematic diagram illustrating the effect of the actual inspection curve shown in FIG. 6 after being converted into a square wave;
FIGS. 8a and 8b are schematic diagrams illustrating the effect of filtering the square wave of FIG. 7 according to the present invention, wherein FIG. 8a is the legend of FIG. 8b;
figure 9 schematically illustrates an endoscopic device examination dwell time fit diagram of a method of assisting an endoscopic examination in accordance with another embodiment of the present invention;
FIGS. 10a and 10b schematically illustrate a standard examination and an actual examination curve of a method of assisted endoscopy according to another embodiment of the present invention;
FIGS. 11a and 11b are schematic diagrams illustrating a standard exam, a mean midline and an actual exam curve of a method of assisted endoscopy according to another embodiment of the present invention;
FIGS. 12a and 12b are schematic diagrams illustrating an area of a mean centerline and an actual examination curve of a method for assisting an endoscopy according to another embodiment of the present invention;
FIGS. 13a and 13b are schematic diagrams illustrating a mean centerline and a derivative of an actual examination curve of a method for assisted endoscopy according to another embodiment of the present invention;
FIG. 14 is a block diagram of an apparatus for assisting endoscopy according to an embodiment of the present invention;
FIG. 15 is a block diagram of a system for assisting endoscopy provided in accordance with an embodiment of the present invention;
FIG. 16 is a schematic diagram of a computer-readable storage medium according to an embodiment of the present invention;
fig. 17 is an illustration of a computing device provided by an embodiment of the invention.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the invention, and are not intended to limit the scope of the invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
According to embodiments of the present invention, a method, apparatus, system, storage medium, and computing device for assisting endoscopy are presented.
Moreover, any number of elements in the drawings are by way of example and not by way of limitation, and any nomenclature is used solely for differentiation and not by way of limitation.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments of the invention.
Exemplary method
A method for assisting endoscopy according to an exemplary embodiment of the present invention will be described below with reference to fig. 1. It should be noted that the above application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present invention, and the embodiments of the present invention are not limited in this respect. Rather, embodiments of the present invention may be applied to any scenario where applicable.
An embodiment of the present invention provides a method of assisting endoscopy, including:
step S110, acquiring an image acquired during the operation of the endoscope equipment for examination;
step S120, identifying the image through at least one model constructed based on a deep neural network to obtain an identification result at least capable of indicating a certain part corresponding to the image;
step S130, at least determining the checking progress based on the recognition result;
step S140, based on the checking progress, after the checking of a certain part or each part is determined to be finished, at least checking time information and/or image acquisition quantity of the certain part or each part is determined based on the recognition result;
step S150, drawing an actual examination curve based on examination time information and/or image acquisition quantity of a certain part or each part;
step S160, comparing the actual inspection curve with the corresponding standard inspection curve to determine the inspection quality.
It is understood that the application scenario of the method for assisting endoscopy of the present invention may be any narrow space that is difficult to observe directly, such as a tunnel, a pipeline or the alimentary canal; the preferred embodiments of the present invention are described in detail below taking the alimentary canal as an example.
The method of the invention may determine the examination progress and/or examination quality online, based on video images acquired by the endoscope in real time, or offline, based on previously acquired video images.
How to assist endoscopy is described below with reference to the accompanying drawings:
First, step S110 is executed to acquire the images captured while the endoscopic device is operated for the examination.
In this embodiment, the images acquired during the endoscopic examination must first be obtained. An examination is typically a continuous process, so what is acquired is a series of consecutive images, that is, a video.
In an embodiment of the present invention, acquiring the images captured while operating the endoscopic device means acquiring the video stream collected by the endoscopic device.
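A minimal Python/OpenCV sketch of this acquisition step, assuming the endoscope feed is exposed as a standard video source (the source index and function name are illustrative):

```python
import cv2  # OpenCV; assumes the endoscope feed is a standard video source


def endoscope_frames(source=0):
    """Yield successive frames from the endoscope video stream (step S110).

    `source` may be a camera index or a capture URL; both are hypothetical
    here, since the embodiment does not specify how the stream is exposed.
    """
    cap = cv2.VideoCapture(source)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:  # stream ended or device disconnected
                break
            yield frame
    finally:
        cap.release()
```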
Next, step S120 is executed, the image is identified through at least one model constructed based on a deep neural network, and an identification result at least indicating that the image corresponds to a certain part is obtained;
In one implementation of this embodiment, the at least one model constructed based on a deep neural network includes:
one or more background judgment models for identifying the part to which the current image corresponds;
in this embodiment, based on the model is judged to the background and current input the image that the scope of model was judged to the background was gathered can be confirmed the input model is gathered to the scope during the image, the position of examining, particularly, in alimentary canal inspection field, the position can include the throat, granular nest, larynx, esophagus upper portion, esophagus middle part, esophagus lower part, cardia dentate line, neck esophagus, antrum, pylorus, stomach angle, the stomach fundus, observe in the stomach cardia and the stomach body reversal of observing the stomach upper portion, the stomach body middle part, the stomach body lower part etc. position before, the back wall and big, little bend. It should be understood that the above locations are only non-limiting examples in the present embodiment, and those skilled in the art may select more or less location classifications according to actual needs, or perform corresponding classifications in other application scenarios, such as pipelines or tunnels.
In an embodiment of the present invention, the plurality of background judgment models may consist of a single global background judgment model and a plurality of local background judgment models. For example, in an alimentary-canal examination scenario, a single global background judgment model may be set up to distinguish the large part classes of stomach, esophagus and intestine, and a corresponding local background judgment model may then be set for each specific part. For instance, a stomach background judgment model may be set for the stomach, which specifically determines the classification results for the anterior wall, posterior wall, greater curvature and lesser curvature of parts such as the antrum, pylorus, gastric angle, fundus and gastric body. Similarly, an intestinal background judgment model, an esophageal background judgment model, and any other possible or required local background judgment models may be set.
It is to be understood that only one background determination model may be provided to determine all the detailed portions, which is not limited in the present embodiment, and those skilled in the art may set the background determination model according to actual needs.
In an embodiment of the present invention, the background judgment model is obtained by training based on an image recognition model and a first training data set, where the first training data set includes a plurality of images of each part to be recognized. In this embodiment, the image recognition model may be any one of the prior-art models such as AlexNet, VGG19, ResNet-152, Inception v4, DenseNet and the like, although this embodiment is not limited thereto. The first training data set contains images of every part to be recognized, with a plurality of images per part, for example a plurality of images of each of the parts listed in the above embodiment.
When the background judgment model is trained, the background sample set may be divided into a test set, a verification set and a training set according to a preset ratio (for example, 8:1:1). The training set is used for training the weights of the different layers in the neural network model; the verification set is used for constructing the model and as a reference for tuning the hyperparameters (network nodes, number of iterations, learning rate, and the like); the test set is used for evaluating the final generalization capability of the model and for estimating its performance parameters (precision, loss rate, and so on). Specifically, the model is trained with the training set, and the training parameters are continually adjusted against the verification set to obtain a stable network model structure. In an optional implementation of this embodiment, after training is completed, the statistics formed by the model parameters are examined, and the best-performing model is selected according to statistical principles as the background judgment model finally used.
The background sample set may be obtained by organizing historically acquired images, or may be an open-source data set from the network; this embodiment imposes no limitation here, and the manner in which the data set is acquired does not affect the effectiveness of the present invention.
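A minimal sketch of the 8:1:1 division described above (the shuffle seed and function name are illustrative):

```python
import random


def split_samples(samples, ratios=(8, 1, 1), seed=42):
    """Divide a background sample set into training, verification and test
    sets according to a preset ratio such as 8:1:1."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    total = sum(ratios)
    n_train = len(shuffled) * ratios[0] // total
    n_val = len(shuffled) * ratios[1] // total
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test
```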
In a preferred implementation of this embodiment, the background judgment model uses ShuffleNetV2 as the backbone network and adds a dropout (random deactivation) layer before the last linear transformation layer.
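In PyTorch terms this structure might look as follows, assuming the torchvision ShuffleNetV2 variant (torchvision >= 0.13 API) and a dropout probability, neither of which the embodiment specifies:

```python
import torch.nn as nn
from torchvision import models


def build_background_model(num_classes, dropout_p=0.5):
    """Background judgment model: ShuffleNetV2 backbone with a dropout
    (random deactivation) layer inserted before the final linear layer."""
    net = models.shufflenet_v2_x1_0(weights=None)
    net.fc = nn.Sequential(
        nn.Dropout(p=dropout_p),                     # random deactivation layer
        nn.Linear(net.fc.in_features, num_classes),  # final linear transformation
    )
    return net
```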
Consider that a model having only the background judgment function cannot recognize a specific target within the background. For example, the background judgment model can identify the descending duodenum, but cannot identify the papilla in that region; and if only the edge of a part is captured, the prediction probability may barely reach the threshold of the corresponding part's background judgment model, so that taking that threshold alone as the basis for part judgment easily leads to missed detections.
Thus, in another implementation of this embodiment, the at least one model constructed based on a deep neural network comprises:
one or more background judgment models and one or more target judgment models, wherein the target judgment models are used for identifying whether the current image includes a specific target.
In this implementation, the details of the background judgment model are the same as in the above embodiment and are not repeated here; the target judgment model is used to detect a specific target contained in the current image, where the specific target may serve as a marker for recognizing and confirming a part, a part requiring attention, an abnormal part, or the like. In a preferred implementation of this embodiment, the specific targets include: the pharynx, the cardiac dentate line, the descending duodenal papilla, the mouth covering device, and the throat.
It is understood that in this embodiment, only one target judgment model may be set to judge all the specific targets, or a corresponding target judgment model may be set for each or some specific targets, that is, a plurality of target judgment models may be set.
In this embodiment, the target determination model is obtained by training based on a target detection model and a second training data set, where the second training data set includes a plurality of images of a target to be detected.
In this embodiment, the object detection model may be any one of the prior arts such as R-CNN, SPP-net, Fast-RCNN, YOLO series, SSD, Mask RCNN, RetinaNet, etc., but this embodiment is not limited thereto, and the second training data set includes images of specific objects to be detected, and there are a plurality of images of each specific object, for example, the images of specific objects listed in the above embodiments. When the target judgment model is trained, the target sample set may be respectively generated into a test set, a verification set and a training set according to a preset ratio (e.g., 8:1: 1), and then training is performed, where a similar process is described in detail in the background judgment model training section, and details are not repeated here.
In a preferred implementation, the target judgment model uses the target detection model YOLOv3 with Darknet-53 as the backbone network.
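One publicly available way to obtain such a detector is the ultralytics/yolov3 Torch Hub entry point; the snippet below is an assumed tool choice, not necessarily what the embodiment uses:

```python
import torch

# YOLOv3 with a Darknet-53 backbone, loaded from the public
# ultralytics/yolov3 repository (an assumed tool choice).
model = torch.hub.load('ultralytics/yolov3', 'yolov3', pretrained=True)

results = model(frame)        # `frame`: one image from the video stream
detections = results.xyxy[0]  # rows of (x1, y1, x2, y2, confidence, class)
```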
After the image is identified or detected by one or two neural networks, step S130 is performed next, and at least an inspection progress is determined based on the identification result;
in an embodiment of the present invention, after the image acquired by the endoscope is input into the one or two neural networks, the portion under examination when the image is acquired by the endoscope is determined, and then the progress of the examination is determined based on the determined portion under examination;
the method specifically comprises the following steps:
determining whether the part corresponding to the image is the same as the part corresponding to the previous image or not according to the recognition result;
if the current position is the same as the position, the inspection progress is not changed, and the inspection duration of the current position is accumulated;
if not, updating the inspection progress, and determining the inspection ending time of the previous part and the inspection starting time of the current part;
for example, a video stream acquired by an endoscope device is input into a background judgment model, each frame of image in the video stream is classified and judged to obtain a classification part with the maximum probability and a corresponding probability corresponding to each frame of image, then the classification part with the maximum probability can be directly used as a part being inspected when the corresponding image is acquired, or whether the maximum probability is greater than a set threshold value is judged, if so, the classification part with the maximum probability is used as the part being inspected when the corresponding image is acquired;
and if the classified part judged based on the current image is not the same as the classified part judged based on the previous frame image, updating the current inspection progress to be inspecting the classified part judged based on the current image, and updating the accumulated inspection time and the inspection end time of the previous classified part and the inspection start time of the current classified part;
in the embodiment in which the classified part having the maximum probability greater than the preset threshold is determined as the part being inspected when the corresponding image is acquired, if the classified part judged based on the previous frame of image is empty, the initial observation time of the current classified part is updated;
optionally, in an embodiment, after obtaining a determination result of the background determination model based on the image, putting the result (i.e., tag classification) into a result buffer queue with a certain length, traversing the queue, determining whether each tag classification in the queue includes an esophagus classification, if so, determining that the current background is an esophagus, and updating an esophagus-related page and prompt information; if the esophagus is not included and the duodenum is not currently included, the stomach background related interface and the prompt information are updated.
Optionally, in an embodiment, after obtaining a determination result of the background determination model based on the image, putting the result (i.e., the tags) into a result buffer queue with a certain length, traversing the queue, determining whether each tag in the model includes a duodenum classification, and if the duodenum part is included and the input target determination model prediction result includes a "descending papilla of duodenum" part, updating the descending background of duodenum and the prompt information.
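The buffered routing of the two paragraphs above might be sketched as follows (queue length and label strings are assumptions):

```python
from collections import deque

result_queue = deque(maxlen=30)  # fixed-length result buffer queue


def route_update(label, target_results):
    """Push the newest background label, traverse the queue, and decide
    which interface and prompt information to refresh."""
    result_queue.append(label)
    if any(l.startswith("esophagus") for l in result_queue):
        return "update esophagus page and prompts"
    if any(l.startswith("duodenum") for l in result_queue) \
            and "descending duodenal papilla" in target_results:
        return "update duodenum descending background and prompts"
    return "update stomach background interface and prompts"
```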
It should be noted that the video stream collected by the endoscopic device may be duplicated into two copies, one input into the background judgment model and the other into the target judgment model, so that background judgment and target judgment are performed simultaneously. Alternatively, the video stream may be input into the background judgment model only, and when the background judgment model obtains a specific classification part from the current image, that image is then input into the target judgment model to determine whether it contains the corresponding specific target. For example, when the background judgment result of the current image is determined to be a duodenum part, the image is input into the target judgment model to determine whether it contains the specific target; the examination progress at the time the image was acquired is then determined to be the duodenum part on the basis of the double judgment of the background judgment model and the target judgment model, thereby improving judgment accuracy.
It will be appreciated that the results of the background judgment model may be replicated into multiple copies, e.g., three copies, used respectively to judge whether the esophagus is included, to judge whether the duodenum is included, and to update the examination progress and time.
After the examination progress is determined, prompts for assisting the operation are next provided, the prompts at least including displaying the examination progress in three-dimensional form; specifically:
the part under examination, the examination starting time and the examination duration are displayed according to preset rules;
in this embodiment, the part being inspected can be displayed in text and three-dimensional forms at the same time, and the inspection start time and the inspection duration of the part being inspected can be displayed in text form, so that the operator can observe the inspection progress and the inspection time status in real time;
the progress of the examination is displayed in three dimensions.
The reason the examination progress of each part is displayed in three dimensions in this embodiment is that some parts, such as the stomach, are independent solid organs in three-dimensional space, and displaying the detected parts only on a two-dimensional plane obscures details. A purely textual prompt, on the other hand, would disperse the operator's attention and could increase the missed-diagnosis rate.
In an embodiment of the present invention, the method for displaying the progress of the examination of the stomach in a three-dimensional manner includes the following specific steps:
1) Initialize the transparency and current surface color of the models of all recognizable stomach parts, with detected parts in flesh color and undetected parts in a clearly distinguishable color block (for example, green), and initialize a detected-background array recording whether each background in the 3D stomach model has been classified as detected, in the format: { anterior antrum: True, posterior antrum: False, … greater curvature of body: False }, where True means the background has been detected and False (the default) means it has not; a minimal sketch of this array is given after this list.
2) Load a 3D model comprising the anterior wall, posterior wall, greater curvature and lesser curvature of parts such as the antrum, pylorus, gastric angle, fundus and gastric body, load a 3D model comprising the esophagus and cardia, and assemble them according to the anatomical structure of each part to obtain a complete 3D stomach model; as shown in FIG. 2, the left side of FIG. 2 shows the complete 3D stomach model.
3) Receive the part classification information, process it, and update the value at the corresponding index of the background array; a background whose value is already True (detected) is not updated.
4) Each time the latest detected background classification is passed in, rotate the assembled 3D model by a certain angle so that the current background faces front at the angle most convenient for observation; as shown in FIG. 3, the 3D model displayed on the left side of FIG. 3 has been rotated by a certain angle.
5) Traverse the whole background array; if a value is True, update the corresponding part to a specific color such as flesh color, otherwise display it as a clearly distinguishable color block; as shown in FIG. 3, different parts of the 3D model on the left side of FIG. 3 are displayed with distinguishable color blocks.
6) Empty the temporary variables, picture cache data and 3D model cache data, and invoke the garbage collection mechanism to avoid memory overflow.
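A minimal sketch of the detected-background array used in steps 1), 3) and 5), with part names taken from the embodiment and rendering reduced to a color lookup:

```python
# Step 1): every recognizable background starts as False (undetected).
detected = {
    "anterior antrum": False,
    "posterior antrum": False,
    "greater curvature of body": False,
    # ... one entry per recognizable background in the 3D stomach model
}


def mark_detected(part):
    """Step 3): update the array index; an already-True value stays True."""
    if part in detected:
        detected[part] = True


def surface_colors():
    """Step 5): detected parts in flesh color, undetected parts in green."""
    return {part: ("flesh" if seen else "green")
            for part, seen in detected.items()}
```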
As can be seen from the above steps, in an embodiment of the present embodiment, the inspection progress displayed in a three-dimensional form can indicate at least one of a portion where inspection has been completed, a portion being inspected, a portion to be inspected, and a portion where inspection is missed;
wherein the site being examined is displayed at an angle convenient for viewing.
Optionally, in this embodiment, the parts with different inspection progresses are presented with different display effects; and
different parts of the same inspection progress are presented with different display effects; or
Different parts of the same inspection progress are presented with the same display effect.
For example, the portion that has completed the examination is displayed in flesh color, the portion that is being examined is displayed in flesh color by blinking, and the portion to be examined is displayed in green; alternatively, if there are a plurality of inspected sites or sites to be inspected, different sites may be distinguished by the same color but different shades, for example, inspected antrum and pylorus are shown in dark flesh and light flesh, and inspected gastric horn and fundus are shown in dark green and light green, respectively; it is understood that if there are a plurality of portions already inspected or portions to be inspected, different portions of the same inspection progress may be displayed in the same color, in the same shade, without distinction.
In addition, if it is confirmed that the inspection flow has ended and it is judged that there is a missed inspection portion, it may be displayed with a different display effect from the portion where the inspection has been completed, the portion being inspected, and the portion to be inspected, and on the basis of the above example, the missed inspection portion may be displayed in yellow.
In this embodiment, the location of the missed detection is determined at least by the following means:
and when the proportion of the detected background in the whole background of a certain part reaches a preset threshold value based on the image recognition result, determining that the part is missed for detection.
For example, in a preferred implementation, see FIG. 4, the upper-gastrointestinal background judgment results are received first; as long as the detected backgrounds have not reached one third of the total, the text notes for undetected parts are not displayed; once the detected parts reach one third, the text prompt information for the undetected parts is displayed.
Optionally, in an embodiment of the present embodiment, the prompt for performing the auxiliary operation further includes a real-time prompt for a next operation.
Optionally, in one implementation of this embodiment, the data acquired and/or determined during the endoscopic examination are also persisted for subsequent quality supervision or for archiving and future reference. Specifically, the persisted data include the acquired images and the acquisition times (including the overall examination start time, end time and duration, as well as the examination start time, end time and duration of each part).
When the persistence is performed, the specific steps may include:
1) Receive the judgment output of the background judgment model, including: receiving the per-class probabilities in the model output, confirming the index with the maximum probability in the classification array, deep-cloning the corresponding frame image, and then putting the frame image into a persistence queue.
2) Store each frame image according to its part classification, including: receiving the image popped from the persistence queue and placing it into a different folder according to its classification part (for example, an esophagus folder is created for esophagus classifications and a stomach folder for stomach classifications).
3) Compile statistics on the part-classification folders in order of modification time, including: creating a mapping table (Mapping) QcDict composed of key-value pairs, with the file name stored as a string for the key and an array of the modification dates of each file as the value; traversing each image classification folder, recording the modification time of each image under the folder in the format "HH24:MM:SS" and appending it to the end of the value array; the complete QcDict is obtained once the traversal finishes.
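A sketch of the QcDict construction in step 3), assuming the folder layout of step 2) (names and layout are illustrative):

```python
import os
from datetime import datetime


def build_qcdict(classified_root):
    """Map each part-classification folder name to the array of its images'
    modification times, formatted as HH24:MM:SS and sorted by time."""
    qcdict = {}
    for folder in os.listdir(classified_root):  # e.g. "esophagus", "stomach"
        path = os.path.join(classified_root, folder)
        if not os.path.isdir(path):
            continue
        images = sorted(os.listdir(path),
                        key=lambda n: os.path.getmtime(os.path.join(path, n)))
        qcdict[folder] = [
            datetime.fromtimestamp(
                os.path.getmtime(os.path.join(path, img))).strftime("%H:%M:%S")
            for img in images
        ]
    return qcdict
```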
It should be considered that if only the overall examination time is compared, without separately recording the examination start and end times and durations of each part such as the esophagus, stomach and duodenum, the significance of the quality-control result is greatly reduced. For example, in a clinical situation where an earlier examination report indicates that the lesion lies in the descending duodenum, the re-examining physician may indeed observe only that part for a long time; even though the total examination time then meets the standard, the esophagus and stomach are in effect selectively missed, which does not conform to the quality-control flow of a single upper-gastrointestinal examination.
Optionally, in this embodiment, the method may further include steps S140 to S160: based on the examination progress, after it is determined that the examination of a certain part or of each part is completed, the examination situation of that part or of each part (including the examination time information and/or the number of acquired images) is determined from the recognition results and compared with the corresponding standard examination to determine the examination quality.
This embodiment generally proposes two quality determination methods:
firstly, determining the inspection quality of a single part and then determining the overall inspection quality
In one embodiment of the present invention, after a site inspection is completed, determining an inspection condition of the site based on the recognition result, and comparing the inspection condition with a corresponding standard inspection to determine inspection quality, the method includes:
after the part inspection is finished, determining the actual inspection time length of the part;
In the present embodiment, the actual examination duration of each part may be determined from the persisted image modification times of each part in the above example;
or the actual examination duration of each part may be determined from the examination duration of each part recorded during the examination;
alternatively, the two approaches may be combined, with the actual examination duration of each part calculated in a weighted or averaged manner to obtain the final value.
Comparing the actual examination duration of the site with a standard examination duration of the site to determine an examination quality.
In this embodiment, the actual examination duration of each part may be divided by the standard examination duration of that part to obtain a ratio; if the ratio lies within a preset range (0.8-1.2), the examination is determined to be qualified, otherwise unqualified; alternatively, a plurality of threshold ranges with corresponding examination quality grades may be preset, and the examination quality grade determined according to which range the ratio falls into.
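A one-function sketch of this ratio test, with the range bounds from the embodiment and an assumed function name:

```python
def duration_qualified(actual_seconds, standard_seconds, lo=0.8, hi=1.2):
    """Divide the actual examination duration by the standard duration;
    a ratio inside the preset range (0.8-1.2) counts as qualified."""
    return lo <= actual_seconds / standard_seconds <= hi
```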
In one embodiment of the present invention, after a site inspection is completed, determining an inspection condition of the site based on the recognition result, and comparing the inspection condition with a corresponding standard inspection to determine inspection quality, the method includes:
after the part inspection is finished, determining the image acquisition quantity of each moment in the actual inspection time period of the part;
in the present embodiment, the number of image acquisitions at each time within the actual examination period may be determined based on the images of the respective parts that are persisted in the above example and the image modification time;
comparing the number of image acquisitions at each time during an actual examination of the part with a standard examination of the part to determine an examination quality;
in this embodiment, a division operation may be performed according to the number of acquired images at each time in the actual inspection period of each part and the number of acquired images at each time in the standard inspection period of the part, a ratio is calculated, and if the ratio is within a preset range (0.8-1.2), the ratio is determined to be qualified, and if the ratio is not within the preset range, the ratio is determined to be unqualified; alternatively, a plurality of threshold ranges and corresponding inspection quality levels may be preset, and the inspection quality level may be determined according to which threshold range the ratio falls into.
In one embodiment of the present embodiment, comparing the number of image acquisitions at each instant in the actual examination period of the part with a standard examination of the part to determine an examination quality comprises:
drawing an actual examination curve based on the image acquisition quantity of each moment in the actual examination time period of the part according to the time sequence;
in this embodiment, the actual examination curve and the standard examination curve of each part can be drawn by taking time as a horizontal axis and the number of image acquisitions as a vertical axis;
comparing the actual inspection curve with an inspection curve of a standard inspection of the site to determine inspection quality;
specifically, it may be:
respectively acquiring the vertical axis arrays of an actual inspection curve and a standard inspection curve;
respectively performing linear fitting on the two longitudinal axis arrays to obtain an actual inspection fitting function and a standard inspection fitting function;
calculating the slopes of the actual inspection fitting function and the standard inspection fitting function respectively;
calculating respective radians based on slopes of the actual inspection fitting function and the standard inspection fitting function respectively;
calculating the angles of the two based on the respective radians;
if the angle between the two is within the preset range, the actual inspection is judged to be standard;
otherwise, it is not standard.
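A hedged sketch of this fit-and-compare step, using NumPy's polyfit for the linear fit; the array names and the tolerance of 15 degrees are illustrative assumptions:

import numpy as np

def fit_angle_deg(y):
    # Fit a line to the vertical-axis array and convert its slope to an angle.
    k = np.polyfit(np.arange(len(y)), y, 1)[0]  # slope of the linear fit
    angle = np.degrees(np.arctan(k))            # radian -> angle in degrees
    return angle + 180 if angle < 0 else angle  # map into the first two quadrants

def is_standard(actual_y, standard_y, tol_deg=15.0):
    return abs(fit_angle_deg(actual_y) - fit_angle_deg(standard_y)) <= tol_deg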
Further, comparing the actual inspection curve with the standard inspection curve to determine inspection quality, comprising:
acquiring an inspection center line based on the actual inspection curve and a standard inspection curve;
the horizontal-axis coordinates of the inspection center line correspond one to one to those of the actual inspection curve and the standard inspection curve, and any vertical-axis coordinate of the inspection center line is:
at the corresponding horizontal-axis coordinate, half of the difference between the larger and the smaller of the vertical-axis coordinates of the actual inspection curve and the standard inspection curve, added to the smaller vertical-axis coordinate;
calculating an actual inspection score based on areas of enclosing and semi-enclosing graphs formed by the intersection of the inspection centerline and the standard inspection curve;
the actual inspection score is inversely proportional to the value of the area.
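A sketch of this centerline-and-area scoring under stated assumptions: the trapezoid rule stands in for the definite integral, and full_area is the assumed normalizing plot area:

import numpy as np

def centerline_score(actual_y, standard_y, full_area):
    y1 = np.asarray(actual_y, dtype=float)
    y2 = np.asarray(standard_y, dtype=float)
    center = np.minimum(y1, y2) + np.abs(y1 - y2) / 2  # midpoint of the two curves
    s = np.trapz(np.abs(center - y2), dx=1.0)          # area against the standard curve
    return (1.0 - s / full_area) * 100.0               # larger area -> lower score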
After the inspection quality or score of each part has been calculated, in an embodiment of the present embodiment, the method further includes:
determining the overall inspection quality based on the inspection quality of each part;
in this embodiment, the overall inspection quality may be obtained by simply aggregating the inspection quality or score of each part, or may be calculated according to preset weights assigned to the respective parts.
Secondly, determining the overall inspection quality directly according to the inspection conditions of all parts
In an embodiment of the present invention, after the examination of each part is completed, the method further includes:
determining the actual examination time period of each part;
determining the image acquisition quantity of each moment in the actual inspection time period of each part;
drawing a first total actual examination curve based on the image acquisition quantity of each moment in the actual examination time period of each part according to the time sequence;
in this embodiment, when the inspection curve is drawn, time is taken as a horizontal axis, and the number of image acquisitions at each time is taken as a vertical axis;
the sequence of the preset inspection is consistent with that of the standard inspection;
comparing the first overall actual inspection curve with a first overall standard inspection curve of an overall standard inspection to determine inspection quality.
In yet another embodiment of this embodiment, after the examination of each site is completed, the method further comprises:
determining the actual examination duration of each part;
drawing a second overall actual examination curve based on the actual examination duration of each part;
in the present embodiment, the part classifications are taken as the horizontal axis, and the actual examination duration of each part as the vertical axis;
comparing the second overall actual inspection curve with a second overall standard inspection curve of an overall standard inspection to determine inspection quality.
In an embodiment of the present invention, after the examination of each part is completed, the method further includes:
determining actual examination start time of each part;
drawing a third overall actual examination curve based on the actual examination starting time of each part;
in the present embodiment, the part classifications are taken as the horizontal axis, and the examination start time of each part as the vertical axis;
comparing the third overall actual inspection curve with a third overall standard inspection curve of an overall standard inspection to determine inspection quality.
Consistent with the foregoing method of determining inspection quality from two curves, comparing an actual inspection curve with a standard inspection curve to determine inspection quality includes:
respectively acquiring the vertical axis arrays of the actual inspection curve and the standard inspection curve;
respectively performing linear fitting on the two longitudinal axis arrays to obtain an actual inspection fitting function and a standard inspection fitting function;
calculating the slopes of the actual inspection fitting function and the standard inspection fitting function respectively;
calculating respective radians based on slopes of the actual inspection fitting function and the standard inspection fitting function respectively;
calculating the angles of the two based on the respective radians;
if the angle between the two is within the preset range, the actual inspection is judged to be standard;
otherwise, it is not standard.
Similarly, comparing the actual inspection curve to the standard inspection curve to determine inspection quality, includes:
acquiring an inspection center line based on the actual inspection curve and a standard inspection curve;
the horizontal-axis coordinates of the inspection center line correspond one to one to those of the actual inspection curve and the standard inspection curve, and any vertical-axis coordinate of the inspection center line is:
at the corresponding horizontal-axis coordinate, half of the difference between the larger and the smaller of the vertical-axis coordinates of the actual inspection curve and the standard inspection curve, added to the smaller vertical-axis coordinate;
calculating an actual inspection score based on areas of enclosing and semi-enclosing graphs formed by the intersection of the inspection centerline and the standard inspection curve;
the actual inspection score is inversely proportional to the value of the area.
Considering that an endoscopic examination is performed inside the human body (for example, in the stomach), images of other parts may be captured unintentionally while the upper, middle and lower portions of the gastric body are traversed to observe a target part. In addition, if judgment relies on the artificial intelligence model alone (video at more than 25 frames per second is fed into the deep neural network model, model prediction probabilities are obtained, and the part classification with the maximum probability is selected), misjudgments are bound to occur given the large amount of video data. Without filtering based on expert experience, the direct output of the AI inevitably yields false determinations of undetected parts. Therefore, the determination needs to take into account the actually observed retention time and the overall inspection sequence; in an embodiment of the present embodiment, the processing result is filtered according to a preset filtering rule to remove erroneous results.
Determining the actual examination time length or the actual examination time period of each part, comprising:
determining a first examination duration or a first examination time period of each part based on the time sequence and the identification result of the images acquired by the corresponding time sequence;
filtering the first check duration or the first check time period based on a preset filtering rule;
and determining the filtered first examination time length or the first examination time period as the actual examination time length or the actual examination time period of the corresponding part.
Specifically, the filtering the first check duration or the first check period based on a preset filtering rule includes:
aligning the first examination duration or the starting point of the first examination time interval of a certain part with a preset standard examination;
judging whether the first inspection duration or the first inspection time period of a certain part exceeds a preset threshold value or not;
if it exceeds the threshold, an exceeding portion of a preset proportion or quantity is additionally retained on the basis of the aligned portion, and the remainder is discarded;
if not, the data is not discarded.
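One way this filtering rule might be realized, as a sketch with an assumed 10% retained overshoot; start points are assumed already aligned with the standard examination:

def filter_first_duration(first_s, threshold_s, keep_ratio=0.1):
    # Durations within the threshold are kept as-is.
    if first_s <= threshold_s:
        return first_s
    # Beyond the threshold, retain only a preset proportion of the excess.
    return threshold_s + (first_s - threshold_s) * keep_ratio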
On the basis of the above example, how to draw the third overall actual examination curve based on the actual examination start time of each part, and how to determine the examination quality and score, is explained in detail below;
4) Intercepting and normalizing the inspection quality control time period, comprising: traversing the mapping table QcDict, acquiring the time points of all values, and sorting them to obtain a time point array Tarr; then creating a mapping table TarrDict composed of Key-Value pairs in a reverse indexing (Reverse Indexing) manner, taking the element content of the array Tarr as the Key and the index of that element in Tarr as the Value; the complete TarrDict is obtained after the traversal.
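A sketch of this reverse-indexing step; the concrete layout of QcDict (counts keyed by part classification, then by time point) is an assumption:

qc_dict = {"esophagus": {"09:00:01": 3, "09:07:40": 2}, "cardia": {"09:00:05": 2}}
tarr = sorted({t for times in qc_dict.values() for t in times})  # time point array
tarr_dict = {t: i for i, t in enumerate(tarr)}  # reverse index: content -> position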
5) Creating a 3D data table composed of timing, classification and number of shots per second, comprising: creating a mapping table RawDict composed of Key-Value pairs, storing the part classification character strings as Keys and taking the frequency array over the time points of each classified part as the Value. QcDict is traversed: in each round of the loop a temporary array temp with the length of the Tarr array is created (all elements defaulting to 0); for the time point i of the current round, its index position is Index = TarrDict[i], and temp[Index] is assigned the count recorded in QcDict at that time point; temp is then stored as the Value of RawDict under the corresponding part classification. After the traversal the complete RawDict is obtained: a multi-classification time-series frequency 3D data table in which the horizontal axis is time, the vertical axis is classification, and the value is frequency.
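Continuing the sketch, the classification-by-time count table described in step 5) might be built as follows, reusing qc_dict, tarr and tarr_dict from the previous sketch:

import numpy as np

raw_dict = {}
for part, times in qc_dict.items():
    temp = np.zeros(len(tarr), dtype=int)  # one row per part classification
    for t, count in times.items():
        temp[tarr_dict[t]] = count         # place the count at its time index
    raw_dict[part] = temp                  # horizontal axis time, value frequency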
6) Linear fitting of the number of images acquired over all time periods for each part classification (this filters the results and removes noise): creating a two-dimensional array yvals; traversing RawDict, acquiring the time-series array corresponding to each part Key and setting it as Y; creating an array X whose elements increase from 0 to the length of the Tarr array minus 1; taking X and Y as input, performing a 15th-order polynomial fit, and appending the generated fitted Y array as a row to the end of yvals. The complete yvals is obtained after the traversal; the effect is shown in fig. 5 and fig. 6.
7) Converting the curve into a square wave (this further filters the signal, removes noise and reduces the probability of misjudgment): creating a two-dimensional array yval_matrix; traversing the row data of the two-dimensional array yvals to obtain the fitted time-series array temp of each classification; obtaining the maximum value y_max of the array; keeping the values of the array greater than y_max/3 and setting the values less than or equal to y_max/3 to 0; appending the filtered temp to the end of the yval_matrix array. The complete yval_matrix is obtained after the traversal.
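Steps 6) and 7) together might be sketched as follows: a 15th-order polynomial fit per classification row, then thresholding at one third of each row's maximum. With short sequences such a high-order fit can be ill-conditioned and NumPy may emit a RankWarning; the order is taken from the text.

import numpy as np

x = np.arange(len(tarr))
yvals = np.array([np.polyval(np.polyfit(x, row, 15), x) for row in raw_dict.values()])
y_max = yvals.max(axis=1, keepdims=True)             # per-row maximum
yval_matrix = np.where(yvals > y_max / 3, yvals, 0)  # square-wave-like signal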
8) Filtering the results (for the filtering effect, see fig. 8a and fig. 8b):
a) In upper gastrointestinal examinations, to examine the stomach the endoscope must pass through the esophagus and the cardia (the junction of the esophagus and the stomach). The observation times at which the esophagus and the esophagogastric junction appear therefore lie at the beginning and the end of the entire time-series array. Accordingly, if the examination exceeds a certain time and the esophagus and esophagogastric junction appear at the beginning and the end, the middle portion is judged to be the stomach examination after the examination is finished. The timing indexes t_min and t_max of the leftmost and rightmost non-zero values of the esophageal time series are acquired; each row classification of yval_matrix is traversed, the row array row_temp is acquired, the values of row_temp at indexes less than t_min or greater than t_max are set to 0, and the complete yval_matrix is obtained after the traversal;
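A sketch of filter a), assuming the esophageal row is the first row of yval_matrix:

import numpy as np

nonzero = np.flatnonzero(yval_matrix[0])     # esophageal detections
t_min, t_max = nonzero[0], nonzero[-1]       # leftmost / rightmost non-zero timing
yval_matrix[:, :t_min] = 0                   # drop detections before the esophagus
yval_matrix[:, t_max + 1:] = 0               # and after the final esophageal view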
b) Confirming the relative position of each observation time in the whole examination. For example: after entering the esophagus, the endoscope first enters the duodenal bulb and the descending part for observation, and then the examination of the relevant area of the gastric antrum is carried out; these parts generally lie in the front section of the whole stomach observation period. Pictures outside their expected time periods are filtered out by a custom method, as follows:
import numpy as np

def PositionFilter(yval_matrix):
    # Filter each classification row by its expected time window (expert experience).
    y_human_matrix = []  # result rows, one per classification
    # Expert experience: a time window per classification; both elements of each
    # tuple are fractions of the total sequence length in the range 0-1.
    human_exp = [(0.0, 0.1), (0.1, 0.2), (0.2, 0.3), (0.3, 0.4), (0.4, 0.5),
                 (0.5, 0.6), (0.6, 0.7), (0.7, 0.8), (0.8, 0.9), (0.9, 1.0)]
    for i in range(len(yval_matrix)):                  # traverse yval_matrix
        temp = np.array(yval_matrix[i], dtype=float)   # array for each category
        temp_len = len(temp)
        index = int(np.argmax(temp))                   # index of the maximum value
        # Boolean mask: True where a value exceeds one quarter of the maximum,
        # e.g. [True, False, True, True].
        index_quarter = temp > temp.max() / 4
        lo = int(human_exp[i][0] * temp_len)
        hi = int(human_exp[i][1] * temp_len)
        if lo <= index < hi:
            # The maximum lies in the empirical range: zero all values outside it.
            temp[:lo] = 0
            temp[hi:] = 0
        else:
            # Otherwise keep only elements above a quarter of the maximum.
            temp[~index_quarter] = 0
        y_human_matrix.append(temp)   # add the filtered row at the tail
    return y_human_matrix             # return final result;
c) the empirically filtered result y _ human _ matrix = PositionFilter (yval _ matrix) is obtained.
Based on the above steps, the preliminary result obtained by machine processing is filtered and the erroneous parts are screened out.
9) Recording and fitting by part classification and examination time: creating an array Y for recording the time at which each part is first found in the whole inspection; traversing y_human_matrix and appending the first non-zero time-series index of each row to the end of Y; the complete array Y is obtained after the traversal. Creating an array X for the inspection classifications, with values increasing from 0 to the length of y_human_matrix minus 1. A 3rd-order fit of X and Y yields the array y_fit used for comparison and judgment; because some parts may not be found, their value is 0, the trend cannot be obtained from the raw values directly, and comparison is difficult, so the result is processed by fitting; the effect is shown in fig. 9.
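Step 9) might be sketched as follows; rows where a part is never found contribute 0, which is why the 3rd-order fit is applied before comparison:

import numpy as np

first_seen = []
for row in y_human_matrix:                      # traverse y_human_matrix
    nz = np.flatnonzero(row)
    first_seen.append(nz[0] if nz.size else 0)  # first non-zero timing index
Y = np.array(first_seen)
X = np.arange(len(Y))                           # inspection classification index
y_fit = np.polyval(np.polyfit(X, Y, 3), X)      # fitted first-detection curve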
10) Comparing the fitted curves of the standard and actual examinations: the actual operation process curve and the standard process curve are acquired through steps 1) to 9) respectively; the effect of comparing multiple inspections with the standard inspection is shown in fig. 10a and fig. 10b. The center line of the two curves is calculated by the following algorithm:
import numpy as np

def CalMeanCurve(y1, y2):
    # y1 and y2 are the y-axis value sequences of the fitting curves of the
    # standard examination and the actual examination, over the n part
    # classifications; the return value is their center line.
    y1 = np.asarray(y1, dtype=float)
    y2 = np.asarray(y2, dtype=float)
    y = np.zeros_like(y1)   # n = number of part classifications; all-zero array
    # Part indexes where y1 lies on or above y2: start from y2, add half the gap.
    index1 = y2 <= y1
    y[index1] = y2[index1] + np.abs(y1[index1] - y2[index1]) / 2
    # Part indexes where y2 lies above y1: start from y1, add half the gap.
    index2 = y2 > y1
    y[index2] = y1[index2] + np.abs(y1[index2] - y2[index2]) / 2
    return y
The two curves y1 and y2 to be compared are passed to CalMeanCurve(y1, y2) to obtain the center line y; the effect of comparing the center line with the standard inspection over multiple inspections is shown in fig. 11a and fig. 11b.
11) Judging whether the test is a standard upper digestive tract test:
a) The vertical-axis arrays y1 and y2 of the actual inspection curve and the standard inspection curve are acquired respectively, and a 3rd-order polynomial fit is performed on each by the polynomial fitting method to obtain two fitting functions:
func_y1(x) = w11·x^3 + w12·x^2 + w13·x + w14
func_y2(x) = w21·x^3 + w22·x^2 + w23·x + w24
b) Taking the 2nd derivative of func_y1 and func_y2 respectively yields:
y1″(x) = 6·w11·x + 2·w12
y2″(x) = 6·w21·x + 2·w22
c) Since the polynomial coefficients w11, w12, w21 and w22 of the fitting functions func_y1 and func_y2 are known, y1″ and y2″ are linear; by the linear slope formula their slopes are k1 = 6·w11 and k2 = 6·w21, and the effect graphs are shown in fig. 13a and fig. 13b.
d) The corresponding radian r is obtained as the arctangent of the slope k: r = arctan(k).
e) The angle is calculated from the radian by the formula: angle = r × 180 / π.
f) Since negative values correspond to the second and fourth quadrants, this embodiment converts them to angles of the first and second quadrants: if the angle is less than 0, 180 is added to the angle result.
g) The range of the angles of the actual curve and the standard curve is determined; if the difference is within a certain range, the examination is judged to meet the standard upper digestive tract examination, and the examination pictures are re-screened.
12) Automatically calculating the score of the current upper gastrointestinal tract examination: the area S is obtained by a definite integration operation on the fitting function func_y1(x) of the actual examination center line and the fitting function func_y2(x) of the standard examination curve, with n being the number of classified parts:
S = | ∫[0,n] func_y1(x) dx − ∫[0,n] func_y2(x) dx |
According to the integral formula
∫ x^a dx = x^(a+1) / (a+1) + C (if a ≠ −1),
and since this is a definite integral, C = 0, it follows that:
∫[0,n] func_y1(x) dx = (w11/4)·n^4 + (w12/3)·n^3 + (w13/2)·n^2 + w14·n
∫[0,n] func_y2(x) dx = (w21/4)·n^4 + (w22/3)·n^3 + (w23/2)·n^2 + w24·n
As n and the fitted coefficients w11, w12, w21, w22 (and the remaining coefficients) are known, S can be calculated directly; the center-line effect is shown in fig. 12a and fig. 12b.
The larger the area S, the larger the difference from the standard examination and the lower the Score; conversely, the smaller the area, the higher the Score. A score formula consistent with this relationship, in which S is normalized by the total plot area, may be:
Score = (1 − S / (W × L)) × 100
wherein W is the number of classifications and L is the total number of time-series points.
It is understood that, in the present embodiment, the standard examination curve corresponding to a certain actual examination curve is drawn in the same manner as that actual examination curve, that is, the actual examination curve and the standard examination curve to be compared are drawn in the same manner.
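A sketch of this scoring computation, using NumPy's polynomial helpers for the definite integral; the normalization by W × L follows the formula above, and the function name is an assumption:

import numpy as np

def examination_score(center_y, standard_y, n_parts, n_times):
    x = np.arange(len(center_y))
    p1 = np.polyfit(x, center_y, 3)              # func_y1: center-line fit
    p2 = np.polyfit(x, standard_y, 3)            # func_y2: standard fit
    antideriv = np.polyint(np.polysub(p1, p2))   # antiderivative of the difference, C = 0
    s = abs(np.polyval(antideriv, n_parts) - np.polyval(antideriv, 0))
    return (1 - s / (n_parts * n_times)) * 100   # larger area S -> lower Score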
Further, in an embodiment of the present embodiment, the method further includes:
counting the number of background classifications covered for the different parts in each endoscopic examination, and recording the quantity ratio of detected areas to undetected areas; recording text information for the undetected background areas; recording the examination start time, accumulated examination duration and detection end time of each part; recording and analyzing whether the examination time of each part meets the minimum quality control requirement, and giving a reference score according to the quality control standard rules; and counting the weekly, monthly and yearly average scores, the highest and lowest scores, and the standard deviation and covariance matrices between the scores.
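The periodic statistics at the end of that list might be computed as in this sketch, where scores is an assumed matrix of reference scores (rows are examinations, columns are score series):

import numpy as np

scores = np.array([[86.0, 90.5], [78.2, 81.0], [92.3, 95.1]])
mean, high, low = scores.mean(), scores.max(), scores.min()
std = scores.std(axis=0)            # standard deviation of each score series
cov = np.cov(scores, rowvar=False)  # covariance matrix between the score series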
And saving the video for recordation, or playing in real time.
Exemplary devices
Having described the method of the exemplary embodiment of the present invention, next, an apparatus for assisting endoscopy of the exemplary embodiment of the present invention will be described with reference to fig. 14, the apparatus including:
an image acquisition module 210 configured to acquire an image acquired while operating the endoscopic device for examination;
the recognition module 220 is configured to recognize the image through at least one model constructed based on a deep neural network, and obtain a recognition result at least capable of indicating that the image corresponds to a certain part;
a progress determination module 230 configured to determine at least a progress of the examination based on the recognition result;
a quality monitoring module 240 configured to determine at least inspection time information and/or image acquisition quantity of a certain part or each part based on the identification result after determining that the inspection of the certain part or each part is completed based on the inspection progress; drawing an actual examination curve based on examination time information and/or image acquisition quantity of a certain part or each part;
comparing the actual inspection curve with a corresponding standard inspection curve to determine inspection quality.
In an embodiment of this embodiment, the at least one deep neural network-based constructed model includes:
one or more background judgment models for identifying the corresponding part of the current image; or
One or more background judgment models and one or more target judgment models, wherein the target judgment models are used for identifying whether the current image comprises a specific target or not.
In an embodiment of the present invention, the background judgment model is obtained by training based on an image recognition model and a first training data set, where the first training data set includes a plurality of images of a part to be recognized;
the target judgment model is obtained by training based on a target detection model and a second training data set, wherein the second training data set comprises a plurality of images of targets needing to be detected.
In an embodiment of the present invention, the background judgment model uses ShuffleNetV2 as a backbone network, and a random deactivation (dropout) layer is added before the last linear transformation layer; and/or
The target judgment model uses a target detection model YoloV3 and takes Darknet-53 as a backbone network.
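A hedged sketch of the background judgment model in PyTorch: a ShuffleNetV2 backbone with a dropout (random deactivation) layer inserted before the final linear layer. The dropout rate and the number of part classes are illustrative assumptions:

import torch.nn as nn
from torchvision.models import shufflenet_v2_x1_0

num_parts = 10                                     # assumed number of part classes
model = shufflenet_v2_x1_0(weights=None)           # ShuffleNetV2 backbone
model.fc = nn.Sequential(
    nn.Dropout(p=0.5),                             # random deactivation layer
    nn.Linear(model.fc.in_features, num_parts),    # last linear transformation layer
)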
In an embodiment of the present invention, after at least determining the progress of the examination based on the recognition result, the method further includes:
performing a prompt for an auxiliary operation, the prompt comprising:
the part under examination, the examination starting time and the examination duration are displayed according to preset rules;
the progress of the examination is displayed in three dimensions.
In one embodiment of the present embodiment, the inspection progress displayed in three-dimensional form can indicate at least one of a portion where inspection has been completed, a portion being inspected, a portion to be inspected, and a portion where inspection is missed;
wherein the site being examined is displayed at an angle convenient for viewing.
In one embodiment of the present embodiment, the portions of different inspection progresses are presented with different display effects; and
different parts of the same inspection progress are presented with different display effects; or
Different parts of the same inspection progress are presented with the same display effect.
In an embodiment of this embodiment, determining at least an inspection progress based on the recognition result includes:
determining whether the part corresponding to the image is the same as the part corresponding to the previous image or not according to the recognition result;
if the current position is the same as the position, the inspection progress is not changed, and the inspection duration of the current position is accumulated;
if not, updating the inspection progress, and determining the inspection ending time of the previous part and the inspection starting time of the current part.
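The progress-update rule just described might be sketched as follows; the state dictionary and its keys are illustrative assumptions:

def update_progress(state, part, ts):
    if state.get("part") == part:
        state["duration"] = ts - state["start"]      # same part: accumulate duration
    else:
        if state.get("part") is not None:            # close out the previous part
            state.setdefault("log", []).append((state["part"], state["start"], ts))
        state.update(part=part, start=ts, duration=0)  # start timing the current part
    return state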
In one embodiment of the present invention, after the examination of a certain region is completed, the drawing of an actual examination curve based on examination time information and the number of image acquisitions of the certain region includes:
determining the image acquisition quantity of each moment in the actual inspection period of a certain part based on the inspection time information and the image acquisition quantity of the part;
and drawing an actual examination curve based on the image acquisition quantity of each moment in the actual examination time period of the part according to the time sequence.
In an embodiment of this embodiment, after determining the inspection quality of each site, the method further comprises:
based on the inspection quality of each part, the overall inspection quality is determined.
In one embodiment of the present invention, the drawing of the actual examination curve based on the examination time information and the number of image acquisitions of each part includes:
determining the actual examination time period of each part;
determining the image acquisition quantity of each moment in the actual inspection time period of each part;
and drawing a first overall actual examination curve based on the image acquisition quantity of each moment in the actual examination time period of each part according to the time sequence.
In one embodiment of the present invention, the drawing of an actual examination curve based on examination time information of each part includes:
determining the actual examination duration of each part;
and drawing a second overall actual examination curve based on the actual examination time length of each part.
In one embodiment of the present invention, the drawing of an actual examination curve based on examination time information of each part includes:
determining actual examination start time of each part;
and drawing a third overall actual examination curve based on the actual examination starting time of each part.
In one embodiment of the present embodiment, comparing the actual inspection curve with the corresponding standard inspection curve to determine inspection quality comprises:
respectively acquiring the vertical axis arrays of the actual inspection curve and the standard inspection curve;
respectively performing linear fitting on the two longitudinal axis arrays to obtain an actual inspection fitting function and a standard inspection fitting function;
calculating the slopes of the actual inspection fitting function and the standard inspection fitting function respectively;
calculating respective radians based on slopes of the actual inspection fitting function and the standard inspection fitting function respectively;
calculating the angles of the two based on the respective radians;
if the angle between the two is within the preset range, the actual inspection is judged to be standard;
otherwise, it is not standard.
In one embodiment of this embodiment, comparing the actual inspection curve with a corresponding standard inspection curve to determine inspection quality comprises:
acquiring an inspection center line based on the actual inspection curve and a standard inspection curve;
the horizontal-axis coordinates of the inspection center line correspond one to one to those of the actual inspection curve and the standard inspection curve, and any vertical-axis coordinate of the inspection center line is:
at the corresponding horizontal-axis coordinate, half of the difference between the larger and the smaller of the vertical-axis coordinates of the actual inspection curve and the standard inspection curve, added to the smaller vertical-axis coordinate;
calculating an actual inspection score based on areas of enclosing and semi-enclosing graphs formed by the intersection of the inspection centerline and the standard inspection curve;
the actual inspection score is inversely proportional to the value of the area.
In one embodiment of the present embodiment, determining the actual examination time length or the actual examination time period of each part includes:
determining a first examination duration or a first examination time period of each part based on the time sequence and the identification result of the images acquired by the corresponding time sequence;
filtering the first check duration or the first check time period based on a preset filtering rule;
and determining the filtered first examination time length or the first examination time period as the actual examination time length or the actual examination time period of the corresponding part.
In an embodiment of the present invention, the filtering the first check duration or the first check period based on a preset filtering rule includes:
aligning the first examination duration or the starting point of the first examination time interval of a certain part with a preset standard examination;
judging whether the first inspection duration or the first inspection time period of a certain part exceeds a preset threshold value or not;
if it exceeds the threshold, an exceeding portion of a preset proportion or quantity is additionally retained on the basis of the aligned portion, and the remainder is discarded;
if not, the data is not discarded.
In one example of this embodiment, the location of the missed detection is determined by at least:
and when the proportion of the detected background in the whole background of a certain part reaches a preset threshold value based on the image recognition result, determining that the part is missed for detection.
Exemplary System
Having described the method and apparatus of the exemplary embodiment of the present invention, a system for assisting endoscopy of the exemplary embodiment of the present invention will be described with reference to fig. 15, and with reference to fig. 15, the system includes:
the client device 310 comprises a display module, a communication module and a processing module, wherein the display module is used for displaying prompts and videos of auxiliary operations, and the prompts at least comprise the display of the inspection progress in a three-dimensional form;
the communication module is used for receiving prompts sent by the quality control center and/or the monitoring center, receiving videos collected by the endoscope equipment, decompressing the videos through the processing module, converting formats of the videos and sending the videos to the quality control center;
the quality control center 320 is used for identifying the video through at least one model constructed based on the deep neural network to obtain an identification result which at least can indicate that a current frame image of the video corresponds to a certain part; and
determining at least an inspection progress based on the recognition result; and
determining at least inspection time information and/or image acquisition quantity of a certain part or each part based on the identification result after the inspection of the certain part or each part is determined to be finished based on the inspection progress; and
drawing an actual examination curve based on examination time information and/or image acquisition quantity of a certain part or each part;
and comparing the actual inspection curve with a corresponding standard inspection curve to determine inspection quality.
And the supervision center 330 is used for analyzing and counting the quality information of each inspection so as to perform uniform management on the quality information.
Exemplary Medium
Having described the method and apparatus of the exemplary embodiments of the present invention, a computer-readable storage medium of the exemplary embodiments of the present invention is described with reference to fig. 16, which illustrates an optical disc 40 having a computer program (i.e., a program product) stored thereon. When executed by a processor, the program performs the steps described in the above method embodiments, for example: acquiring images acquired while operating the endoscopic device for examination; identifying the images through at least one model constructed based on a deep neural network to obtain an identification result at least capable of indicating the part corresponding to an image; determining at least an examination progress based on the identification result; after determining, based on the examination progress, that the examination of a certain part or of each part is finished, determining at least the examination time information and/or the number of image acquisitions of that part or of each part based on the identification result; drawing an actual examination curve based on the examination time information and/or the number of image acquisitions of a certain part or of each part; and comparing the actual examination curve with a corresponding standard examination curve to determine the examination quality. The specific implementation of each step is not repeated here.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory, or other optical and magnetic storage media, which are not described in detail herein.
Exemplary computing device
Having described the methods, apparatus and media of exemplary embodiments of the present invention, a computing device for assisting endoscopy of exemplary embodiments of the present invention is next described with reference to fig. 17.
FIG. 17 illustrates a block diagram of an exemplary computing device 50 suitable for use in implementing embodiments of the present invention, the computing device 50 may be a computer system or server. The computing device 50 shown in FIG. 17 is only one example and should not impose any limitations on the functionality or scope of use of embodiments of the present invention.
As shown in fig. 17, components of computing device 50 may include, but are not limited to: one or more processors or processing units 501, a system memory 502, and a bus 503 that couples the various system components (including the system memory 502 and the processing unit 501).
Computing device 50 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computing device 50 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 502 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 5021 and/or cache memory 5022. Computing device 50 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, the ROM5023 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 17, which is commonly referred to as a "hard drive"). Although not shown in FIG. 17, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to the bus 503 by one or more data media interfaces. At least one program product may be included in system memory 502 having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the invention.
A program/utility 5025 having a set (at least one) of program modules 5024 may be stored in, for example, system memory 502, and such program modules 5024 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment. The program modules 5024 generally perform the functions and/or methodologies of the described embodiments of the invention.
Computing device 50 may also communicate with one or more external devices 504 (e.g., keyboard, pointing device, display, etc.). Such communication may be through input/output (I/O) interfaces 505. Moreover, computing device 50 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via network adapter 506. As shown in FIG. 17, network adapter 506 communicates with other modules of computing device 50 (e.g., processing unit 501, etc.) via bus 503. It should be appreciated that although not shown in FIG. 17, other hardware and/or software modules may be used in conjunction with computing device 50.
The processing unit 501 executes various functional applications and data processing, for example, acquisition of images acquired while operating the endoscopic apparatus for examination, by executing a program stored in the system memory 502; identifying the image through at least one model constructed based on a deep neural network to obtain an identification result at least capable of indicating a certain part corresponding to the image; determining at least an inspection progress based on the recognition result; determining at least inspection time information and/or image acquisition quantity of a certain part or each part based on the identification result after the inspection of the certain part or each part is determined to be finished based on the inspection progress; drawing an actual examination curve based on examination time information and/or image acquisition quantity of a certain part or each part; the actual inspection curve is compared with a corresponding standard inspection curve to determine inspection quality. The specific implementation of each step is not repeated here. It should be noted that although in the above detailed description several units/modules or sub-units/sub-modules of the device for assisting endoscopy are mentioned, such a division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more of the units/modules described above may be embodied in one unit/module according to embodiments of the invention. Conversely, the features and functions of one unit/module described above may be further divided into embodiments by a plurality of units/modules.
In the description of the present invention, it should be noted that the terms "first", "second", and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.

Claims (22)

1. A method of assisting endoscopy comprising:
acquiring an image acquired during the inspection of operating the endoscope equipment;
identifying the image through at least one model constructed based on a deep neural network to obtain an identification result at least capable of indicating a certain part corresponding to the image;
determining at least an inspection progress based on the recognition result;
determining at least inspection time information and/or image acquisition quantity of a certain part or each part based on the identification result after the inspection of the certain part or each part is determined to be finished based on the inspection progress;
drawing an actual examination curve based on examination time information and/or image acquisition quantity of a certain part or each part;
the actual inspection curve is compared with a corresponding standard inspection curve to determine inspection quality.
2. The method for assisting endoscopy of claim 1, wherein the at least one model constructed based on a deep neural network comprises:
one or more background judgment models for identifying the corresponding part of the current image; or
One or more background judgment models and one or more target judgment models, wherein the target judgment models are used for identifying whether the current image comprises a specific target or not.
3. The method for assisting endoscopy of claim 2, wherein the background judgment model is trained based on an image recognition model and a first training data set, the first training data set comprising a plurality of images of a region to be recognized;
the target judgment model is obtained by training based on a target detection model and a second training data set, wherein the second training data set comprises a plurality of images of targets needing to be detected.
4. The method for assisting endoscopy of claim 3, wherein the background judgment model uses ShuffleNetV2 as a backbone network, and a random deactivation layer is added before the last linear transformation layer; and/or
The target judgment model uses a target detection model YoloV3 and takes Darknet-53 as a backbone network.
5. The method for assisting an endoscopic examination according to any one of claims 1 to 4, wherein after at least determining an examination progress based on the recognition result, the method further comprises:
performing a prompt for an auxiliary operation, the prompt comprising:
the part under examination, the examination starting time and the examination duration are displayed according to preset rules;
the progress of the examination is displayed in three dimensions.
6. The method for assisting endoscopic examination according to claim 5, wherein the progress of examination displayed in three-dimensional form is indicative of at least one of a portion where examination has been completed, a portion under examination, a portion to be examined, and a portion missing examination;
wherein the site being examined is displayed at an angle convenient for viewing.
7. The method for assisting endoscopic examinations according to claim 6, wherein the portions of different progress of the examination are presented in different display effects; and
different parts of the same inspection progress are presented with different display effects; or
Different parts of the same inspection progress are presented with the same display effect.
8. The method for assisting endoscopic examinations according to claim 1, wherein at least determining progress of the examination based on said recognition results comprises:
determining whether the part corresponding to the image is the same as the part corresponding to the previous image or not according to the recognition result;
if the current position is the same as the position, the inspection progress is not changed, and the inspection duration of the current position is accumulated;
if not, updating the inspection progress, and determining the inspection ending time of the previous part and the inspection starting time of the current part.
9. The method for assisting endoscopy of claim 1, wherein the plotting an actual examination curve based on the examination time information and the number of image acquisitions at a certain site after completion of the examination at the certain site comprises:
determining the image acquisition quantity of each moment in the actual inspection period of a certain part based on the inspection time information and the image acquisition quantity of the part;
and drawing an actual examination curve based on the image acquisition quantity of each moment in the actual examination time period of the part according to the time sequence.
10. The method for assisted endoscopy of claim 9, wherein after determining the quality of the examination at each site, the method further comprises:
based on the inspection quality of each part, the overall inspection quality is determined.
11. The method for assisting endoscopy of claim 1, wherein the plotting of the actual examination curve based on the examination time information and the number of image acquisitions at each site comprises:
determining the actual examination time period of each part;
determining the image acquisition quantity of each moment in the actual inspection time period of each part;
and drawing a first overall actual examination curve based on the image acquisition quantity of each moment in the actual examination time period of each part according to the time sequence.
12. The method for assisting endoscopy of claim 1, wherein the plotting of the actual examination curve based on the examination time information at each site comprises:
determining the actual examination duration of each part;
and drawing a second overall actual examination curve based on the actual examination time length of each part.
13. The method for assisting endoscopy of claim 1, wherein the plotting of the actual examination curve based on the examination time information at each site comprises:
determining actual examination start time of each part;
and drawing a third overall actual examination curve based on the actual examination starting time of each part.
14. The method of assisting endoscopy of any of claims 9-13, wherein comparing the actual examination profile to a corresponding standard examination profile to determine examination quality comprises:
respectively acquiring the vertical axis arrays of the actual inspection curve and the standard inspection curve;
respectively performing linear fitting on the two longitudinal axis arrays to obtain an actual inspection fitting function and a standard inspection fitting function;
calculating the slopes of the actual inspection fitting function and the standard inspection fitting function respectively;
calculating respective radians based on slopes of the actual inspection fitting function and the standard inspection fitting function respectively;
calculating the angles of the two based on the respective radians;
if the angle between the two is within the preset range, the actual inspection is judged to be standard;
otherwise, it is not standard.
15. The method of assisting endoscopy of any of claims 9-13, wherein comparing the actual examination curve to a corresponding standard examination curve to determine examination quality comprises:
acquiring an inspection center line based on the actual inspection curve and a standard inspection curve;
the horizontal-axis coordinates of the inspection center line correspond one to one to those of the actual inspection curve and the standard inspection curve, and any vertical-axis coordinate of the inspection center line is:
at the corresponding horizontal-axis coordinate, half of the difference between the larger and the smaller of the vertical-axis coordinates of the actual inspection curve and the standard inspection curve, added to the smaller vertical-axis coordinate;
calculating an actual inspection score based on areas of enclosing and semi-enclosing graphs formed by the intersection of the inspection centerline and the standard inspection curve;
the actual inspection score is inversely proportional to the value of the area.
16. The method of assisting endoscopy of any of claims 9-13, wherein determining an actual examination duration or an actual examination period for each site comprises:
determining a first examination duration or a first examination time period of each part based on the time sequence and the identification result of the images acquired by the corresponding time sequence;
filtering the first check duration or the first check time period based on a preset filtering rule;
and determining the filtered first examination time length or the first examination time period as the actual examination time length or the actual examination time period of the corresponding part.
17. The method for assisting endoscopic examinations according to claim 16, wherein filtering said first examination duration or first examination period based on preset filtering rules comprises:
aligning the first examination duration or the starting point of the first examination time interval of a certain part with a preset standard examination;
judging whether the first inspection duration or the first inspection time period of a certain part exceeds a preset threshold value or not;
if it exceeds the threshold, an exceeding portion of a preset proportion or quantity is additionally retained on the basis of the aligned portion, and the remainder is discarded;
if not, the data is not discarded.
18. The method for assisted endoscopy of claim 6, wherein the location of the missed examination is determined by at least:
and when the proportion of the detected background in the whole background of a certain part reaches a preset threshold value based on the image recognition result, determining that the part is missed for detection.
19. An apparatus for assisting endoscopy, comprising:
an image acquisition module configured to acquire an image acquired while operating an endoscopic device for examination;
the recognition module is configured to recognize the image through at least one model constructed based on a deep neural network, and a recognition result at least indicating that the image corresponds to a certain part is obtained;
a progress determination module configured to determine at least an inspection progress based on the recognition result;
the quality monitoring module is configured to determine at least inspection time information and/or image acquisition quantity of a certain part or each part based on the identification result after the inspection of the certain part or each part is determined to be completed based on the inspection progress; and
drawing an actual examination curve based on examination time information and/or image acquisition quantity of a certain part or each part;
the actual inspection curve is compared with a corresponding standard inspection curve to determine inspection quality.
20. A system for assisting endoscopy, comprising:
a client device comprising a display module, a communication module and a processing module, wherein the display module is configured to display prompts and videos assisting the operation, the prompts comprising at least a display of the examination progress in three-dimensional form,
and the communication module is configured to receive prompts sent by the quality control center and/or the supervision center, receive videos acquired by the endoscopic device, and send the videos to the quality control center after the processing module decompresses them and converts their format;
a quality control center configured to recognize the videos through at least one model constructed based on a deep neural network, obtaining a recognition result at least indicating the part to which the current frame image of a video corresponds;
determine at least an examination progress based on the recognition result;
determine, after the examination of a part or of each part is determined to be completed based on the examination progress, at least examination time information and/or an image acquisition quantity of the part or of each part based on the recognition result;
draw an actual examination curve based on the examination time information and/or image acquisition quantity of the part or of each part; and
compare the actual examination curve with the corresponding standard examination curve to determine the examination quality; and
a supervision center configured to analyze and aggregate the quality information of each examination for unified management (a sketch of the recognition step follows this claim).
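A hedged sketch of the quality control center's recognition step, using a ResNet-18 classifier as a stand-in for the patent's deep-neural-network model; the part list, preprocessing, and model choice are all assumptions, and the weights here are untrained placeholders:

```python
import torch
from torchvision import models, transforms

# Hypothetical anatomical parts for an upper-GI examination.
PARTS = ["esophagus", "cardia", "fundus", "body", "antrum", "duodenum"]

# Untrained placeholder standing in for the patent's recognition model.
model = models.resnet18(num_classes=len(PARTS))
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),          # HWC uint8 frame -> CHW float tensor
    transforms.Resize((224, 224)),  # assumed network input size
])

def recognize_frame(frame):
    """Return the predicted anatomical part for one decompressed RGB frame."""
    with torch.no_grad():
        batch = preprocess(frame).unsqueeze(0)  # shape (1, 3, 224, 224)
        logits = model(batch)
        return PARTS[int(logits.argmax(dim=1))]
```

Per-frame labels of this kind are what the progress determination and quality monitoring steps above would consume.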
21. A storage medium storing a computer program which, when executed by a processor, implements the method of any one of claims 1-18.
22. A computing device, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to perform the method of any one of claims 1-18.
CN202110603794.0A 2021-05-31 2021-05-31 Method, apparatus, system, storage medium and computing device for assisting endoscopy Active CN113052843B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110603794.0A CN113052843B (en) 2021-05-31 2021-05-31 Method, apparatus, system, storage medium and computing device for assisting endoscopy


Publications (2)

Publication Number Publication Date
CN113052843A (en) 2021-06-29
CN113052843B (en) 2021-09-28

Family

ID=76518629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110603794.0A Active CN113052843B (en) 2021-05-31 2021-05-31 Method, apparatus, system, storage medium and computing device for assisting endoscopy

Country Status (1)

Country Link
CN (1) CN113052843B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3539455A1 (en) * 2018-03-14 2019-09-18 Sorbonne Université Method for automatically determining image display quality in an endoscopic video capsule
CN108615037A (en) * 2018-05-31 2018-10-02 武汉大学人民医院(湖北省人民医院) Controllable capsule endoscopy operation real-time auxiliary system based on deep learning and operating method
CN109146884A (en) * 2018-11-16 2019-01-04 青岛美迪康数字工程有限公司 Endoscopy monitoring method and device
CN110097105A (en) * 2019-04-22 2019-08-06 上海珍灵医疗科技有限公司 A kind of digestive endoscopy based on artificial intelligence is checked on the quality automatic evaluation method and system
CN111000633A (en) * 2019-12-20 2020-04-14 山东大学齐鲁医院 Method and system for monitoring endoscope diagnosis and treatment operation process
CN110974122A (en) * 2019-12-23 2020-04-10 山东大学齐鲁医院 Monitoring method and system for judging endoscope entering human digestive tract
CN112785549A (en) * 2020-12-29 2021-05-11 成都微识医疗设备有限公司 Enteroscopy quality evaluation method and device based on image recognition and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116844697A (en) * 2023-02-24 2023-10-03 萱闱(北京)生物科技有限公司 Image multidimensional visualization method, device, medium and computing equipment
CN116844697B (en) * 2023-02-24 2024-01-09 萱闱(北京)生物科技有限公司 Image multidimensional visualization method, device, medium and computing equipment
CN116681788A (en) * 2023-06-02 2023-09-01 萱闱(北京)生物科技有限公司 Image electronic dyeing method, device, medium and computing equipment
CN116681788B (en) * 2023-06-02 2024-04-02 萱闱(北京)生物科技有限公司 Image electronic dyeing method, device, medium and computing equipment

Also Published As

Publication number Publication date
CN113052843B (en) 2021-09-28

Similar Documents

Publication Publication Date Title
CN110599448B (en) Migratory learning lung lesion tissue detection system based on MaskScoring R-CNN network
JP6927211B2 (en) Image diagnostic learning device, diagnostic imaging device, method and program
CN113052843B (en) Method, apparatus, system, storage medium and computing device for assisting endoscopy
Abadir et al. Artificial intelligence in gastrointestinal endoscopy
CN110796613B (en) Automatic identification method and device for image artifacts
Mira et al. Early Diagnosis of Oral Cancer Using Image Processing and Artificial Intelligence
Goel et al. Dilated CNN for abnormality detection in wireless capsule endoscopy images
CN110600122A (en) Digestive tract image processing method and device and medical system
CN110335241B (en) Method for automatically scoring intestinal tract preparation after enteroscopy
Du et al. Identification of COPD from multi-view snapshots of 3D lung airway tree via deep CNN
CN112669283B (en) Enteroscopy image polyp false detection suppression device based on deep learning
CN111127426B (en) Gastric mucosa cleanliness evaluation method and system based on deep learning
CN112466466B (en) Digestive tract auxiliary detection method and device based on deep learning and computing equipment
JP3842171B2 (en) Tomographic image processing device
KR20190087681A (en) A method for determining whether a subject has an onset of cervical cancer
CN111144271A (en) Method and system for automatically identifying biopsy parts and biopsy quantity under endoscope
CN111401102B (en) Deep learning model training method and device, electronic equipment and storage medium
CN115880266B (en) Intestinal polyp detection system and method based on deep learning
CN110517234B (en) Method and device for detecting characteristic bone abnormality
CN110110750A (en) A kind of classification method and device of original image
CN113344911B (en) Method and device for measuring size of calculus
CN114581402A (en) Capsule endoscope quality inspection method, device and storage medium
KR102136107B1 (en) Apparatus and method for alignment of bone suppressed chest x-ray image
CN113256625A (en) Electronic equipment and recognition device
CN110083727B (en) Method and device for determining classification label

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100006 office room 787, 7 / F, block 2, xindong'an office building, 138 Wangfujing Street, Dongcheng District, Beijing

Patentee after: Xuanwei (Beijing) Biotechnology Co.,Ltd.

Patentee after: Henan Xuanwei Digital Medical Technology Co.,Ltd.

Address before: 100006 office room 787, 7 / F, block 2, xindong'an office building, 138 Wangfujing Street, Dongcheng District, Beijing

Patentee before: Xuanwei (Beijing) Biotechnology Co.,Ltd.

Patentee before: Henan Xuan Yongtang Medical Information Technology Co.,Ltd.
