CN115082739A - Endoscope evaluation method and system based on convolutional neural network - Google Patents

Endoscope evaluation method and system based on convolutional neural network

Info

Publication number
CN115082739A
Authority
CN
China
Prior art keywords: endoscope, evaluation, target, characteristic, result
Prior art date
Legal status
Granted
Application number
CN202210776537.1A
Other languages
Chinese (zh)
Other versions
CN115082739B (en)
Inventor
曹鱼
张晨曦
陈齐磊
刘本渊
Current Assignee
Suzhou Huiwei Intelligent Medical Technology Co ltd
Original Assignee
Suzhou Huiwei Intelligent Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Huiwei Intelligent Medical Technology Co ltd filed Critical Suzhou Huiwei Intelligent Medical Technology Co ltd
Priority to CN202210776537.1A
Publication of CN115082739A
Application granted
Publication of CN115082739B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/771: Feature selection, e.g. selecting representative features from a multi-dimensional feature space
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Endoscopes (AREA)

Abstract

The invention discloses a convolutional-neural-network-based endoscope evaluation method and system. The evaluation method includes: acquiring an evaluation image of an endoscope; obtaining feature targets and their position information in the evaluation image; calculating the distances between feature targets of different types in the evaluation image and the occurrence proportion of same-type feature targets, and filtering the feature targets accordingly; and continuously timing the feature targets that pass the filtering to obtain an evaluation result indicating whether the endoscope is working normally. The method and system can accurately determine whether the air-injection and water-injection checks of preoperative endoscope preparation have been completed and reliably indicate whether the preparation passes, thereby avoiding interruption of an operation or examination caused by insufficient preoperative preparation, greatly improving physicians' preoperative preparation efficiency, and saving their effort.

Description

Endoscope evaluation method and system based on convolutional neural network
Technical Field
The invention relates to the technical field of image recognition, in particular to an endoscope evaluation method and an endoscope evaluation system based on a convolutional neural network.
Background
In the field of endoscopy, computer-aided diagnosis (CAD) systems based on deep learning are receiving increasing attention from researchers because of their high application value. The main applications of convolutional-neural-network-based algorithms include automatic polyp detection, blind-area monitoring, early cancer identification, and quality control. A large body of clinical research shows that mature artificial-intelligence algorithms focus mainly on the detection and identification of lesions in the body, such as polyps and early cancers, while no comparable automated algorithm exists for intelligently checking the condition of the equipment before a procedure, such as verifying that an endoscope can spray air and water normally, even though this is an important part of every examination.
Applications of artificial intelligence to the evaluation of endoscope operation quality remain rare; the few that exist mostly assess the physician's handling during the procedure, and none assess whether the equipment can work normally beforehand. An important preoperative scenario is evaluating whether the front end of the endoscope can smoothly jet air and water, so that the procedure can proceed normally. At present this check is performed manually: the physician judges whether air injection is normal by observing whether the endoscope's front end produces bubbles in water, and whether water injection is normal by observing whether the droplets sprayed from the front end flow smoothly. As demand for gastrointestinal endoscopy grows rapidly while endoscopists remain in relatively short supply, physicians facing large numbers of patients sometimes misjudge or overlook the preparation result, increasing the chance of disrupting the procedure and delaying the patient's treatment.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide an endoscope evaluation method and system based on a convolutional neural network.
To achieve this purpose, the technical solution adopted by the invention is as follows:
In a first aspect, the invention provides a convolutional-neural-network-based endoscope evaluation method, the endoscope having at least air-injection and water-injection functions, the method comprising:
1) acquiring an evaluation image of the endoscope with a camera module;
2) obtaining feature targets and their position information in the evaluation image by target detection, the feature targets including at least a transparent container holding liquid and bubbles, a continuous stream of droplets, and the front end of the endoscope;
3) based on the feature targets and their positions, calculating the distance between feature targets of different types within an evaluation image and the proportion of consecutive evaluation images in which a same-type feature target appears, and filtering the feature targets based on the distance and the proportion;
4) continuously timing the feature targets that pass the filtering, and obtaining from the timing result an evaluation result indicating whether the endoscope's air-injection and water-injection functions are normal.
In a second aspect, the invention also provides an endoscope evaluation system comprising:
a camera module for acquiring an evaluation image of the endoscope;
a target detector for obtaining feature targets and their position information in the evaluation image by target detection, the feature targets including at least a transparent container holding liquid and bubbles, a continuous stream of droplets, and the front end of the endoscope;
a feature filter for calculating, based on the feature targets and their positions, the distance between feature targets of different types within an evaluation image and the proportion of consecutive evaluation images in which a same-type feature target appears, and filtering the feature targets based on the distance and the proportion;
and a signal timer for continuously timing the feature targets that pass the filtering and obtaining from the timing result an evaluation result indicating whether the endoscope's air-injection and water-injection functions are normal.
Compared with the prior art, the above technical solution provides at least the following beneficial effects:
The convolutional-neural-network-based endoscope evaluation method and system can efficiently and accurately detect the transparent container holding liquid and bubbles that appears when the endoscope front end jets air, and the continuous droplets and endoscope front end that appear when it jets water. The detection results are then processed and filtered so that only correct results are output; finally, a correct feature-target signal triggers a timer, and the timer's result indicates whether the endoscope passes the evaluation. The method and system can accurately determine whether the air-injection and water-injection checks of preoperative preparation have been completed and reliably indicate whether the preparation passes, thereby avoiding interruption of an operation or examination caused by insufficient preoperative preparation, greatly improving physicians' preoperative preparation efficiency, and saving their effort.
The foregoing is only an overview of the technical solution of the invention. To enable those skilled in the art to understand it more clearly and implement it according to this description, preferred embodiments of the invention are described below with reference to the detailed drawings.
Drawings
FIG. 1 is a diagram illustrating a usage scenario of a convolutional neural network-based endoscopic evaluation method according to an embodiment of the present invention;
FIG. 2 is an exemplary illustration of an image of a feature object provided in accordance with one embodiment of the present invention;
FIG. 3 is a schematic flow chart diagram illustrating a convolutional neural network-based endoscope evaluation method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a target detector configuration provided in accordance with one embodiment of the present invention;
FIG. 5 is a diagram illustrating the detailed process of signal processing for processing neural network recognition results in a feature filter according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a signal timer according to an embodiment of the present invention.
Detailed Description
It should be noted that although the endoscope is a medical instrument, the technical concept and solutions adopted by the invention all serve pre-procedure testing of the endoscope itself and constitute an automated visual inspection method for an instrument or device. The method only provides information on whether the endoscope's functions are normal; it involves no diagnostic information for judging whether an organism is healthy and is not directly related to the treatment or diagnosis of any disease.
The claimed technical solution therefore does not concern a method of diagnosing or treating disease and is fully eligible subject matter for patent protection.
On this basis, and in view of the deficiencies of the prior art, the inventors arrived at the technical solution of the invention through long-term research and extensive practice. The solution, its implementation, and its principles are further explained below.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced otherwise than as specifically described herein, and thus the scope of the present invention is not limited by the specific embodiments disclosed below.
Moreover, relational terms such as "first" and "second," and the like, may be used solely to distinguish one element or method step from another element or method step having the same name, without necessarily requiring or implying any actual such relationship or order between such elements or method steps.
Conventional target-detection algorithms generally use hand-designed shape features as a template and search the whole image for targets matching those features. Such algorithms generalize poorly, require extensive manual feature engineering, and can usually detect only one specific kind of target. They also run on a central processing unit (CPU), which is far less efficient than today's deep-learning algorithms running on graphics cards. With the development of deep learning, CNN-based target detection has advanced by leaps in both accuracy and speed. The features extracted by a backbone CNN make the approach far more general: features of many kinds of targets can be learned through training, and combined with a region-proposal network and other CNN components, a deep-learning algorithm can find targets faster and more accurately. In recent years, with improving graphics-card hardware and rapid iteration of deep-learning algorithms, artificial intelligence (AI) has made revolutionary progress in many fields. As society digitizes rapidly, vast amounts of data are generated worldwide every day; supported by these data, AI systems built on deep learning have qualitatively improved their practical performance in many domains, approaching or even exceeding the judgment of human experts. In the medical field, deep-learning algorithms also show great potential: for example, in diagnosing images of skin lesions and diabetic retinopathy, deep-learning AI systems have matched or surpassed medical experts in closed testing.
To address operation-quality evaluation in AI-based endoscopy, such as a digestive-endoscopy CAD system, the invention provides a convolutional-neural-network-based algorithm and system for automatically checking preoperative air injection and water injection. The evaluation method comprises three main parts: (1) a detector that detects the endoscope front end, the front end jetting air normally in a water cup, and the front end jetting droplets of water; (2) a detection-signal processing algorithm that filters the detector's reports by computing the relative positions of the different feature targets and applying a sliding-window algorithm; (3) a detection-signal timer that times the correct output signal and outputs the final detection result.
In the present invention, "water injection" is shorthand for the endoscope's liquid-injection function; the liquid need not be water and may be an aqueous solution or another liquid such as a medical solvent or stain.
Specifically, referring to figs. 1 to 3, an embodiment of the present invention provides a convolutional-neural-network-based endoscope evaluation method, the endoscope having at least air-injection and water-injection functions, the method comprising the following steps:
1) acquiring an evaluation image of the endoscope with a camera module;
2) obtaining feature targets and their position information in the evaluation image by target detection, the feature targets including at least a transparent container holding liquid and bubbles, a continuous stream of droplets, and the front end of the endoscope;
3) based on the feature targets and their positions, calculating the distance between feature targets of different types within an evaluation image and the proportion of consecutive evaluation images in which a same-type feature target appears, and filtering the feature targets based on the distance and the proportion;
4) continuously timing the feature targets that pass the filtering, and obtaining from the timing result an evaluation result indicating whether the endoscope's air-injection and water-injection functions are normal.
As a typical application example, the above technical solution may be implemented by the steps shown in fig. 3, specifically:
Step 1: acquire continuous single-frame images with a camera module mounted on the preoperative preparation table.
Step 2: perform target detection on each acquired image with the CNN target detector.
Step 3: filter the results of the target detector.
Step 4: time the successfully detected targets with a counter; when the counter reaches a threshold, report that preoperative preparation has succeeded.
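As a sketch under stated assumptions, the four steps can be wired into a simple per-frame loop like the following. The detector and filter are stand-in stubs, and `SignalTimer`, `evaluate_stream`, and the reset-on-miss behaviour are illustrative, not specified by the patent:

```python
class SignalTimer:
    """Counts consecutive frames in which a valid feature target persists."""
    def __init__(self):
        self.elapsed = 0

    def update(self, passed):
        # Keep counting while the filtered signal persists; reset on a miss.
        self.elapsed = self.elapsed + 1 if passed else 0


def evaluate_stream(frames, detect, feature_filter, time_threshold):
    """Steps 1-4: acquire frames, detect targets, filter, and time the signal."""
    timer = SignalTimer()
    for frame in frames:
        detections = detect(frame)           # step 2: CNN target detection
        passed = feature_filter(detections)  # step 3: distance + window filtering
        timer.update(passed)                 # step 4: time the valid signal
        if timer.elapsed >= time_threshold:
            return "pass"                    # preparation check succeeded
    return "pending"
```

With a detector that keeps reporting a valid target, the loop returns "pass" once the signal has persisted for `time_threshold` frames.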
In some embodiments, in step 1), the camera module is fixedly arranged on a preoperative preparation table, and the shooting direction is towards an operation platform of the preoperative preparation table;
in some embodiments, the camera module is positioned 15-24 cm from the center of the operation platform.
As a specific application scenario, consider the usage pictures in figs. 1-2. Fig. 1 shows, from left to right, a water-injection operation scene, an air-injection operation scene, and the camera module's view; the red frame in the figure marks the placement position of the camera module. Fig. 2 shows, from left to right, a transparent container (a water cup) holding liquid but no bubbles, a transparent container with bubbles, a continuous stream of droplets, and an image of the endoscope front end.
In some embodiments, in step 2), the target detection is performed with a target detector;
in some embodiments, the target detector comprises a backbone neural network, a regional candidate network, and a classifier connected in series in sequence;
the backbone neural network is used for generating a characteristic image based on the evaluation image conversion;
the regional candidate network is used for acquiring a detection target and position information thereof based on the characteristic image;
the classifier is used for classifying the detection target into different kinds of the feature targets.
In some embodiments, the backbone neural network comprises any one of ResNet, SqueezeNet, ShuffleNet, VGGNet, and DenseNet;
in some embodiments, the classifier comprises a softmax classifier.
As a specific example, the target detector is an endoscope air-injection/water-injection detector operating on single-frame evaluation images. Taking as input a single frame from a camera module aimed at the preparation table before digestive endoscopy, the detector uses a current general-purpose convolutional neural network to output the positions and types of targets indicating whether the endoscope front end can smoothly jet air and water.
More specifically, the feature targets to be detected by the target detector in this embodiment are of the following three types:
1. A water cup with bubbles: during preoperative preparation, the physician first inserts the endoscope front end into a cup of water and jets air. If the air flows smoothly, a large number of bubbles appear in the cup, making it look very different from a bubble-free cup. If the target detector finds a cup containing a large number of bubbles, it can be further judged on this basis that air injection is normal.
2. Droplets jetted from the endoscope front end: to check water injection, the physician pulls the front end out of the cup, jets water, and observes the droplets sprayed from the tip; if the droplets are continuous and smooth, the device injects water smoothly. If the detector finds continuous, smooth droplets, normal water injection can be confirmed.
3. The endoscope front end: during the preparation operation, both air and water are ejected from the front end. Detecting the front end and computing its position relative to the cup or the droplets further confirms the correctness of the cup and droplet detections.
In some embodiments, the target detector is obtained by label training.
In some embodiments, the label training comprises pre-training of the backbone neural network.
In some embodiments, the pre-training is performed at ImageNet.
In some embodiments, the label training specifically comprises:
providing an initial target detector, training images, and their corresponding labels;
performing target detection on the training images with the initial detector to obtain training detection results;
and updating the parameters of the initial detector based on the training detection results and the labels to obtain the target detector.
As another specific example, a supervised machine-learning approach may be used: scene videos such as those shown in fig. 2 are sampled as the data source; cups containing bubbles and pictures of continuous droplets at the endoscope front end are annotated; after quality review by professional physicians the annotations form a training data set; a CNN model is then trained on this data set, and after closed testing a target detector of high accuracy is finally obtained.
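The update step of such supervised training can be sketched minimally as plain softmax cross-entropy gradient descent on a classifier head. The real system would backpropagate through the whole detection network; the function and variable names here are illustrative assumptions:

```python
import numpy as np

def train_step(W, b, x, y_onehot, lr=0.5):
    """One labelled update: predict, compare with the label, adjust parameters."""
    z = W @ x + b
    e = np.exp(z - z.max())
    p = e / e.sum()              # softmax prediction
    grad = p - y_onehot          # dL/dz for softmax + cross-entropy
    W = W - lr * np.outer(grad, x)
    b = b - lr * grad
    return W, b, p
```

Repeating this step on a labelled example drives the predicted probability of the labelled class toward 1.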
FIG. 4 shows the structure of the target detection system in step 2. The backbone neural network may be any current general-purpose convolutional neural network, such as ResNet, SqueezeNet, ShuffleNet, VGGNet, or DenseNet. This network processes the single frame to generate a feature image, which serves as input to the region-proposal network. The region-proposal network is a neural network that distinguishes detected objects from the background: it processes the feature map, selects foreground targets, and outputs the positions of the detected targets; from this information the algorithm crops the corresponding features from the feature map as input to the final classifier. The three-way classifier in the invention is a softmax classifier. The backbone network may be pre-trained on ImageNet, after which the whole network is trained on manually collected and annotated images of water cups, droplets, and the endoscope front end.
In some embodiments, step 3) specifically comprises:
passing the feature targets and their position information into a distance filter;
calculating a first target distance between the transparent container and the endoscope front end;
calculating a second target distance between the continuous droplets and the endoscope front end;
and letting the corresponding feature target pass the distance filter when the first target distance is not greater than a first preset threshold or the second target distance is not greater than a second preset threshold.
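A minimal sketch of this distance check, assuming detections arrive as center coordinates in calibrated centimetre units; the 7 cm figure used later in the description is taken as the default threshold, and all names are illustrative:

```python
import math

def center_distance(c1, c2):
    """Straight-line distance between two detection centers (x, y)."""
    return math.hypot(c1[0] - c2[0], c1[1] - c2[1])

def passes_distance_filter(tip_center, target_center, threshold_cm=7.0):
    """A cup/droplet detection is kept only if it lies near the endoscope tip;
    anything farther than the threshold is treated as a false alarm."""
    return center_distance(tip_center, target_center) <= threshold_cm
```

For example, a cup detected 5 cm from the tip passes, while one 10 cm away is filtered out.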
In some embodiments, step 3) further comprises:
passing feature targets that clear the distance filter into a sliding-window filter, which contains a sliding-window queue of a preset size;
counting the total number of same-type feature targets among a run of consecutive feature targets;
and letting the feature targets pass the sliding-window filter when the ratio of that total to the preset queue size exceeds a preset proportion.
To reduce the adverse effect of detector false alarms on the algorithm's final result, the invention filters the target detector's output signal in two steps. The first step filters false alarms using the relative positions of the cup, the droplets, and the endoscope front end: when the distance in the evaluation image between the endoscope front end and the bubble-containing cup (or the continuous droplets) is small, i.e. at or below a certain threshold, the detection is deemed correct; conversely, if the distance exceeds the threshold, the detected cup or droplets are judged to be a false alarm. The second step adds a sliding-window filtering algorithm: detector results that pass the first step are pushed into a sliding-window queue, and the final target position and category information are output only when the count of same-category targets in the window exceeds a preset window threshold.
Fig. 5 shows an example structure of the target-detection signal-processing system (the feature filter), the second module of this embodiment. In some examples, the first round of false-alarm filtering computes the relative distance between the bubbling cup (i.e. the transparent container holding liquid and bubbles, likewise below) and the endoscope front end, and between the continuous droplets and the front end. Let the first detection target be the front end of the endoscope's flexible tube with center coordinate c1, the second be the bubbling cup with center coordinate c2, and the third be the continuous droplets with center coordinate c3. The detection results fall into the following cases:
If a bubbling cup or continuous droplets are detected in the image but the endoscope front end is not, the detected cup or droplets may be background noise rather than a real target; they are judged to be false detections and filtered out by the system.
If a bubbling cup or continuous droplets and the endoscope front end are both detected, let the straight-line distance between c1 and c2 be d1. If d1 is within a small range, i.e. d1 ∈ [0, 7 cm], the bubbling cup is judged to be detected correctly; otherwise, if d1 exceeds 7 cm, the cup is too far from the front end to be real, is judged to be falsely detected background noise, and is filtered out by the system. Similarly, let the straight-line distance between c1 and c3 be d2. If d2 ∈ [0, 7 cm], the continuous droplets are judged to be detected correctly; otherwise, if d2 exceeds 7 cm, the droplets are too far from the front end, are judged to be falsely detected background noise, and are filtered out by the system.
To further filter the detection results, targets passing the distance filter are fed into the sliding-window queues Q1 and Q2 of the sliding-window filters for air spraying and water spraying, each queue being of size N. Whenever an evaluation image yields a bubbling cup that is output by the detector and passes the distance filter, a 1 is inserted into Q1; if the evaluation image yields no such result, a 0 is inserted. When

∑_{i=1}^{N} Q1_i > t

a jet-detected signal is output, wherein t = N/2 is a preset threshold and Q1_i is the i-th element of queue Q1. Similarly, whenever an evaluation image yields continuous water drops that pass the distance filter, a 1 is inserted into Q2; if the evaluation image yields no such result, a 0 is inserted. When

∑_{i=1}^{N} Q2_i > t

a water-spray-detected signal is output, wherein t = N/2 is a preset threshold and Q2_i is the i-th element of queue Q2.
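The sliding-window majority vote described above can be sketched as follows (a minimal illustration, assuming a window size N and the threshold t = N/2 from the text; the class name is hypothetical):

```python
from collections import deque

class SlidingWindowFilter:
    """Majority vote over the last N frames: the signal fires only when
    more than half of the recent frames contained the filtered target."""

    def __init__(self, window_size=20):
        # deque(maxlen=N) discards the oldest entry automatically,
        # modeling the sliding queue Q1 or Q2.
        self.queue = deque(maxlen=window_size)
        self.threshold = window_size / 2  # t = N/2

    def update(self, detected: bool) -> bool:
        # Insert 1 when the target passed the distance filter, else 0,
        # then test whether the window sum exceeds t.
        self.queue.append(1 if detected else 0)
        return sum(self.queue) > self.threshold
```

One filter instance per target type (air spray, water spray) suffices; intermittent single-frame false alarms cannot push the window sum over t on their own.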
In some embodiments, step 4) specifically comprises:
and when the feature target passes the feature filtering, a timer corresponding to the type of that feature target starts timing, and when the feature target corresponding to a subsequent evaluation image continues to pass the feature filtering, the timing result of the timer increases accordingly.
And when the timing result is greater than a preset time threshold, generating an evaluation result of the endoscope as evaluation passing.
In some embodiments, the evaluation results include air injection evaluation results and/or water injection evaluation results.
In some embodiments, the timer comprises a jet timer corresponding to the jet evaluation result, and the characteristic object corresponding to the jet timer comprises the transparent container and a front end of the endoscope.
In some embodiments, the timer further comprises a water spray timer corresponding to the water spray assessment result, the characteristic targets corresponding to the water spray timer comprising the continuous droplet and a front end of the endoscope.
In this part, the corresponding timer is activated by target information that has passed both stages of filtering. When the air-spray quality inspection timer reaches a preset value, the system outputs that the air-spray quality inspection passes; similarly, when the water-spray quality inspection timer reaches the preset value, the system outputs that the water-spray quality inspection passes.
As shown in FIG. 6, a detection result (characteristic target) that successfully passes the filters triggers its corresponding timer. The timer increases each time the target signal is correctly detected, and when the cumulative value c of the timer is greater than the time t preset by the doctor, the evaluation passes, that is, the endoscope functions normally.
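A minimal sketch of the timer logic in FIG. 6 follows. The text does not specify whether the count resets on a missed frame, so this sketch simply accumulates; the names c and t mirror the description, while the class name is an assumption:

```python
class SignalTimer:
    """Counts frames in which the filtered target signal was present;
    the evaluation passes once the cumulative value c exceeds the
    doctor-preset threshold t (both measured in frames here)."""

    def __init__(self, preset_t: int):
        self.preset_t = preset_t  # t, preset by the doctor
        self.c = 0                # cumulative timer value c

    def update(self, signal_present: bool) -> bool:
        # Increase the timer each time the target signal is correctly
        # detected; report pass when c exceeds t.
        if signal_present:
            self.c += 1
        return self.c > self.preset_t
```

One timer per function (air spray, water spray) would be instantiated, each driven by the output of its own sliding-window filter.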
It can be understood that the above method and the evaluation system described below may further include an output device or output module for outputting information on whether the endoscope functions normally, so that a doctor is informed. The output may be voice, image, or text; the corresponding output methods and related settings are common technical means and are not described again here.
With continued reference to fig. 3-6, embodiments of the present invention also provide an endoscopic evaluation system comprising:
and the camera module is used for acquiring an evaluation image of the endoscope.
And an object detector for acquiring a characteristic object and position information thereof in the evaluation image by object detection, the characteristic object including at least a transparent container containing liquid and bubbles, a continuous liquid droplet, and a front end of the endoscope.
And the characteristic filter is used for calculating the distance between different types of characteristic targets in the evaluation images based on the characteristic targets and the position information thereof, calculating the appearance proportion of the same type of characteristic targets in a plurality of continuous evaluation images, and performing characteristic filtering on the characteristic targets based on the distance and the proportion.
And the signal timer is used for continuously timing the characteristic target filtered by the characteristics and acquiring an evaluation result of the endoscope based on the timing result, wherein the evaluation result is used for indicating whether the air injection and water injection functions of the endoscope are normal or not.
The target detector, the feature filter, and the signal timer form the convolutional-neural-network-based pre-operation preparation evaluation algorithm provided by the embodiment of the present invention. Specifically, the three modules may respectively be: 1. a CNN-based target detector; 2. a target detection signal processing algorithm; 3. a successful-detection signal timer. If the duration for which the target is continuously detected exceeds a preset threshold, preoperative preparation is reported as successful.
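Under assumed names, labels, and thresholds (a sketch only, not the claimed implementation — the detector itself is replaced here by precomputed per-frame detection dictionaries), the three modules can be chained on a stream of frames as:

```python
from collections import deque
import math

def evaluate_stream(frames_detections, window_size=10, required=5, max_dist=7.0):
    """End-to-end sketch of the air-spray evaluation on a stream of
    per-frame detection dicts {label: (x, y)}; returns True once the
    timed signal exceeds the preset duration, else False."""
    q = deque(maxlen=window_size)  # sliding-window queue Q1
    c = 0                          # air-spray timer value
    for det in frames_detections:
        tip, cup = det.get("tip"), det.get("cup")
        # Module 2a: distance filter (both targets present and close).
        ok = tip is not None and cup is not None and math.dist(tip, cup) <= max_dist
        # Module 2b: sliding-window majority vote with t = N/2.
        q.append(1 if ok else 0)
        if sum(q) > window_size / 2:
            c += 1  # Module 3: timer advances on each confirmed frame
        if c > required:
            return True  # air-spray quality inspection passes
    return False
```

The water-spray path is identical with the continuous-drop label and its own queue and timer.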
As a specific application scenario, when the endoscope evaluation system and the corresponding evaluation method are used, the evaluation accuracy can reach 95%. Although a doctor can evaluate the endoscope with the naked eye, the system and method provided by the embodiments of the present invention do not require the doctor to expend extra effort on this judgment; the evaluation may even be handed over to other medical staff such as nurses. Especially when facing a large number of patients awaiting examination or surgery, this greatly saves doctors' energy and improves the quality and efficiency of the examination or operation.
In summary, the method and system provided by the embodiments of the present invention can accurately judge whether the preoperative air-spray and water-spray examination preparation of the endoscope has been completed, and accurately indicate whether the preoperative preparation passes. This avoids interruption of an operation or examination caused by insufficient preoperative preparation, greatly improves the efficiency of a doctor's preoperative preparation, and saves the doctor's energy. It is of great significance for the development and application of endoscopic methods.
It should be understood that the above-mentioned embodiments are merely illustrative of the technical concepts and features of the present invention, which are intended to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and therefore, the protection scope of the present invention is not limited thereby. All equivalent changes and modifications made according to the spirit of the present invention should be covered in the protection scope of the present invention.

Claims (10)

1. An endoscope evaluation method based on a convolutional neural network, the endoscope at least has two functions of air injection and water injection, and the endoscope evaluation method is characterized by comprising the following steps:
1) acquiring an evaluation image of the endoscope through a camera module;
2) acquiring a characteristic target and position information thereof in the evaluation image through target detection, wherein the characteristic target at least comprises a transparent container containing liquid and bubbles, continuous liquid drops and the front end of the endoscope;
3) calculating the distance between different types of feature targets in the evaluation images based on the feature targets and the position information thereof, calculating the occurrence proportion of the same type of feature target in a plurality of continuous evaluation images, and performing feature filtering on the feature targets based on the distance and the proportion;
4) continuously timing the characteristic targets filtered by the characteristics, and acquiring an evaluation result of the endoscope based on a timing result, wherein the evaluation result is used for indicating whether the air injection and water injection functions of the endoscope are normal.
2. The endoscope evaluation method according to claim 1, wherein in step 1), the camera module is fixedly arranged on a preoperative preparation table, and the shooting direction is towards an operation platform of the preoperative preparation table;
preferably, the camera module is L cm away from the center of the operation platform, wherein L ∈ [15, 24].
3. The endoscopy evaluation method of claim 1, wherein in step 2), the target detection is performed with a target detector;
preferably, the target detector comprises a backbone neural network, a regional candidate network and a classifier which are sequentially connected in series;
the backbone neural network is used for generating a characteristic image based on the evaluation image conversion;
the regional candidate network is used for acquiring a detection target and position information thereof based on the characteristic image;
the classifier is used for classifying the detection target into different kinds of the feature targets.
4. The endoscopy evaluation method of claim 3, wherein the backbone neural network comprises any one of ResNet, SqueezeNet, ShuffleNet, VGGNet, and DenseNet;
and/or, the classifier comprises a softmax classifier.
5. The endoscopy assessment method of claim 3, wherein the target detector is trained by markers;
preferably, the label training comprises pre-training of the backbone neural network;
preferably, the pre-training is performed in ImageNet.
6. The endoscopy assessment method of claim 5, wherein the marker training specifically comprises:
providing a target initial detector, a training image and a corresponding label;
carrying out target detection on the training image by using the target initial detector to obtain a training detection result;
and updating parameters of the target initial detector based on the training detection result and the label to obtain the target detector.
7. The endoscopy assessment method of claim 1, wherein step 3) comprises:
causing the detected transparent container containing the liquid and bubbles and/or continuous liquid droplets to be removed when the tip of the endoscope is not detected in the evaluation image;
and/or, when a first target distance between the transparent container and the front end of the endoscope is below a first preset threshold, causing the transparent container and the front end of the endoscope to pass through the distance filter;
when a second target distance between the continuous liquid drop and the front end of the endoscope is below a second preset threshold value, enabling the continuous liquid drop and the front end of the endoscope to pass through the distance filter;
preferably, the first preset threshold is 7cm, and the second preset threshold is 7 cm;
preferably, the method specifically comprises the following steps: the first detection target is the tip of the endoscope and has a center coordinate of c1, the second detection target is a transparent container containing liquid and bubbles and has a center coordinate of c2, the third detection target is a continuous liquid droplet and has a center coordinate of c3, and the result of the characteristic filtering includes:
when the transparent container and/or continuous liquid droplets containing liquid and bubbles are detected in the evaluation image and the front end of the endoscope is not detected, judging that the detected transparent container and/or continuous liquid droplets containing liquid and bubbles are noise, judging that the detection is false, and then filtering and removing;
when a transparent container containing liquid and bubbles, a continuous liquid drop and the front end of the endoscope are detected in the evaluation image, the straight-line distance between c1 and c2 is recorded as d1, when d1 belongs to [0, 7cm ], the detection of the transparent container containing liquid and bubbles is judged to be correct, when d1 is more than 7cm, the false alarm of background noise is judged, the detection result is judged to be false detection, and then the corresponding characteristic target is filtered and removed;
the straight-line distance between c1 and c3 is recorded as d2; when d2 belongs to [0, 7cm ], the detection of continuous liquid drops is judged to be correct; when d2 is larger than 7cm, a false alarm of background noise is judged, the detection result is judged to be a false detection, and the corresponding characteristic target is then filtered and removed.
8. The endoscopy assessment method of claim 7, wherein step 3) further comprises:
entering a characteristic target that passes through a distance filter into a sliding window filter, the sliding window filter comprising a sliding window queue having a preset queue size;
counting the total number of similar characteristic targets in a plurality of continuous characteristic targets;
when the ratio of the total number of the same-class feature targets to the size of the preset queue is larger than a preset ratio, enabling the feature targets to pass through the sliding window filter;
preferably, the method specifically comprises the following steps: targets passing through the distance filter are fed into the sliding window queues Q1 and Q2 of the sliding window filters for air injection and water injection, each queue being of size N; whenever an evaluation image yields a bubbling cup output by the detector and feature-filtered by the distance filter, a 1 is inserted into Q1, and when the evaluation image outputs no such result, a 0 is inserted; when

∑_{i=1}^{N} Q1_i > t

a jet detection signal is output for indicating that the jet function is normal, wherein t = N/2 is a preset threshold and Q1_i is the i-th element of queue Q1;
whenever continuous water droplets are detected in the evaluation image and feature-filtered by the distance filter, a 1 is inserted into Q2, and if no result is output from the evaluation image, a 0 is inserted; when

∑_{i=1}^{N} Q2_i > t

a water spray detection signal is output for indicating that the water spray function is normal, wherein t = N/2 is a preset threshold and Q2_i is the i-th element of queue Q2.
9. The endoscopy assessment method of claim 1, wherein step 4) specifically comprises:
when the feature target passes through the feature filtering, a timer corresponding to the type of the feature target starts to time, and when the feature target corresponding to a subsequent evaluation image continues to pass through the feature filtering, the timing result of the timer is increased in the positive direction;
when the timing result is larger than a preset time threshold, generating an evaluation result of the endoscope as evaluation passing;
preferably, the evaluation result comprises a jet evaluation result and/or a jet evaluation result;
preferably, the timer comprises a jet timer corresponding to the jet evaluation result, and the characteristic object corresponding to the jet timer comprises the transparent container and the front end of the endoscope;
preferably, the timer further comprises a water spray timer corresponding to the water spray evaluation result, and the characteristic object corresponding to the water spray timer comprises the continuous liquid droplet and the front end of the endoscope.
10. An endoscopic evaluation system, comprising:
the camera module is used for acquiring an evaluation image of the endoscope;
an object detector for acquiring a characteristic object and position information thereof in the evaluation image by object detection, the characteristic object including at least a transparent container containing liquid and bubbles, a continuous liquid droplet, and a front end of the endoscope;
the characteristic filter is used for calculating the distance between different types of characteristic targets in the evaluation images based on the characteristic targets and the position information thereof, calculating the appearance proportion of the same type of characteristic targets in a plurality of continuous evaluation images, and performing characteristic filtering on the characteristic targets based on the distance and the proportion;
and the signal timer is used for continuously timing the characteristic target filtered by the characteristics and acquiring an evaluation result of the endoscope based on the timing result, wherein the evaluation result is used for indicating whether the air injection and water injection functions of the endoscope are normal or not.
CN202210776537.1A 2022-07-01 2022-07-01 Endoscope evaluation method and system based on convolutional neural network Active CN115082739B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210776537.1A CN115082739B (en) 2022-07-01 2022-07-01 Endoscope evaluation method and system based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210776537.1A CN115082739B (en) 2022-07-01 2022-07-01 Endoscope evaluation method and system based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN115082739A true CN115082739A (en) 2022-09-20
CN115082739B CN115082739B (en) 2023-09-01

Family

ID=83258703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210776537.1A Active CN115082739B (en) 2022-07-01 2022-07-01 Endoscope evaluation method and system based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN115082739B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309605A (en) * 2023-05-24 2023-06-23 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) Endoscopy quality control method and system based on deep learning and state transition

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012070937A (en) * 2010-09-28 2012-04-12 Fujifilm Corp Endoscopic system
CN104797186A (en) * 2013-03-06 2015-07-22 奥林巴斯株式会社 Endoscope system
KR101875004B1 (en) * 2017-01-04 2018-07-05 금오공과대학교 산학협력단 Automated bleeding detection method and computer program in wireless capsule endoscopy videos
CN110363049A (en) * 2018-04-10 2019-10-22 阿里巴巴集团控股有限公司 The method and device that graphic element detection identification and classification determine
CN110837760A (en) * 2018-08-17 2020-02-25 北京四维图新科技股份有限公司 Target detection method, training method and device for target detection
CN110930429A (en) * 2018-09-19 2020-03-27 杭州海康威视数字技术股份有限公司 Target tracking processing method, device and equipment and readable medium
CN111666998A (en) * 2020-06-03 2020-09-15 电子科技大学 Endoscope intelligent intubation decision-making method based on target point detection
WO2021131809A1 (en) * 2019-12-23 2021-07-01 Sony Group Corporation Computer assisted surgery system, surgical control apparatus and surgical control method
CN113768452A (en) * 2021-09-16 2021-12-10 重庆金山医疗技术研究院有限公司 Intelligent timing method and device for electronic endoscope
US20220108546A1 (en) * 2019-06-17 2022-04-07 Huawei Technologies Co., Ltd. Object detection method and apparatus, and computer storage medium
US20220207896A1 (en) * 2020-12-30 2022-06-30 Stryker Corporation Systems and methods for classifying and annotating images taken during a medical procedure


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CHEN, ZX. ET AL.: "Endoscopic thyroidectomy via the combined trans-oral and chest approach for cT1-2N1bM0 papillary thyroid carcinoma", SURGICAL ENDOSCOPY, vol. 36, no. 12, pages 9092 - 9098 *
CSDN: "2020年医疗影像行业软镜专题研究报告", Retrieved from the Internet <URL:http://t.csdn.cn/8myJN> *
YUMIKO ISHINO ET AL.: "Pitfalls in endoscope reprocessing: brushing of air and water channels is mandatory for high-level disinfection", GASTROINTESTINAL ENDOSCOPY, vol. 53, no. 02, pages 165 - 168 *
付丽平等: "腹腔镜下袖状胃切除治疗病态肥胖症的护理配合", 中西医结合心血管病电子杂志, vol. 08, no. 16, pages 77 - 78 *
李丽娜;郭明学;贾艳红;: "超声内镜引导下经气管针吸活检术375例的护理配合", 中国妇幼健康研究, vol. 28, no. 1, pages 277 - 278 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309605A (en) * 2023-05-24 2023-06-23 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) Endoscopy quality control method and system based on deep learning and state transition
CN116309605B (en) * 2023-05-24 2023-08-22 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) Endoscopy quality control method and system based on deep learning and state transition

Also Published As

Publication number Publication date
CN115082739B (en) 2023-09-01

Similar Documents

Publication Publication Date Title
CN108615051B (en) Diabetic retina image classification method and system based on deep learning
CN109190540B (en) Biopsy region prediction method, image recognition device, and storage medium
CN109858540B (en) Medical image recognition system and method based on multi-mode fusion
CN109102491A (en) A kind of gastroscope image automated collection systems and method
CN107292877B (en) Left and right eye identification method based on fundus image characteristics
EP2685881B1 (en) Medical instrument for examining the cervix
CN109117890B (en) Image classification method and device and storage medium
CN112614128B (en) System and method for assisting biopsy under endoscope based on machine learning
CN112967285B (en) Chloasma image recognition method, system and device based on deep learning
CN113159227A (en) Acne image recognition method, system and device based on neural network
CN111341437B (en) Digestive tract disease judgment auxiliary system based on tongue image
CN111863209B (en) Colonoscopy quality assessment workstation based on image recognition
CN113888518A (en) Laryngopharynx endoscope tumor detection and benign and malignant classification method based on deep learning segmentation and classification multitask
CN110211152A (en) A kind of endoscopic instrument tracking based on machine vision
CN113129287A (en) Automatic lesion mapping method for upper gastrointestinal endoscope image
WO2023143014A1 (en) Endoscope-assisted inspection method and device based on artificial intelligence
CN115082739A (en) Endoscope evaluation method and system based on convolutional neural network
CN111797900B (en) Artery and vein classification method and device for OCT-A image
CN113239805A (en) Mask wearing identification method based on MTCNN
CN111798408A (en) Endoscope interference image detection and grading system and method
CN112712122A (en) Corneal ulcer classification detection method and system based on neural network model
CN109241963A (en) Blutpunkte intelligent identification Method in capsule gastroscope image based on Adaboost machine learning
WO2012121488A2 (en) Method for processing medical blood vessel image
Ghosh et al. Block based histogram feature extraction method for bleeding detection in wireless capsule endoscopy
CN113946217B (en) Intelligent auxiliary evaluation system for enteroscope operation skills

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant