CN115082739B - Endoscope evaluation method and system based on convolutional neural network - Google Patents

Endoscope evaluation method and system based on convolutional neural network Download PDF

Info

Publication number
CN115082739B
Authority
CN
China
Prior art keywords
endoscope
evaluation
target
characteristic
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210776537.1A
Other languages
Chinese (zh)
Other versions
CN115082739A
Inventor
曹鱼
张晨曦
陈齐磊
刘本渊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Huiwei Intelligent Medical Technology Co ltd
Original Assignee
Suzhou Huiwei Intelligent Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Huiwei Intelligent Medical Technology Co ltd filed Critical Suzhou Huiwei Intelligent Medical Technology Co ltd
Priority to CN202210776537.1A
Publication of CN115082739A
Application granted
Publication of CN115082739B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/771Feature selection, e.g. selecting representative features from a multi-dimensional feature space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Endoscopes (AREA)

Abstract

The application discloses an endoscope evaluation method and evaluation system based on a convolutional neural network. The endoscope evaluation method comprises the following steps: acquiring an evaluation image of an endoscope; acquiring characteristic targets and their position information in the evaluation image; calculating the distances between characteristic targets of different kinds in the evaluation image and the occurrence ratio of characteristic targets of the same kind, and performing characteristic filtering on the characteristic targets; and continuously timing the characteristic targets that pass the filtering to obtain an evaluation result indicating whether the endoscope is normal. The endoscope evaluation method and evaluation system provided by the application can accurately judge whether the preoperative air-injection and water-injection checks of the endoscope have been completed, accurately indicate whether the preoperative preparation work has passed, and avoid interruption of an operation or examination caused by insufficient preoperative preparation, thereby greatly improving doctors' preoperative preparation efficiency and saving their energy.

Description

Endoscope evaluation method and system based on convolutional neural network
Technical Field
The application relates to the technical field of image recognition, in particular to an endoscope evaluation method and an evaluation system based on a convolutional neural network.
Background
In the field of endoscopy, deep-learning-based computer-aided diagnosis systems are attracting increasing attention from researchers because of their high application value. The main application areas of convolutional-neural-network-based artificial intelligence algorithms include automatic polyp detection, blind-area monitoring, early cancer identification, and quality control. A large body of clinical research data shows that mature artificial intelligence algorithms focus mainly on the detection and identification of lesions, such as the polyps and early cancers reported in the literature. However, intelligent detection of preoperative equipment condition, such as whether an endoscope can jet air or spray water normally, is an important part of the examination, and no automatic algorithm for such preoperative equipment checks is available.
Applications of artificial intelligence to assessing the quality of endoscope operation are also rare: the few that exist assess the doctor's operation during surgery, and none assess preoperatively whether the surgical equipment can function properly. An important scenario in preoperative preparation is evaluating whether the front end of the endoscope can jet air and spray water smoothly, so as to ensure that intraoperative operations can proceed normally. At present, this check is performed manually: the doctor judges whether the endoscope can jet air normally by observing whether the front end emits bubbles in water, and judges whether it can spray water by observing whether the drops sprayed from the front end flow smoothly. Because demand for gastrointestinal endoscopy is growing rapidly while endoscopists are in relatively short supply and face a large number of patients, doctors sometimes misjudge or overlook preoperative preparation results, which increases the probability of disrupting normal operations and delaying patients' treatment.
Disclosure of Invention
Aiming at the defects of the prior art, the application aims to provide an endoscope evaluation method and an endoscope evaluation system based on a convolutional neural network.
In order to achieve the purpose of the application, the technical scheme adopted by the application comprises the following steps:
in a first aspect, the present application provides an endoscope evaluation method based on a convolutional neural network, where the endoscope has at least two functions of air injection and water spraying, and the endoscope evaluation method includes:
1) Acquiring an evaluation image of the endoscope through an image pickup module;
2) Acquiring a characteristic target and position information thereof in the evaluation image through target detection, wherein the characteristic target at least comprises a transparent container containing liquid and bubbles, continuous liquid drops and the front end of the endoscope;
3) Calculating the distance between different types of feature targets in the evaluation image based on the feature targets and the position information thereof, calculating the occurrence proportion of the feature targets of the same type in a plurality of continuous evaluation images, and performing feature filtering on the feature targets based on the distance and the proportion;
4) Continuously timing the feature targets filtered by the features, and acquiring an evaluation result of the endoscope based on the timing result, wherein the evaluation result is used for indicating whether the air injection and water injection functions of the endoscope are normal or not.
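The four steps above can be sketched as a simple per-frame loop. This is a minimal illustration only: `detect`, `filter_features`, the frame-count threshold, and all other names are hypothetical stand-ins for the modules detailed later, and the continuous timing is approximated by counting consecutive valid frames.

```python
# Minimal sketch of the four-step evaluation loop described above;
# all names are hypothetical stand-ins, not the patented implementation.

def evaluate_stream(frames, detect, filter_features, pass_frames):
    """Per frame: detect targets (step 2), filter them (step 3), and
    require pass_frames consecutive valid frames (step 4)."""
    streak = 0
    for frame in frames:
        targets = detect(frame)               # step 2: CNN target detection
        kept = filter_features(targets)       # step 3: distance/ratio filter
        streak = streak + 1 if kept else 0    # step 4: continuous timing
        if streak >= pass_frames:
            return True                       # evaluation passed
    return False
```

Under these assumptions, a run of consecutive valid frames at least `pass_frames` long yields a passing result, while any dropout resets the count.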
In a second aspect, the present application also provides an endoscope evaluation system comprising:
the camera module is used for acquiring an evaluation image of the endoscope;
a target detector for acquiring a characteristic target and position information thereof in the evaluation image by target detection, the characteristic target including at least a transparent container containing liquid and air bubbles, continuous liquid droplets, and a front end of the endoscope;
the feature filter is used for calculating the distance between different types of feature targets in the evaluation image based on the feature targets and the position information thereof, calculating the occurrence proportion of the feature targets of the same type in a plurality of continuous evaluation images, and carrying out feature filtering on the feature targets based on the distance and the proportion;
and the signal timer is used for continuously timing the characteristic targets filtered by the characteristics and acquiring an evaluation result of the endoscope based on the timing result, wherein the evaluation result is used for indicating whether the air injection and water injection functions of the endoscope are normal or not.
Based on the technical scheme, compared with the prior art, the application has the beneficial effects that:
the endoscope evaluation method and the evaluation system based on the convolutional neural network can efficiently and accurately find out the transparent container containing liquid and bubbles when the front end of the endoscope is sprayed with air, and then the transparent container is placed into continuous liquid drops and the front end of the endoscope when the front end of the endoscope is sprayed with water, then the detection target result is processed and filtered, only a correct result is output, finally a timer is triggered by a correct characteristic target signal, and whether the endoscope passes through evaluation is indicated based on the timing result of the timer; the method and the system can accurately judge whether the preoperative preparation of the endoscope completes the air injection and water injection examination preparation work or not, accurately give out an indication of whether the preoperative preparation work is passed or not, avoid the problem of operation or examination interruption caused by insufficient preoperative preparation in operation, greatly improve the preoperative preparation efficiency of doctors and save the energy of the doctors.
The above description is only an overview of the technical solutions of the present application. To enable those skilled in the art to understand the technical means of the present application more clearly and to implement the application according to the content of the specification, preferred embodiments of the present application are described below with reference to the detailed drawings.
Drawings
FIG. 1 is a usage scenario view of an endoscope evaluation method based on a convolutional neural network according to an embodiment of the present application;
FIG. 2 is an exemplary image of a feature object provided in accordance with one embodiment of the present application;
FIG. 3 is a general flow diagram of an endoscope evaluation method based on a convolutional neural network according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a target detector according to an embodiment of the present application;
FIG. 5 is a detailed schematic diagram of the signal processing of the neural network recognition result in the feature filter according to an embodiment of the present application;
fig. 6 is a schematic diagram of a signal timer according to an embodiment of the present application.
Detailed Description
It should be stated first that, although an endoscope is a medical instrument, the technical concept and technical solutions of the present application serve pre-use checking of the endoscope itself and constitute an automatic visual inspection method for instruments or devices. The method only provides information on whether the endoscope's functions are normal; it does not produce diagnostic information for judging whether an organism is healthy and has no direct relevance to any method of disease treatment or diagnosis. Specifically, a doctor cannot judge the health condition or disease state of an organism directly and solely from the method provided by the embodiments of the present application; the evaluation method only tells the doctor whether the endoscope in use is normal.
Therefore, the technical solution protected by the present application does not relate to a method of disease diagnosis or treatment, and belongs to patent-eligible subject matter.
On that basis, and in view of the deficiencies of the prior art, the inventors of the present application have arrived at the technical solution of the present application through long-term research and extensive practice. The technical solution, its implementation process, and its principles are further explained below.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, however, the present application may be practiced otherwise than as described herein, and therefore the scope of the present application is not limited to the specific embodiments disclosed below.
Moreover, relational terms such as "first" and "second", and the like, may be used solely to distinguish one from another component or method step having the same name, without necessarily requiring or implying any actual such relationship or order between such components or method steps.
In conventional object detection algorithms, the image is generally searched globally using manually designed shape features as templates to find objects matching those features. Such algorithms generalize poorly, require extensive manual feature design, and can mostly detect only specific targets. Conventional algorithms also typically run on a central processing unit (CPU), which is far less efficient than the deep learning algorithms now run on graphics cards. With the development of deep learning, CNN-based object detection algorithms have advanced by leaps and bounds in both accuracy and speed. The features extracted by a backbone CNN are far more general: features of a wide variety of targets can be learned through training, and, combined with CNN components such as a region candidate network, a deep learning algorithm can also find targets faster and more accurately. In recent years, with improving graphics-card hardware and rapid iteration of deep learning algorithms, artificial intelligence (AI) has made revolutionary progress in many fields. As society digitizes rapidly, vast amounts of data are generated every day around the world. Supported by these data, the practical performance of AI systems built on deep learning algorithms has changed substantially in many fields, approaching or even surpassing the judgment of human experts. In the medical field, deep-learning-based algorithms likewise show great potential; for example, in diagnosing dermatological lesions and diabetic retinopathy images, deep-learning-based AI systems have already matched or exceeded the recognition capabilities of medical professionals in closed tests.
To address the problem of operation-quality evaluation in AI-based endoscopy, such as digestive-endoscopy CAD systems, the present application provides an automatic detection and evaluation algorithm and system, based on a convolutional neural network, for the preoperative air-injection and water-injection checks of an endoscope. The evaluation method mainly comprises three parts: (1) a detector that detects the front end of the endoscope, normal air injection into the water cup, and water drops sprayed from the front end of the endoscope; (2) a target-detection signal-processing algorithm that filters the detector's reports by calculating the relative positions between different characteristic targets and by a sliding-window algorithm; and (3) a detection-signal timer that times the correct output signals and outputs the final detection result.
In the present application, the water-spraying function simply refers to spraying liquid from the endoscope; the liquid is not necessarily water and may be an aqueous solution, a medical solvent or dye, or another liquid.
Referring specifically to fig. 1-3, an embodiment of the present application provides an endoscope evaluation method based on a convolutional neural network, where the endoscope has at least two functions of air injection and water spraying, and the endoscope evaluation method includes the following steps:
1) Acquiring an evaluation image of the endoscope through an image pickup module;
2) Acquiring a characteristic target and position information thereof in the evaluation image through target detection, wherein the characteristic target at least comprises a transparent container containing liquid and bubbles, continuous liquid drops and the front end of the endoscope;
3) Calculating the distance between different types of feature targets in the evaluation image based on the feature targets and the position information thereof, calculating the occurrence proportion of the feature targets of the same type in a plurality of continuous evaluation images, and performing feature filtering on the feature targets based on the distance and the proportion;
4) Continuously timing the feature targets filtered by the features, and acquiring an evaluation result of the endoscope based on the timing result, wherein the evaluation result is used for indicating whether the air injection and water injection functions of the endoscope are normal or not.
As some typical application examples, the embodiments included in the above technical solutions may include steps as shown in fig. 3, specifically:
step one: continuous single-frame image acquisition by an imaging module placed on a preoperative preparation table
Step two: target detection is carried out on the acquired image by using a CNN target detector
Step three: filtering the results of the object detector
Step four: the counter counts the successfully detected targets, reaches a threshold, and reports the pre-operative preparation success.
In some embodiments, in step 1), the image capturing module is fixed on the preoperative preparation table with its capturing direction facing the operation platform of the table;
in some embodiments, the camera module is 15-24 cm from the center of the operation platform.
As specific examples of the application scenario, in the usage scenario pictures of FIGS. 1-2, FIG. 1 shows, from left to right, a water-spraying operation scene, an air-jetting operation scene, and the view from the camera module; the red frame in the figure marks the placement position of the camera module. FIG. 2 shows, from left to right, a transparent container (a water cup) holding liquid but no bubbles, a transparent container with bubbles, continuous droplets, and a front-end image of the endoscope.
In some embodiments, in step 2), the target detection is performed with a target detector;
in some embodiments, the target detector comprises a backbone neural network, a region candidate network, and a classifier in series in order;
the backbone neural network is used for generating a characteristic image based on the evaluation image conversion;
the region candidate network is used for acquiring a detection target and position information thereof based on the characteristic image;
the classifier is used for classifying the detection targets into different kinds of characteristic targets.
In some embodiments, the backbone neural network comprises any one of ResNet, SqueezeNet, ShuffleNet, VGGNet, and DenseNet;
in some embodiments, the classifier comprises a softmax classifier.
As a specific application example, the target detector is an endoscope air-and-water-jet detector based on a single-frame evaluation image. For example, the detector can take as input a single evaluation frame from an imaging module aimed at the digestive-endoscopy preparation console and, using a currently common convolutional neural network, output the target positions and categories used to judge whether the front end of the endoscope can smoothly jet air and spray water.
More specifically, the feature targets to be detected by the target detector in this embodiment include the following three types:
1. Water cup with air bubbles: during preoperative preparation, the doctor inserts the front end of the endoscope into a water cup filled with water and performs air injection. If the air injection is smooth, a large number of bubbles appear in the cup, making it look very different from a bubble-free cup. Therefore, if the target detector detects a cup containing a large number of bubbles, it can be further judged on that basis that air injection is normal.
2. Water drops sprayed from the front end of the endoscope: to check water spraying, the doctor pulls the front end of the endoscope out of the water cup, performs the water-spray operation, and observes the drops sprayed from the tip; if the drops are continuous and smooth, the device can be confirmed to spray water smoothly. Accordingly, if the detector detects continuous, smooth water drops, it can be confirmed that the device sprays water normally.
3. Endoscope front end: when a doctor performs a preoperative preparation operation, air and water are both ejected from the distal end of the endoscope. By detecting the front end of the endoscope and calculating the relative position of the front end of the endoscope and the water cup or the water drop, the accuracy of the detection of the water cup and the water drop can be further confirmed.
In some embodiments, the target detector is derived by marker training.
In some embodiments, the marker training comprises pre-training of the backbone neural network.
In some embodiments, the pre-training is performed on ImageNet.
In some embodiments, the marker training specifically comprises:
an initial detector of the target, a training image, and their corresponding labels are provided.
And carrying out target detection on the training image by using the target initial detector to obtain a training detection result.
And updating parameters of the target initial detector based on the training detection result and the label to obtain the target detector.
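The update step above (compare the detection result with the label, then adjust parameters) is gradient descent in miniature. As a toy, non-authoritative stand-in for updating a full CNN detector, the sketch below trains a single logistic unit by SGD on made-up one-dimensional data; every name and number here is illustrative.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(w, b, x, y, lr=0.1):
    """One SGD step on binary cross-entropy for a single (x, y) pair."""
    p = sigmoid(w * x + b)        # forward pass: predicted probability
    grad = p - y                  # dL/dz for sigmoid + cross-entropy
    return w - lr * grad * x, b - lr * grad

# Made-up labeled data: positive x -> label 1, negative x -> label 0.
w, b = 0.0, 0.0
data = [(1.0, 1), (2.0, 1), (-1.0, 0), (-2.0, 0)]
for _ in range(200):
    for x, y in data:
        w, b = train_step(w, b, x, y)
```

After training, the unit separates the two made-up classes, mirroring how repeated detection-vs.-label updates shape the real detector's parameters.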
As another specific application example, a supervised machine learning method may be adopted: first, scene video images such as those shown in FIG. 2 are sampled as the data source, and pictures of the bubbling water cup and of continuous water drops at the front end of the endoscope are labeled; after quality audit by a professional doctor, these form the training data set. Once the data set is built, the CNN model is trained, and after closed testing, a target detector of high accuracy is finally obtained.
FIG. 4 is a flow chart of the target detection system in step two. The backbone neural network may be a convolutional neural network commonly used in the prior art, such as ResNet, SqueezeNet, ShuffleNet, VGGNet, or DenseNet. A single-frame picture is processed by this network to generate a characteristic image, which serves as the input of the region candidate network. The region candidate network is a neural network that distinguishes detected objects from the background: it processes the feature map, selects foreground objects, and outputs the position information of the detected objects; with this information, the algorithm can crop the corresponding features from the feature map as input to the final classifier. The three-class classifier in the present application is a softmax classifier. The backbone neural network can be pre-trained on ImageNet in advance, after which the whole network is trained on manually collected and labeled pictures of water cups, water drops, and endoscope front ends.
In some embodiments, step 3) specifically comprises:
and enabling the characteristic target and the position information thereof to enter a distance filter.
A first target distance between the transparent container and a front end of an endoscope is calculated.
A second target distance between the successive droplets and the front end of the endoscope is calculated.
When the first target distance is not larger than a first preset threshold, or the second target distance is not larger than a second preset threshold, the corresponding characteristic targets pass through the distance filter.
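The distance filter might be sketched as follows, assuming detections are plain dicts with a label and a center position in centimeters; the 7 cm threshold follows the example given later in the description, while the data format and all names are hypothetical.

```python
def center_distance(a, b):
    """Euclidean distance between two detection centers (cm)."""
    return ((a["cx"] - b["cx"]) ** 2 + (a["cy"] - b["cy"]) ** 2) ** 0.5

def distance_filter(detections, threshold_cm=7.0):
    """Keep cup/droplet detections only when an endoscope front end is
    present within threshold_cm; the front end itself always passes."""
    tips = [d for d in detections if d["label"] == "scope_tip"]
    kept = []
    for det in detections:
        if det["label"] == "scope_tip":
            kept.append(det)
        elif any(center_distance(det, t) <= threshold_cm for t in tips):
            kept.append(det)
    return kept
```

For example, with a front end at (0, 0), a bubbling cup at (3, 4) (5 cm away) passes, while a droplet at (10, 10) (about 14 cm away) is filtered out as a likely false alarm.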
In some embodiments, step 3) specifically further comprises:
the characteristic target passing through the distance filter is entered into a sliding window filter comprising a sliding window queue having a preset queue size.
And counting the total number of similar feature targets in a plurality of continuous feature targets.
And when the ratio of the total number of the similar characteristic targets to the preset queue size is larger than a preset ratio, enabling the characteristic targets to pass through the sliding window filter.
To reduce the adverse effect of false alarms from the image detector on the algorithm's final result, the present application filters the target detector's result signal in two steps. In the first step, false alarms are filtered by the relative positions of the water cup, the water drops, and the front end of the endoscope: when, in an evaluation image, the distance from the front end of the endoscope to the water cup or the water drops is less than or equal to a certain threshold, the bubbling cup or continuous drops can be confirmed as correctly detected; conversely, if the distance between the front end of the endoscope and the cup exceeds the threshold, the detected bubbling cup or continuous drops are judged to be a false alarm. In the second step, a sliding-window filtering algorithm is added: detector results that pass the first filtering step are fed into a sliding-window queue, and the final target position and category information is output only when the number of same-category targets in the sliding window exceeds a preset threshold.
FIG. 5 is a structural example of the target-detection-result signal processing system (the feature filter), the second module of an embodiment of the present application. In specific examples, the target detector's results may first be false-alarm filtered by calculating the relative distance between the bubbling cup (i.e., the above-mentioned transparent container holding liquid and bubbles; likewise below) and the front end of the endoscope, and between the continuous water drops and the front end of the endoscope. The first detection target of the system is the front end of the endoscope hose, whose center is denoted c1; the second is the bubbling water cup, whose center is denoted c2; and the center coordinate of the third, the continuous water drops, is denoted c3. The detection results can be divided into the following cases:
if an bubbling cup or continuous water drop is detected in the image, but the front end of the endoscope hose is not detected, the detected cup is judged, the water drop possibly being noise in the background, not being a real target, is judged to be erroneously detected, and then the detected water drop is filtered by the system.
If a bubbling cup or continuous water drops are detected in the image together with the front end of the endoscope hose, let the straight-line distance between c1 and c2 be d1 and the straight-line distance between c1 and c3 be d2. If d1 lies in a small range, i.e., d1 ∈ [0, 7 cm], the bubbling cup is judged to be correctly detected; otherwise, if d1 > 7 cm, the cup is far from the front end of the hose, the detection is judged to be a false detection, and the system filters it out. Similarly, if d2 ∈ [0, 7 cm], the continuous-drop detection is judged correct; otherwise, if d2 > 7 cm, the drops are far from the front end of the hose, the detection is judged to be a false detection, and the system filters it out.
To further filter the detection results, targets passing the distance filter are fed into the sliding-window queues Q1 and Q2 of the sliding-window filters for air jet and water spray, respectively. Each queue has size N. Whenever an evaluation image yields a bubble cup that passes the detector output and the distance filter, 1 is inserted into Q1; if the evaluation image yields no result, 0 is inserted. When Σ_{i=1}^{N} Q1_i > t, a detected air-jet signal is output, where t = N/2 is a preset threshold and Q1_i is the i-th element of queue Q1. Similarly, whenever an evaluation image detects a continuous water drop that passes the distance filter, 1 is inserted into Q2; if the evaluation image yields no result, 0 is inserted. When Σ_{i=1}^{N} Q2_i > t, a detected water-spray signal is output, where t = N/2 is a preset threshold and Q2_i is the i-th element of queue Q2.
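The sliding-window majority filter above can be sketched with a fixed-size queue. A minimal sketch under stated assumptions: the class and method names are illustrative, and the per-frame 0/1 insertion and the t = N/2 threshold are taken directly from the description; one filter instance would be kept for Q1 (air jet) and one for Q2 (water spray).

```python
from collections import deque

class SlidingWindowFilter:
    """Queue of size N holding 1 for frames where the target passed the
    distance filter and 0 otherwise; a signal fires once more than
    t = N/2 of the last N frames are hits."""

    def __init__(self, n):
        self.t = n / 2              # preset threshold t = N/2
        self.q = deque(maxlen=n)    # oldest entries drop off automatically

    def push(self, detected):
        """Record one evaluation image; return True when the signal fires."""
        self.q.append(1 if detected else 0)
        return sum(self.q) > self.t
```

With N = 4 (so t = 2), the signal first fires on the third consecutive hit (sum 3 > 2), stays up while 3 of the last 4 frames are hits, and drops once the window falls to 2 hits — which is the temporal smoothing the description relies on to suppress single-frame noise.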
In some embodiments, step 4) specifically includes:
When a feature target passes the feature filtering, a timer corresponding to the type of that feature target starts counting; when the feature targets in subsequent evaluation images continue to pass the feature filtering, the count of the timer keeps increasing.
When the timing result is greater than a preset time threshold, the evaluation result of the endoscope is generated as passing.
In some embodiments, the evaluation results include jet evaluation results and/or water jet evaluation results.
In some embodiments, the timer comprises a jet timer corresponding to the jet evaluation result, and the feature target corresponding to the jet timer comprises the transparent container and the front end of the endoscope.
In some embodiments, the timer further comprises a water jet timer corresponding to the water jet evaluation result, the feature target corresponding to the water jet timer comprising the continuous droplet and a front end of an endoscope.
In the above part, target information that has passed the two filtering stages activates the corresponding timer. When the air-jet quality-check timer reaches a preset value, the system outputs that the air-jet check has passed; similarly, when the water-spray quality-check timer reaches the preset value, the system outputs that the water-spray check has passed.
As shown in Fig. 6, a detection result (feature target) that passes the filtering triggers the corresponding timer, and each correctly detected target signal increments the timer. When the accumulated value c of the timer is greater than the physician-preset time t, the evaluation passes, i.e., the endoscope is functioning normally.
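The timer logic in Fig. 6 can be sketched as a per-signal accumulator. This is a sketch under assumptions: the frames-per-second conversion and the class name are introduced here for illustration; the description itself only specifies an accumulated value c compared against a physician-preset time t, and does not state whether the accumulator ever resets.

```python
class QualityCheckTimer:
    """Accumulates correctly detected target signals (air jet or water
    spray); reports the check as passed once the accumulated time c
    exceeds the physician-preset duration t."""

    def __init__(self, preset_t_seconds, fps=30):
        self.preset_t = preset_t_seconds  # physician-preset time t
        self.fps = fps                    # assumed frame rate of the camera
        self.frames = 0                   # accumulated value c, in frames

    def tick(self, signal_detected):
        """Process one evaluation image; return True once the check passes."""
        if signal_detected:
            self.frames += 1
        return (self.frames / self.fps) > self.preset_t
```

For example, at 10 fps with t = 1 s, the check passes on the 11th consecutive detected frame (1.1 s accumulated), not the 10th (exactly 1.0 s, not strictly greater than t).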
It can be understood that the above method and the evaluation system below may further include an output device or output module configured to output information on whether the endoscope's functions are normal, so that a doctor can learn the result. The output may be voice, image, or text; the corresponding output methods and related settings are common technical means and are not repeated herein.
With continued reference to fig. 3-6, an embodiment of the present application also provides an endoscope evaluation system comprising:
and the image pickup module is used for acquiring an evaluation image of the endoscope.
And the object detector is used for acquiring a characteristic object and position information thereof in the evaluation image through object detection, wherein the characteristic object at least comprises a transparent container containing liquid and bubbles, continuous liquid drops and the front end of the endoscope.
And the feature filter is used for calculating the distance between different types of feature targets in the evaluation image based on the feature targets and the position information thereof, calculating the occurrence proportion of the feature targets of the same type in a plurality of continuous evaluation images, and carrying out feature filtering on the feature targets based on the distance and the proportion.
And the signal timer is used for continuously timing the characteristic targets filtered by the characteristics and acquiring an evaluation result of the endoscope based on the timing result, wherein the evaluation result is used for indicating whether the air injection and water injection functions of the endoscope are normal or not.
The above object detector, feature filter and signal timer constitute the convolutional-neural-network-based preoperative preparation evaluation algorithm provided in the embodiment of the present application. Specifically, the three modules may be: 1. a target detector based on a convolutional neural network (CNN); 2. a target-detection signal processing algorithm; 3. a target successful-detection signal timer, which reports preoperative preparation success if the target is continuously detected for longer than a preset threshold.
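The three-module composition above can be sketched end to end. This is an illustrative wiring sketch, not the patent's implementation: the function name and the callable interfaces for the detector, distance filter, sliding-window filter, and timer are assumptions standing in for the modules described in the text.

```python
def evaluate_stream(frames, detector, dist_filter, window_filter, timer):
    """Run the three-module pipeline over a stream of evaluation images:
    CNN detector -> distance + sliding-window filtering -> success timer.
    Returns "pass" once the timer reports the check complete."""
    for frame in frames:
        detections = detector(frame)          # module 1: CNN target detector
        kept = dist_filter(detections)        # module 2a: distance filter
        signal = window_filter(bool(kept))    # module 2b: sliding-window filter
        if timer(signal):                     # module 3: success timer
            return "pass"
    return "pending"
```

In practice one such pipeline would run per function under test (one for air jet, one for water spray), with the two sliding-window queues Q1 and Q2 and their timers kept separate.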
As a specific application scenario, when the endoscope evaluation system and the corresponding evaluation method are adopted, the evaluation accuracy can reach 95%. Although a doctor could perform the evaluation with the naked eye, with the system and method provided by the embodiments of the present application doctors need not expend extra effort on evaluation; the evaluation can even be handed over to other medical staff, such as nurses, in place of specialized evaluation by doctors. Especially when facing a large number of patients awaiting examination or surgery, this greatly saves the doctors' energy and improves the quality and efficiency of examinations or operations.
In summary, the method and system provided by the embodiments of the present application can accurately judge whether the preoperative preparation of the endoscope has completed the air-jet and water-spray examination preparations, accurately indicate whether the preoperative preparation passes, and avoid interruption of surgery or examination caused by insufficient preoperative preparation, greatly improving doctors' preoperative preparation efficiency and saving their energy. This is of great significance for the development and application of endoscopic methods.
It should be understood that the above embodiments are merely for illustrating the technical concept and features of the present application, and are intended to enable those skilled in the art to understand the present application and implement the same according to the present application without limiting the scope of the present application. All equivalent changes or modifications made in accordance with the spirit of the present application should be construed to be included in the scope of the present application.

Claims (6)

1. An endoscope evaluation method based on a convolutional neural network, wherein the endoscope has at least two functions of air injection and water spraying, and the endoscope evaluation method comprises the following steps:
1) Acquiring an evaluation image of the endoscope through an image pickup module;
2) The method comprises the steps that a target detector is utilized to obtain a characteristic target and position information thereof in an evaluation image through target detection, the characteristic target at least comprises a transparent container containing liquid and bubbles, continuous liquid drops and the front end of an endoscope, the target detector comprises a backbone neural network, a region candidate network and a classifier which are sequentially connected in series, the backbone neural network is used for generating a characteristic image based on conversion of the evaluation image, the region candidate network is used for obtaining a detection target and position information thereof based on the characteristic image, and the classifier is used for classifying the detection target into different types of characteristic targets;
3) Calculating the distance between different kinds of characteristic targets in the evaluation image based on the characteristic targets and the position information thereof, and calculating the occurrence proportion of the characteristic targets of the same kind in a plurality of continuous evaluation images, and performing characteristic filtering on the characteristic targets based on the distance and the proportion, wherein a first detection target is the front end of an endoscope, the center of which is marked as c1, a second detection target is a transparent container containing liquid and bubbles, the center of which is marked as c2, a third detection target is marked as continuous liquid drop, the center of which is marked as c3, and the result of the characteristic filtering comprises:
when a transparent container containing liquid and air bubbles and/or continuous liquid drops are detected in the evaluation image but the front end of the endoscope is not detected, the detected transparent container containing liquid and air bubbles and/or continuous liquid drops are judged to be noise, determined to be false detections, and then filtered out;
when a transparent container containing liquid and air bubbles, continuous liquid drops and the front end of the endoscope are detected in the evaluation image, the straight-line distance between c1 and c2 is denoted d1; when d1 ∈ [0, 7 cm], the detection of the transparent container containing liquid and air bubbles is judged correct; when d1 > 7 cm, it is determined to be a background-noise false alarm, the detection result is determined to be a false detection, and the corresponding characteristic target is then filtered out;
the straight-line distance between c1 and c3 is denoted d2; when d2 ∈ [0, 7 cm], the detection of the continuous droplet is judged correct; when d2 > 7 cm, it is determined to be a background-noise false alarm, the detection result is determined to be a false detection, and the corresponding characteristic target is then filtered out;
the objects passing through the distance filter are fed into the sliding-window queues Q1, Q2 of the sliding-window filters for air jet and water spray respectively, each queue being of size N; 1 is inserted into Q1 whenever an evaluation image detects bubbling and passes the feature filtering of the distance filter, and 0 is inserted when the evaluation image yields no result; when Σ_{i=1}^{N} Q1_i > t, a detected air-jet signal is output for indicating that the air-jet function is normal, where t = N/2 is a preset threshold and Q1_i is the i-th element of queue Q1;
1 is inserted into Q2 whenever an evaluation image detects continuous water drops and passes the feature filtering of the distance filter, and 0 is inserted when the evaluation image yields no result; when Σ_{i=1}^{N} Q2_i > t, a detected water-spray signal is output for indicating that the water-spray function is normal, where t = N/2 is a preset threshold and Q2_i is the i-th element of queue Q2;
4) When the feature targets pass through the feature filtering, starting to count the time by a timer corresponding to the types of the feature targets, and when the feature targets corresponding to the subsequent evaluation images continue to pass through the feature filtering, enabling the counting result of the timer to be positively increased;
when the timing result is greater than a preset time threshold, generating an evaluation result of the endoscope as passing evaluation;
the evaluation results comprise jet evaluation results and/or water spray evaluation results;
the timer comprises a jet timer corresponding to the jet evaluation result, the characteristic targets corresponding to the jet timer comprising the transparent container and the front end of the endoscope, represented by the sliding-window queue Q1; the timer further comprises a water-spray timer corresponding to the water-spray evaluation result, the characteristic targets corresponding to the water-spray timer comprising the continuous liquid drops and the front end of the endoscope, represented by the sliding-window queue Q2.
2. The endoscope evaluation method according to claim 1, wherein in step 1), the image pickup module is fixedly arranged on a preoperative preparation table, and a shooting direction faces an operation platform of the preoperative preparation table;
the camera shooting module is away from the center of the operation platformWherein->
3. The endoscopic evaluation method according to claim 1, wherein the backbone neural network comprises any one of ResNet, SqueezeNet, ShuffleNet, VGGNet and DenseNet;
and/or, the classifier comprises a softmax classifier.
4. The endoscopic evaluation method according to claim 1, wherein said object detector is obtained by marker training;
the marker training includes pre-training of the backbone neural network;
the pre-training is performed on ImageNet.
5. The endoscopic evaluation method according to claim 4, wherein the marker training specifically comprises:
providing an initial target detector, a training image and a label corresponding to the training image;
performing target detection on the training image by using the initial target detector to obtain a training detection result;
and updating parameters of the target initial detector based on the training detection result and the label to obtain the target detector.
6. An endoscope evaluation system for implementing the endoscope evaluation method of any one of claims 1 to 5, comprising:
the camera module is used for acquiring an evaluation image of the endoscope;
a target detector for acquiring a characteristic target and position information thereof in the evaluation image by target detection, the characteristic target including at least a transparent container containing liquid and air bubbles, continuous liquid droplets, and a front end of the endoscope;
the feature filter is used for calculating the distance between different types of feature targets in the evaluation image based on the feature targets and the position information thereof, calculating the occurrence proportion of the feature targets of the same type in a plurality of continuous evaluation images, and carrying out feature filtering on the feature targets based on the distance and the proportion;
and the signal timer is used for continuously timing the characteristic targets filtered by the characteristics and acquiring an evaluation result of the endoscope based on the timing result, wherein the evaluation result is used for indicating whether the air injection and water injection functions of the endoscope are normal or not.
CN202210776537.1A 2022-07-01 2022-07-01 Endoscope evaluation method and system based on convolutional neural network Active CN115082739B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210776537.1A CN115082739B (en) 2022-07-01 2022-07-01 Endoscope evaluation method and system based on convolutional neural network


Publications (2)

Publication Number Publication Date
CN115082739A CN115082739A (en) 2022-09-20
CN115082739B true CN115082739B (en) 2023-09-01

Family

ID=83258703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210776537.1A Active CN115082739B (en) 2022-07-01 2022-07-01 Endoscope evaluation method and system based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN115082739B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309605B (en) * 2023-05-24 2023-08-22 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) Endoscopy quality control method and system based on deep learning and state transition

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012070937A (en) * 2010-09-28 2012-04-12 Fujifilm Corp Endoscopic system
CN104797186A (en) * 2013-03-06 2015-07-22 奥林巴斯株式会社 Endoscope system
KR101875004B1 (en) * 2017-01-04 2018-07-05 금오공과대학교 산학협력단 Automated bleeding detection method and computer program in wireless capsule endoscopy videos
CN110837760A (en) * 2018-08-17 2020-02-25 北京四维图新科技股份有限公司 Target detection method, training method and device for target detection
CN110930429A (en) * 2018-09-19 2020-03-27 杭州海康威视数字技术股份有限公司 Target tracking processing method, device and equipment and readable medium
CN111666998A (en) * 2020-06-03 2020-09-15 电子科技大学 Endoscope intelligent intubation decision-making method based on target point detection
CN113768452A (en) * 2021-09-16 2021-12-10 重庆金山医疗技术研究院有限公司 Intelligent timing method and device for electronic endoscope

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110363049B (en) * 2018-04-10 2024-01-12 阿里巴巴集团控股有限公司 Method and device for detecting, identifying and determining categories of graphic elements
CN110378381B (en) * 2019-06-17 2024-01-19 华为技术有限公司 Object detection method, device and computer storage medium
EP4057882A1 (en) * 2019-12-23 2022-09-21 Sony Group Corporation Computer assisted surgery system, surgical control apparatus and surgical control method
EP4272182A1 (en) * 2020-12-30 2023-11-08 Stryker Corporation Systems and methods for classifying and annotating images taken during a medical procedure


Also Published As

Publication number Publication date
CN115082739A (en) 2022-09-20

Similar Documents

Publication Publication Date Title
CN109190540B (en) Biopsy region prediction method, image recognition device, and storage medium
EP2685881B1 (en) Medical instrument for examining the cervix
Zheng et al. Localisation of colorectal polyps by convolutional neural network features learnt from white light and narrow band endoscopic images of multiple databases
CN107256552A (en) Polyp image identification system and method
CN109102491A (en) A kind of gastroscope image automated collection systems and method
EP2557539B1 (en) Image processing apparatus, image processing method, and image processing program
JP7182019B2 (en) Colonoscopy Quality Assessment Workstation Based on Image Recognition
CN115082739B (en) Endoscope evaluation method and system based on convolutional neural network
JP6716853B2 (en) Information processing apparatus, control method, and program
CN113129287A (en) Automatic lesion mapping method for upper gastrointestinal endoscope image
WO2023143014A1 (en) Endoscope-assisted inspection method and device based on artificial intelligence
CN113888518A (en) Laryngopharynx endoscope tumor detection and benign and malignant classification method based on deep learning segmentation and classification multitask
CN111798408B (en) Endoscope interference image detection and classification system and method
CN114708258B (en) Eye fundus image detection method and system based on dynamic weighted attention mechanism
CN113017702A (en) Method and system for identifying extension length of small probe of ultrasonic endoscope and storage medium
CN109241963A (en) Blutpunkte intelligent identification Method in capsule gastroscope image based on Adaboost machine learning
Ghosh et al. Block based histogram feature extraction method for bleeding detection in wireless capsule endoscopy
Zabulis et al. Lumen detection for capsule endoscopy
CN114359131A (en) Helicobacter pylori stomach video full-automatic intelligent analysis system and marking method thereof
CN112419246A (en) Depth detection network for quantifying esophageal mucosa IPCLs blood vessel morphological distribution
CN114332025B (en) Digestive endoscopy oropharynx passing time automatic detection system and method
Li et al. Computer aided detection of bleeding in capsule endoscopy images
CN114693912A (en) Endoscope inspection system with eyeball tracking function, storage medium and equipment
Lopes et al. A deep learning approach to detect hyoid bone in ultrasound exam
CN111446003A (en) Infectious disease detection robot based on visual identification and detection method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant