CN114332019A - Endoscope image detection assistance system, method, medium, and electronic apparatus - Google Patents

Endoscope image detection assistance system, method, medium, and electronic apparatus

Info

Publication number
CN114332019A
CN114332019A
Authority
CN
China
Prior art keywords: endoscope, image, tissue, mode, classification
Prior art date
Legal status: Granted
Application number
CN202111643635.XA
Other languages
Chinese (zh)
Other versions
CN114332019B (en)
Inventor
边成
李剑
赵秋阳
赵家英
石小周
杨志雄
薛云鹤
李帅
刘威
Current Assignee
Xiaohe Medical Instrument Hainan Co ltd
Original Assignee
Xiaohe Medical Instrument Hainan Co ltd
Priority date
Filing date
Publication date
Application filed by Xiaohe Medical Instrument Hainan Co ltd
Priority to CN202111643635.XA
Publication of CN114332019A
Priority to PCT/CN2022/137565 (published as WO2023124876A1)
Application granted
Publication of CN114332019B
Active legal status
Anticipated expiration

Classifications

    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0442 Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/08 Learning methods
    • G06T7/00 Image analysis
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/764 Arrangements for image or video recognition or understanding using classification, e.g. of video objects
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Arrangements for image or video recognition or understanding using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Endoscopes (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present disclosure relates to an endoscope image detection assistance system, method, medium, and electronic apparatus. The system includes: an image processing module, which processes, in real time, the endoscope image acquired by an endoscope to obtain a tissue image; a cavity positioning module, which determines a target point of the tissue cavity corresponding to the tissue image, the target point indicating the next point to which the endoscope should move from its current position, where the target point is the center point of the tissue cavity when a tissue cavity exists in the tissue image, and a direction point toward the tissue cavity when no tissue cavity exists; a polyp identification module, which performs real-time polyp identification on the tissue images obtained while the endoscope is in withdrawal mode, to obtain a polyp identification result; and a display module, which displays the target point and the polyp identification result.

Description

Endoscope image detection assistance system, method, medium, and electronic apparatus
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an endoscope image detection assistance system, method, medium, and electronic apparatus.
Background
Endoscopy is a commonly used examination technique that lets a physician observe the actual state of the body's internal environment directly. It is widely used in medicine to detect polyps, lesions, and cancers, so that patients can receive effective intervention and treatment at an early stage of disease.
An endoscopic examination generally consists of an insertion phase and a withdrawal phase, and the insertion is usually controlled by the physician. However, because of the complexity of the internal environment of the human body, the varying imaging quality of acquisition devices, differences in physicians' operating skill, uneven bowel preparation across patients, and the constant relative motion between the endoscope and the intestinal tract, the endoscope image may suffer from bubble occlusion, overexposure, motion blur, and the like, and blind spots easily appear in the field of view. The physician must therefore steer the endoscope by personal experience, which makes insertion slow or even unsuccessful, may injure the person being examined, and demands considerable effort and time from the physician as well as a high level of skill and experience.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, the present disclosure provides an endoscope image detection assistance system, the system comprising:
an image processing module, configured to process, in real time, the endoscope image acquired by an endoscope to obtain a tissue image;
a cavity positioning module, configured to determine a target point of the tissue cavity corresponding to the tissue image, the target point indicating the next point to which the endoscope should move from its current position, where the target point is the center point of the tissue cavity when a tissue cavity exists in the tissue image, and a direction point toward the tissue cavity when no tissue cavity exists;
a polyp identification module, configured to perform real-time polyp identification on the tissue images obtained while the endoscope is in withdrawal mode, to obtain a polyp identification result; and
a display module, configured to display the target point and the polyp identification result.
In a second aspect, the present disclosure provides an endoscope image detection assistance method, the method comprising:
processing, in real time, an endoscope image acquired by an endoscope to obtain a tissue image;
in response to determining that the endoscope is in insertion mode, determining a target point of the tissue cavity corresponding to the tissue image, the target point indicating the next point to which the endoscope should move from its current position, where the target point is the center point of the tissue cavity when a tissue cavity exists in the tissue image, and a direction point toward the tissue cavity when no tissue cavity exists;
in response to determining that the endoscope is in withdrawal mode, performing real-time polyp identification on the tissue images obtained in withdrawal mode, to obtain a polyp identification result; and
displaying the target point and the polyp identification result in an image display area of a display interface.
In a third aspect, the present disclosure provides a computer-readable medium having stored thereon a computer program which, when executed by a processing apparatus, implements the steps of the method of the second aspect.
In a fourth aspect, the present disclosure provides an electronic device comprising:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage means to carry out the steps of the method of the second aspect.
With this technical solution, the cavity in the tissue image is detected and identified in real time during insertion, and the target point in the tissue image is determined accordingly. This provides reliable and accurate automatic navigation for the insertion process, improves insertion efficiency and accuracy, lowers the experience requirements that the endoscope places on the physician, avoids injury to the person being examined, and improves the user experience.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale. In the drawings:
FIG. 1 is a block diagram of an endoscopic image detection assistance system provided in accordance with an embodiment of the present disclosure;
fig. 2A and 2B are schematic illustrations of the display of a target point provided according to one embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a display interface provided in accordance with one embodiment of the present disclosure;
FIG. 4 is a schematic illustration of a three-dimensionally reconstructed intestinal tract;
FIG. 5 is a flow diagram of an endoscopic image detection assistance method provided in accordance with an embodiment of the present disclosure;
FIG. 6 is a block diagram illustrating an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art will understand them to mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Fig. 1 is a block diagram of an endoscope image detection assistance system according to an embodiment of the present disclosure. As shown in fig. 1, the endoscope image detection assistance system 10 may include:
The image processing module 100 is configured to process, in real time, the endoscope image acquired by an endoscope to obtain a tissue image.
In medical endoscope image recognition, the endoscope films the interior of a living body, such as a human body, in real time, producing a video stream; frames can be extracted from this stream at a preset acquisition interval to obtain endoscope images. As an example, each captured endoscope image may be cropped, normalized, or resampled to a preset size to obtain the tissue image, that is, the image of the tissue under examination in the current procedure, so that tissue images can be processed uniformly. The tissue under examination may be, for example, the intestine, the thorax, or the abdominal cavity.
In an embodiment of the present disclosure, the image processing module may detect the tissue region in the endoscope image based on the YOLOv4 algorithm and then crop away device information and personal information, so that the tissue image contains only the in-vivo part of the endoscope image; this protects private information while simplifying subsequent processing. In addition, because the insertion process is unstable and the endoscope may be poorly positioned, many invalid images, such as images occluded by obstacles or with too little sharpness, may be collected while the endoscope moves; these invalid images would distort the results of the examination. Therefore, after an endoscope image is obtained, its validity can be judged: an invalid image is discarded directly, and only a valid image is used to determine the corresponding tissue image, which reduces unnecessary data processing and increases processing speed. Validity may be judged, for example, with a pre-trained recognition model, which may be trained on a convolutional neural network; the present disclosure places no specific limit on this.
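As a rough illustration of this pipeline (frame extraction at a preset interval, cropping to the in-vivo region, size normalization, validity filtering), the Python sketch below uses OpenCV. The acquisition interval, the preset size, and the Laplacian-variance blur test standing in for the trained validity recognition model are illustrative assumptions, not the patent's method; the ROI is assumed to come from an upstream detector such as the YOLOv4 model mentioned above.
```python
from typing import Optional, Tuple
import cv2
import numpy as np

FRAME_INTERVAL = 5          # hypothetical acquisition cycle: keep every 5th frame
PRESET_SIZE = (512, 512)    # hypothetical preset size for uniform processing
BLUR_THRESHOLD = 60.0       # hypothetical sharpness cutoff for the validity check

def extract_tissue_image(frame: np.ndarray,
                         roi: Tuple[int, int, int, int]) -> Optional[np.ndarray]:
    """Crop the in-vivo region (located upstream, e.g. by a YOLOv4 detector),
    dropping device/personal information outside it, then normalize the size
    and discard invalid (e.g. motion-blurred) images."""
    x, y, w, h = roi
    tissue = cv2.resize(frame[y:y + h, x:x + w], PRESET_SIZE)
    # Stand-in validity check: the patent uses a trained recognition model;
    # a Laplacian-variance blur test plays that role here.
    gray = cv2.cvtColor(tissue, cv2.COLOR_BGR2GRAY)
    if cv2.Laplacian(gray, cv2.CV_64F).var() < BLUR_THRESHOLD:
        return None  # invalid image is discarded directly
    return tissue
```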
A cavity positioning module 200, configured to determine a target point of the tissue cavity corresponding to the tissue image, the target point indicating the next point to which the endoscope should move from its current position. When a tissue cavity exists in the tissue image, the target point is the center point of the tissue cavity; when no tissue cavity exists, the target point is a direction point toward the tissue cavity.
As an example, if a tissue cavity exists in the tissue image, the target point is the center point of that cavity, shown as point A in fig. 2A. If no tissue cavity exists, a navigation direction point is determined instead, shown as point B in fig. 2B.
Illustratively, the physician may select the current mode while using the endoscope: an insertion mode for advancing the endoscope to the ileocecal region, or a withdrawal mode for examining the tissue inside the body. During insertion, the endoscope should move along the center point of the tissue cavity as far as possible, which effectively prevents it from touching the mucosa on the tissue surface and thereby protects the person being examined from injury. Accordingly, in this embodiment, when the endoscope is determined to be in insertion mode, it can further be determined whether a tissue cavity exists in the tissue image.
For example, a detection model may be pre-trained to detect whether a tissue cavity is present in a tissue image. The model can be trained on previously collected training samples, each comprising a training image and a label indicating whether a tissue cavity exists in that image: the training image serves as the model input and its label as the target output, and the model is updated until the detection model is obtained. The detection model may be a CNN (convolutional neural network), an LSTM (long short-term memory) network, the encoder of a Transformer, or the like; the present disclosure is not limited in this respect.
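As a concrete illustration, a cavity-presence detector of this kind could look like the PyTorch sketch below. The backbone, layer sizes, and the sigmoid output are assumptions; the patent only requires a model trained with training images as input and presence labels as target output.
```python
import torch
import torch.nn as nn

class CavityDetector(nn.Module):
    """Minimal stand-in for the cavity-presence detection model; the patent
    leaves the backbone open (CNN, LSTM, Transformer encoder, ...)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # probability that a tissue cavity is present in each input image
        return torch.sigmoid(self.head(self.features(x)))

model = CavityDetector()
prob = model(torch.randn(1, 3, 224, 224))  # one tissue image -> one probability
has_cavity = bool(prob.item() > 0.5)
```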
As an example, the tissue cavity may be an intestinal cavity, a gastric cavity, or the like. Taking the intestinal cavity as an example, if a tissue cavity exists in the images collected after the endoscope enters the intestine, the center point of the intestinal cavity, that is, the center of the cross section of the space enclosed by the intestinal wall, can further be determined, and the endoscope is automatically navigated by advancing it along that center point.
The center point of the tissue cavity may be identified with a keypoint identification model. In this embodiment, the tissue images may be labeled by experienced physicians; for convenience, the center-point position in a training image can be annotated with a circle whose center marks the center point, which constitutes the label for that image, giving a training sample that contains the training image and its label. In addition, to improve the generalization of the model, the training samples may include unlabeled training images. The keypoint identification model may include a student sub-network, a teacher sub-network, and a judgment sub-network; the student and teacher sub-networks share the same network structure, the teacher sub-network produces the predicted labeling feature corresponding to the training image fed to the student sub-network, and during training the weight of the student sub-network's prediction loss is determined from the teacher sub-network's predicted labeling feature and the judgment sub-network.
To improve recognition accuracy, different preprocessing methods may be applied to a training image to obtain different processed versions of it. The preprocessing may be data augmentation, for example non-affine transformations such as changes in color, luminance, chrominance, or saturation, which guarantee that the center-point position is not deformed. The different processed images then serve as the respective input images of the teacher and student sub-networks when training the keypoint identification model.
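The sketch below is a speculative PyTorch rendering of one training step in this student/teacher/judgment arrangement. The exponential-moving-average teacher update, the MSE losses, and the scalar weight taken from the judgment sub-network are assumptions layered on the patent's outline (two non-affine augmentations of the same image, the teacher's output as the predicted labeling feature, the judgment sub-network weighting the student's prediction loss).
```python
import torch
from torchvision.transforms import ColorJitter

# Two different non-affine augmentations: center-point positions are unchanged.
aug_student = ColorJitter(brightness=0.3, contrast=0.3)
aug_teacher = ColorJitter(saturation=0.3, hue=0.1)

def train_step(student, teacher, judge, image, label, optimizer):
    """One hypothetical training step. `label` is a center-point heatmap for
    annotated images and None for unlabeled ones; `judge` is the judgment
    sub-network that weights the student's prediction loss."""
    pred_s = student(aug_student(image))
    with torch.no_grad():
        pred_t = teacher(aug_teacher(image))   # predicted labeling feature
    weight = judge(pred_t).mean()              # assumed scalar loss weight
    target = label if label is not None else pred_t  # unlabeled: consistency
    loss = weight * torch.nn.functional.mse_loss(pred_s, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # the teacher tracks the student, here by exponential moving average
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(0.99).add_(s, alpha=0.01)
    return loss.item()
```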
If no tissue cavity exists in the images collected after the endoscope enters the intestine, a direction point of the intestinal cavity can further be determined. The direction point is the predicted position of the tissue cavity's center point relative to the tissue image; it indicates the direction toward which the endoscope should deflect and thus provides directional guidance for its advance.
In this embodiment, when it is determined that no tissue cavity exists in the tissue image, the most recent N tissue images, including the current one, may be assembled into an image sequence, and the direction point is predicted from this sequence with a direction point identification model. N is a positive integer giving the number of tissue images in the sequence and can be set according to the application scenario. Illustratively, the direction point identification model comprises a convolutional sub-network for extracting the spatial features of the image sequence, a time-recurrent sub-network for extracting its temporal features, and a decoding sub-network that decodes the spatial and temporal features into the direction point, which safeguards the accuracy of direction-point identification.
For example, the number of training images per training sequence may be fixed according to the actual usage scenario; with N set to 5, each training sequence contains the 5 most recent training images, from which the direction point of the tissue cavity in the current state is predicted. The label corresponding to a training sequence indicates the position of the direction point in the last image as predicted from the whole sequence, and the direction point identification model is trained on such sequences.
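One plausible shape for such a model is sketched below in PyTorch: a convolutional sub-network for spatial features, an LSTM as the time-recurrent sub-network, and a linear decoder that outputs a normalized (x, y) direction point from the last time step. Layer sizes and the sigmoid output range are assumptions.
```python
import torch
import torch.nn as nn

class DirectionPointNet(nn.Module):
    """Sketch of the direction point identification model: convolutional
    sub-network, time-recurrent sub-network, decoding sub-network."""
    def __init__(self, feat_dim: int = 128, hidden: int = 128):
        super().__init__()
        self.conv = nn.Sequential(          # spatial features per frame
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)  # temporal features
        self.decoder = nn.Linear(hidden, 2)  # normalized (x, y) direction point

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, N, 3, H, W) -- the N most recent tissue images, e.g. N = 5
        b, n, c, h, w = seq.shape
        feats = self.conv(seq.reshape(b * n, c, h, w)).reshape(b, n, -1)
        out, _ = self.lstm(feats)
        return torch.sigmoid(self.decoder(out[:, -1]))  # decode the last step

model = DirectionPointNet()
point = model(torch.randn(1, 5, 3, 224, 224))  # -> tensor of shape (1, 2)
```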
A polyp identification module 300, configured to perform real-time polyp identification on the tissue images obtained while the endoscope is in withdrawal mode, to obtain a polyp identification result.
A display module 400, configured to display the target point and the polyp identification result.
The physician examines the body during withdrawal, for example for polyps. In this procedure, to aid the physician's observation, polyp identification can be run in real time on the tissue images obtained during withdrawal, giving the physician a data reference throughout the real-time examination.
With the above technical solution, the cavity in the tissue image is thus detected and identified in real time during insertion, and the target point in the tissue image is determined accordingly; this provides reliable and accurate automatic navigation for the insertion process, improves insertion efficiency and accuracy, lowers the experience requirements that the endoscope places on the physician, avoids injury to the person being examined, and improves the user experience.
In one possible embodiment, the polyp identification module comprises:
and the polyp detection submodule is used for detecting the tissue image obtained by the endoscope in the scope retracting mode based on a detection model, and determining the position information of the polyp when the polyp exists in the tissue image.
For example, the polyp detection sub-module may perform detection by using a detection model implemented by GFLv2(Generalized local V2), where the detection model can output a predicted position and determine the prediction reliability of the predicted position according to the distribution, and the training method is the prior art and will not be described herein again. For example, the predicted position with the prediction reliability greater than a threshold may be displayed in the image display area in the display interface, and the prediction reliability may also be displayed, as shown in fig. 3, the predicted position may be displayed in the form of a solid line detection frame to assist a physician in prompting, so as to improve the accuracy of polyp detection and the detection level of polyps with high specificity.
Optionally, before a tissue image is input into the detection model of the polyp detection sub-module, it may be preprocessed by the preprocessing module, for example normalized uniformly to a preset size such as 512 × 512; the preprocessed image is then input into the detection model to obtain the position information of the polyp detected in the tissue image.
a polyp recognition sub-module, configured to extract from the tissue image a detection image corresponding to the position information, and to determine the classification of the polyp from the detection image and a recognition model;
The display module is further configured to display the tissue image, with an identifier corresponding to the polyp's position information and its classification shown in the tissue image.
As an example, after the polyp detection sub-module determines the position information of the polyp in the tissue image, the detection image corresponding to that position can be extracted from the tissue image. The region corresponding to the position information may be enlarged before extraction to ensure the extracted detection image is complete. As shown in fig. 3, if the detection image is extracted around the detection box corresponding to the detected polyp's position, the box may be enlarged by a factor of 1.5, and the image within the dashed box in fig. 3 is extracted as the detection image, which is then input into the polyp recognition sub-module for classification.
The polyp recognition sub-module may include a classification model implemented on ResNet18. Training samples are obtained by labeling a large number of polyp images with their classes in advance; each polyp image serves as the model input and its labeled class as the target output, and training yields the classification model. The classes may cover adenoma, hyperplastic polyp, carcinoma, inflammatory polyp, submucosal tumor, and others. Similarly, before the detection image is input into the classification model of the polyp recognition sub-module, it may be preprocessed by the preprocessing module, for example normalized uniformly to a preset size such as 256 × 256; the preprocessed image is then input into the classification model to obtain the polyp recognition result, so that the identifier for the polyp's position, which may take the form of a rectangular or circular detection box, and its classification can be displayed in the image display area of the display interface.
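The crop-and-classify step can be sketched as follows, with a plain torchvision ResNet18 standing in for the trained classification model. The 1.5x enlargement and the 256 × 256 normalization follow the text above; the (x1, y1, x2, y2) box format and the class list are assumptions for illustration.
```python
import torch
import torchvision.transforms.functional as TF
from torchvision.models import resnet18

CLASSES = ["adenoma", "hyperplastic polyp", "carcinoma",
           "inflammatory polyp", "submucosal tumor"]
classifier = resnet18(num_classes=len(CLASSES))  # stand-in; would be trained

def classify_polyp(tissue: torch.Tensor, box, scale: float = 1.5) -> str:
    """Enlarge the detected box by `scale`, extract the detection image from
    the tissue image (3, H, W), normalize it to 256x256, and classify it."""
    _, H, W = tissue.shape
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    w, h = (x2 - x1) * scale, (y2 - y1) * scale
    top, left = max(int(cy - h / 2), 0), max(int(cx - w / 2), 0)
    bottom, right = min(int(cy + h / 2), H), min(int(cx + w / 2), W)
    crop = TF.resized_crop(tissue.unsqueeze(0), top, left,
                           bottom - top, right - left, [256, 256])
    with torch.no_grad():
        logits = classifier(crop)
    return CLASSES[int(logits.argmax())]
```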
With this technical solution, the polyp's position in the tissue image is detected first, which improves the accuracy of polyp detection and reduces the polyp miss rate. The detection image used for polyp identification is then extracted from the detected position information, which reduces the amount of data processed during identification, keeps other parts of the tissue image from interfering with it, and further improves the accuracy of polyp identification.
As indicated above, a mode selection control may be provided in a control area of the display interface, and the endoscope's current usage mode may be determined in response to the user operating that control. In another possible embodiment, the system further comprises:
a mode identification module, configured to recognize the endoscope's tissue image with an image recognition model, to determine that the endoscope's image mode is insertion mode when the parameter the model outputs for the in-vivo image class exceeds a first parameter threshold, and to determine that the image mode is withdrawal mode when the parameter it outputs for the ileocecal-valve image class exceeds a second parameter threshold.
In this embodiment, the image recognition model may be a network model based on ViT (Vision Transformer). Training images are labeled in advance with three classes, namely in-vitro image, in-vivo image, and ileocecal-valve image, and the model is trained with the training image as input and the labeled class as target output. During the endoscopic examination, each tissue image is then classified with this model; the parameter output for the in-vivo image class is the probability the model assigns when its output class is in-vivo image, and likewise the parameter for the ileocecal-valve image class is the probability it assigns when its output class is ileocecal-valve image.
The first and second parameter thresholds may be set according to the actual application scenario and may be equal or different; the disclosure imposes no particular limit. Illustratively, with the first parameter threshold at 0.85 and the second at 0.9, a tissue image classified by the model as an in-vivo image with probability 0.9 leads to the determination that the endoscope's image mode should be insertion mode. Later in the examination, another tissue image classified as an ileocecal-valve image with probability 0.92 leads to the determination that the image mode should be withdrawal mode.
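The threshold logic itself is a few lines; the sketch below assumes the image recognition model returns a dictionary of class probabilities and that the current image mode is simply kept when neither threshold is exceeded (the patent does not state the fallback).
```python
from typing import Dict, Optional

FIRST_THRESHOLD = 0.85   # in-vivo image class       -> insertion mode
SECOND_THRESHOLD = 0.90  # ileocecal-valve image class -> withdrawal mode

def recognize_image_mode(probs: Dict[str, float]) -> Optional[str]:
    """probs: class probabilities from the ViT-based recognition model, e.g.
    {"in_vitro": 0.03, "in_vivo": 0.9, "ileocecal_valve": 0.07}."""
    if probs.get("ileocecal_valve", 0.0) > SECOND_THRESHOLD:
        return "withdrawal"
    if probs.get("in_vivo", 0.0) > FIRST_THRESHOLD:
        return "insertion"
    return None  # below both thresholds: keep the current image mode
```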
a mode switching module, configured to switch the endoscope's image mode to insertion mode when the current image mode is the in-vitro mode and the mode identification module determines insertion mode, and to switch the image mode to withdrawal mode and output second prompt information when the current image mode is insertion mode and the mode identification module determines withdrawal mode, the second prompt information prompting that the endoscope has entered withdrawal mode.
During use, the endoscope passes through its modes in order: in-vitro mode, insertion mode, withdrawal mode. Once the mode identification module has determined the image mode, whether a switch is needed can be decided from the endoscope's current image mode together with the recognized one. For example, if the current image mode is the in-vitro mode and the mode determined from the real-time tissue image is in-vivo, the endoscope has just entered the body and the mode can be switched automatically; if the current image mode matches the mode the identification module determined, no switch is needed. Likewise, when the current image mode is insertion mode and the mode determined from the real-time tissue image is withdrawal mode, the endoscope has reached the ileocecal region and the withdrawal examination follows; the system can switch automatically to withdrawal mode and prompt the user that the endoscope has entered withdrawal mode, that is, that the examination stage comes next.
With this technical solution, the endoscope's image mode is recognized and switched automatically from the images the endoscope collects during use, without manual operation, so that the usage mode matches the endoscope's actual state of use. This offers the user assistance and convenience, makes the stages of the procedure identifiable, and provides reliable data support for the mode-specific processing that follows.
The tissues examined by an endoscope are usually soft and peristalse while the physician moves the scope, the intestinal tract for example, and the physician may also perform flushing, loop reduction, and similar operations during the examination, so the physician cannot know clearly what range has been examined. On this basis, the present disclosure also provides the following embodiments.
In one possible embodiment, the system further comprises:
the endoscope comprises a blind area proportion detection module, a display proportion and a blind area proportion, wherein the display proportion and the blind area proportion are determined according to a tissue image obtained by the endoscope in a withdrawal mode, the sum of the display proportion and the blind area proportion is 1, the blind area can be a missed detection area or a detection blind area and the like caused by factors such as a mucous membrane and the like, and the blind area proportion can be understood as the proportion of the blind area (namely, the part which cannot be observed in the visual field of the endoscope) in the process of endoscopy to the whole surface area (namely, all the parts which should be observed in the endoscopy) in the interior of the tissue.
For example, blind-area proportion detection may start automatically when the endoscope is determined to be in withdrawal mode, or it may start in response to the user selecting a blind-area proportion detection control in the control area of the display interface; the module then performs the functions above and determines the blind-area proportion and the display proportion.
As one example, the display proportion and blind-area proportion may be determined from the sharpness of the tissue images obtained in withdrawal mode. In practical scenarios, when a tissue image has low sharpness it contains relatively more blind area, so the blind-area proportion in the field of view can be predicted from the image's sharpness. Experienced physicians can therefore annotate historically collected tissue images with blind-area proportions; a neural network is trained with the historical tissue image as input and the annotated blind-area proportion as target output, yielding a blind-area proportion prediction model that predicts the proportion from a tissue image. Its training can follow any method common in the art and is not repeated here. The tissue images obtained in withdrawal mode are then input into this model to obtain the corresponding blind-area proportion, from which the display proportion is determined.
As another example, the blind-area proportion detection module is further configured to: perform three-dimensional reconstruction from the tissue images obtained while the endoscope is in withdrawal mode to obtain a three-dimensional tissue image; and determine the display proportion and blind-area proportion from the three-dimensional tissue image and the target fitted tissue image corresponding to it.
Fig. 4 takes the intestinal tract as an example and shows the three-dimensional tissue image, a rendering of the intestinal mucosa, reconstructed from the endoscope images. Illustratively, reconstruction may use three-dimensional reconstruction techniques known in the art, such as the SurfelMeshing fusion algorithm. For example, two adjacent tissue images can be input into a depth network and a pose network to obtain the corresponding depth map and pose information, the pose describing the endoscope's motion within the tissue, for instance as a rotation matrix and a translation vector. The tissue images, depth maps, and pose information are then fused by the three-dimensional reconstruction model into the three-dimensional tissue image. Both the depth network and the pose network can be built and trained on ResNet50, which is not described further here.
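As a minimal illustration of how a depth map and pose information (rotation matrix R, translation vector t) combine during reconstruction, the NumPy sketch below lifts a single frame's depth map into a common 3D frame. The pinhole unprojection with camera intrinsics K is a standard assumption; the actual surface fusion across frames (e.g. SurfelMeshing) is not reproduced here.
```python
import numpy as np

def unproject(depth: np.ndarray, K: np.ndarray,
              R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Lift one (h, w) depth map into the world frame using the pose
    predicted for that frame; fusing such point clouds across frames
    yields the three-dimensional tissue image."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))      # pixel coordinates
    rays = np.linalg.inv(K) @ np.stack([u, v, np.ones_like(u)]).reshape(3, -1)
    points_cam = rays * depth.reshape(1, -1)            # camera-frame points
    return (R @ points_cam + t.reshape(3, 1)).T         # (h*w, 3) world points
```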
Because the intestine approximates a tubular structure and the endoscopic field of view is limited, reconstructing it from endoscope images can leave holes, as at W1, W2, W3, and W4 in fig. 4: positions that never appear in any tissue image, that is, the parts invisible during the examination, the blind areas of this disclosure. The physician cannot observe these regions during the examination, and if the invisible region is too large, findings are easily missed. In this embodiment, the blind-area proportion expresses what fraction of the whole tissue surface never appears in the examination images; it can be used to flag the currently invisible parts of the tissue and so characterize how comprehensive the endoscopic examination is.
Illustratively, the blind-area proportion detection module is further configured to:
determine the ratio of the number of points in the three-dimensional tissue image's point cloud to the number of fitted points on the target fitted tissue image as the display proportion, and determine the blind-area proportion from the display proportion.
The three-dimensional tissue image is reconstructed from the two-dimensional tissue images; it can then be projected onto the corresponding target fitted tissue image, and the display proportion determined from the area where the projection overlaps the fitted image. The target fitted tissue image corresponding to the three-dimensional tissue image is the complete cavity structure predicted from the structure of the tissue in the images; for the intestinal tract of fig. 4 it is the corresponding tubular structure image, which can be fitted as a cylinder. Once the three-dimensional tissue image is determined, fitting against the standard structural features yields the target fitted tissue image. For example, by a Monte Carlo method, K test points (K ≥ 100) are distributed uniformly over the target fitted tissue image, and the number Λ of test points falling in the visible region and the number Ω falling in the blind area are counted; the display proportion is then Φ = Λ/(Λ + Ω), and the blind-area proportion is 1 - Φ.
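The Monte Carlo estimate can be sketched directly; the nearest-point distance test used here to decide whether a test point counts as visible is an assumed criterion, since the text only specifies counting the visible points Λ and blind points Ω among the K test points.
```python
import numpy as np

def blind_area_ratio(test_points: np.ndarray, cloud: np.ndarray,
                     radius: float = 2.0) -> float:
    """test_points: (K, 3) points spread uniformly over the fitted surface,
    K >= 100; cloud: (M, 3) reconstructed point cloud. A test point is taken
    as visible when the cloud comes within `radius` of it (an assumption).
    Returns 1 - phi, where phi = Lambda / (Lambda + Omega) = Lambda / K."""
    dists = np.linalg.norm(
        test_points[:, None, :] - cloud[None, :, :], axis=-1).min(axis=1)
    lam = int((dists < radius).sum())     # Lambda: visible test points
    phi = lam / len(test_points)          # display proportion
    return 1.0 - phi                      # blind-area proportion
```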
The display module is further configured to display the display proportion and/or the blind-area proportion, and to display first prompt information when the blind-area proportion exceeds a blind-area threshold, the first prompt information indicating a risk of missed detection.
The display proportion represents the fraction of the whole tissue that the physician has observed during the endoscopic image examination, and the blind-area proportion the fraction not observed. In this embodiment, the display proportion may be shown, or the blind-area proportion, or both at once, so that the physician can learn the accuracy and comprehensiveness of the examination in time. When the blind-area proportion is large, a prompt is issued: for example, prompt text may be shown in the display interface, such as "high risk of missed detection at present", "please review", or "please perform withdrawal again"; it may be displayed directly, spoken aloud, or raised in a popup window. The physician is thereby warned in time that the mucosal coverage of the examined segment during withdrawal is insufficient and findings are easily missed, and can adjust the endoscope's direction according to the prompt, perform the withdrawal again, or repeat the withdrawal pass. This reduces the risk of a missed endoscopic finding to some extent, provides reliable and comprehensive data support for subsequent polyp identification and examination, and makes the system more convenient to use.
Optionally, the system further comprises:
the three-dimensional positioning module is used for carrying out three-dimensional reconstruction according to a tissue image obtained by the endoscope in a scope entering mode to obtain a three-dimensional tissue image; and determining a point on the center line of the three-dimensional tissue image, which is closest to the target point, as a three-dimensional target point of the target point in the three-dimensional tissue image.
The three-dimensional reconstruction method is described in detail above, and is not described herein again. As an example, the tissue cavity is generally fitted to a tubular structure, and thus, the center line of the three-dimensional tissue image can be the center line of the fitted tubular structure, so as to ensure the distance from the periphery of the tissue mucosa during the movement of the endoscope and avoid damaging the tissue mucosa. Therefore, after the target point in the tissue image is determined, in order to ensure the endoscope entering accuracy, the target point can be mapped to the corresponding position on the central line, namely, the point closest to the target point, so as to ensure the accuracy and the reasonability of the three-dimensional target point.
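With the centerline represented as a sampled 3D polyline (an assumption about the data layout), mapping the target point onto it is a nearest-point query:
```python
import numpy as np

def project_to_centerline(target: np.ndarray,
                          centerline: np.ndarray) -> np.ndarray:
    """target: (3,) target point lifted into the reconstruction frame;
    centerline: (P, 3) sampled centerline of the fitted tubular structure.
    Returns the centerline point closest to the target, i.e. the
    three-dimensional target point."""
    idx = np.linalg.norm(centerline - target, axis=1).argmin()
    return centerline[idx]
```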
a pose determination module, configured to determine the endoscope's pose information from the tissue image corresponding to the current position and the tissue images within a historical period.
Illustratively, the images captured during insertion can be monitored and displayed. Pose information can be attached to real endoscope images, which are annotated manually in advance together with their bounding boxes, to form a dataset. A pose estimation model can then be implemented on ResNet50: the endoscope image sequence serves as the input, the annotated pose information as the target output of the model, and the network is adjusted for regression, training the model. The tissue image at the current position and the tissue images of the historical period then form an image sequence from which the pose estimation model determines the endoscope's pose.
a trajectory determination module, configured to generate a navigation trajectory from the current position, the pose information, and the three-dimensional target point. The pose information characterizes the endoscope's current attitude and the three-dimensional target point the end point of its movement, so the trajectory toward the three-dimensional target point, that is, the navigation trajectory, can be determined from the current position and the pose information. Any trajectory prediction method common in the art may be used; the disclosure is not limited in this respect.
The display module is further configured to display the endoscope's pose information and the three-dimensional tissue image, and to display the three-dimensional target point and the navigation trajectory within the three-dimensional tissue image.
As one example, three-dimensional positioning and navigation may be turned on automatically while the endoscope is being inserted. As another example, a three-dimensional navigation control may be displayed in the control area of the display interface; when the user needs navigation, clicking the control triggers positioning by the three-dimensional positioning module in response to the selection, after which the endoscope's pose information is further determined and the navigation trajectory computed. The three-dimensional tissue image may be displayed in the display interface with the three-dimensional target point and navigation trajectory drawn in it, that is, the next position to move to and the suggested path to that position. Under automatic navigation, the physician can thus follow the endoscope's current in-body path and state in time and intervene manually in the insertion when needed, ensuring the accuracy of the insertion process and improving the user experience.
In practical scenarios, the enteroscope and the gastroscope share one endoscope host. If the person being examined undergoes both gastroscopy and enteroscopy, the video the system receives frequently alternates between the two; the examinations of the different tissues differ considerably, and a detection algorithm built for the enteroscope does not suit the gastroscope. On this basis, in one possible embodiment, controls for the corresponding detection modes may be displayed in the control area of the display interface, and in response to a control being selected, the selected mode is taken as the endoscope's detection mode.
The present disclosure therefore also provides the following embodiments, to be applied to endoscopic image detection in multiple scenarios. In one possible embodiment, the system further comprises:
the image classification module is used for classifying the tissue images of the endoscope in the endoscope entering mode to obtain target classifications corresponding to the tissue images, wherein the target classifications comprise an abnormal classification and a plurality of endoscope classifications.
For example, in the present disclosure, an image classification model may be pre-trained to classify tissue images, the image classification model may be obtained based on PreResNet18 training, the categories of the image classification may include an anomaly classification and an endoscope classification, wherein the anomaly classification includes a no-signal image, an in vitro image, and the like, and the endoscope classification may include an enteroscope image and a gastroscope image.
In this embodiment, the endoscope images can be classified and labeled in advance by experienced physicians, and can be labeled as abnormal classification for no-signal images, in-vitro images and the like, as enteroscope classification for enteroscope images, and as gastroscope classification for gastroscope images, so as to obtain training samples. Then, the endoscope image can be used as the input of the model, the label corresponding to the endoscope image can be used as the target output of the model, the model is trained to obtain the image classification model, and the tissue image collected by the endoscope is classified.
a detection mode identification module, configured to update the counter of each endoscope classification according to the target classification corresponding to each tissue image, to stop the counting operation of every counter when the value of the counter of any endoscope classification reaches a count threshold, and to determine the endoscope's detection mode from the endoscope classification whose counter reached the threshold. Each endoscope classification has a counter of its own, initialized to zero.
In this embodiment, the detection mode currently employed by the endoscope can be determined from the classifications of the tissue images acquired while the endoscope moves. As an example, the tissue images collected during movement are classified, and the detection mode identification module updates the counter of each endoscope classification according to the target classification of each tissue image as follows:
when the target classification is an endoscope classification, the counter of that classification is incremented by one, and the counter of every other endoscope classification whose value is nonzero is decremented by one; when the target classification is the abnormal classification, the counter of every endoscope classification whose value is nonzero is decremented by one. The counters of all endoscope classifications are updated in this way.
Illustratively, suppose the endoscope classifications are an enteroscope classification and a gastroscope classification with counters count1 and count2, both initialized to 0, and let the count threshold be 50. If the classification determined for a tissue image is the abnormal classification, both counters remain 0. When a tissue image is classified as enteroscope, count1 is incremented, so count1 = 1. Repeating this over further tissue images, suppose count1 = 49 and count2 = 3. If the classification of the next tissue image is again enteroscope, count1 is incremented to 50 and count2 decremented to 2; the counter of the enteroscope classification has now reached the count threshold, so the detection mode is determined to be enteroscope mode and the counting of every counter stops. By classifying and tallying the tissue images collected while the endoscope moves, its detection mode is thus determined automatically, saving the user operations and assisting the user's use of the endoscope.
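The counter scheme transcribes directly into code; the sketch below mirrors the worked example (count threshold 50, enteroscope and gastroscope classifications), with the class-name strings as assumed labels.
```python
from typing import Optional

COUNT_THRESHOLD = 50

class DetectionModeRecognizer:
    """Counter-based detection mode recognition, as in the example above."""
    def __init__(self):
        self.counts = {"enteroscope": 0, "gastroscope": 0}
        self.mode: Optional[str] = None

    def update(self, target_class: str) -> Optional[str]:
        if self.mode is not None:
            return self.mode              # counting has already stopped
        for scope in self.counts:
            if target_class == scope:
                self.counts[scope] += 1   # increment the matched classification
            elif self.counts[scope] > 0:
                self.counts[scope] -= 1   # decrement nonzero counters; an
                                          # abnormal classification matches no
                                          # scope, so it only decrements
        for scope, n in self.counts.items():
            if n >= COUNT_THRESHOLD:
                self.mode = scope         # e.g. enteroscope mode
        return self.mode
```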
The polyp identification module is further to: and performing real-time polyp recognition on the tissue image obtained by the endoscope in a scope withdrawal mode according to the polyp recognition model corresponding to the detection mode determined by the detection mode recognition module.
In this embodiment, a polyp recognition model for performing polyp recognition in each detection mode may be determined in advance for each detection mode, where the determination manner of the polyp recognition model is described in detail above and is not described herein again.
Therefore, by the above example, the detection mode of the endoscope can be determined automatically by classifying the tissue images acquired during movement of the endoscope, and image recognition is performed with the polyp recognition model corresponding to that detection mode. This improves the level of automation in using the endoscope and its fit with practical application scenarios, improves polyp recognition accuracy to a certain extent, and provides reliable data support for the physician's analysis of the endoscopy results.
In one possible embodiment, the system further comprises:
and the speed determining module is used for acquiring a plurality of historical tissue images corresponding to the current tissue image when the mode of the endoscope image is a scope withdrawing mode, calculating the similarity between each historical tissue image and the current tissue image, and determining the scope withdrawing speed according to the plurality of similarities.
The display module is also used for displaying the endoscope retreating speed corresponding to the endoscope.
During movement of the endoscope, if the endoscope moves slowly, adjacent tissue images should be similar; therefore, in this embodiment, the withdrawal speed can be evaluated based on the similarity between adjacent tissue images. The method for calculating the similarity between images may be any calculation method commonly used in the art, and the present disclosure does not limit this.
For example, a mapping relationship between similarity intervals and withdrawal speeds may be preset, where higher similarity corresponds to a slower withdrawal speed. For example, an average similarity for the plurality of tissue images during movement may be determined as the average of the determined similarities, and based on the similarity interval to which this average belongs, the speed corresponding to that interval may be taken as the withdrawal speed of the endoscope during movement. The withdrawal speed can then be displayed in the image display area of the display interface during withdrawal. In this way, the withdrawal speed can be determined from the similarity between adjacent tissue images during withdrawal and displayed to prompt the user, which to a certain extent avoids missed polyps caused by withdrawing the endoscope too quickly and improves convenience of use.
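A possible sketch of this similarity-based estimate is given below; SSIM is used as one commonly used similarity measure (the disclosure does not fix a specific one), and the interval boundaries and speed labels are assumptions:

```python
# Sketch: estimate the withdrawal speed from the similarity between the
# current frame and recent historical frames. SSIM is one commonly used
# measure; the interval-to-speed mapping here is an assumption.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def withdrawal_speed(history, current):
    """history: list of recent 8-bit grayscale tissue images;
    current: latest frame. Returns a qualitative speed label."""
    sims = [ssim(h, current, data_range=255) for h in history]
    mean_sim = float(np.mean(sims))
    if mean_sim >= 0.8:            # very similar frames -> moving slowly
        return "slow"
    if mean_sim >= 0.5:
        return "moderate"
    return "fast"                  # dissimilar frames -> possibly too fast
```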
During use of an endoscope, cleanliness in the tissue cavity is one of the important indexes for measuring inspection quality, and the Boston scoring method (Boston Bowel Preparation Scale) is generally adopted at present. However, since physicians focus primarily on operating the endoscope and discovering lesions during endoscopy, the attention they can allocate to cleanliness assessment is limited. Moreover, different physicians assess cleanliness subjectively.
Based on this, the present disclosure also provides the following embodiments. Optionally, the system further comprises:
the cleanliness determination module is used for carrying out cleanliness classification on the tissue images of the endoscope in a scope withdrawal mode and determining the number of the tissue images in each cleanliness classification; and determining the cleanliness of the tissue corresponding to the tissue image according to the number of the tissue images under each cleanliness classification.
As an example, cleanliness detection may be turned on automatically after it is determined that the endoscope is in the endoscope-withdrawing mode, or may be turned on in response to a user selecting a cleanliness detection control in the control area of the display interface. In practical application scenarios, a cleaning operation is performed during the endoscope-entering stage, and it is not appropriate to include the cleanliness of the tissue cavity before that cleaning operation in the overall evaluation. Therefore, in this embodiment, cleanliness classification is performed on tissue images in the endoscope-withdrawing mode. For the cleanliness classification of a single tissue image, the classification can be performed based on a cleanliness classification model implemented with a Vision Transformer network. Endoscope images are labeled in advance, for example as 0, 1, 2, or 3 according to the Boston scale; the endoscope image can then be used as the input of the model and its label as the target output for training. In this way, the determined cleanliness fits the actual application scenario, improving its accuracy.
In one possible embodiment, the cleanliness determination module is further configured to:
acquiring the number of tissue images under a cleanliness classification according to the sequence of scores corresponding to the cleanliness classification from small to large;
determining the size relation between the proportion of the number of the tissue images under the current cleanliness classification and the total number of targets and a threshold corresponding to the current cleanliness classification, wherein the total number of the targets is the sum of the number of the tissue images under each cleanliness classification;
if the ratio of the number of the tissue images under the current cleanliness classification to the total number of the targets is greater than or equal to the threshold value corresponding to the cleanliness classification, taking the score corresponding to the cleanliness classification as the cleanliness of the tissue;
if the proportion of the number of the tissue images under the current cleanliness classification to the total number of the targets is smaller than the threshold corresponding to the cleanliness classification, taking the next cleanliness classification as a new current cleanliness classification under the condition that the next cleanliness classification is not the cleanliness classification with the maximum score, and re-executing the step of determining the size relation between the proportion of the number of the tissue images under the current cleanliness classification to the total number of the targets and the threshold corresponding to the current cleanliness classification; in the case where the next cleanliness class is the cleanliness class having the largest score, the score of the next cleanliness class is determined as the cleanliness of the tissue.
As described in the above example, the scores corresponding to the cleanliness classifications are 0, 1, 2, and 3 in ascending order, and the number of tissue images under each cleanliness classification can be acquired in this order. Illustratively, the number of tissue images whose cleanliness score is 0 is obtained first, denoted S0, and the relationship between the ratio of this number to the total number of targets and the threshold corresponding to the current cleanliness classification is then determined. The total number of targets is denoted Sum, and the threshold corresponding to each cleanliness classification can be set according to the actual application scenario. For example, if the threshold corresponding to score 0 is N0, then when S0/Sum is greater than or equal to N0, the cleanliness of the tissue is determined to be 0. Otherwise, the number of tissue images with score 1 is obtained, denoted S1, with corresponding threshold N1; when S1/Sum is greater than or equal to N1, the cleanliness of the tissue is determined to be 1. Otherwise, the number of tissue images with score 2 is obtained, denoted S2, with corresponding threshold N2; when S2/Sum is greater than or equal to N2, the cleanliness of the tissue is determined to be 2. Otherwise, the next cleanliness classification, with score 3, is the classification with the largest score, so its score can be directly determined as the cleanliness of the tissue, that is, the cleanliness is determined to be 3.
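Expressed as a sketch, the aggregation rule walks the scores in ascending order and returns the first score whose share of images meets its threshold; the threshold values in the usage line below are assumptions:

```python
# Sketch of the score-aggregation rule described above. Thresholds
# N0..N2 are configurable per application scenario (values assumed here).
def tissue_cleanliness(counts, thresholds):
    """counts: {score: number of tissue images with that score};
    thresholds: {score: required fraction of the total}.

    Walks scores in ascending order; returns the first score whose image
    fraction reaches its threshold, else the maximum score.
    """
    total = sum(counts.values())               # Sum in the example above
    scores = sorted(counts)                    # e.g. [0, 1, 2, 3]
    for score in scores[:-1]:                  # largest score needs no check
        if counts.get(score, 0) / total >= thresholds[score]:
            return score
    return scores[-1]                          # e.g. cleanliness = 3

# e.g. S0..S3 with assumed thresholds N0..N2:
print(tissue_cleanliness({0: 5, 1: 10, 2: 60, 3: 25},
                         {0: 0.25, 1: 0.25, 2: 0.5}))  # -> 2
```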
Therefore, according to the above technical solution, cleanliness classification detection can be performed only on tissue images acquired during endoscope withdrawal, which reduces the amount of data to be processed. Meanwhile, the cleanliness during withdrawal can be evaluated as a whole based on the tissue image counts under all classifications, which improves the accuracy of the determined cleanliness, conforms better to the Boston scoring standard, and is convenient for users.
In one possible embodiment, when the endoscope withdrawal process ends and the examination is completed, the polyp identification result, the cleanliness, the withdrawal speed during endoscope withdrawal, and other information obtained in the above process can be output as a report for unified viewing and management.
Based on the same inventive concept, the present disclosure also provides an endoscopic image detection assisting method, as shown in fig. 5, the method including:
in step 11, endoscope images acquired by an endoscope are processed in real time to obtain tissue images;
in step 12, in response to determining that the endoscope is in the scope entering mode, determining a target point of a tissue cavity corresponding to the tissue image, wherein the target point is used for indicating a next target moving point of the endoscope at the current position; when a tissue cavity exists in the tissue image, the target point is a central point of the tissue cavity corresponding to the endoscope image, and when the tissue cavity does not exist in the tissue image, the target point is a direction point of the tissue cavity corresponding to the endoscope image;
in step 13, in response to determining that the endoscope is in the endoscope retracting mode, performing real-time polyp recognition on a tissue image obtained by the endoscope in the endoscope retracting mode to obtain a polyp recognition result;
in step 14, the target point and the polyp recognition result are displayed in an image display area in a display interface.
Optionally, the method further comprises:
determining a display proportion and a blind area proportion according to a tissue image obtained by the endoscope in a scope withdrawing mode, wherein the sum of the display proportion and the blind area proportion is 1;
and displaying the display proportion and/or the blind area proportion, and displaying first prompt information under the condition that the blind area proportion is greater than a blind area threshold value, wherein the first prompt information is used for indicating that the missing detection risk exists.
Optionally, the determining a display ratio and a blind area ratio according to the tissue image obtained by the endoscope in the endoscope retracting mode includes:
performing three-dimensional reconstruction according to a tissue image obtained by the endoscope in a scope withdrawing mode to obtain a three-dimensional tissue image;
and determining a display proportion and a blind area proportion according to the three-dimensional tissue image and a target fitting tissue image corresponding to the three-dimensional tissue image.
Optionally, the determining a display ratio and a blind area ratio according to the three-dimensional tissue image and a target fitting tissue image corresponding to the three-dimensional tissue image includes:
and determining the ratio of the point cloud number in the three-dimensional tissue image to the target fitting point cloud number of the target fitting tissue image as the display proportion, and determining the blind area proportion according to the display proportion.
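As an illustrative sketch under the assumption that both point clouds are stored as arrays of 3-D points, the ratio computation is straightforward:

```python
# Sketch: the display ratio is the number of reconstructed points divided
# by the number of target-fit points; the blind-area ratio is its
# complement, so the two sum to 1. The interface is an assumption.
import numpy as np

def coverage_ratios(reconstructed: np.ndarray, fitted: np.ndarray):
    """Both inputs are (N, 3) point clouds; `fitted` is the target fit."""
    display_ratio = min(len(reconstructed) / len(fitted), 1.0)
    return display_ratio, 1.0 - display_ratio   # (display, blind area)
```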
Optionally, the method further comprises:
performing three-dimensional reconstruction according to a tissue image obtained by the endoscope in a scope entering mode to obtain a three-dimensional tissue image;
determining a point on the central line of the three-dimensional tissue image, which is closest to the target point, as a three-dimensional target point of the target point in the three-dimensional tissue image;
determining the attitude information of the endoscope according to the tissue image corresponding to the current position and the tissue image in the historical time period, and generating a navigation track according to the current position, the attitude information and the three-dimensional target point;
and displaying the three-dimensional tissue image, and displaying the three-dimensional target point and the navigation track in the three-dimensional tissue image.
Optionally, the method further comprises:
in response to a user's selection operation of a detection mode in a control area in a display interface, determining the detection mode selected by the user as the detection mode of the endoscope;
the real-time polyp identification of the tissue image obtained by the endoscope in the endoscope withdrawal mode comprises the following steps:
and performing real-time polyp recognition on the tissue image obtained by the endoscope in the endoscope withdrawal mode according to the polyp recognition model corresponding to the detection mode.
Optionally, the method further comprises:
classifying tissue images of the endoscope in a scope entering mode to obtain target classifications corresponding to the tissue images, wherein the target classifications comprise an abnormal classification and a plurality of endoscope classifications;
updating counters of each endoscope classification according to the target classification corresponding to the tissue image, stopping counting operation of each counter when the numerical value of the counter corresponding to any endoscope classification reaches a counting threshold value, and determining a detection mode of the endoscope according to the endoscope classification of which the numerical value reaches the counting threshold value, wherein each endoscope classification is provided with the counter corresponding to the endoscope classification, and the numerical value of each counter is initially zero;
the real-time polyp identification of the tissue image obtained by the endoscope in the endoscope withdrawal mode comprises the following steps:
and performing real-time polyp recognition on the tissue image obtained by the endoscope in the endoscope withdrawal mode according to the polyp recognition model corresponding to the detection mode.
Optionally, the updating the counter of each endoscope classification according to the target classification corresponding to the tissue image includes:
when the target classification is the endoscope classification, adding one to a counter corresponding to the target classification, and subtracting one from a counter corresponding to the endoscope classification except the target classification, wherein the counter value of the counter is not zero; and when the target classification is an abnormal classification, subtracting one from the counter corresponding to each endoscope classification with the counter value not being zero so as to update the counter of each endoscope classification.
Optionally, the method further comprises:
identifying the tissue image of the endoscope according to an image identification model;
if the parameter corresponding to the in-vivo image output by the image recognition model is larger than a first parameter threshold value, determining that the image mode of the endoscope is a scope entering mode, and if the parameter corresponding to the ileocecal valve image output by the image recognition model is larger than a second parameter threshold value, determining that the image mode of the endoscope is a scope withdrawing mode;
and when the image mode of the endoscope is the endoscope entering mode and the image mode of the endoscope determined based on the image recognition model is the endoscope withdrawing mode, switching the image mode of the endoscope to the endoscope withdrawing mode and outputting second prompt information, wherein the second prompt information is used for prompting to enter the endoscope withdrawing mode.
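A sketch of this threshold-based mode decision might look as follows; the threshold values, mode names, and function interface are illustrative assumptions:

```python
# Sketch of the threshold-based mode decision and switching described
# above. Threshold values, mode names, and the interface are assumptions.
FIRST_THRESHOLD = 0.9    # confidence that the frame is an in-vivo image
SECOND_THRESHOLD = 0.9   # confidence that the frame shows the ileocecal valve

def next_image_mode(current_mode, in_vivo_score, valve_score):
    """Returns (new_mode, prompt) for one recognized tissue image."""
    if current_mode == "in_vitro" and in_vivo_score > FIRST_THRESHOLD:
        return "scope_entering", None
    if current_mode == "scope_entering" and valve_score > SECOND_THRESHOLD:
        # second prompt information: notify the user of the mode switch
        return "scope_withdrawing", "entering scope-withdrawing mode"
    return current_mode, None
```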
Optionally, the method further comprises:
when the mode of the endoscope image is a scope withdrawing mode, acquiring a plurality of historical tissue images corresponding to a current tissue image, calculating the similarity between each historical tissue image and the current tissue image, and determining a scope withdrawing speed according to the plurality of similarities;
and displaying the endoscope retreating speed corresponding to the endoscope in the image display area.
Optionally, the method further comprises:
carrying out cleanliness classification on the tissue images of the endoscope in a scope withdrawing mode, and determining the number of the tissue images under each cleanliness classification;
and determining the cleanliness of the tissue corresponding to the tissue image according to the number of the tissue images under each cleanliness classification.
Optionally, the determining the cleanliness of the tissue corresponding to the tissue image according to the number of the tissue images under each cleanliness classification includes:
acquiring the number of tissue images under a cleanliness classification according to the sequence of scores corresponding to the cleanliness classification from small to large;
determining the size relation between the proportion of the number of the tissue images under the current cleanliness classification and the total number of targets and a threshold corresponding to the current cleanliness classification, wherein the total number of the targets is the sum of the number of the tissue images under each cleanliness classification;
if the ratio of the number of the tissue images under the current cleanliness classification to the total number of the targets is greater than or equal to the threshold value corresponding to the cleanliness classification, taking the score corresponding to the cleanliness classification as the cleanliness of the tissue;
if the proportion of the number of the tissue images under the current cleanliness classification to the total number of the targets is smaller than the threshold corresponding to the cleanliness classification, taking the next cleanliness classification as a new current cleanliness classification under the condition that the next cleanliness classification is not the cleanliness classification with the maximum score, and re-executing the step of determining the size relation between the proportion of the number of the tissue images under the current cleanliness classification to the total number of the targets and the threshold corresponding to the current cleanliness classification; in the case where the next cleanliness class is the cleanliness class having the largest score, the score of the next cleanliness class is determined as the cleanliness of the tissue.
Optionally, the performing polyp recognition on the tissue image obtained by the endoscope in the endoscope-withdrawing mode in real time comprises:
detecting a tissue image obtained by the endoscope in a scope withdrawal mode based on a detection model, and determining the position information of polyps when polyps exist in the tissue image;
extracting a detection image corresponding to the position information from the tissue image, and determining classification of the polyp according to the detection image and a recognition model;
and displaying the tissue image in the image display area, and displaying the identification corresponding to the position information of the polyp and the classification in the tissue image.
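A sketch of this detect-then-classify pipeline, with assumed model interfaces (a detector returning boxes with confidence scores, and a classifier over crops):

```python
# Sketch of the detect-then-classify polyp pipeline described above.
# The detector/classifier interfaces and the confidence cutoff are
# illustrative assumptions.
import numpy as np

def recognize_polyps(tissue_image: np.ndarray, detector, classifier):
    """Returns a list of (box, polyp_class) for one tissue image.

    detector(image) is assumed to yield ((x1, y1, x2, y2), score) pairs;
    classifier(crop) is assumed to return a polyp classification.
    """
    results = []
    for (x1, y1, x2, y2), score in detector(tissue_image):
        if score < 0.5:                       # assumed confidence cutoff
            continue
        crop = tissue_image[y1:y2, x1:x2]     # detection image for the box
        results.append(((x1, y1, x2, y2), classifier(crop)))
    return results
```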
The specific implementation of the above steps is described in detail above, and is not described herein again.
Referring now to FIG. 6, a block diagram of an electronic device 600 suitable for use in implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing device (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage device 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: processing an endoscope image acquired by an endoscope in real time to obtain a tissue image; in response to determining that the endoscope is in the scope entering mode, determining a target point of a tissue cavity corresponding to the tissue image, wherein the target point is used for indicating a next target moving point of the endoscope at the current position of the endoscope; when a tissue cavity exists in the tissue image, the target point is a central point of the tissue cavity corresponding to the endoscope image, and when the tissue cavity does not exist in the tissue image, the target point is a direction point of the tissue cavity corresponding to the endoscope image; performing real-time polyp recognition on a tissue image obtained by the endoscope in a scope retracting mode in response to determining that the endoscope is in the scope retracting mode, and obtaining a polyp recognition result; and displaying the target point and the polyp identification result in an image display area in a display interface.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or hardware. The name of the module does not limit the module itself in some cases, for example, the image processing module may also be described as a "module for processing endoscope images acquired by an endoscope in real time to obtain tissue images".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Example 1 provides an endoscopic image detection assistance system according to one or more embodiments of the present disclosure, the system including:
the image processing module is used for processing the endoscope image acquired by the endoscope in real time to obtain a tissue image;
the cavity positioning module is used for determining a target point of a tissue cavity corresponding to the tissue image, and the target point is used for indicating a next target moving point of the endoscope at the current position of the endoscope; when a tissue cavity exists in the tissue image, the target point is a central point of the tissue cavity corresponding to the endoscope image, and when the tissue cavity does not exist in the tissue image, the target point is a direction point of the tissue cavity corresponding to the endoscope image;
the polyp identification module is used for carrying out real-time polyp identification on the tissue image obtained by the endoscope in a scope withdrawal mode to obtain a polyp identification result;
and the display module is used for displaying the target point and the polyp identification result.
Example 2 provides the system of example 1, wherein the system further comprises:
the blind area proportion detection module is used for determining a display proportion and a blind area proportion according to a tissue image obtained by the endoscope in a scope withdrawing mode, and the sum of the display proportion and the blind area proportion is 1;
the display module is further used for displaying the display proportion and/or the blind area proportion, and displaying first prompt information under the condition that the blind area proportion is larger than a blind area threshold value, wherein the first prompt information is used for indicating that the detection missing risk exists.
Example 3 provides the system of example 2, wherein the blind zone proportion detection module is further to: performing three-dimensional reconstruction according to a tissue image obtained by the endoscope in a scope withdrawing mode to obtain a three-dimensional tissue image; and determining a display proportion and a blind area proportion according to the three-dimensional tissue image and a target fitting tissue image corresponding to the three-dimensional tissue image.
Example 4 provides the system of example 3, wherein the blind zone proportion detection module is further to:
and determining the ratio of the point cloud number in the three-dimensional tissue image to the target fitting point cloud number of the target fitting tissue image as the display proportion, and determining the blind area proportion according to the display proportion.
Example 5 provides the system of example 1, wherein the system further comprises:
the three-dimensional positioning module is used for carrying out three-dimensional reconstruction according to a tissue image obtained by the endoscope in a scope entering mode to obtain a three-dimensional tissue image; determining a point on the central line of the three-dimensional tissue image, which is closest to the target point, as a three-dimensional target point of the target point in the three-dimensional tissue image;
the gesture determining module is used for determining gesture information of the endoscope according to the tissue image corresponding to the current position and the tissue image in the historical time period;
the track determining module is used for generating a navigation track according to the current position, the attitude information and the three-dimensional target point;
the display module is further used for displaying the posture information of the endoscope and the three-dimensional tissue image, and displaying the three-dimensional target point and the navigation track in the three-dimensional tissue image.
Example 6 provides the system of example 1, wherein the system further comprises:
the image classification module is used for classifying the tissue images of the endoscope in an endoscope entering mode to obtain target classifications corresponding to the tissue images, wherein the target classifications comprise an abnormal classification and a plurality of endoscope classifications;
the detection mode identification module is used for updating the counter of each endoscope classification according to the target classification corresponding to the tissue image, stopping counting operation of each counter when the numerical value of the counter corresponding to any endoscope classification reaches a counting threshold value, and determining the detection mode of the endoscope according to the endoscope classification of which the numerical value reaches the counting threshold value, wherein each endoscope classification is provided with the counter corresponding to the endoscope classification, and the numerical value of each counter is initially zero;
the polyp identification module is further to: and performing real-time polyp recognition on the tissue image obtained by the endoscope in a scope withdrawal mode according to the polyp recognition model corresponding to the detection mode determined by the detection mode recognition module.
Example 7 provides the system of example 6, wherein the detection mode identification module updates the counter of each endoscope classification according to the target classification corresponding to the tissue image in the following way:
when the target classification is the endoscope classification, adding one to a counter corresponding to the target classification, and subtracting one from a counter corresponding to the endoscope classification except the target classification, wherein the counter value of the counter is not zero; and when the target classification is an abnormal classification, subtracting one from the counter corresponding to each endoscope classification with the counter value not being zero so as to update the counter of each endoscope classification.
Example 8 provides, in accordance with one or more embodiments of the present disclosure, the system of example 1, wherein the system further comprises:
the mode identification module is used for identifying a tissue image of the endoscope according to an image identification model, determining that the image mode of the endoscope is a scope entering mode under the condition that a parameter which is output by the image identification model and corresponds to an in-vivo image is larger than a first parameter threshold value, and determining that the image mode of the endoscope is a scope withdrawing mode under the condition that a parameter which is output by the image identification model and corresponds to an ileocecal valve image is larger than a second parameter threshold value;
the mode switching module is used for switching the image mode of the endoscope to the scope entering mode when the image mode of the endoscope is the in-vitro mode and the mode identification module determines that the image mode of the endoscope is the scope entering mode; and for switching the image mode of the endoscope to the scope withdrawing mode and outputting second prompt information when the image mode of the endoscope is the scope entering mode and the mode identification module determines that the image mode of the endoscope is the scope withdrawing mode, wherein the second prompt information is used for prompting entry into the scope withdrawing mode.
Example 9 provides the system of example 1, wherein the system further comprises:
the speed determining module is used for acquiring a plurality of historical tissue images corresponding to the current tissue image when the mode of the endoscope image is a scope withdrawing mode, calculating the similarity between each historical tissue image and the current tissue image, and determining the scope withdrawing speed according to the plurality of similarities;
the display module is also used for displaying the endoscope retreating speed corresponding to the endoscope.
Example 10 provides, in accordance with one or more embodiments of the present disclosure, the system of example 9, wherein the system further comprises:
the cleanliness determination module is used for carrying out cleanliness classification on the tissue images of the endoscope in a scope withdrawal mode and determining the number of the tissue images in each cleanliness classification; and determining the cleanliness of the tissue corresponding to the tissue image according to the number of the tissue images under each cleanliness classification.
Example 11 provides the system of example 1, wherein the polyp identification module comprises:
a polyp detection submodule for detecting a tissue image obtained by the endoscope in a scope retracting mode based on a detection model, and determining position information of polyps when polyps exist in the tissue image;
a polyp recognition sub-module for extracting a detection image corresponding to the position information from the tissue image; determining a classification of the polyp from the detection image and a recognition model;
the display module is further configured to display the tissue image, and display an identifier corresponding to the location information of the polyp and the classification in the tissue image.
Example 12 provides an endoscopic image detection assistance method according to one or more embodiments of the present disclosure, the method including:
processing an endoscope image acquired by an endoscope in real time to obtain a tissue image;
in response to determining that the endoscope is in the scope entering mode, determining a target point of a tissue cavity corresponding to the tissue image, wherein the target point is used for indicating a next target moving point of the endoscope at the current position of the endoscope; when a tissue cavity exists in the tissue image, the target point is a central point of the tissue cavity corresponding to the endoscope image, and when the tissue cavity does not exist in the tissue image, the target point is a direction point of the tissue cavity corresponding to the endoscope image;
performing real-time polyp recognition on a tissue image obtained by the endoscope in a scope retracting mode in response to determining that the endoscope is in the scope retracting mode, and obtaining a polyp recognition result;
and displaying the target point and the polyp identification result in an image display area in a display interface.
Example 13 provides the method of example 12, wherein the method further comprises:
determining a display proportion and a blind area proportion according to a tissue image obtained by the endoscope in a scope withdrawing mode, wherein the sum of the display proportion and the blind area proportion is 1;
and displaying the display proportion and/or the blind area proportion, and displaying first prompt information under the condition that the blind area proportion is greater than a blind area threshold value, wherein the first prompt information is used for indicating that the missing detection risk exists.
Example 14 provides the method of example 13, wherein the determining a display scale and a blind area scale from the tissue image obtained by the endoscope in the scope-retracting mode includes:
performing three-dimensional reconstruction according to a tissue image obtained by the endoscope in a scope withdrawing mode to obtain a three-dimensional tissue image;
and determining a display proportion and a blind area proportion according to the three-dimensional tissue image and a target fitting tissue image corresponding to the three-dimensional tissue image.
Example 15 provides the method of example 14, wherein the determining a display scale and a blind area scale from the three-dimensional tissue image and a target fit tissue image to which the three-dimensional tissue image corresponds, includes:
and determining the ratio of the point cloud number in the three-dimensional tissue image to the target fitting point cloud number of the target fitting tissue image as the display proportion, and determining the blind area proportion according to the display proportion.
Example 16 provides the method of example 12, wherein the method further comprises:
performing three-dimensional reconstruction according to a tissue image obtained by the endoscope in a scope entering mode to obtain a three-dimensional tissue image;
determining a point on the central line of the three-dimensional tissue image, which is closest to the target point, as a three-dimensional target point of the target point in the three-dimensional tissue image;
determining the attitude information of the endoscope according to the tissue image corresponding to the current position and the tissue image in the historical time period, and generating a navigation track according to the current position, the attitude information and the three-dimensional target point;
and displaying the three-dimensional tissue image, and displaying the three-dimensional target point and the navigation track in the three-dimensional tissue image.
Example 17 provides the method of example 12, wherein the method further comprises:
in response to a user's selection operation of a detection mode in a control area in a display interface, determining the detection mode selected by the user as the detection mode of the endoscope;
the real-time polyp identification of the tissue image obtained by the endoscope in the endoscope withdrawal mode comprises the following steps:
and performing real-time polyp recognition on the tissue image obtained by the endoscope in the endoscope withdrawal mode according to the polyp recognition model corresponding to the detection mode.
Example 18 provides, in accordance with one or more embodiments of the present disclosure, the method of example 12, wherein the method further comprises:
classifying tissue images of the endoscope in a scope entering mode to obtain target classifications corresponding to the tissue images, wherein the target classifications comprise an abnormal classification and a plurality of endoscope classifications;
updating counters of each endoscope classification according to the target classification corresponding to the tissue image, stopping counting operation of each counter when the numerical value of the counter corresponding to any endoscope classification reaches a counting threshold value, and determining a detection mode of the endoscope according to the endoscope classification of which the numerical value reaches the counting threshold value, wherein each endoscope classification is provided with the counter corresponding to the endoscope classification, and the numerical value of each counter is initially zero;
the real-time polyp identification of the tissue image obtained by the endoscope in the endoscope withdrawal mode comprises the following steps:
and performing real-time polyp recognition on the tissue image obtained by the endoscope in the endoscope withdrawal mode according to the polyp recognition model corresponding to the detection mode.
Example 19 provides, in accordance with one or more embodiments of the present disclosure, the method of example 18, wherein the updating the counter of each endoscope classification according to the target classification corresponding to the tissue image includes:
when the target classification is the endoscope classification, adding one to a counter corresponding to the target classification, and subtracting one from a counter corresponding to the endoscope classification except the target classification, wherein the counter value of the counter is not zero; and when the target classification is an abnormal classification, subtracting one from the counter corresponding to each endoscope classification with the counter value not being zero so as to update the counter of each endoscope classification.
Example 20 provides, in accordance with one or more embodiments of the present disclosure, the method of example 12, wherein the method further comprises:
identifying the tissue image of the endoscope according to an image identification model;
if the parameter corresponding to the in-vivo image output by the image recognition model is larger than a first parameter threshold value, determining that the image mode of the endoscope is a scope entering mode, and if the parameter corresponding to the ileocecal valve image output by the image recognition model is larger than a second parameter threshold value, determining that the image mode of the endoscope is a scope withdrawing mode;
and when the image mode of the endoscope is the endoscope entering mode and the image mode of the endoscope determined based on the image recognition model is the endoscope withdrawing mode, switching the image mode of the endoscope to the endoscope withdrawing mode and outputting second prompt information, wherein the second prompt information is used for prompting to enter the endoscope withdrawing mode.
Example 21 provides the method of example 12, wherein the method further comprises:
when the mode of the endoscope image is a scope withdrawing mode, acquiring a plurality of historical tissue images corresponding to a current tissue image, calculating the similarity between each historical tissue image and the current tissue image, and determining a scope withdrawing speed according to the plurality of similarities;
and displaying the endoscope retreating speed corresponding to the endoscope in the image display area.
Example 22 provides, in accordance with one or more embodiments of the present disclosure, the method of example 21, wherein the method further comprises:
carrying out cleanliness classification on the tissue images of the endoscope in a scope withdrawing mode, and determining the number of the tissue images under each cleanliness classification;
and determining the cleanliness of the tissue corresponding to the tissue image according to the number of the tissue images under each cleanliness classification.
Example 23 provides the method of example 12, wherein the performing real-time polyp recognition on the tissue image obtained by the endoscope in the scope withdrawing mode comprises:
detecting a tissue image obtained by the endoscope in a scope withdrawal mode based on a detection model, and determining the position information of polyps when polyps exist in the tissue image;
extracting a detection image corresponding to the position information from the tissue image, and determining classification of the polyp according to the detection image and a recognition model;
and displaying the tissue image in the image display area, and displaying the identification corresponding to the position information of the polyp and the classification in the tissue image.
Example 24 provides, in accordance with one or more embodiments of the present disclosure, a computer-readable medium on which is stored a computer program that, when executed by a processing device, implements the steps of the method of any of examples 12-23.
Example 25 provides, in accordance with one or more embodiments of the present disclosure, an electronic device, comprising:
a storage device having a computer program stored thereon;
processing means for executing said computer program in said storage means to carry out the steps of the method of any of examples 12-23.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure are also encompassed.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.

Claims (25)

1. An endoscopic image detection assistance system, characterized in that the system comprises:
the image processing module is used for processing the endoscope image acquired by the endoscope in real time to obtain a tissue image;
the cavity positioning module is used for determining a target point of a tissue cavity corresponding to the tissue image, and the target point is used for indicating a next target moving point of the endoscope at the current position of the endoscope; when a tissue cavity exists in the tissue image, the target point is a central point of the tissue cavity corresponding to the endoscope image, and when the tissue cavity does not exist in the tissue image, the target point is a direction point of the tissue cavity corresponding to the endoscope image;
the polyp identification module is used for carrying out real-time polyp identification on the tissue image obtained by the endoscope in a scope withdrawal mode to obtain a polyp identification result;
and the display module is used for displaying the target point and the polyp identification result.
2. The system of claim 1, further comprising:
the blind area proportion detection module is used for determining a display proportion and a blind area proportion according to a tissue image obtained by the endoscope in a scope withdrawing mode, and the sum of the display proportion and the blind area proportion is 1;
the display module is further used for displaying the display proportion and/or the blind area proportion, and displaying first prompt information under the condition that the blind area proportion is larger than a blind area threshold value, wherein the first prompt information is used for indicating that the detection missing risk exists.
3. The system of claim 2, wherein the blind zone proportion detection module is further configured to: performing three-dimensional reconstruction according to a tissue image obtained by the endoscope in a scope withdrawing mode to obtain a three-dimensional tissue image; and determining a display proportion and a blind area proportion according to the three-dimensional tissue image and a target fitting tissue image corresponding to the three-dimensional tissue image.
4. The system of claim 3, wherein the blind zone proportion detection module is further configured to:
and determining the ratio of the point cloud number in the three-dimensional tissue image to the target fitting point cloud number of the target fitting tissue image as the display proportion, and determining the blind area proportion according to the display proportion.
5. The system of claim 1, further comprising:
the three-dimensional positioning module is used for carrying out three-dimensional reconstruction according to a tissue image obtained by the endoscope in a scope entering mode to obtain a three-dimensional tissue image; determining a point on the central line of the three-dimensional tissue image, which is closest to the target point, as a three-dimensional target point of the target point in the three-dimensional tissue image;
the gesture determining module is used for determining gesture information of the endoscope according to the tissue image corresponding to the current position and the tissue image in the historical time period;
the track determining module is used for generating a navigation track according to the current position, the attitude information and the three-dimensional target point;
the display module is further used for displaying the posture information of the endoscope and the three-dimensional tissue image, and displaying the three-dimensional target point and the navigation track in the three-dimensional tissue image.
6. The system of claim 1, further comprising:
the image classification module is configured to classify tissue images obtained by the endoscope in the insertion mode to obtain a target classification for each tissue image, the target classifications comprising an abnormal classification and a plurality of endoscope classifications;
the detection mode identification module is configured to update a counter for each endoscope classification according to the target classification of the tissue image, stop the counting operation of all counters when the value of the counter of any endoscope classification reaches a counting threshold, and determine the detection mode of the endoscope from the endoscope classification whose counter reached the counting threshold, wherein each endoscope classification has its own counter and every counter is initially zero;
the polyp identification module is further configured to: perform real-time polyp identification on tissue images obtained by the endoscope in the withdrawal mode using the polyp identification model corresponding to the detection mode determined by the detection mode identification module.
7. The system of claim 6, wherein the detection mode identification module updates the counter of each endoscope classification according to the target classification of the tissue image by:
when the target classification is an endoscope classification, incrementing the counter of the target classification by one and decrementing by one the counter of every other endoscope classification whose value is nonzero; and when the target classification is the abnormal classification, decrementing by one the counter of every endoscope classification whose value is nonzero, thereby updating the counter of each endoscope classification.
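Claims 6 and 7 together describe a debounced voting scheme over per-frame classifications. The sketch below implements the stated counter rules; what the endoscope classes actually are (e.g. gastroscopy vs. enteroscopy) is our assumption, as is the threshold value:

```python
class DetectionModeVoter:
    """Debounced per-class counters, following the update rules of claims 6-7."""

    def __init__(self, endoscope_classes, count_threshold=10):
        # One counter per endoscope classification, all initially zero.
        self.counters = {c: 0 for c in endoscope_classes}
        self.threshold = count_threshold
        self.detection_mode = None  # set once any counter reaches the threshold

    def update(self, target_class: str):
        if self.detection_mode is not None:
            return self.detection_mode  # counting already stopped
        if target_class in self.counters:
            # Endoscope classification: increment its counter, decrement
            # every other nonzero counter.
            self.counters[target_class] += 1
            for c in self.counters:
                if c != target_class and self.counters[c] > 0:
                    self.counters[c] -= 1
        else:
            # Abnormal classification: decrement every nonzero counter.
            for c in self.counters:
                if self.counters[c] > 0:
                    self.counters[c] -= 1
        for c, n in self.counters.items():
            if n >= self.threshold:
                self.detection_mode = c  # stop counting; mode decided
        return self.detection_mode
```

A caller would feed each frame's classifier output to `update()` and adopt `detection_mode` once it is set; until then the polyp identification model could fall back to a default.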
8. The system of claim 1, further comprising:
the mode identification module is configured to recognize tissue images of the endoscope with an image recognition model, to determine that the image mode of the endoscope is the insertion mode when the parameter output by the image recognition model for an in-vivo image exceeds a first parameter threshold, and to determine that the image mode of the endoscope is the withdrawal mode when the parameter output by the image recognition model for an ileocecal valve image exceeds a second parameter threshold;
the mode switching module is configured to switch the image mode of the endoscope to the insertion mode when the current image mode is an in-vitro mode and the mode identification module determines the insertion mode, and to switch the image mode to the withdrawal mode and output second prompt information when the current image mode is the insertion mode and the mode identification module determines the withdrawal mode, the second prompt information prompting that the endoscope has entered the withdrawal mode.
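The switching rule of claim 8 amounts to a small state machine gated by two thresholded model scores. A sketch under that reading (the mode strings, score names, and default thresholds are placeholders, not values from the patent):

```python
def update_image_mode(current_mode, in_vivo_score, ileocecal_score,
                      t1=0.5, t2=0.5):
    """One step of the mode state machine described in claim 8.

    current_mode: one of "in_vitro", "insertion", "withdrawal".
    in_vivo_score: model confidence that the frame is an in-vivo image.
    ileocecal_score: model confidence that the frame shows the ileocecal valve.
    Returns (new_mode, prompt), where prompt carries the second prompt
    information or None.
    """
    if current_mode == "in_vitro" and in_vivo_score > t1:
        return "insertion", None
    if current_mode == "insertion" and ileocecal_score > t2:
        return "withdrawal", "Entering withdrawal mode"
    return current_mode, None
```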
9. The system of claim 1, further comprising:
the speed determination module is configured to, when the image mode of the endoscope is the withdrawal mode, acquire a plurality of historical tissue images corresponding to the current tissue image, compute the similarity between each historical tissue image and the current tissue image, and determine the withdrawal speed from the plurality of similarities;
the display module is further configured to display the withdrawal speed of the endoscope.
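Claim 9 leaves both the similarity measure and the similarity-to-speed mapping open. One plausible reading — our sketch, not the patent's formula — uses normalized cross-correlation against recent frames and treats low average similarity as fast motion:

```python
import numpy as np

def withdrawal_speed(current: np.ndarray, history: list, fps: float = 25.0) -> float:
    """Estimate withdrawal speed from frame similarity (illustrative only).

    current: grayscale frame as a float array.
    history: recent grayscale frames, oldest first.
    Higher average similarity to recent frames => slower apparent motion.
    """
    def ncc(a, b):
        # Normalized cross-correlation of two equally sized frames.
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        return float((a * b).mean())

    sims = [ncc(current, frame) for frame in history]
    mean_sim = float(np.mean(sims))
    # Map similarity in [-1, 1] to a nonnegative speed score; the fps scale
    # factor stands in for calibration and is arbitrary here.
    return max(0.0, 1.0 - mean_sim) * fps
```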
10. The system of claim 9, further comprising:
the cleanliness determination module is configured to perform cleanliness classification on tissue images obtained by the endoscope in the withdrawal mode, determine the number of tissue images under each cleanliness classification, and determine the cleanliness of the tissue shown in the tissue images from the number of tissue images under each cleanliness classification.
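Aggregating per-frame cleanliness labels into a single tissue-level grade, as claim 10 describes, can be as simple as a count-weighted average. A minimal sketch, assuming integer grades in the style of a bowel-preparation scale (the scale itself is our assumption):

```python
from collections import Counter

def tissue_cleanliness(frame_grades):
    """Aggregate per-frame cleanliness grades into one tissue-level grade.

    frame_grades: iterable of integer cleanliness classes, one per frame
        (e.g. 0-3 as in bowel-preparation scales; the exact scale is assumed).
    Returns the count-weighted mean grade.
    """
    counts = Counter(frame_grades)
    total = sum(counts.values())
    return sum(grade * n for grade, n in counts.items()) / total
```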
11. The system of claim 1, wherein the polyp identification module comprises:
a polyp detection submodule configured to detect, with a detection model, tissue images obtained by the endoscope in the withdrawal mode and, when a polyp is present in a tissue image, determine position information of the polyp;
a polyp recognition submodule configured to extract, from the tissue image, a detection image corresponding to the position information, and to determine a classification of the polyp from the detection image and a recognition model;
the display module is further configured to display the tissue image and to display, within the tissue image, an identifier corresponding to the position information of the polyp together with the classification.
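Claim 11 is a standard two-stage detect-then-classify pipeline. The sketch below wires two stand-in models together; the `detector` and `classifier` callables and their return shapes are hypothetical, not APIs from the patent:

```python
def recognize_polyps(tissue_image, detector, classifier):
    """Two-stage pipeline per claim 11: detect boxes, then classify each crop.

    detector: callable returning a list of (x1, y1, x2, y2) boxes (hypothetical).
    classifier: callable mapping an image crop to a class label (hypothetical).
    Returns a list of (box, label) pairs to overlay on the displayed frame.
    """
    results = []
    for (x1, y1, x2, y2) in detector(tissue_image):
        crop = tissue_image[y1:y2, x1:x2]  # detection image for this polyp
        label = classifier(crop)           # e.g. adenomatous vs. hyperplastic
        results.append(((x1, y1, x2, y2), label))
    return results
```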
12. An endoscope image detection assistance method, characterized by comprising:
processing, in real time, an endoscope image acquired by an endoscope to obtain a tissue image;
in response to determining that the endoscope is in an insertion mode, determining a target point of a tissue cavity corresponding to the tissue image, the target point indicating the next point toward which the endoscope should move from its current position; when a tissue cavity is present in the tissue image, the target point is the center point of the tissue cavity in the endoscope image, and when no tissue cavity is present in the tissue image, the target point is a direction point toward the tissue cavity in the endoscope image;
in response to determining that the endoscope is in a withdrawal mode, performing real-time polyp identification on tissue images obtained by the endoscope in the withdrawal mode to obtain a polyp identification result;
and displaying the target point and the polyp identification result in an image display area of a display interface.
13. The method of claim 12, further comprising:
determining a display proportion and a blind area proportion from tissue images obtained by the endoscope in the withdrawal mode, the display proportion and the blind area proportion summing to 1;
and displaying the display proportion and/or the blind area proportion, and displaying first prompt information when the blind area proportion exceeds a blind area threshold, the first prompt information indicating a risk of missed detection.
14. The method of claim 13, wherein determining the display proportion and the blind area proportion from the tissue images obtained by the endoscope in the withdrawal mode comprises:
performing three-dimensional reconstruction from the tissue images obtained by the endoscope in the withdrawal mode to obtain a three-dimensional tissue image;
and determining the display proportion and the blind area proportion from the three-dimensional tissue image and a target fitted tissue image corresponding to the three-dimensional tissue image.
15. The method of claim 14, wherein determining the display proportion and the blind area proportion from the three-dimensional tissue image and the target fitted tissue image corresponding to the three-dimensional tissue image comprises:
determining, as the display proportion, the ratio of the number of points in the point cloud of the three-dimensional tissue image to the number of points in the target fitted point cloud of the target fitted tissue image, and determining the blind area proportion from the display proportion.
16. The method of claim 12, further comprising:
performing three-dimensional reconstruction from tissue images obtained by the endoscope in the insertion mode to obtain a three-dimensional tissue image;
determining the point on the centerline of the three-dimensional tissue image closest to the target point as the three-dimensional target point of the target point in the three-dimensional tissue image;
determining pose information of the endoscope from the tissue image corresponding to the current position and the tissue images over a historical time period, and generating a navigation trajectory from the current position, the pose information, and the three-dimensional target point;
and displaying the three-dimensional tissue image, and displaying the three-dimensional target point and the navigation trajectory within the three-dimensional tissue image.
17. The method of claim 12, further comprising:
in response to a user selecting a detection mode in a control area of the display interface, determining the detection mode selected by the user as the detection mode of the endoscope;
wherein performing real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode comprises:
performing real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode using the polyp identification model corresponding to the detection mode.
18. The method of claim 12, further comprising:
classifying tissue images obtained by the endoscope in the insertion mode to obtain a target classification for each tissue image, the target classifications comprising an abnormal classification and a plurality of endoscope classifications;
updating a counter for each endoscope classification according to the target classification of the tissue image, stopping the counting operation of all counters when the value of the counter of any endoscope classification reaches a counting threshold, and determining the detection mode of the endoscope from the endoscope classification whose counter reached the counting threshold, wherein each endoscope classification has its own counter and every counter is initially zero;
wherein performing real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode comprises:
performing real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode using the polyp identification model corresponding to the detection mode.
19. The method of claim 18, wherein updating the counter for each endoscope classification according to the target classification of the tissue image comprises:
when the target classification is an endoscope classification, incrementing the counter of the target classification by one and decrementing by one the counter of every other endoscope classification whose value is nonzero; and when the target classification is the abnormal classification, decrementing by one the counter of every endoscope classification whose value is nonzero, thereby updating the counter of each endoscope classification.
20. The method of claim 12, further comprising:
recognizing tissue images of the endoscope with an image recognition model;
determining that the image mode of the endoscope is the insertion mode when the parameter output by the image recognition model for an in-vivo image exceeds a first parameter threshold, and determining that the image mode of the endoscope is the withdrawal mode when the parameter output by the image recognition model for an ileocecal valve image exceeds a second parameter threshold;
and when the image mode of the endoscope is the insertion mode and the image mode determined based on the image recognition model is the withdrawal mode, switching the image mode of the endoscope to the withdrawal mode and outputting second prompt information, the second prompt information prompting entry into the withdrawal mode.
21. The method of claim 12, further comprising:
when the image mode of the endoscope is the withdrawal mode, acquiring a plurality of historical tissue images corresponding to the current tissue image, computing the similarity between each historical tissue image and the current tissue image, and determining the withdrawal speed from the plurality of similarities;
and displaying the withdrawal speed of the endoscope in the image display area.
22. The method of claim 21, further comprising:
performing cleanliness classification on tissue images obtained by the endoscope in the withdrawal mode, and determining the number of tissue images under each cleanliness classification;
and determining the cleanliness of the tissue shown in the tissue images from the number of tissue images under each cleanliness classification.
23. The method of claim 12, wherein performing real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode comprises:
detecting, with a detection model, the tissue images obtained by the endoscope in the withdrawal mode and, when a polyp is present in a tissue image, determining position information of the polyp;
extracting, from the tissue image, a detection image corresponding to the position information, and determining a classification of the polyp from the detection image and a recognition model;
and displaying the tissue image in the image display area, and displaying, within the tissue image, an identifier corresponding to the position information of the polyp together with the classification.
24. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processing apparatus, implements the steps of the method of any one of claims 12-23.
25. An electronic device, comprising:
a storage apparatus having a computer program stored thereon;
and a processing apparatus configured to execute the computer program in the storage apparatus to implement the steps of the method of any one of claims 12-23.
CN202111643635.XA 2021-12-29 2021-12-29 Endoscopic image detection assistance system, method, medium, and electronic device Active CN114332019B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111643635.XA CN114332019B (en) 2021-12-29 2021-12-29 Endoscopic image detection assistance system, method, medium, and electronic device
PCT/CN2022/137565 WO2023124876A1 (en) 2021-12-29 2022-12-08 Endoscope image detection auxiliary system and method, medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111643635.XA CN114332019B (en) 2021-12-29 2021-12-29 Endoscopic image detection assistance system, method, medium, and electronic device

Publications (2)

Publication Number Publication Date
CN114332019A (en) 2022-04-12
CN114332019B (en) 2023-07-04

Family

ID=81017199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111643635.XA Active CN114332019B (en) 2021-12-29 2021-12-29 Endoscopic image detection assistance system, method, medium, and electronic device

Country Status (2)

Country Link
CN (1) CN114332019B (en)
WO (1) WO2023124876A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117576097B (en) * 2024-01-16 2024-03-22 华伦医疗用品(深圳)有限公司 Endoscope image processing method and system based on AI auxiliary image processing information

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332019B (en) * 2021-12-29 2023-07-04 小荷医疗器械(海南)有限公司 Endoscopic image detection assistance system, method, medium, and electronic device

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447973A (en) * 2018-10-31 2019-03-08 腾讯科技(深圳)有限公司 A kind for the treatment of method and apparatus and system of polyp of colon image
US20210004959A1 (en) * 2018-10-31 2021-01-07 Tencent Technology (Shenzhen) Company Limited Colon polyp image processing method and apparatus, and system
CN110495847A (en) * 2019-08-23 2019-11-26 重庆天如生物科技有限公司 Alimentary canal morning cancer assistant diagnosis system and check device based on deep learning
KR20210055881A (en) * 2019-11-08 2021-05-18 주식회사 인트로메딕 System and method for diagnosing small bowel preparation scale
US20210280312A1 (en) * 2020-03-06 2021-09-09 Verily Life Sciences Llc Detecting deficient coverage in gastroenterological procedures
CN112070124A (en) * 2020-08-18 2020-12-11 苏州慧维智能医疗科技有限公司 Digestive endoscopy video scene classification method based on convolutional neural network
CN113240662A (en) * 2021-05-31 2021-08-10 萱闱(北京)生物科技有限公司 Endoscope inspection auxiliary system based on artificial intelligence
CN113570592A (en) * 2021-08-05 2021-10-29 印迹信息科技(北京)有限公司 Gastrointestinal disease detection and model training method, device, equipment and medium
CN113470030A (en) * 2021-09-03 2021-10-01 北京字节跳动网络技术有限公司 Method and device for determining cleanliness of tissue cavity, readable medium and electronic equipment
CN113487605A (en) * 2021-09-03 2021-10-08 北京字节跳动网络技术有限公司 Tissue cavity positioning method, device, medium and equipment for endoscope
CN113487608A (en) * 2021-09-06 2021-10-08 北京字节跳动网络技术有限公司 Endoscope image detection method, endoscope image detection device, storage medium, and electronic apparatus
CN113706536A (en) * 2021-10-28 2021-11-26 武汉大学 Sliding mirror risk early warning method and device and computer readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PENG CUI ET AL.: "Tissue Recognition in Spinal Endoscopic Surgery Using Deep Learning", 2019 IEEE 10th International Conference on Awareness Science and Technology, pages 1-10 *
PAN WEIDONG ET AL.: "Application of CT colonography in the diagnosis of colonic polyps", Acta Academiae Medicinae Sinicae, no. 01, 28 February 2006 (2006-02-28), pages 92-96 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023124876A1 (en) * 2021-12-29 2023-07-06 小荷医疗器械(海南)有限公司 Endoscope image detection auxiliary system and method, medium and electronic device
WO2024109610A1 (en) * 2022-11-21 2024-05-30 杭州海康慧影科技有限公司 Endoscope system, and apparatus and method for measuring spacing between in-vivo tissue features
CN116523907A (en) * 2023-06-28 2023-08-01 浙江华诺康科技有限公司 Endoscope imaging quality detection method, device, equipment and storage medium
CN116523907B (en) * 2023-06-28 2023-10-31 浙江华诺康科技有限公司 Endoscope imaging quality detection method, device, equipment and storage medium
CN117392449A * 2023-10-24 2024-01-12 青岛美迪康数字工程有限公司 Enteroscopy site identification method, device and equipment based on endoscopic image features
CN117137410A (en) * 2023-10-31 2023-12-01 广东实联医疗器械有限公司 Medical endoscope image processing method and system
CN117137410B (en) * 2023-10-31 2024-01-23 广东实联医疗器械有限公司 Medical endoscope image processing method and system
CN117252927A * 2023-11-20 2023-12-19 华中科技大学同济医学院附属协和医院 Target positioning method and system for catheter-based intervention based on small-target detection
CN117252927B (en) * 2023-11-20 2024-02-02 华中科技大学同济医学院附属协和医院 Target positioning method and system for catheter-based intervention based on small-target detection

Also Published As

Publication number Publication date
CN114332019B (en) 2023-07-04
WO2023124876A1 (en) 2023-07-06

Similar Documents

Publication Publication Date Title
CN114332019B (en) Endoscopic image detection assistance system, method, medium, and electronic device
CN108685560B (en) Automated steering system and method for robotic endoscope
CN113573654B (en) AI system, method and storage medium for detecting and determining lesion size
US20180263568A1 (en) Systems and Methods for Clinical Image Classification
US9104902B2 (en) Instrument-based image registration for fusing images with tubular structures
US20220254017A1 (en) Systems and methods for video-based positioning and navigation in gastroenterological procedures
JP6254053B2 (en) Endoscopic image diagnosis support apparatus, system and program, and operation method of endoscopic image diagnosis support apparatus
CN113470029B (en) Training method and device, image processing method, electronic device and storage medium
CN112541928A (en) Network training method and device, image segmentation method and device and electronic equipment
WO2023029741A1 (en) Tissue cavity locating method and apparatus for endoscope, medium and device
CN112070763A (en) Image data processing method and device, electronic equipment and storage medium
CN113496512B (en) Tissue cavity positioning method, device, medium and equipment for endoscope
JPWO2018216618A1 (en) Information processing apparatus, control method, and program
JP2012024518A (en) Device, method, and program for assisting endoscopic observation
CN112967291A (en) Image processing method and device, electronic equipment and storage medium
CN111724364B (en) Method and device based on lung lobes and trachea trees, electronic equipment and storage medium
CN112116575A (en) Image processing method and device, electronic equipment and storage medium
CN111798498A (en) Image processing method and device, electronic equipment and storage medium
WO2023095208A1 (en) Endoscope insertion guide device, endoscope insertion guide method, endoscope information acquisition method, guide server device, and image inference model learning method
CN114782388A (en) Endoscope advance and retreat time determining method and device based on image recognition
van der Stap et al. Towards automated visual flexible endoscope navigation
CN114332033A (en) Endoscope image processing method, apparatus, medium, and device based on artificial intelligence
WO2024164912A1 (en) Endoscopic target structure evaluation system and method, device, and storage medium
CN113143168A (en) Medical auxiliary operation method, device, equipment and computer storage medium
CN112967276A (en) Object detection method, object detection device, endoscope system, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant