WO2023124876A1 - Endoscope image detection auxiliary system and method, medium and electronic device - Google Patents

Endoscope image detection auxiliary system and method, medium and electronic device

Info

Publication number
WO2023124876A1
Authority
WO
WIPO (PCT)
Prior art keywords
endoscope
image
tissue
mode
tissue image
Prior art date
Application number
PCT/CN2022/137565
Other languages
French (fr)
Chinese (zh)
Inventor
边成
李剑
赵秋阳
赵家英
石小周
杨志雄
薛云鹤
李帅
刘威
Original Assignee
小荷医疗器械(海南)有限公司
Priority date
Filing date
Publication date
Application filed by 小荷医疗器械(海南)有限公司 filed Critical 小荷医疗器械(海南)有限公司
Publication of WO2023124876A1 publication Critical patent/WO2023124876A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G06N 3/0442 Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06N 3/08 Learning methods
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Definitions

  • the present disclosure relates to the field of image processing, and in particular, to an endoscope image detection auxiliary system, method, medium and electronic equipment.
  • endoscopy enables doctors to observe the real state of the internal environment of the human body more intuitively. It is widely used in the medical field for the detection of polyp lesions and cancer, so that patients can receive effective intervention and treatment in the early stages of disease.
  • Endoscopic examination is usually divided into two phases: advancing the endoscope (the "mirror-in" phase) and withdrawing it (the "mirror-out" phase).
  • The advancing process is usually controlled manually by the physician.
  • The quality of patients' bowel preparation varies, and the endoscope and the intestinal tract are always in relative motion. Endoscopic images may therefore suffer from bubble occlusion, overexposure, motion blur and other artifacts, and blind spots in the field of view are prone to appear. As a result, physicians must rely on their own experience to control the endoscope, which may make advancing slow or even unsuccessful, may cause harm to the examinee, and places high demands on physician experience.
  • the present disclosure provides an endoscopic image detection assistance system, the system comprising:
  • the image processing module is used to process the endoscope image collected by the endoscope in real time to obtain tissue images
  • the cavity positioning module is used to determine the target point of the tissue cavity corresponding to the tissue image, and the target point is used to indicate the next target movement point of the endoscope at its current position; wherein, when there is a tissue cavity in the tissue image, the target point is the center point of the tissue cavity corresponding to the endoscopic image, and when there is no tissue cavity in the tissue image, the target point is the direction point of the tissue cavity corresponding to the endoscopic image;
  • the polyp identification module is used to perform real-time polyp identification on the tissue image obtained by the endoscope in the mirror-out mode, and obtain a polyp identification result;
  • a display module configured to display the target point and the polyp recognition result.
  • the present disclosure provides a method for assisting endoscopic image detection, the method comprising:
  • in response to determining that the endoscope is in the mirror-in mode, determining a target point of the tissue cavity corresponding to the tissue image, the target point being used to indicate the next target movement point of the endoscope at its current position; wherein, when there is a tissue cavity in the tissue image, the target point is the center point of the tissue cavity corresponding to the endoscopic image, and when there is no tissue cavity in the tissue image, the target point is a direction point of the tissue cavity corresponding to the endoscopic image;
  • in response to determining that the endoscope is in the mirror-out mode, performing real-time polyp identification on the tissue image obtained by the endoscope in the mirror-out mode to obtain a polyp identification result;
  • the target point and the polyp recognition result are displayed.
  • the present disclosure provides a computer-readable medium on which a computer program is stored, and when the program is executed by a processing device, the steps of the method described in the second aspect are implemented.
  • an electronic device, including: a storage device on which a computer program is stored;
  • a processing device configured to execute the computer program in the storage device to implement the steps of the method of the second aspect.
  • the target point in the tissue image can be determined through real-time detection and recognition of the cavity in the tissue image during the endoscope advancing process, thereby providing reliable and accurate automatic navigation for the advancing process. This improves the efficiency and accuracy of endoscope advancement, reduces the high requirements on physician experience in the use of endoscopes, avoids harm to the examinee, and improves the user experience.
  • Fig. 1 is a block diagram of an endoscopic image detection auxiliary system provided according to an embodiment of the present disclosure
  • FIGS. 2A and 2B are schematic diagrams showing target points according to an embodiment of the present disclosure.
  • Fig. 3 is a schematic diagram of a display interface provided according to an embodiment of the present disclosure.
  • Figure 4 is a schematic diagram of a three-dimensional reconstruction of the intestinal tract
  • Fig. 5 is a flowchart of an endoscopic image detection assistance method provided according to an embodiment of the present disclosure
  • FIG. 6 is a schematic structural diagram showing an electronic device suitable for implementing an embodiment of the present disclosure.
  • the term “comprise” and its variations are open-ended, ie “including but not limited to”.
  • the term “based on” is “based at least in part on”.
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments.” Relevant definitions of other terms will be given in the description below.
  • FIG. 1 is a block diagram of an endoscopic image detection auxiliary system provided according to an embodiment of the present disclosure.
  • the endoscopic image detection auxiliary system 10 may include:
  • the image processing module 100 is configured to perform real-time processing on endoscopic images collected by the endoscope to obtain tissue images.
  • the endoscope performs real-time shooting inside the living body such as the human body to obtain a video stream, so that frames can be extracted from the video stream according to a preset acquisition period to acquire endoscopic images.
  • the collected endoscopic image can be processed, for example by cropping, normalization and resampling, to a preset size so as to obtain the tissue image, that is, the image of the tissue corresponding to the current detection process, which facilitates unified subsequent processing of the tissue image.
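  • As a minimal illustration only (the crop box, 512x512 size and normalization are assumed example values, not the patented implementation), such preprocessing could look like:

```python
import cv2
import numpy as np

def preprocess_frame(frame_bgr, crop_box=None, size=(512, 512)):
    """Crop the endoscope frame to the tissue region, resize it to a preset
    size and normalize pixel values to [0, 1].

    crop_box: optional (x, y, w, h) of the tissue region; if None the whole
    frame is used. The 512x512 size is only an example value.
    """
    if crop_box is not None:
        x, y, w, h = crop_box
        frame_bgr = frame_bgr[y:y + h, x:x + w]
    resized = cv2.resize(frame_bgr, size, interpolation=cv2.INTER_LINEAR)
    tissue_image = resized.astype(np.float32) / 255.0
    return tissue_image
```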
  • the currently detected tissue may be intestinal tract, chest cavity, abdominal cavity, etc.
  • the original signal of the image collected by the endoscope usually contains the equipment information of the endoscope and the personal information of the examinee.
  • the image processing module can detect the tissue image from the endoscope image based on the YOLOv4 algorithm, and then crop out and delete the device information and personal information, so that the tissue image only contains the in-vivo image corresponding to the endoscope, which protects private information and facilitates subsequent image processing.
  • many invalid images, such as low-quality images, may be collected during the movement of the endoscope due to instability while advancing or improper positioning of the endoscope. These invalid images can interfere with the endoscopic inspection results.
  • after the endoscopic image is obtained, it can first be judged whether the endoscopic image is valid; if the endoscopic image is an invalid image, it can be discarded directly, and if it is a valid image, the corresponding tissue image is determined based on it, so as to reduce unnecessary data processing and improve processing speed.
  • a pre-trained recognition model can be used to recognize the tissue image to determine whether the tissue image is valid.
  • the recognition model can be obtained by training based on a convolutional neural network, which is not specifically limited in the present disclosure.
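  • A minimal sketch of such a validity classifier, assuming a small convolutional network and a 0.5 decision threshold (architecture, input size and threshold are illustrative, not the model used in the disclosure):

```python
import torch
import torch.nn as nn

class FrameValidityNet(nn.Module):
    """Tiny CNN that scores whether a tissue image is usable (close to 1)
    or invalid, e.g. blurred / overexposed / bubble-occluded (close to 0)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x):                          # x: (B, 3, H, W)
        f = self.features(x).flatten(1)            # (B, 32)
        return torch.sigmoid(self.classifier(f))   # probability of "valid"

# usage: discard frames whose validity score falls below a chosen threshold
model = FrameValidityNet().eval()
with torch.no_grad():
    score = model(torch.rand(1, 3, 512, 512)).item()
is_valid = score > 0.5
```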
  • the cavity positioning module 200 is configured to determine the target point of the tissue cavity corresponding to the tissue image, and the target point is used to indicate the next target movement point of the endoscope at its current position; wherein , when there is a tissue cavity in the tissue image, the target point is the center point of the tissue cavity corresponding to the endoscopic image, and when there is no tissue cavity in the tissue image, the target point is the direction point of the tissue cavity corresponding to the endoscopic image.
  • if there is a tissue cavity in the tissue image, the target point is the center point of the tissue cavity corresponding to the endoscopic image, such as point A in FIG. 2A; if there is no tissue cavity in the tissue image, the target point is the direction point of the tissue cavity corresponding to the endoscopic image, that is, the direction point used for navigation can be determined, as shown by point B in FIG. 2B.
  • the physician user can select the current mode during the use of the endoscope, that is, the mirror-in mode or the mirror-out mode.
  • in the mirror-in mode, the endoscope is controlled to advance until it reaches the ileocecum.
  • the endoscope needs to move along the center of the tissue cavity as far as possible during the endoscope entry process, so that it can effectively avoid touching the tissue surface mucosa of the examinee and avoid causing damage to the examinee. Therefore, in this embodiment, when it is determined that the endoscope is in the mirror-in mode, it may be further determined whether there is a tissue cavity in the tissue image.
  • a detection model may be pre-trained to identify and detect whether there is a tissue cavity in the tissue image.
  • the detection model can be trained according to pre-collected training samples, which can include training images and labels corresponding to the training images, where the labels are used to indicate whether there are tissue cavities in the training images; the training image is used as the input of the model, and the label corresponding to the training image is used as the target output of the model to update and train the model to obtain the detection model.
  • the detection model may be a CNN (convolutional neural network), an LSTM (long short-term memory network), the encoder of a Transformer, or the like, which is not specifically limited in this disclosure.
  • the tissue cavity can be intestinal cavity, gastric cavity, etc.
  • taking the intestinal cavity as an example, if there is a tissue cavity in the tissue image collected after the endoscope enters the intestinal cavity, the central point of the intestinal lumen, that is, the center of the cross-section enclosed by the intestinal wall, is further determined. When the endoscope moves forward along the central point of the intestinal lumen, automatic navigation of the endoscope is realized.
  • the central point of the tissue cavity may be identified through a key point identification model.
  • the tissue image may be marked by a professional physician based on experience.
  • the position of the center point of the tissue cavity in the training image can be circled, and the center of the marked circle is the position of the center point, that is, the label corresponding to the training image, so that a training sample including the training image and the label can be obtained.
  • unlabeled training images can also be included in the training samples.
  • the key point identification model may include a student sub-network, a teacher sub-network and a discriminant sub-network; the student sub-network and the teacher sub-network have the same network structure, and the teacher sub-network is used to determine the predicted labeling features corresponding to the training images input to the student sub-network. During the training of the key point recognition model, the weight of the prediction loss corresponding to the student sub-network is determined based on the predicted labeling features of the teacher sub-network and the discriminant sub-network.
  • the preprocessing method can be data augmentation, for example non-affine transformations such as color, brightness, chroma and saturation transformations, which ensure that positions are not deformed.
  • different processed images can be used as the input images of the teacher sub-network and the student sub-network respectively to train the key point recognition model.
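  • One plausible reading of this student/teacher/discriminant arrangement is a mean-teacher style setup in which the teacher is an exponential moving average of the student, produces pseudo labeling features (center-point heatmaps) for a differently augmented view, and a discriminator score weights the unsupervised part of the student loss. The sketch below follows that reading; the exact loss-weighting rule and the training of the discriminant sub-network are simplified, and the architectures are placeholders:

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeatmapNet(nn.Module):
    """Shared architecture for the student and teacher sub-networks:
    predicts a single-channel heatmap whose peak is the cavity center."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores how plausible a predicted heatmap is (0..1); the score is
    used here to weight the unsupervised part of the student loss."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
        )
    def forward(self, h):
        return torch.sigmoid(self.net(h))

student = HeatmapNet()
teacher = copy.deepcopy(student)             # same structure as the student
for p in teacher.parameters():
    p.requires_grad_(False)
disc = Discriminator()
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

def train_step(labeled_img, target_heatmap, unlabeled_student_view, unlabeled_teacher_view):
    # supervised loss on physician-annotated center-point heatmaps
    sup = F.mse_loss(student(labeled_img), target_heatmap)
    # teacher produces pseudo "labeling features" for the unlabeled view
    with torch.no_grad():
        pseudo = teacher(unlabeled_teacher_view)
        weight = disc(pseudo).mean()         # loss weight from teacher output + discriminator
    unsup = F.mse_loss(student(unlabeled_student_view), pseudo)
    loss = sup + weight * unsup
    opt.zero_grad(); loss.backward(); opt.step()
    # teacher tracks the student by exponential moving average
    with torch.no_grad():
        for tp, sp in zip(teacher.parameters(), student.parameters()):
            tp.mul_(0.99).add_(sp, alpha=0.01)
    return loss.item()
```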
  • if there is no tissue cavity in the tissue image, the direction point of the intestinal cavity can be further determined. The direction point is the predicted position of the tissue cavity's center point relative to the tissue image, and it indicates that the endoscope should be deflected toward the direction point, so as to provide directional guidance for the advancement of the endoscope.
  • the latest N tissue images, including the current tissue image, can be formed into an image sequence, and the direction point prediction is made based on the image sequence and the direction point recognition model.
  • N may be a positive integer, representing the number of tissue images included in the image sequence, which may be set according to actual application scenarios.
  • the direction point recognition model includes a convolutional subnetwork, a time cyclic subnetwork and a decoding subnetwork
  • the convolutional subnetwork can be used to obtain the spatial features of the image sequence
  • the time cyclic subnetwork can be used to obtain the image sequence time features
  • the decoding subnetwork can be used to decode based on the spatial features and the time features to obtain the direction points and ensure the accuracy of direction point recognition.
  • the number of training images contained in each training image sequence of the direction point recognition model can be set according to the actual use scenario; for example, N can be set to 5, that is, each training image sequence contains 5 training images, and the direction point of the tissue cavity in the current state is predicted based on the last 5 training images.
  • the label image corresponding to the training image sequence is used to indicate the position of the direction point of the tissue cavity in the last image predicted based on the multiple images, so that the direction point recognition model can be performed based on the above training image sequence train.
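  • A sketch of such a three-part model, assuming a small CNN encoder as the convolutional sub-network, a GRU as the time-cyclic sub-network, and a decoder that regresses a normalized (x, y) direction point for the last frame (all layer sizes are illustrative):

```python
import torch
import torch.nn as nn

class DirectionPointNet(nn.Module):
    """Convolutional sub-network for spatial features, recurrent sub-network
    for temporal features over the last N frames, and a decoder that outputs
    the (x, y) direction point for the most recent frame in [0, 1]."""
    def __init__(self, feat_dim=64, hidden_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.decoder = nn.Sequential(nn.Linear(hidden_dim, 2), nn.Sigmoid())

    def forward(self, frames):                # frames: (B, N, 3, H, W)
        b, n = frames.shape[:2]
        spatial = self.cnn(frames.flatten(0, 1)).view(b, n, -1)  # (B, N, feat)
        temporal, _ = self.rnn(spatial)                          # (B, N, hidden)
        return self.decoder(temporal[:, -1])                     # (B, 2)

model = DirectionPointNet()
seq = torch.rand(1, 5, 3, 256, 256)           # the last N = 5 tissue images
direction_point = model(seq)                  # e.g. tensor([[0.43, 0.61]])
```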
  • the polyp identification module 300 is configured to perform real-time polyp identification on the tissue image obtained by the endoscope in the withdrawal mode, and obtain a polyp identification result.
  • the display module 400 is configured to display the target point and the polyp recognition result.
  • physicians can examine the body during the withdrawal of the endoscope, for example to detect polyps.
  • real-time polyp identification can be performed on the tissue image obtained during the retraction process, so as to provide data reference for the physician during the real-time detection process.
  • the target point in the tissue image can be determined through real-time detection and recognition of the cavity in the tissue image during the endoscope advancing process, thereby providing reliable and accurate automatic navigation for the advancing process. This improves the efficiency and accuracy of endoscope advancement, reduces the high requirements on physician experience in the use of endoscopes, avoids harm to the examinee, and improves the user experience.
  • the polyp identification module includes:
  • the polyp detection sub-module is configured to detect, based on the detection model, the tissue image obtained by the endoscope in the mirror-out mode, and to determine the position information of a polyp when it is detected that there is a polyp in the tissue image.
  • the detection model implemented by GFLv2 (Generalized Focal Loss V2) can be used in the polyp detection sub-module for detection.
  • the detection model can judge the prediction reliability of the predicted position according to the distribution while outputting the predicted position.
  • the training method is an existing technique and will not be repeated here.
  • the predicted position whose prediction reliability is greater than a threshold can be displayed in the image display area of the display interface, and the prediction reliability can also be displayed at the same time. As shown in FIG. 3, the predicted position can be displayed in the form of a solid-line detection box to provide an auxiliary reminder to the physician, so as to improve the accuracy of polyp detection and the detection rate of highly specific polyps.
  • before the tissue image is input into the detection model in the polyp detection sub-module for detection, the tissue image can be preprocessed by the preprocessing module.
  • the tissue image can be uniformly standardized to a preset size, for example 512*512, so that the image processed by the preprocessing module is input into the detection model for detection, and the position information of the polyp detected in the tissue image is obtained.
  • a polyp identification submodule configured to extract a detected image corresponding to the position information from the tissue image; determine the classification of the polyp according to the detected image and the identification model;
  • the display module is further configured to display the tissue image, and display the identification corresponding to the position information of the polyp and the classification in the tissue image.
  • the detection image corresponding to the position information may be extracted from the tissue image based on the position information. When extracting the detection image, the region corresponding to the position information may be enlarged before extraction, so as to ensure the completeness of the extracted detection image. For example, as shown in FIG. 3, for the detection box corresponding to the position information of the detected polyp, the detection box can be enlarged by 1.5 times when the detection image is extracted based on this position, that is, the image within the dotted-line box in FIG. 3 is extracted as the detection image. Afterwards, the detection image is input into the polyp identification sub-module for polyp classification and recognition.
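  • A minimal sketch of the 1.5x box expansion and crop (the box format and clipping behaviour are assumptions):

```python
import numpy as np

def expand_and_crop(tissue_image, box, scale=1.5):
    """Enlarge a polyp detection box about its center by `scale` (1.5x in the
    example above), clip it to the image, and return the cropped region used
    as the detection image for classification.

    box: (x1, y1, x2, y2) in pixel coordinates.
    """
    h, w = tissue_image.shape[:2]
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    half_w, half_h = (x2 - x1) * scale / 2.0, (y2 - y1) * scale / 2.0
    nx1, ny1 = max(0, int(round(cx - half_w))), max(0, int(round(cy - half_h)))
    nx2, ny2 = min(w, int(round(cx + half_w))), min(h, int(round(cy + half_h)))
    return tissue_image[ny1:ny2, nx1:nx2]

# usage: crop the enlarged region, then resize it (e.g. to 256*256) for the classifier
crop = expand_and_crop(np.zeros((480, 640, 3)), (120, 80, 200, 160))
```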
  • the polyp identification sub-module can include a classification model based on resnet18.
  • this classification model can obtain corresponding training samples by pre-labeling and classifying a large number of polyp images, so that the polyp image is used as the input of the model and the labeled classification is used as the target output for training, to obtain the classification model.
  • the classification category may cover various classifications such as adenoma, hyperplastic polyp, carcinoma, inflammatory polyp, and submucosal tumor.
  • the detection image can be preprocessed by the preprocessing module.
  • the detection image can be uniformly standardized to a preset size, for example 256*256, so that the image processed by the preprocessing module is input into the classification model for classification, and the polyp recognition result is obtained, which can then be displayed in the image display area of the display interface and in the tissue image.
  • the polyp position in the tissue image can be detected first, which improves the accuracy of polyp detection in the tissue image and reduces the missed detection rate of polyps. Furthermore, the detection image used for polyp identification can be extracted based on the detected position information, thereby reducing the amount of data processing for polyp identification, avoiding the influence of other parts of the tissue image on polyp identification, and further improving polyp identification accuracy.
  • a mode selection control can be set in the control area of the display interface, and then the current use mode of the endoscope can be determined in response to the user's operation on the mode selection control in the control area.
  • the system further includes:
  • a pattern recognition module configured to recognize the tissue image of the endoscope according to an image recognition model; when the parameter corresponding to the in-vivo image output by the image recognition model is greater than a first parameter threshold, it is determined that the image mode of the endoscope is the mirror-in mode, and when the parameter corresponding to the ileocecal valve image output by the image recognition model is greater than a second parameter threshold, it is determined that the image mode of the endoscope is the mirror-out mode.
  • the image recognition model can be a network model based on ViT (Vision Transformer), and the training images can be pre-labeled with three categories: in-vitro images, in-vivo images and ileocecal valve images, so that the training image is used as input and the labeled category is used as the target output for training, to obtain the image recognition model.
  • the tissue image can be classified and recognized based on the image recognition model, wherein the parameter corresponding to the in-vivo image output by the image recognition model is the probability value obtained when the output category of the image recognition model is the in-vivo image; similarly, the parameter corresponding to the ileocecal valve image output by the image recognition model is the probability value obtained when the output category of the image recognition model is the ileocecal valve image.
  • the first parameter threshold and the second parameter threshold may be set according to actual application scenarios, and may be the same or different, and are not specifically limited in the present disclosure.
  • the threshold value of the first parameter is 0.85
  • the threshold value of the second parameter is 0.9.
  • for example, if the output result is an in-vivo image and the corresponding probability value is 0.9, it is determined that the image mode of the endoscope should be the mirror-in mode.
  • if the output result is an ileocecal valve image and the corresponding probability value is 0.92, it is determined that the image mode of the endoscope should be the mirror-out mode.
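  • A minimal sketch of this threshold rule, assuming the two probabilities come from the image recognition model's output and that the ileocecal-valve condition is checked first (that precedence is an assumption):

```python
def recognize_image_mode(p_in_vivo, p_ileocecal,
                         first_threshold=0.85, second_threshold=0.9):
    """Map the image recognition model's probabilities to an image mode.

    p_in_vivo / p_ileocecal: probabilities output for the in-vivo and
    ileocecal-valve categories. Threshold values follow the example in the
    text (0.85 and 0.9)."""
    if p_ileocecal > second_threshold:
        return "mirror-out"      # ileocecal valve reached, start withdrawal
    if p_in_vivo > first_threshold:
        return "mirror-in"       # endoscope is inside the body, still advancing
    return "in-vitro"

print(recognize_image_mode(0.9, 0.05))    # -> mirror-in
print(recognize_image_mode(0.03, 0.92))   # -> mirror-out
```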
  • a mode switching module, configured to switch the image mode of the endoscope to the mirror-in mode when the image mode of the endoscope is the in-vitro mode and the pattern recognition module determines that the image mode of the endoscope is the mirror-in mode; and to switch the image mode of the endoscope to the mirror-out mode and output second prompt information when the image mode of the endoscope is the mirror-in mode and the pattern recognition module determines that the image mode is the mirror-out mode, the second prompt information being used as a reminder to enter the mirror-out mode.
  • during an examination, the corresponding mode sequence should be in-vitro mode, mirror-in mode and then mirror-out mode. Therefore, after the pattern recognition module determines the image mode of the endoscope, whether switching is required can be decided by combining the current image mode of the endoscope with the recognized image mode. For example, if the current image mode of the endoscope is the in-vitro mode and the image mode determined based on the tissue images obtained in real time is the mirror-in mode, it means that the endoscope has entered the body, and the system can automatically switch to the mirror-in mode; if the current image mode of the endoscope is the same as the image mode determined by the pattern recognition module, there is no need to switch.
  • if the current image mode of the endoscope is the mirror-in mode and the image mode determined based on the tissue images obtained in real time is the mirror-out mode, it means that the endoscope has reached the ileocecum and the next step is the withdrawal and inspection process; the system can automatically switch to the mirror-out mode and prompt the user that the endoscope is now in the mirror-out mode, that is, the inspection stage comes next.
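  • A minimal sketch of the switching rule along the in-vitro -> mirror-in -> mirror-out sequence (mode names and the prompt string are illustrative):

```python
def switch_mode(current_mode, recognized_mode):
    """Decide whether the endoscope's image mode should change, following the
    in-vitro -> mirror-in -> mirror-out sequence described above.
    Returns (new_mode, prompt), where prompt is None when nothing changes."""
    if current_mode == "in-vitro" and recognized_mode == "mirror-in":
        return "mirror-in", None                      # scope has entered the body
    if current_mode == "mirror-in" and recognized_mode == "mirror-out":
        # ileocecum reached: switch and emit the second prompt information
        return "mirror-out", "entering withdrawal (mirror-out) mode"
    return current_mode, None                         # no switch needed

mode = "in-vitro"
for rec in ["in-vitro", "mirror-in", "mirror-in", "mirror-out"]:
    mode, prompt = switch_mode(mode, rec)
print(mode)   # -> mirror-out
```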
  • the internal tissues of the human body for endoscopic examination are usually soft tissues.
  • the intestinal tract peristalses, and the physician performs operations such as flushing and loop reduction during the endoscopic examination, which makes it difficult for the physician to clearly know how much of the tissue has been inspected during the endoscopy. Based on this, the present disclosure also provides the following embodiments.
  • the system also includes:
  • the blind area ratio detection module is used to determine the display ratio and the blind area ratio according to the tissue images obtained by the endoscope in the mirror-out mode, wherein the sum of the display ratio and the blind area ratio is 1, and the blind area may be caused by factors such as occluding mucous membranes, missed inspection areas or inspection blind spots.
  • the blind area ratio can be understood as the proportion of the blind area (that is, the part that cannot be observed in the endoscope's field of view during the endoscopic examination) to the total internal surface area of the tissue (that is, everything that should be observed in the endoscopic inspection).
  • the blind spot ratio detection can be turned on automatically, or it can be turned on in response to the user's selection of the blind spot ratio detection control in the control area of the display interface, so as to start performing the above function of determining the blind area ratio and the display ratio.
  • the determination of the display ratio and the blind area ratio may be determined according to the clarity of the tissue image.
  • the proportion of blind spots in the field of view can be predicted based on the clarity of the tissue image. Therefore, experienced physicians can annotate the blind area proportion on historically collected tissue images; the historically collected tissue images are then used as the input of the model, and the marked blind area ratio is used as the target output of the model, so as to train a neural network model and obtain the blind area ratio prediction model.
  • the training method of the blind area ratio prediction model can adopt a training method commonly used in this field, and will not be described in detail here.
  • the tissue image obtained by the endoscope in the retracting mirror mode can be input into the blind area ratio prediction model, so as to obtain the corresponding blind area ratio, and further determine the display ratio based on the blind area ratio.
  • the blind area ratio detection module is further configured to: perform three-dimensional reconstruction according to the tissue image obtained by the endoscope in the mirror-back mode to obtain a three-dimensional tissue image; and obtain a three-dimensional tissue image according to the three-dimensional tissue image and the three-dimensional The target corresponding to the tissue image is fitted to the tissue image, and the display ratio and blind area ratio are determined.
  • taking the intestinal tract as an example, FIG. 4 is a schematic diagram of a three-dimensional tissue image, that is, the intestinal mucosa, obtained through three-dimensional reconstruction based on endoscopic images.
  • the three-dimensional reconstruction technology in the field can be used for reconstruction, such as the SurfelMeshing fusion algorithm.
  • two adjacent frames of tissue images can be input into a depth network (Depth Network) and a pose network (Pose Network) respectively, so that the corresponding depth map and pose information can be obtained; the pose information represents the movement of the endoscope within the tissue and can include, for example, a rotation matrix and a translation vector.
  • the tissue image, depth map and posture information are reconstructed based on the 3D reconstruction model to obtain a 3D tissue image.
  • both the depth network and the pose network can be trained based on the ResNet50 network, which will not be repeated here.
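  • The SurfelMeshing-style fusion itself is not reproduced here; the sketch below only shows the standard geometric step of back-projecting a predicted depth map into a world-frame point cloud using the predicted pose (rotation matrix R, translation vector t) and assumed camera intrinsics K:

```python
import numpy as np

def backproject(depth, K, R, t):
    """Lift a depth map (H, W) into world-frame 3D points using camera
    intrinsics K (3x3) and the camera-to-world pose (R 3x3, t length-3).
    Returns an (H*W, 3) point cloud; fusing clouds from successive frames
    approximates the reconstructed tissue surface."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)  # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                                  # camera-frame rays
    pts_cam = rays * depth.reshape(-1, 1)                            # scale by depth
    return pts_cam @ R.T + t                                         # transform to world frame

# toy usage with assumed intrinsics and an identity pose
K = np.array([[300.0, 0, 128], [0, 300.0, 128], [0, 0, 1]])
cloud = backproject(np.full((256, 256), 0.05), K, np.eye(3), np.zeros(3))
print(cloud.shape)   # (65536, 3)
```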
  • the intestinal tract is approximately a tubular structure. Due to the limitation of the endoscope's field of view, when the intestinal tract is reconstructed based on endoscopic images, hollow positions such as those shown at W1, W2, W3 and W4 in FIG. 4 may appear, that is, positions that never appear in any tissue image and are therefore invisible during the endoscopic examination; these are the blind areas described in the present disclosure. Physicians cannot observe these areas during the endoscopic examination, and if there are too many unseen areas, missed detections can easily occur.
  • the ratio of the part of the mucosa that does not appear in the inspection images to the total mucosal area of the tissue can be represented by the blind area ratio; the blind area ratio can thus be used to indicate the currently invisible part of the tissue, so that the comprehensiveness of the endoscopic inspection can be characterized.
  • the blind spot ratio detection module is also used for:
  • the ratio of the number of point clouds in the three-dimensional tissue image to the number of point clouds of the target fitting tissue image is determined as the display ratio, and the blind area ratio is determined according to the display ratio.
  • the three-dimensional tissue image is obtained after reconstruction based on the two-dimensional tissue images; the three-dimensional tissue image can then be projected onto its corresponding target fitted tissue image, and the display ratio can be determined according to the overlapping area of the projection of the three-dimensional tissue image and the target fitted tissue image.
  • the target fitted tissue image corresponding to the three-dimensional tissue image is the complete cavity structure predicted based on the structure of the tissue in the tissue image. Taking the intestinal tract in FIG. 4 as an example, the corresponding target fitted tissue image is the image of the corresponding tubular structure, which can be fitted as a cylindrical structure.
  • corresponding standard structural features can be set for different tissues, and after the three-dimensional tissue image is determined, fitting can be performed based on the standard structural features to obtain the target fitted tissue image.
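  • Assuming the reconstructed surface and the fitted complete cavity are sampled at comparable point densities, the point-count ratio described above reduces to:

```python
import numpy as np

def coverage_ratios(reconstructed_points, fitted_points):
    """Display ratio = number of points reconstructed from the tissue images
    divided by the number of points of the fitted complete cavity (e.g. a
    cylinder for the intestine); the blind area ratio is its complement,
    so the two sum to 1."""
    display_ratio = min(1.0, len(reconstructed_points) / len(fitted_points))
    return display_ratio, 1.0 - display_ratio

# toy example: 7,800 reconstructed surface points vs. 10,000 points sampled
# on the fitted cylinder gives a 78% display ratio and a 22% blind area
display, blind = coverage_ratios(np.zeros((7800, 3)), np.zeros((10000, 3)))
print(round(display, 2), round(blind, 2))   # 0.78 0.22
```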
  • the display module is also used to display the display ratio and/or the blind area ratio, and to display first prompt information when the blind area ratio is greater than a blind area threshold, the first prompt information being used to indicate that there is a risk of missed detection.
  • the display ratio is used to represent the proportion of the area viewed by the physician in the process of endoscopic image detection to the overall tissue
  • the blind area ratio is used to represent the proportion of the area not viewed by the physician in the process of endoscopic image detection.
  • the display ratio can be displayed, the blind area ratio can also be displayed, or the display ratio and the blind area ratio can be displayed at the same time, so that it is convenient for the doctor to know the accuracy of the detection process in time.
  • the proportion of the blind area is large, it will be prompted.
  • the prompt message is displayed on the display interface.
  • the system also includes:
  • the three-dimensional positioning module is used to perform three-dimensional reconstruction according to the tissue images obtained by the endoscope in the mirror-in mode to obtain a three-dimensional tissue image, and to determine the point on the centerline of the three-dimensional tissue image that is closest to the target point as the three-dimensional target point corresponding to the target point in the three-dimensional tissue image.
  • the manner of performing 3D reconstruction has been described in detail above, and will not be repeated here.
  • the tissue cavity is usually fitted as a tubular structure; therefore, the centerline of the three-dimensional tissue image can be the centerline of the fitted tubular structure, which keeps the endoscope at a distance from the surrounding mucosa during its movement and avoids damaging the tissue mucosa. Therefore, after the target point in the tissue image is determined, in order to ensure the accuracy of the endoscope's movement, the target point can be mapped to the corresponding position on the centerline, that is, the point on the centerline closest to the target point, to ensure the accuracy and reasonableness of the three-dimensional target point.
  • the attitude determination module is configured to determine the attitude information of the endoscope according to the tissue image corresponding to the current location and the tissue image in the historical period.
  • the images captured during the endoscope's advancing process can be monitored and displayed.
  • a dataset can be formed in advance based on manual annotations of real endoscopic images and their bounding boxes and adding pose information to real images.
  • the pose estimation model can be implemented based on the ResNet50 network.
  • endoscopic image sequences can be used as input, and the labeled pose information can be used as the target output of the model to adjust the network and perform regression to achieve model training.
  • the tissue image corresponding to the current location and the tissue images in the historical period can be formed into an image sequence, so as to determine the posture information of the endoscope based on the above-mentioned posture estimation model.
  • a trajectory determination module configured to generate a navigation trajectory according to the current location, the attitude information and the three-dimensional target point.
  • the attitude information is used to characterize the current attitude of the endoscope, and the three-dimensional target point is used to represent the end point of the endoscope's movement, so that the trajectory information for moving to the three-dimensional target point, that is, the navigation trajectory, can be determined based on the current position and the attitude information.
  • the manner of trajectory prediction may adopt a prediction manner commonly used in the art, which is not limited in the present disclosure.
  • the display module is also used to display the posture information of the endoscope and the three-dimensional tissue image, and display the three-dimensional target point and the navigation track in the three-dimensional tissue image.
  • the three-dimensional positioning and the endoscope navigation can be automatically started.
  • a three-dimensional endoscope navigation control can be displayed in the control area of the display interface; when the user needs navigation, this control can be clicked, and in response to the user's selection of the three-dimensional endoscope navigation control in the control area of the display interface, positioning can be performed through the three-dimensional positioning module. Afterwards, the attitude information of the endoscope can be further determined so as to determine the navigation trajectory.
  • the three-dimensional tissue image can be displayed on the display interface, and the three-dimensional target point and the navigation trajectory can be displayed in the three-dimensional tissue image, that is, the position point of the next movement and the proposed path for moving to that position point are displayed. This realizes automatic navigation of the endoscope, and at the same time makes it convenient for the physician to know the current movement path and state of the endoscope in the body and to perform manual intervention, thereby ensuring the accuracy of the endoscopy process and improving the user experience.
  • the colonoscope and the gastroscope usually share one endoscope host. If the examinee is examined with both the gastroscope and the colonoscope, the video received by the system often alternates between gastroscope and colonoscope images. The items to be detected differ in most respects, and the detection algorithms for colonoscopy are not suitable for gastroscopy. Based on this, in a possible embodiment, controls for the corresponding detection modes can be displayed in the control area of the display interface, and in response to a selection operation on such a control, the selected mode is used as the detection mode of the endoscope.
  • the present disclosure also provides the following embodiments to be applicable to the endoscopic image detection process in multiple scenarios.
  • the system also includes:
  • the image classification module is configured to classify the tissue images of the endoscope in the endoscope mode, and obtain target classifications corresponding to the tissue images, wherein the target classification includes abnormal classification and multiple endoscope classifications.
  • an image classification model can be pre-trained to classify tissue images.
  • the image classification model can be obtained by training based on PreResNet18.
  • the categories of image classification can include the abnormal classification and the endoscope classifications, wherein the abnormal classification includes no-signal images, in-vitro images, and the like.
  • Endoscope classification can include colonoscopy images and gastroscopic images.
  • endoscopic images can be pre-classified and marked by experienced physicians. No-signal images and in-vitro images can be marked as abnormal classification, colonoscopic images can be marked as colonoscopic classification, and gastroscopic images can be marked as Classification of gastroscopy to obtain training samples.
  • the endoscope image can be used as the input of the model, and the label corresponding to the endoscope image can be used as the target output of the model to train the model to obtain the image classification model, so as to classify the tissue images collected by the endoscope.
  • the detection pattern recognition module is used to update the counter of each endoscope classification according to the target classification corresponding to the tissue image; when the value of the counter corresponding to any endoscope classification reaches a counting threshold, the counting operation of each counter is stopped, and the detection mode of the endoscope is determined according to the endoscope classification whose counter value reaches the counting threshold, wherein each endoscope classification has its corresponding counter, and the value of each counter is initially zero.
  • the detection mode currently adopted by the endoscope can be determined according to the classification corresponding to the tissue images collected during the movement of the endoscope.
  • classification may be performed on images of tissues during movement of the endoscope.
  • the detection pattern recognition module updates the counter of each endoscope classification according to the target classification corresponding to the tissue image in the following manner: when the target classification is an endoscope classification, the counter corresponding to that endoscope classification is incremented by one.
  • the endoscope classification includes colonoscopy classification and gastroscope classification
  • the counter corresponding to the colonoscopy classification is count1
  • the counter corresponding to the gastroscopy classification is count2, which are initialized to 0 respectively. If the determined classification corresponding to the tissue image is an abnormal classification, then the counters corresponding to the colonoscopy classification and the gastroscopy classification are both 0.
  • if the classification determined for a tissue image is the colonoscopy classification, one is added to the counter count1 corresponding to the colonoscopy classification, so count1 is 1 at this time. The above process is then repeated for subsequent tissue images; suppose that at some point count1 is 49, count2 is 3, and the counting threshold is 50.
  • if the classification determined for the next tissue image is again the colonoscopy classification, the counter corresponding to the colonoscopy classification reaches the counting threshold, so it can be determined that the detection mode is the colonoscopy mode, and the counting operation of each counter is stopped. In this way, the detection mode of the endoscope can be determined automatically by classifying and counting part of the tissue images during the movement of the endoscope, which saves user operations and assists the user in using the endoscope.
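  • A minimal sketch of this counting logic, using the example counting threshold of 50 (the classification labels are placeholders):

```python
COUNT_THRESHOLD = 50   # example value used in the text

def determine_detection_mode(frame_classifications):
    """Walk through per-frame target classifications ('colonoscopy',
    'gastroscopy' or 'abnormal') and return the detection mode as soon as
    one endoscope classification's counter reaches the counting threshold."""
    counters = {"colonoscopy": 0, "gastroscopy": 0}   # initially zero
    for cls in frame_classifications:
        if cls in counters:                           # abnormal frames do not count
            counters[cls] += 1
            if counters[cls] >= COUNT_THRESHOLD:
                return cls                            # stop counting, mode decided
    return None                                       # not enough evidence yet

stream = ["abnormal"] * 3 + ["colonoscopy"] * 49 + ["gastroscopy"] * 3 + ["colonoscopy"]
print(determine_detection_mode(stream))   # -> colonoscopy
```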
  • the polyp identification module is further configured to: perform real-time polyp identification on the tissue image obtained by the endoscope in the retracting mode according to the polyp identification model corresponding to the detection mode determined by the detection mode identification module.
  • the classification categories and image features for colonoscopy and gastroscopy may be different. Therefore, in this embodiment, for each detection mode a polyp identification model used for polyp identification in that detection mode can be pre-trained, wherein the manner of determining the polyp identification model has been described in detail above and will not be repeated here.
  • the detection mode of the endoscope can be determined automatically by classifying the tissue images collected during the movement of the endoscope, and the polyp recognition model corresponding to that detection mode can be used for image recognition. This improves the automation level of endoscope use, improves adaptability to actual application scenarios, and improves the accuracy of polyp identification to a certain extent, providing reliable data support for physicians to analyze endoscopic results.
  • the system also includes:
  • a speed determination module configured to acquire a plurality of historical tissue images corresponding to the current tissue image when the mode of the endoscopic image is the mirror-out mode, calculate the similarity between each of the historical tissue images and the current tissue image, and determine the withdrawal speed according to the plurality of similarities.
  • the display module is also used to display the mirror withdrawal speed corresponding to the endoscope.
  • the withdrawal speed can be evaluated based on the similarity between adjacent tissue images.
  • the method for calculating the similarity between images may be calculated by using a common calculation method in the art, which is not limited in the present disclosure.
  • a mapping relationship between similarity intervals and withdrawal speeds can be preset, wherein the higher the similarity, the slower the withdrawal speed.
  • the average of the plurality of determined similarities can be used as the average similarity over the movement covered by the plurality of tissue images, and based on the similarity interval to which the average similarity belongs, the speed corresponding to that similarity interval is taken as the withdrawal speed for that period.
  • the mirror retraction speed can be displayed in the image display area of the display interface during the mirror retraction process.
  • the withdrawal speed can be determined by detecting the similarity between adjacent tissue images during the withdrawal process, and the withdrawal speed is displayed so that the user can be prompted. To a certain extent this avoids missed polyp detection caused by withdrawing the endoscope too quickly, and improves convenience for users.
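  • A minimal sketch of this idea; the similarity measure (normalized cross-correlation here) and the interval-to-speed mapping are illustrative assumptions, not the disclosure's exact choices:

```python
import numpy as np

def frame_similarity(img_a, img_b):
    """Normalized cross-correlation between two grayscale frames in [-1, 1];
    any common image-similarity measure could be substituted."""
    a = (img_a - img_a.mean()) / (img_a.std() + 1e-8)
    b = (img_b - img_b.mean()) / (img_b.std() + 1e-8)
    return float((a * b).mean())

def withdrawal_speed(current, history):
    """Average the similarity of the current tissue image to recent history
    and map it to a speed label: higher similarity means slower withdrawal."""
    mean_sim = np.mean([frame_similarity(current, h) for h in history])
    if mean_sim > 0.8:
        return "slow"
    if mean_sim > 0.5:
        return "moderate"
    return "fast"            # low similarity: the scope is moving quickly

history = [np.random.rand(64, 64) for _ in range(4)]
print(withdrawal_speed(history[-1] + 0.01 * np.random.rand(64, 64), history))
```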
  • the cleanliness of the tissue cavity is one of the important indicators to measure the quality of the inspection, and the Boston scoring method is usually used at present.
  • the attention a physician can allocate to cleanliness evaluation is limited, and there is subjectivity among different physicians in assessing cleanliness.
  • the system also includes:
  • the cleanliness determination module is used to classify the tissue images obtained by the endoscope in the mirror-out mode and determine the number of tissue images under each cleanliness classification, and to determine the cleanliness of the tissue corresponding to the tissue images according to the number of tissue images under each cleanliness classification.
  • the cleanliness detection can be automatically started after it is determined that the endoscope is in the mirror withdrawal mode, or it can be started in response to the user's selection operation of the cleanliness detection control in the control area of the display interface.
  • there is a cleaning operation in the endoscope's advancing stage, and it is not appropriate to include the cleanliness of the tissue cavity before the cleaning operation in the overall evaluation. Therefore, in this embodiment, only the tissue images in the mirror-out mode are classified for cleanliness, and the cleanliness classification of a single tissue image can be obtained based on a cleanliness classification model implemented with the Vision Transformer network.
  • the endoscopic images are marked in advance, and can be marked as 0, 1, 2, 3 according to the Boston scoring method.
  • the endoscopic image can be used as the input of the model, and the label corresponding to the endoscopic image can be used as the target output of the model for training. In this way, it can be ensured that the determined cleanliness of the tissue fits the actual application scenario, and the accuracy of the determined cleanliness can be improved.
  • the cleanliness determination module is also used for:
  • if the ratio of the number of tissue images under the current cleanliness classification to the target total number is greater than or equal to the threshold corresponding to the current cleanliness classification, the score corresponding to the current cleanliness classification is used as the cleanliness of the tissue;
  • if the ratio is less than the threshold and the next cleanliness classification is not the cleanliness classification with the largest score, the next cleanliness classification is used as the new current cleanliness classification, and the step of comparing the ratio of the number of tissue images under the current cleanliness classification to the target total number with the threshold corresponding to the current cleanliness classification is re-executed; if the next cleanliness classification is the cleanliness classification with the largest score, the score of the next cleanliness classification is determined as the cleanliness of the tissue.
  • for example, the scores corresponding to the cleanliness classifications are 0, 1, 2 and 3, and the number of tissue images under each cleanliness classification can be obtained in this order. First, the number of tissue images whose cleanliness classification score is 0 is obtained, for example S0; then the ratio of the number of tissue images under the current cleanliness classification to the target total number is compared with the threshold corresponding to the current cleanliness classification. Here the target total number is Sum, and the threshold corresponding to each cleanliness classification can be set according to the actual application scenario. For example, if the cleanliness classification score 0 corresponds to the threshold N0, then when S0/Sum is greater than or equal to N0, the cleanliness of the tissue is determined to be 0.
  • otherwise, the next cleanliness classification is taken in turn until the classification with a score of 3 is reached; since this is the cleanliness classification with the largest score, its score can be directly determined as the cleanliness of the tissue, that is, the cleanliness of the tissue is determined to be 3.
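  • A minimal sketch of this scoring procedure, with example counts and thresholds (the threshold values are assumptions):

```python
def determine_cleanliness(counts, thresholds):
    """Boston-style cleanliness from per-category frame counts.

    counts: number of withdrawal-phase tissue images per cleanliness score,
            e.g. {0: S0, 1: S1, 2: S2, 3: S3}.
    thresholds: ratio threshold per score, e.g. {0: N0, 1: N1, 2: N2}.
    Scores are checked in order 0, 1, 2; if a score's share of all frames
    reaches its threshold it becomes the tissue cleanliness, otherwise the
    next score is tried, and score 3 (the largest) is returned by default."""
    total = sum(counts.values())
    for score in (0, 1, 2):
        if counts.get(score, 0) / total >= thresholds[score]:
            return score
    return 3

counts = {0: 5, 1: 10, 2: 60, 3: 25}              # 100 frames in total
thresholds = {0: 0.2, 1: 0.2, 2: 0.3}             # example thresholds only
print(determine_cleanliness(counts, thresholds))  # -> 2
```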
  • the tissue image in the process of retracting the mirror can be detected for the classification of cleanliness, thereby reducing the amount of data processing.
  • the overall evaluation of the cleanliness in the mirror removal process can be performed, the accuracy of the determined cleanliness can be improved, and it is more in line with the Boston scoring standard, which is convenient for users to use.
  • the polyp identification results, cleanliness and withdrawal speed obtained in the above process can be compiled into a report for output, so as to facilitate unified viewing and management.
  • the present disclosure also provides an auxiliary method for endoscopic image detection, as shown in FIG. 5 , the method includes:
  • in step 11, the endoscope image collected by the endoscope is processed in real time to obtain a tissue image;
  • in step 12, in response to determining that the endoscope is in the mirror-in mode, the target point of the tissue cavity corresponding to the tissue image is determined, and the target point is used to indicate the next target movement point of the endoscope at its current position; wherein, when there is a tissue cavity in the tissue image, the target point is the center point of the tissue cavity corresponding to the endoscopic image, and when there is no tissue cavity in the tissue image, the target point is a direction point of the tissue cavity corresponding to the endoscopic image;
  • in step 13, in response to determining that the endoscope is in the mirror-out mode, real-time polyp identification is performed on the tissue image obtained by the endoscope in the mirror-out mode, and a polyp identification result is obtained;
  • in step 14, the target point and the polyp recognition result are displayed in the image display area of the display interface.
  • the method also includes:
  • a display ratio and a blind area ratio are determined, and the sum of the display ratio and the blind area ratio is 1;
  • the determination of the display ratio and the blind area ratio according to the tissue image obtained by the endoscope in the mirror withdrawal mode includes:
  • a display ratio and a blind area ratio are determined according to the three-dimensional tissue image and the target fitting tissue image corresponding to the three-dimensional tissue image.
  • the determining the display ratio and blind area ratio according to the three-dimensional tissue image and the target fitting tissue image corresponding to the three-dimensional tissue image includes:
  • the ratio of the number of point clouds in the three-dimensional tissue image to the number of point clouds of the target fitting tissue image is determined as the display ratio, and the blind area ratio is determined according to the display ratio.
  • the method also includes:
  • the three-dimensional tissue image is displayed, and the three-dimensional target point and the navigation track are displayed in the three-dimensional tissue image.
  • the method also includes:
  • the detection mode selected by the user is determined as the detection mode of the endoscope
  • the real-time polyp identification of the tissue image obtained by the endoscope in the mirror withdrawal mode includes:
  • the method also includes:
  • the counter of each endoscope classification is updated, and when the value of the counter corresponding to any endoscope classification reaches the counting threshold, the counting operation of each of the counters is stopped, and according to The endoscope classification whose value of the counter reaches the counting threshold determines the detection mode of the endoscope, wherein each of the endoscope classifications has its corresponding counter, and the value of each of the counters is initially zero;
  • the real-time polyp identification of the tissue image obtained by the endoscope in the mirror withdrawal mode includes:
  • updating the counter of each endoscope classification according to the target classification corresponding to the tissue image includes:
  • when the target classification is an endoscope classification, the counter corresponding to that endoscope classification is incremented by one;
  • the method also includes:
  • if the parameter corresponding to the in-vivo image output by the image recognition model is greater than the first parameter threshold, it is determined that the image mode of the endoscope is the mirror-in mode; if the parameter corresponding to the ileocecal valve image output by the image recognition model is greater than the second parameter threshold, it is determined that the image mode of the endoscope is the mirror-out mode;
  • when the image mode of the endoscope is the in-vitro mode and the image mode of the endoscope determined based on the image recognition model is the mirror-in mode, the image mode of the endoscope is switched to the mirror-in mode; when the image mode of the endoscope is the mirror-in mode and the image mode of the endoscope determined based on the image recognition model is the mirror-out mode, the image mode of the endoscope is switched to the mirror-out mode and second prompt information is output, the second prompt information being used as a reminder to enter the mirror-out mode.
  • the method also includes:
  • when the image mode of the endoscope is the withdrawal mode, a plurality of historical tissue images corresponding to the current tissue image are obtained, the similarity between each of the historical tissue images and the current tissue image is calculated, and the withdrawal speed is determined according to the plurality of similarities;
  • the withdrawal speed corresponding to the endoscope is displayed in the image display area.
  • the method also includes:
  • the cleanliness of the tissue corresponding to the tissue image is determined.
  • the determining the cleanliness of the tissue corresponding to the tissue image according to the number of tissue images under each cleanliness category includes:
  • the score corresponding to the cleanliness classification is used as the cleanliness of the tissue;
  • in the case that the next cleanliness classification is not the cleanliness classification with the largest score, the next cleanliness classification is used as the new current cleanliness classification, and the step of determining the ratio of the number of tissue images under the current cleanliness classification to the target total quantity and the threshold corresponding to the current cleanliness classification is re-executed; in the case that the next cleanliness classification is the cleanliness classification with the largest score, the score of the next cleanliness classification is determined as the cleanliness of the tissue.
  • the real-time identification of polyps on the tissue images obtained by the endoscope in the retracting mirror mode includes:
  • the tissue image obtained by the endoscope in the withdrawal mode is detected, and when polyps are detected in the tissue image, the position information of the polyps is determined;
  • the tissue image is displayed in the image display area, and the identification corresponding to the position information of the polyp and the classification are displayed in the tissue image.
  • FIG. 6 shows a schematic structural diagram of an electronic device 600 suitable for implementing an embodiment of the present disclosure.
  • the terminal equipment in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and vehicle-mounted terminals (for example, car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 6 is only an example, and should not limit the functions and application scope of the embodiments of the present disclosure.
  • an electronic device 600 may include a processing device (for example, a central processing unit, a graphics processing unit, etc.) 601, which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603;
  • in the RAM 603, various programs and data necessary for the operation of the electronic device 600 are also stored.
  • the processing device 601, ROM 602, and RAM 603 are connected to each other through a bus 604.
  • An input/output (I/O) interface 605 is also connected to the bus 604 .
  • the following devices may be connected to the I/O interface 605: an input device 606 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 608 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 609.
  • the communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While FIG. 6 shows electronic device 600 having various means, it should be understood that implementing or having all of the means shown is not a requirement. More or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602.
  • when the computer program is executed by the processing device 601, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
  • a computer readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to, an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • the client and the server may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (for example, a communication network);
  • examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
  • the computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: process, in real time, the endoscopic images collected by the endoscope to obtain tissue images; in response to determining that the endoscope is in the insertion mode, determine a target point of the tissue cavity corresponding to the tissue image, the target point being used to indicate the next target movement point of the endoscope advancing from its current position, wherein, when there is a tissue cavity in the tissue image, the target point is the center point of the tissue cavity corresponding to the endoscopic image, and when there is no tissue cavity in the tissue image, the target point is the direction point of the tissue cavity corresponding to the endoscopic image; in response to determining that the endoscope is in the withdrawal mode, perform real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode to obtain a polyp identification result; and display the target point and the polyp identification result in the image display area of the display interface.
  • computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or can be implemented by a combination of dedicated hardware and computer instructions.
  • the modules involved in the embodiments described in the present disclosure may be implemented by software or by hardware, and in some cases the name of a module does not constitute a limitation on the module itself.
  • for example, the image processing module may also be described as "a module for processing, in real time, the endoscopic images collected by the endoscope to obtain tissue images".
  • exemplary types of hardware logic components that may be used include, without limitation, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), and Complex Programmable Logic Devices (CPLDs).
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer discs, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • Example 1 provides an endoscopic image detection assistance system, the system comprising:
  • the image processing module is used to process the endoscope image collected by the endoscope in real time to obtain tissue images
  • the cavity positioning module is used to determine the target point of the tissue cavity corresponding to the tissue image, and the target point is used to indicate the next target movement point of the endoscope at its current position; wherein, When there is a tissue cavity in the tissue image, the target point is the center point of the tissue cavity corresponding to the endoscopic image, and when there is no tissue cavity in the tissue image, the target point is The direction point of the tissue cavity corresponding to the endoscopic image;
  • the polyp identification module is used to perform real-time polyp identification on the tissue image obtained by the endoscope in the retracting mirror mode, and obtain a polyp identification result;
  • a display module configured to display the target point and the polyp recognition result.
  • Example 2 provides the system of Example 1, wherein the system further includes:
  • a blind area ratio detection module configured to determine a display ratio and a blind area ratio based on the tissue image obtained by the endoscope in the mirror withdrawal mode, and the sum of the display ratio and the blind area ratio is 1;
  • the display module is also used to display the display ratio and/or the blind area ratio, and to display first prompt information when the blind area ratio is greater than a blind area threshold, where the first prompt information is used to indicate that there is a risk of missed inspection.
  • Example 3 provides the system of Example 2, wherein the blind spot ratio detection module is further used for: performing three-dimensional reconstruction according to the tissue image obtained by the endoscope in the retracting mode , obtaining a three-dimensional tissue image; and determining a display ratio and a blind area ratio according to the three-dimensional tissue image and the target fitting tissue image corresponding to the three-dimensional tissue image.
  • Example 4 provides the system of Example 3, wherein the blind area ratio detection module is further used for:
  • the ratio of the number of point clouds in the three-dimensional tissue image to the number of point clouds of the target fitting tissue image is determined as the display ratio, and the blind area ratio is determined according to the display ratio.
  • Example 5 provides the system of Example 1, wherein the system further includes:
  • the three-dimensional positioning module is used to perform three-dimensional reconstruction according to the tissue image obtained by the endoscope in the endoscopic mode to obtain a three-dimensional tissue image;
  • a point corresponding to the target point in the three-dimensional tissue image is determined as a three-dimensional target point;
  • a posture determination module configured to determine the posture information of the endoscope according to the tissue image corresponding to the current location and the tissue image in the historical period;
  • a trajectory determination module configured to generate a navigation trajectory according to the current location, the attitude information and the three-dimensional target point;
  • the display module is also used to display the posture information of the endoscope and the three-dimensional tissue image, and display the three-dimensional target point and the navigation track in the three-dimensional tissue image.
  • Example 6 provides the system of Example 1, wherein the system further includes:
  • An image classification module configured to classify the tissue image of the endoscope in the endoscope mode, and obtain the target classification corresponding to the tissue image, wherein the target classification includes abnormal classification and multiple endoscope classifications;
  • the detection mode recognition module is used to update the counter of each endoscope classification according to the target classification corresponding to the tissue image, to stop the counting operation of each of the counters when the value of the counter corresponding to any endoscope classification reaches a counting threshold, and to determine the detection mode of the endoscope according to the endoscope classification whose counter value reaches the counting threshold, where each of the endoscope classifications has its corresponding counter, and the value of each of the counters is initially zero;
  • the polyp identification module is further configured to: perform real-time polyp identification on the tissue image obtained by the endoscope in the retracting mode according to the polyp identification model corresponding to the detection mode determined by the detection mode identification module.
  • Example 7 provides the system of Example 6, wherein the detection mode recognition module updates the counter of each endoscope classification according to the target classification corresponding to the tissue image in the following manner:
  • the target classification is an endoscope classification
  • Example 8 provides the system of Example 1, wherein the system further includes:
  • a mode recognition module, configured to recognize the tissue image of the endoscope according to an image recognition model, to determine that the image mode of the endoscope is the insertion mode when the parameter corresponding to the in-vivo image output by the image recognition model is greater than a first parameter threshold, and to determine that the image mode of the endoscope is the withdrawal mode when the parameter corresponding to the ileocecal valve image output by the image recognition model is greater than a second parameter threshold;
  • a mode switching module, configured to switch the image mode of the endoscope to the insertion mode when the image mode of the endoscope is the in vitro mode and the mode recognition module determines that the image mode of the endoscope is the insertion mode, and to switch the image mode of the endoscope to the withdrawal mode and output second prompt information when the image mode of the endoscope is the insertion mode and the mode recognition module determines that the image mode of the endoscope is the withdrawal mode, where the second prompt information is used as a reminder that the withdrawal mode has been entered.
  • Example 9 provides the system of Example 1, wherein the system further includes:
  • a speed determination module, configured to acquire, when the image mode of the endoscope is the withdrawal mode, a plurality of historical tissue images corresponding to the current tissue image, calculate the similarity between each of the historical tissue images and the current tissue image, and determine the withdrawal speed according to the plurality of similarities;
  • the display module is also used to display the withdrawal speed corresponding to the endoscope.
  • Example 10 provides the system of Example 9, wherein the system further includes:
  • the cleanliness determination module is used to classify the tissue images of the endoscope in the withdrawal mode, determine the number of tissue images under each cleanliness classification, and determine the cleanliness of the tissue corresponding to the tissue images according to the number of tissue images under each cleanliness classification.
  • Example 11 provides the system of Example 1, wherein the polyp identification module includes:
  • the polyp detection sub-module is used to detect the tissue image obtained by the endoscope in the withdrawal mode based on the detection model, and determine the position information of the polyp when it is detected that there is a polyp in the tissue image;
  • a polyp identification submodule configured to extract a detected image corresponding to the position information from the tissue image; determine the classification of the polyp according to the detected image and the identification model;
  • the display module is further configured to display the tissue image, and display the identification corresponding to the position information of the polyp and the classification in the tissue image.
  • Example 12 provides an endoscopic image detection assistance method, the method comprising:
  • processing, in real time, the endoscopic images collected by the endoscope to obtain tissue images; in response to determining that the endoscope is in the insertion mode, determining a target point of the tissue cavity corresponding to the tissue image, the target point being used to indicate the next target movement point of the endoscope advancing from its current position;
  • wherein, when there is a tissue cavity in the tissue image, the target point is the center point of the tissue cavity corresponding to the endoscopic image, and when there is no tissue cavity in the tissue image, the target point is the direction point of the tissue cavity corresponding to the endoscopic image; in response to determining that the endoscope is in the withdrawal mode, performing real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode to obtain a polyp identification result;
  • the target point and the polyp recognition result are displayed.
  • Example 13 provides the method of Example 12, wherein the method further includes:
  • a display ratio and a blind area ratio are determined, and the sum of the display ratio and the blind area ratio is 1;
  • Example 14 provides the method of Example 13, wherein the determining the display ratio and the blind area ratio according to the tissue image obtained by the endoscope in the retracting mirror mode includes:
  • three-dimensional reconstruction is performed according to the tissue image obtained by the endoscope in the withdrawal mode to obtain a three-dimensional tissue image; and a display ratio and a blind area ratio are determined according to the three-dimensional tissue image and the target fitting tissue image corresponding to the three-dimensional tissue image.
  • Example 15 provides the method of Example 14, wherein the determining of the display ratio and the blind area ratio according to the three-dimensional tissue image and the target fitting tissue image corresponding to the three-dimensional tissue image includes:
  • the ratio of the number of point clouds in the three-dimensional tissue image to the number of point clouds of the target fitting tissue image is determined as the display ratio, and the blind area ratio is determined according to the display ratio.
  • Example 16 provides the method of Example 12, wherein the method further includes:
  • the three-dimensional tissue image is displayed, and the three-dimensional target point and the navigation track are displayed in the three-dimensional tissue image.
  • Example 17 provides the method of Example 12, wherein the method further includes:
  • the detection mode selected by the user is determined as the detection mode of the endoscope
  • the real-time polyp identification of the tissue image obtained by the endoscope in the mirror withdrawal mode includes:
  • Example 18 provides the method of Example 12, wherein the method further includes:
  • the counter of each endoscope classification is updated according to the target classification corresponding to the tissue image; when the value of the counter corresponding to any endoscope classification reaches a counting threshold, the counting operation of each of the counters is stopped, and the detection mode of the endoscope is determined according to the endoscope classification whose counter value reaches the counting threshold, where each of the endoscope classifications has its corresponding counter, and the value of each of the counters is initially zero;
  • the real-time polyp identification of the tissue image obtained by the endoscope in the mirror withdrawal mode includes:
  • Example 19 provides the method of Example 18, wherein the updating of the counter of each endoscope classification according to the target classification corresponding to the tissue image includes:
  • the target classification is an endoscope classification
  • Example 20 provides the method of Example 12, wherein the method further includes:
  • in the case that the parameter corresponding to the in-vivo image output by the image recognition model is greater than a first parameter threshold, it is determined that the image mode of the endoscope is the insertion mode; in the case that the parameter corresponding to the ileocecal valve image output by the image recognition model is greater than a second parameter threshold, it is determined that the image mode of the endoscope is the withdrawal mode;
  • when the image mode of the endoscope is the in vitro mode and the image mode of the endoscope determined based on the image recognition model is the insertion mode, the image mode of the endoscope is switched to the insertion mode; when the image mode of the endoscope is the insertion mode and the image mode of the endoscope determined based on the image recognition model is the withdrawal mode, the image mode of the endoscope is switched to the withdrawal mode, and second prompt information is output, where the second prompt information is used as a reminder that the withdrawal mode has been entered.
  • Example 21 provides the method of Example 12, wherein the method further includes:
  • when the image mode of the endoscope is the withdrawal mode, a plurality of historical tissue images corresponding to the current tissue image are obtained, the similarity between each of the historical tissue images and the current tissue image is calculated, and the withdrawal speed is determined according to the plurality of similarities;
  • the withdrawal speed corresponding to the endoscope is displayed in the image display area.
  • Example 22 provides the method of Example 21, wherein the method further includes:
  • the cleanliness of the tissue corresponding to the tissue image is determined.
  • Example 23 provides the method of Example 12, wherein the real-time identification of polyps on the tissue image obtained by the endoscope in the retracting mirror mode includes:
  • the tissue image obtained by the endoscope in the withdrawal mode is detected, and when polyps are detected in the tissue image, the position information of the polyps is determined;
  • the tissue image is displayed in the image display area, and the identification corresponding to the position information of the polyp and the classification are displayed in the tissue image.
  • Example 24 provides a computer-readable medium on which a computer program is stored, and when the program is executed by a processing device, the steps of the method described in any one of Examples 12-23 are implemented .
  • Example 25 provides an electronic device, comprising:
  • a processing device configured to execute the computer program in the storage device, so as to implement the steps of the method in any one of Examples 12-23.

Abstract

The present invention relates to an endoscope image detection auxiliary system and method, a medium and an electronic device. The system comprises an image processing module which is used for processing in real time an endoscope image collected by an endoscope so as to obtain a tissue image; a cavity positioning module which is used for determining a target point of a tissue cavity corresponding to the tissue image, wherein, when there is a tissue cavity in the tissue image, the target point is a center point of the tissue cavity corresponding to the endoscope image, and when there is no tissue cavity in the tissue image, the target point is a direction point of the tissue cavity corresponding to the endoscope image; a polyp recognition module which is used for carrying out real-time polyp recognition on the tissue image obtained by the endoscope in a withdrawal mode, thus obtaining a polyp recognition result; and a display module which is used for displaying the target point and the polyp recognition result.

Description

Endoscope image detection auxiliary system and method, medium and electronic device
This disclosure claims priority to Chinese Patent Application No. 202111643635.X, filed on December 29, 2021 and entitled "Endoscope Image Detection Auxiliary System, Method, Medium and Electronic Device", the entire content of which is incorporated herein by reference.
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an endoscope image detection auxiliary system and method, a medium and an electronic device.
Background
As a commonly used examination method, endoscopy enables physicians to observe the real state of the internal environment of the human body more intuitively. It is widely used in the medical field for the detection of polyp lesions and cancer, so that patients can receive effective intervention and treatment in the early stage of a disease.
Endoscopy is usually divided into an insertion process and a withdrawal process, and the insertion process is usually controlled by the physician. However, owing to the complexity of the internal environment of the human body, differences in the imaging quality of image acquisition devices, differences in physicians' operating skills, uneven quality of patients' bowel preparation, and the constant relative motion between the endoscope and the intestinal tract, the endoscopic images may suffer from bubble occlusion, overexposure, motion blur and the like, and blind areas of the field of view are prone to appear. As a result, physicians need to rely on their own experience to control the endoscope, which can lead to slow insertion or even failure of insertion and may even cause harm to the examinee; this not only costs the physician more effort and time, but also places high demands on the physician's skill and experience.
Summary
This Summary is provided to introduce concepts in a simplified form that are described in detail in the Detailed Description below. This Summary is not intended to identify key features or essential features of the claimed technical solution, nor is it intended to be used to limit the scope of the claimed technical solution.
In a first aspect, the present disclosure provides an endoscope image detection auxiliary system, the system including:
an image processing module, configured to process, in real time, the endoscopic images collected by an endoscope to obtain tissue images;
a cavity positioning module, configured to determine a target point of the tissue cavity corresponding to the tissue image, the target point being used to indicate the next target movement point of the endoscope advancing from its current position; wherein, when there is a tissue cavity in the tissue image, the target point is the center point of the tissue cavity corresponding to the endoscopic image, and when there is no tissue cavity in the tissue image, the target point is the direction point of the tissue cavity corresponding to the endoscopic image;
a polyp identification module, configured to perform real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode to obtain a polyp identification result; and
a display module, configured to display the target point and the polyp identification result.
In a second aspect, the present disclosure provides an endoscope image detection assistance method, the method including:
processing, in real time, the endoscopic images collected by an endoscope to obtain tissue images;
in response to determining that the endoscope is in the insertion mode, determining a target point of the tissue cavity corresponding to the tissue image, the target point being used to indicate the next target movement point of the endoscope advancing from its current position; wherein, when there is a tissue cavity in the tissue image, the target point is the center point of the tissue cavity corresponding to the endoscopic image, and when there is no tissue cavity in the tissue image, the target point is the direction point of the tissue cavity corresponding to the endoscopic image;
in response to determining that the endoscope is in the withdrawal mode, performing real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode to obtain a polyp identification result; and
displaying the target point and the polyp identification result in the image display area of the display interface.
In a third aspect, the present disclosure provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processing device, implements the steps of the method described in the second aspect.
In a fourth aspect, the present disclosure provides an electronic device, including:
a storage device on which a computer program is stored; and
a processing device, configured to execute the computer program in the storage device to implement the steps of the method described in the second aspect.
Thus, with the above technical solution, during the insertion of the endoscope, the target point in the tissue image can be determined through real-time detection and recognition of the cavity in the tissue image, which provides reliable and accurate automatic navigation for the insertion process, improves the efficiency and accuracy of endoscope insertion, lowers the high demands that endoscope use places on physician experience, avoids harm to the examinee, and improves the user experience.
Other features and advantages of the present disclosure will be described in detail in the Detailed Description that follows.
Brief Description of the Drawings
The above and other features, advantages and aspects of the embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that parts and elements are not necessarily drawn to scale. In the drawings:
FIG. 1 is a block diagram of an endoscope image detection auxiliary system provided according to an embodiment of the present disclosure;
FIG. 2A and FIG. 2B are schematic diagrams of the display of a target point provided according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a display interface provided according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a three-dimensionally reconstructed intestinal tract;
FIG. 5 is a flowchart of an endoscope image detection assistance method provided according to an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device suitable for implementing an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the protection scope of the present disclosure.
It should be understood that the steps described in the method embodiments of the present disclosure may be performed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit performing the steps shown. The scope of the present disclosure is not limited in this regard.
As used herein, the term "include" and its variants are open-ended, that is, "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different devices, modules or units, and are not used to limit the order of the functions performed by these devices, modules or units or their interdependence.
It should be noted that the modifiers "a/an" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive, and those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of the messages or information exchanged between multiple devices in the embodiments of the present disclosure are for illustrative purposes only, and are not used to limit the scope of these messages or information.
FIG. 1 is a block diagram of an endoscope image detection auxiliary system provided according to an embodiment of the present disclosure. As shown in FIG. 1, the endoscope image detection auxiliary system 10 may include:
an image processing module 100, configured to process, in real time, the endoscopic images collected by the endoscope to obtain tissue images.
In medical endoscopic image recognition, the endoscope captures images in real time inside a living body, for example a human body, to obtain a video stream, and frames can be extracted from the video stream according to a preset acquisition period to obtain endoscopic images. As an example, the collected endoscopic image can be processed, for example by cropping, normalization and resampling, to a preset size to obtain the tissue image, that is, the image of the tissue involved in the current examination, so as to facilitate uniform processing of the tissue images. For example, the currently examined tissue may be the intestinal tract, the thoracic cavity, the abdominal cavity, and the like.
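The following is a minimal sketch, not part of the disclosure, of what the frame extraction and resizing described above might look like in practice. It assumes OpenCV and NumPy are available, and the sampling period and preset size are illustrative placeholders rather than values fixed by the embodiment.

```python
import cv2
import numpy as np

def sample_and_preprocess(video_path, period=5, size=(512, 512)):
    """Extract every `period`-th frame from the endoscope video stream and
    normalize each frame to a fixed size, as a stand-in for the image
    processing module's real-time pipeline."""
    capture = cv2.VideoCapture(video_path)
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % period == 0:
            # Resize (resample) to the preset size and scale pixel values to [0, 1].
            tissue = cv2.resize(frame, size, interpolation=cv2.INTER_AREA)
            yield tissue.astype(np.float32) / 255.0
        frame_index += 1
    capture.release()
```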
The raw signal of the images collected by the endoscope usually contains device information of the endoscope and personal information of the examinee. In an embodiment of the present disclosure, the image processing module can detect the tissue image from the endoscopic image based on the Yolo V4 algorithm, and then crop out and delete the device information and personal information, so that the tissue image only contains the in-vivo image corresponding to the endoscope, which protects private information and facilitates subsequent image processing. In addition, during the endoscopic examination, many invalid images may be collected while the endoscope moves, for example images occluded by obstacles or with too low clarity, owing to an unstable insertion process or an unsuitable endoscope position. These invalid images would interfere with the endoscopic examination results. Therefore, after the endoscopic image is obtained, it can first be determined whether the endoscopic image is valid; if the endoscopic image is invalid, it can be discarded directly. If the endoscopic image is valid, the corresponding tissue image is determined based on the endoscopic image, so as to reduce unnecessary data processing and improve processing speed. For example, a pre-trained recognition model can be used to recognize the tissue image and determine whether it is valid; the recognition model may, for example, be obtained by training a convolutional neural network, which is not specifically limited in the present disclosure.
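The disclosure leaves the validity check to a trained recognition model. As a rough, non-learned stand-in that only illustrates where such a filter sits in the pipeline, one could screen out obviously blurred or badly exposed frames with simple image statistics; the thresholds below are arbitrary illustrative values, not parameters of the embodiment.

```python
import cv2
import numpy as np

def is_probably_valid(tissue_image, blur_threshold=60.0, dark_limit=0.08, bright_limit=0.92):
    """Cheap stand-in for the learned validity model: reject frames that are
    strongly motion-blurred, nearly black, or overexposed before further processing.
    `tissue_image` is assumed to be a float BGR array in [0, 1]."""
    gray = cv2.cvtColor((tissue_image * 255).astype(np.uint8), cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # low variance -> likely blur
    mean_brightness = gray.mean() / 255.0
    return sharpness >= blur_threshold and dark_limit <= mean_brightness <= bright_limit
```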
a cavity positioning module 200, configured to determine a target point of the tissue cavity corresponding to the tissue image, the target point being used to indicate the next target movement point of the endoscope advancing from its current position; wherein, when there is a tissue cavity in the tissue image, the target point is the center point of the tissue cavity corresponding to the endoscopic image, and when there is no tissue cavity in the tissue image, the target point is the direction point of the tissue cavity corresponding to the endoscopic image.
As an example, if there is a tissue cavity in the tissue image, the target point is the center point of the tissue cavity corresponding to the endoscopic image, such as point A in FIG. 2A; if there is no tissue cavity in the tissue image, the target point is the direction point of the tissue cavity corresponding to the endoscopic image. As shown in FIG. 2B, when there is no tissue cavity in the tissue image, a direction point for navigation can be determined, that is, point B in FIG. 2B.
For example, the physician user can select the current mode, that is, the insertion mode or the withdrawal mode, while using the endoscope. The insertion mode is used to control the endoscope until it reaches the ileocecal region, and the withdrawal mode is used to examine the tissue inside the human body. During insertion, the endoscope needs to move along the center point of the tissue cavity as far as possible, so that contact with the mucosa on the tissue surface of the examinee can be effectively avoided and harm to the examinee is prevented. Therefore, in this embodiment, when it is determined that the endoscope is in the insertion mode, it can be further determined whether there is a tissue cavity in the tissue image.
For example, a detection model can be pre-trained to recognize and detect whether there is a tissue cavity in the tissue image. The detection model can be trained on pre-collected training samples, where each training sample contains a training image and a label corresponding to the training image, and the label indicates whether there is a tissue cavity in the training image; the model is then updated and trained with the training image as the model input and the corresponding label as the target output, so as to obtain the detection model. For example, the detection model may be a CNN (Convolutional Neural Network), an LSTM (Long Short-Term Memory) network, the Encoder of a Transformer, or the like, which is not specifically limited in the present disclosure.
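A hedged sketch of what such a cavity-presence model could look like as a small CNN in PyTorch is shown below. The architecture, loss and training step are illustrative choices; the embodiment only requires a binary classifier trained from (image, cavity/no-cavity) pairs.

```python
import torch
import torch.nn as nn

class CavityPresenceNet(nn.Module):
    """Small CNN that predicts whether a tissue cavity is visible in the frame."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train_step(model, optimizer, images, labels):
    """One supervised update: training image in, binary cavity label as target output."""
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```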
As an example, the tissue cavity may be an intestinal lumen, a gastric cavity, or the like. Taking the intestinal lumen as an example, if a tissue cavity exists in the tissue image of the intestinal lumen collected after the endoscope enters the intestinal lumen, the center point of the intestinal lumen can be further determined, that is, the center of the cross-section of the space enclosed by the intestinal wall, so that the endoscope moves forward along the center point of the intestinal lumen, thereby realizing automatic navigation for endoscope insertion.
For example, the center point of the tissue cavity can be recognized by a key point recognition model. In this embodiment, the tissue images can be annotated by professional physicians based on experience. To facilitate annotation, the position of the center point of the tissue cavity in the training image can be circled; the center of the annotated circle is the position of the center point, that is, the label corresponding to the training image, so that a training sample containing the training image and the label is obtained. Moreover, in order to improve the generalization of the model, the training samples can also contain unlabeled training images. The key point recognition model may include a student sub-network, a teacher sub-network and a discriminator sub-network, where the student sub-network and the teacher sub-network have the same network structure, and the teacher sub-network is used to determine the predicted annotation features corresponding to the training images input to the student sub-network. During the training of the key point recognition model, the weight of the prediction loss corresponding to the student sub-network is determined based on the predicted annotation features of the teacher sub-network and the discriminator sub-network.
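The exact interaction of the three sub-networks is not spelled out beyond the paragraph above, so the sketch below should be read as one plausible interpretation rather than the disclosed training procedure: the teacher is kept as an EMA copy of the student, the teacher's heatmap on a second view serves as the predicted annotation feature, and the discriminator's score on that heatmap weights the student's consistency loss. The EMA update, the loss forms and the module interfaces are all assumptions.

```python
import torch
import torch.nn.functional as F

def ema_update(teacher, student, decay=0.99):
    """Keep the teacher as an exponential moving average of the student weights
    (a common choice; the disclosure does not state how the teacher is updated)."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(decay).add_(s, alpha=1.0 - decay)

def keypoint_train_step(student, teacher, discriminator, optimizer,
                        view_a, view_b, target_heatmap=None):
    """view_a / view_b: two differently augmented versions of the same tissue image.
    The teacher's heatmap on view_b acts as the annotation feature for view_a, and
    the discriminator's score on that heatmap weights the student's prediction loss."""
    optimizer.zero_grad()
    student_heatmap = student(view_a)
    with torch.no_grad():
        teacher_heatmap = teacher(view_b)
        weight = torch.sigmoid(discriminator(teacher_heatmap)).mean()
    # Consistency with the teacher, weighted by how label-like the discriminator
    # judges the teacher's prediction to be (usable on unlabeled images as well).
    loss = weight * F.mse_loss(student_heatmap, teacher_heatmap)
    if target_heatmap is not None:              # labeled sample: add supervised term
        loss = loss + F.mse_loss(student_heatmap, target_heatmap)
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```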
In order to improve the accuracy of model recognition, different preprocessing methods can be applied to a training image based on the training samples, so as to obtain different processed images corresponding to the same training image. For example, the preprocessing may be data augmentation, such as color, brightness, chroma and saturation transformations and other non-affine transformations, so as to ensure that positions are not deformed. The different processed images can then be used as the input images of the teacher sub-network and the student sub-network respectively to train the key point recognition model.
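For instance, the non-affine augmentations mentioned above map naturally onto photometric jitter; the jitter magnitudes below are illustrative only, and torchvision is assumed as the augmentation library.

```python
from torchvision import transforms

# Two non-affine augmentation pipelines: colour, brightness, contrast, saturation
# and hue are perturbed, but no geometric transform is applied, so the annotated
# centre point keeps the same pixel position in both processed images.
student_view = transforms.Compose([
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.05),
    transforms.ToTensor(),
])
teacher_view = transforms.Compose([
    transforms.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.02),
    transforms.ToTensor(),
])
```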
If no tissue cavity exists in the tissue image of the intestinal lumen collected after the endoscope enters the intestinal lumen, the direction point of the intestinal lumen can be further determined. The direction point is the position, relative to the tissue image, of the predicted center point of the tissue cavity, and indicates that the endoscope should deflect toward the direction point, so as to provide direction guidance for the advancement of the endoscope.
In this embodiment, when it is determined that there is no tissue cavity in the tissue image, the most recent N tissue images, including this tissue image, can be formed into an image sequence, and the direction point is predicted based on the image sequence and a direction point recognition model. N is a positive integer representing the number of tissue images contained in the image sequence, and can be set according to the actual application scenario. For example, the direction point recognition model includes a convolutional sub-network, a temporal recurrent sub-network and a decoding sub-network, where the convolutional sub-network can be used to obtain the spatial features of the image sequence, the temporal recurrent sub-network can be used to obtain the temporal features of the image sequence, and the decoding sub-network can be used to decode based on the spatial features and the temporal features to obtain the direction point, which ensures the accuracy of direction point recognition.
For example, the number of training images contained in each training image sequence of the direction point recognition model can be set according to the actual use scenario. For example, N can be set to 5, that is, each training image sequence can contain 5 training images, so that the direction point of the tissue cavity in the current state is predicted based on the most recent 5 training images. The label image corresponding to the training image sequence indicates the position of the direction point of the tissue cavity predicted for the last image of the sequence, so that the direction point recognition model can be trained based on the above training image sequences.
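Below is a sketch of one way the convolutional, temporal recurrent and decoding sub-networks could be wired together. The layer sizes, the choice of an LSTM for the recurrent part and the normalized (x, y) output are assumptions for illustration; the disclosure only specifies the three sub-network roles.

```python
import torch
import torch.nn as nn

class DirectionPointNet(nn.Module):
    """Convolutional sub-network for per-frame spatial features, an LSTM for
    temporal features over the last N frames, and a small decoder that regresses
    the (x, y) direction point for the most recent frame."""
    def __init__(self, feature_dim=128, hidden_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feature_dim),
        )
        self.temporal = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.decoder = nn.Sequential(nn.Linear(hidden_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, sequence):                  # sequence: (batch, N, 3, H, W)
        b, n = sequence.shape[:2]
        frame_features = self.backbone(sequence.flatten(0, 1)).view(b, n, -1)
        temporal_features, _ = self.temporal(frame_features)
        return self.decoder(temporal_features[:, -1])   # predicted (x, y) offset

model = DirectionPointNet()
prediction = model(torch.randn(1, 5, 3, 224, 224))       # N = 5 recent tissue images
```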
a polyp identification module 300, configured to perform real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode to obtain a polyp identification result.
a display module 400, configured to display the target point and the polyp identification result.
During the withdrawal of the endoscope, the physician can examine the human body, for example to detect polyps. In this process, in order to facilitate the physician's observation, real-time polyp identification can be performed on the tissue images obtained during withdrawal, so as to provide a data reference for the physician during the real-time examination.
Thus, with the above technical solution, during the insertion of the endoscope, the target point in the tissue image can be determined through real-time detection and recognition of the cavity in the tissue image, which provides reliable and accurate automatic navigation for the insertion process, improves the efficiency and accuracy of endoscope insertion, lowers the high demands that endoscope use places on physician experience, avoids harm to the examinee, and improves the user experience.
In a possible embodiment, the polyp identification module includes:
a polyp detection sub-module, configured to detect, based on a detection model, the tissue image obtained by the endoscope in the withdrawal mode, and to determine the position information of a polyp when it is detected that a polyp exists in the tissue image.
As an example, the polyp detection sub-module can perform detection with a detection model implemented with GFLv2 (Generalized Focal Loss V2). While outputting a predicted position, this detection model can judge the prediction confidence of the predicted position according to the distribution; its training method belongs to the prior art and is not repeated here. For example, predicted positions whose prediction confidence is greater than a threshold can be displayed in the image display area of the display interface, and the prediction confidence can also be displayed. As shown in FIG. 3, the predicted position can be displayed in the form of a solid-line detection box as an auxiliary prompt to the physician, so as to improve the accuracy of polyp detection and the detection level for polyps with high specificity.
In some embodiments, before the tissue image is input into the detection model of the polyp detection sub-module, the tissue image can be preprocessed by a preprocessing module. For example, the tissue images can be uniformly standardized to a preset size, for example 512*512, so that the image processed by the preprocessing module is input into the detection model for detection, and the position information of the polyps detected in the tissue image is obtained.
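A small sketch of the post-processing step implied above: keeping only predictions whose confidence exceeds the threshold and drawing them as solid boxes with their confidence value. The detection tuple format is an assumption for illustration; it is not an API of GFLv2 or of the disclosed detection model.

```python
import cv2

def show_polyp_detections(tissue_image, detections, confidence_threshold=0.5):
    """detections: iterable of (x1, y1, x2, y2, confidence) in pixel coordinates,
    as might be produced by a GFLv2-style detector on the 512x512 input.
    Boxes above the threshold are drawn as solid rectangles with their confidence."""
    display = tissue_image.copy()
    for x1, y1, x2, y2, confidence in detections:
        if confidence < confidence_threshold:
            continue
        cv2.rectangle(display, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
        cv2.putText(display, f"{confidence:.2f}", (int(x1), int(y1) - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return display
```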
息肉识别子模块,用于从所述组织图像中提取与所述位置信息对应的检测图像;根据所述检测图像和识别模型确定所述息肉的分类;A polyp identification submodule, configured to extract a detected image corresponding to the position information from the tissue image; determine the classification of the polyp according to the detected image and the identification model;
所述显示模块还用于显示所述组织图像,并在所述组织图像中显示所述息肉的位置信息对应的标识和所述分类。The display module is further configured to display the tissue image, and display the identification corresponding to the position information of the polyp and the classification in the tissue image.
作为示例,在息肉检测子模块确定出组织图像中的息肉的位置信息后,可以基于该位置信息从组织图像中提取出该位置信息对应的检测图像。例如,在提取检测图像时,可以将该位置信息对应的区域放大后提取,以保证提取到的检测图像的完整性。示例地,如图3所示,为检测到的息肉的位置信息对应的检测框,则基于该位置进行检测图像提取时,可以将该检测框放大1.5倍,即提取图3中虚线框中对应的图像作为该检测图像。之后将检测图像输入息肉识别子模块进行息肉分类识别。As an example, after the polyp detection submodule determines the position information of the polyp in the tissue image, the detection image corresponding to the position information may be extracted from the tissue image based on the position information. For example, when extracting a detection image, the region corresponding to the location information may be enlarged and then extracted, so as to ensure the integrity of the extracted detection image. For example, as shown in Figure 3, it is the detection frame corresponding to the position information of the detected polyp, then when the detection image is extracted based on this position, the detection frame can be enlarged by 1.5 times, that is, the corresponding frame in the dotted line frame in Figure 3 can be extracted The image of is used as the detection image. Afterwards, the detection image is input into the polyp recognition sub-module for polyp classification and recognition.
The polyp identification sub-module may include a classification model implemented on the basis of resnet18. Corresponding training samples may be obtained by annotating and classifying a large number of polyp images in advance, and the model is then trained with the polyp images as input and the annotated classifications as target output to obtain the classification model. For example, the classification categories may cover adenoma, hyperplastic polyp, carcinoma, inflammatory polyp, submucosal tumor and other classes. Similarly, before the detection image is input into the classification model in the polyp identification sub-module, the detection image may be preprocessed by the preprocessing module, for example uniformly normalized to a preset size such as 256*256. The image processed by the preprocessing module is then input into the classification model to obtain the polyp identification result, so that the identifier corresponding to the position information of the polyp and the classification can be displayed in the tissue image in the image display area of the display interface, where the identifier corresponding to the position information may be represented as a square or circular detection box.
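A hedged sketch of such a classifier is given below using torchvision's resnet18; the five class names follow the text, while the weights, the input pipeline and the helper names are assumptions (a deployed model would be trained on annotated polyp crops as described).

```python
# Sketch of a resnet18-based polyp classifier; untrained weights are used here
# purely to illustrate the interface.
import torch
from torchvision import models, transforms

CLASSES = ["adenoma", "hyperplastic polyp", "carcinoma",
           "inflammatory polyp", "submucosal tumor"]

model = models.resnet18(num_classes=len(CLASSES))
model.eval()

to_input = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((256, 256)),  # preset size from the example above
])

def classify_polyp(crop) -> str:
    """Return the predicted class name for a cropped detection image (PIL image or ndarray)."""
    with torch.no_grad():
        logits = model(to_input(crop).unsqueeze(0))
    return CLASSES[int(logits.argmax(dim=1))]
```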
Thus, through the above technical solution, the position of a polyp in the tissue image is detected first, which improves the accuracy of polyp detection in the tissue image and reduces the missed-detection rate of polyps. Further, the detection image used for polyp identification can be extracted based on the detected position information, which reduces the amount of data to be processed for polyp identification, avoids the influence of other parts of the tissue image on polyp identification, and further improves the accuracy of polyp identification.
As described above, a mode selection control may be provided in the control area of the display interface, and the current use mode of the endoscope can then be determined in response to the user's operation on the mode selection control in the control area. In another possible embodiment, the system further includes:
a pattern recognition module, configured to recognize the tissue image of the endoscope according to an image recognition model, to determine that the image mode of the endoscope is the insertion mode when the parameter output by the image recognition model corresponding to an in-vivo image is greater than a first parameter threshold, and to determine that the image mode of the endoscope is the withdrawal mode when the parameter output by the image recognition model corresponding to an ileocecal valve image is greater than a second parameter threshold.
In this embodiment, the image recognition model may be a network model based on ViT (Vision Transformer). Training images may be annotated in advance with three categories, namely in-vitro image, in-vivo image and ileocecal valve image, and the model is trained with the training images as input and the annotated categories as target output to obtain the image recognition model. Correspondingly, during the examination with the endoscope, the tissue image can be classified and recognized based on the image recognition model, where the parameter output by the image recognition model corresponding to the in-vivo image is the probability value output by the model for the in-vivo image category, and likewise the parameter corresponding to the ileocecal valve image is the probability value output by the model for the ileocecal valve image category.
The first parameter threshold and the second parameter threshold may be set according to the actual application scenario, and may be the same or different; the present disclosure does not specifically limit them. For example, the first parameter threshold is 0.85 and the second parameter threshold is 0.9. After a tissue image is input into the image recognition model, the output result is an in-vivo image with a corresponding probability value of 0.9, and at this time it is determined that the image mode of the endoscope should be the insertion mode. In the subsequent examination, for another tissue image, the output of the image recognition model is an ileocecal valve image with a corresponding probability value of 0.92, and at this time it is determined that the image mode of the endoscope should be the withdrawal mode.
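The threshold check itself can be written compactly, assuming the recognizer returns a category label and a probability; the label strings are placeholders and the thresholds are the example values above.

```python
# Sketch of mapping a per-frame classification to an endoscope image mode;
# returns None when neither threshold is exceeded.
from typing import Optional

def decide_image_mode(category: str, prob: float,
                      t_in: float = 0.85, t_out: float = 0.9) -> Optional[str]:
    """Map the recognizer output (category, probability) to an image mode."""
    if category == "in_vivo" and prob > t_in:
        return "insertion"     # first parameter threshold exceeded
    if category == "ileocecal_valve" and prob > t_out:
        return "withdrawal"    # second parameter threshold exceeded
    return None
```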
a mode switching module, configured to switch the image mode of the endoscope to the insertion mode when the image mode of the endoscope is the in-vitro mode and the pattern recognition module determines that the image mode of the endoscope is the insertion mode, and to switch the image mode of the endoscope to the withdrawal mode and output second prompt information when the image mode of the endoscope is the insertion mode and the pattern recognition module determines that the image mode of the endoscope is the withdrawal mode, the second prompt information being used to prompt entry into the withdrawal mode.
During the use of the endoscope, the corresponding modes should follow the order of in-vitro mode, insertion mode and withdrawal mode. Therefore, after the pattern recognition module determines the image mode of the endoscope, whether switching is needed can be determined by combining the current image mode of the endoscope with the recognized image mode. For example, if the current image mode of the endoscope is the in-vitro mode and the image mode determined from the tissue image obtained in real time is the insertion mode, this indicates that the endoscope has entered the body, and the mode can be automatically switched to the insertion mode; if the current image mode of the endoscope is the same as the image mode determined by the pattern recognition module, no switching is needed. Afterwards, when the current image mode of the endoscope is the insertion mode and the image mode determined from the tissue image obtained in real time is the withdrawal mode, this indicates that the endoscope has reached the ileocecal region and the withdrawal examination follows, so the mode can be automatically switched to the withdrawal mode and the user is prompted that the endoscope has entered the withdrawal mode, i.e. the examination stage begins.
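A minimal sketch of this switching rule follows, assuming modes are represented as strings and that the prompt text is an illustrative placeholder; only forward transitions along the in-vitro, insertion, withdrawal sequence are allowed.

```python
# Sketch of the mode-switch rule: advance only to the next stage of the
# in_vitro -> insertion -> withdrawal sequence, prompting on withdrawal entry.
from typing import Optional, Tuple

ORDER = ["in_vitro", "insertion", "withdrawal"]

def switch_mode(current: str, recognized: Optional[str]) -> Tuple[str, Optional[str]]:
    """Return (new_mode, prompt); prompt carries the second prompt information when entering withdrawal."""
    if (recognized in ORDER and recognized != current
            and ORDER.index(recognized) == ORDER.index(current) + 1):
        prompt = "Entering withdrawal mode" if recognized == "withdrawal" else None
        return recognized, prompt
    return current, None
```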
Thus, through the above technical solution, automatic recognition and switching of the image mode of the endoscope can be realized based on the images collected by the endoscope during its use, without requiring manual operation by the user, so that the use mode of the endoscope matches its actual use state. This provides an auxiliary reference and convenience for the user's operation of the endoscope, allows the use process of the endoscope to be labeled, and provides reliable data support for subsequent mode-specific processing.
The internal tissue examined by the endoscope is usually soft tissue. While the physician maneuvers the scope, the intestinal tract, for example, will peristalse, and during the endoscopic examination the physician will also flush water and reduce loops, making it difficult for the physician to know clearly how much of the tissue has been covered during the examination. Based on this, the present disclosure further provides the following embodiments.
In a possible embodiment, the system further includes:
a blind-area ratio detection module, configured to determine a display ratio and a blind-area ratio according to the tissue image obtained by the endoscope in the withdrawal mode, where the sum of the display ratio and the blind-area ratio is 1. The blind area may be a region missed during the examination or an examination blind spot caused by factors such as mucosal folds, and the blind-area ratio can be understood as the proportion of the blind area during the endoscopic examination (i.e. the part that cannot be observed in the field of view of the endoscope) to the entire internal surface area of the tissue (i.e. everything that should be observed during the endoscopic examination).
For example, the blind-area ratio detection may be started automatically when it is determined that the endoscope is in the withdrawal mode, or it may be started in response to the user's selection of a blind-area ratio detection control in the control area of the display interface, so as to perform the above function and determine the blind-area ratio and the display ratio.
As an example, determining the display ratio and the blind-area ratio according to the tissue image obtained by the endoscope in the withdrawal mode may be performed according to the clarity of the tissue image. In a practical application scenario, when the clarity of a tissue image is low, there will be relatively more blind areas in that image, so the proportion of blind areas in the field of view can be predicted based on the clarity of the tissue image. Accordingly, experienced physicians may annotate historically collected tissue images with blind-area ratios, a neural network model is trained with the historically collected tissue images as input and the annotated blind-area ratios as target output, and a blind-area ratio prediction model is obtained for predicting the blind-area ratio corresponding to a tissue image. The training method of the blind-area ratio prediction model may be a training method commonly used in this field and is not repeated here. Thus, the tissue image obtained by the endoscope in the withdrawal mode can be input into the blind-area ratio prediction model to obtain the corresponding blind-area ratio, and the display ratio is further determined based on the blind-area ratio.
As another example, the blind-area ratio detection module is further configured to: perform three-dimensional reconstruction according to the tissue image obtained by the endoscope in the withdrawal mode to obtain a three-dimensional tissue image, and determine the display ratio and the blind-area ratio according to the three-dimensional tissue image and a target fitted tissue image corresponding to the three-dimensional tissue image.
As shown in Figure 4, taking the intestinal tract as an example, the figure is a schematic diagram of a three-dimensional tissue image, i.e. the intestinal mucosa, obtained by three-dimensional reconstruction from endoscopic images. For example, a three-dimensional reconstruction technique in this field, such as the SurfelMeshing fusion algorithm, may be used for the reconstruction. For example, two adjacent frames of tissue images may be input into a depth network (Depth Network) and a pose network (Pose Network) respectively to obtain a corresponding depth map and pose information, where the pose information characterizes the motion of the endoscope inside the tissue and may include, for example, a rotation matrix and a translation vector. The tissue images, the depth map and the pose information are then used for three-dimensional reconstruction based on a three-dimensional reconstruction model to obtain the three-dimensional tissue image. Both the depth network and the pose network can be implemented by training based on the ResNet50 network, which is not repeated here.
The intestinal tract can be approximated as a tubular structure. Due to the limited field of view of the endoscope, when the intestinal tract is reconstructed from endoscopic images, holes may appear, as shown at W1, W2, W3 and W4 in Figure 4. A hole position does not appear in any tissue image, i.e. it is a part not seen during the endoscopic examination, which is the blind area described in the present disclosure. The physician cannot observe such regions during the endoscopic examination, and if too many regions remain unseen, lesions are easily missed. In this embodiment, the blind-area ratio can characterize the proportion of the mucosa not visible in the examination images relative to the entire mucosal area of the tissue, so the blind-area ratio can indicate the currently unseen part of the tissue and thereby characterize the comprehensiveness of the endoscopic examination.
For example, the blind-area ratio detection module is further configured to:
determine, as the display ratio, the ratio of the number of points in the point cloud of the three-dimensional tissue image to the number of points in the fitted point cloud of the target fitted tissue image, and determine the blind-area ratio according to the display ratio.
The three-dimensional tissue image is obtained after reconstruction based on the two-dimensional tissue images, and the three-dimensional tissue image can then be projected onto its corresponding target fitted tissue image; the display ratio is determined according to the overlapping region between the projection of the three-dimensional tissue image and the target fitted tissue image. The target fitted tissue image corresponding to the three-dimensional tissue image is the complete cavity structure predicted from the structure of the tissue in the tissue image. Taking the intestinal tract in Figure 4 as an example, the corresponding target fitted tissue image is the corresponding tubular-structure image, which can be fitted as a cylindrical structure. Corresponding standard structural features may be set for different tissues, so that after the three-dimensional tissue image is determined, fitting can be performed based on the standard structural features to obtain the target fitted tissue image. For example, a Monte Carlo method may be used to distribute K test points (K≥100) uniformly over the target fitted tissue image, and the number Λ of test points in the visible region and the number Ω of test points in the blind region are counted respectively; the display ratio is then φ=Λ/(Λ+Ω) and the blind-area ratio is 1-φ.
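A sketch of the Monte Carlo estimate is given below; the cylindrical parameterization of the fitted surface and the injected visibility test are assumptions, since the actual test depends on the reconstructed point cloud.

```python
# Monte Carlo sketch of the display / blind-area ratio: K test points are
# spread uniformly over a fitted cylinder and counted as visible or blind.
import math
import random

def display_and_blind_ratio(is_visible, radius: float, length: float, k: int = 1000):
    """Return (phi, 1 - phi), where phi = visible test points / all test points (K >= 100)."""
    visible = 0
    for _ in range(k):
        theta = random.uniform(0.0, 2.0 * math.pi)   # angle around the lumen wall
        z = random.uniform(0.0, length)               # position along the centerline
        point = (radius * math.cos(theta), radius * math.sin(theta), z)
        if is_visible(point):                         # caller-supplied visibility test
            visible += 1
    phi = visible / k
    return phi, 1.0 - phi
```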
The display module is further configured to display the display ratio and/or the blind-area ratio, and to display first prompt information when the blind-area ratio is greater than a blind-area threshold, the first prompt information being used to indicate that there is a risk of missed detection.
The display ratio characterizes the proportion of the whole tissue that the physician has viewed during the endoscopic image examination, and the blind-area ratio characterizes the proportion of the whole tissue that the physician has not viewed. In this embodiment, the display ratio may be displayed, the blind-area ratio may be displayed, or both may be displayed at the same time, so that the physician can keep track of the accuracy and comprehensiveness of the examination in time. In addition, when the blind-area ratio is large, a prompt is given, for example a prompt message is shown on the display interface, such as "the current risk of missed detection is high", "please re-examine" or "please perform withdrawal". The prompt may be displayed directly, given by voice, or presented in a pop-up window, so that the physician is reminded in time that the mucosal coverage of the examined region during withdrawal is insufficient and lesions are easily missed. The physician can then adjust the direction of the endoscope, perform withdrawal, or repeat the withdrawal process according to the prompt, which reduces the risk of missed detection during the endoscopic examination to a certain extent, provides reliable and comprehensive data support for subsequent polyp identification and examination, and is convenient for the user.
In some embodiments, the system further includes:
a three-dimensional positioning module, configured to perform three-dimensional reconstruction according to the tissue image obtained by the endoscope in the insertion mode to obtain a three-dimensional tissue image, and to determine the point on the centerline of the three-dimensional tissue image that is closest to the target point as the three-dimensional target point corresponding to the target point in the three-dimensional tissue image.
The manner of performing the three-dimensional reconstruction has been described in detail above and is not repeated here. As an example, the tissue cavity is usually fitted as a tubular structure, so the centerline of the three-dimensional tissue image may be the centerline of the fitted tubular structure, which keeps the endoscope at a distance from the surrounding tissue mucosa during its movement and avoids damaging the mucosa. Therefore, after the target point in the tissue image is determined, in order to ensure the accuracy of the endoscope insertion, the target point can be mapped to the corresponding position on the centerline, i.e. the point closest to the target point, so as to ensure the accuracy and reasonableness of the three-dimensional target point.
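Mapping the target point to the centerline can be sketched as a nearest-point lookup, assuming the centerline is available as an (N, 3) array of sampled points; the names are illustrative.

```python
# Sketch of projecting a target point onto the reconstructed centerline by
# taking the closest centerline sample.
import numpy as np

def project_to_centerline(target: np.ndarray, centerline: np.ndarray) -> np.ndarray:
    """Return the centerline point nearest to the target point (both in 3D coordinates)."""
    distances = np.linalg.norm(centerline - target, axis=1)
    return centerline[int(np.argmin(distances))]

# Example with a synthetic straight centerline along the z axis.
line = np.stack([np.zeros(100), np.zeros(100), np.linspace(0.0, 10.0, 100)], axis=1)
print(project_to_centerline(np.array([0.5, -0.3, 4.2]), line))
```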
a pose determination module, configured to determine pose information of the endoscope according to the tissue image corresponding to the current position and the tissue images within a historical period.
For example, the images during the endoscope insertion process can be monitored and displayed. A dataset may be formed in advance by manually annotating real endoscopic images with their bounding boxes and adding pose information to the real images. A pose estimation model can then be implemented based on the ResNet50 network. For example, an endoscopic image sequence is used as input and the annotated pose information is used as the target output of the model, and the network is adjusted by regression to train the model. Thus, the tissue image corresponding to the current position and the tissue images within the historical period can be formed into an image sequence, and the pose information of the endoscope is determined based on the pose estimation model described above.
a trajectory determination module, configured to generate a navigation trajectory according to the current position, the pose information and the three-dimensional target point. The pose information characterizes the current pose of the endoscope, and the three-dimensional target point indicates the end point of the movement of the endoscope, so the trajectory of the movement to the three-dimensional target point, i.e. the navigation trajectory, can be determined based on the current position and the pose information. The trajectory prediction may adopt a prediction method commonly used in this field, which is not limited by the present disclosure.
The display module is further configured to display the pose information of the endoscope and the three-dimensional tissue image, and to display the three-dimensional target point and the navigation trajectory in the three-dimensional tissue image.
As an example, three-dimensional positioning and insertion navigation may be started automatically when the endoscope is controlled to advance. As another example, a three-dimensional insertion navigation control may be displayed in the control area of the display interface; when the user needs navigation, the user can click the control, and in response to the user's selection of the three-dimensional insertion navigation control in the control area of the display interface, positioning is performed by the three-dimensional positioning module. The pose information of the endoscope can then be further determined to determine the navigation trajectory. As an example, the three-dimensional tissue image may be displayed in the display interface, and the three-dimensional target point and the navigation trajectory may be displayed in the three-dimensional tissue image, i.e. the next position point to move to and the suggested path to that point are shown. This realizes automatic navigation for endoscope insertion, makes it easy for the physician to keep track of the current movement path and state of the endoscope inside the body and to intervene manually in the insertion when necessary, ensures the accuracy of the insertion process, and improves the user experience.
In a practical application scenario, the colonoscope and the gastroscope share one endoscope host. If the examinee undergoes both gastroscopy and colonoscopy, the video received by the system often alternates between gastroscopy and colonoscopy, the detection performed for different tissue parts is mostly different, and a detection algorithm for colonoscopy is not suitable for gastroscopy. Based on this, in a possible embodiment, controls for the corresponding detection modes may be displayed in the control area of the display interface, and in response to a selection operation on such a control, the selected mode is taken as the detection mode of the endoscope.
Therefore, the present disclosure further provides the following embodiments to suit endoscopic image detection in multiple scenarios. In a possible embodiment, the system further includes:
an image classification module, configured to classify the tissue image of the endoscope in the insertion mode to obtain a target classification corresponding to the tissue image, where the target classification includes an abnormal classification and a plurality of endoscope classifications.
For example, in the present disclosure an image classification model may be pre-trained to classify tissue images. The image classification model may be obtained by training based on PreResNet18. The categories of the image classification may include the abnormal classification and the endoscope classifications, where the abnormal classification includes no-signal images, in-vitro images and the like, and the endoscope classifications may include colonoscopic images and gastroscopic images.
In this embodiment, endoscopic images may be pre-classified and annotated by experienced physicians: no-signal images, in-vitro images and the like are annotated as the abnormal classification, colonoscopic images are annotated as the colonoscopy classification, and gastroscopic images are annotated as the gastroscopy classification, so as to obtain training samples. The model is then trained with the endoscopic images as input and the annotations corresponding to the endoscopic images as target output to obtain the image classification model, which is used to classify the tissue images collected by the endoscope.
a detection mode recognition module, configured to update the counter of each endoscope classification according to the target classification corresponding to the tissue image, to stop the counting operation of each counter when the value of the counter corresponding to any endoscope classification reaches a counting threshold, and to determine the detection mode of the endoscope according to the endoscope classification whose counter value reaches the counting threshold, where each endoscope classification has its corresponding counter and the value of each counter is initially zero.
In this embodiment, the detection mode currently adopted by the endoscope can be determined from the classifications corresponding to the tissue images collected while the endoscope moves. As an example, the tissue images acquired during the movement of the endoscope are classified. The detection mode recognition module updates the counter of each endoscope classification according to the target classification corresponding to the tissue image in the following manner:
when the target classification is an endoscope classification, the counter corresponding to the target classification is incremented by one, and the counters corresponding to the endoscope classifications other than the target classification whose counter values are not zero are decremented by one; when the target classification is the abnormal classification, the counters corresponding to the endoscope classifications whose counter values are not zero are decremented by one, so that the counter of each endoscope classification is updated.
For example, the endoscope classifications include the colonoscopy classification and the gastroscopy classification; the counter corresponding to the colonoscopy classification is count1 and the counter corresponding to the gastroscopy classification is count2, both initialized to 0. If the classification corresponding to a tissue image is determined to be the abnormal classification, the counters corresponding to the colonoscopy classification and the gastroscopy classification both remain 0. If the classification corresponding to a tissue image is determined to be the colonoscopy classification, the counter count1 corresponding to the colonoscopy classification is incremented by one, so count1 is 1. The above process is repeated for other tissue images; suppose count1 is 49, count2 is 3 and the counting threshold is 50. When the classification corresponding to the next tissue image is the colonoscopy classification, count1 is incremented by one to 50 and count2 is decremented by one to 2, at which point the counter of the colonoscopy classification reaches the counting threshold, so the detection mode is determined to be the colonoscopy mode and the counting operation of each counter is stopped. In this way, the detection mode of the endoscope can be determined automatically by classifying and counting part of the tissue images acquired during the movement of the endoscope, which saves user operations and assists the user in using the endoscope.
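The counter update can be sketched as below; the class names and the threshold of 50 follow the worked example, while the dictionary representation is an illustrative assumption.

```python
# Sketch of the counter-based vote that decides the detection mode.
from typing import Dict, Optional

def update_counters(counters: Dict[str, int], target: str, threshold: int = 50) -> Optional[str]:
    """Update the per-classification counters for one frame; return the decided mode or None."""
    if target in counters:                      # an endoscope classification
        counters[target] += 1
        for name in counters:
            if name != target and counters[name] > 0:
                counters[name] -= 1
    else:                                       # abnormal classification (no signal, in vitro, ...)
        for name in counters:
            if counters[name] > 0:
                counters[name] -= 1
    for name, value in counters.items():
        if value >= threshold:
            return name                         # counting stops, this is the detection mode
    return None

counters = {"colonoscopy": 0, "gastroscopy": 0}  # counters initialized to zero
```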
The polyp identification module is further configured to perform real-time polyp identification on the tissue image obtained by the endoscope in the withdrawal mode according to the polyp identification model corresponding to the detection mode determined by the detection mode recognition module.
The classes and features involved in image recognition under the colonoscopy classification and the gastroscopy classification may differ. Therefore, in this embodiment, for each detection mode, the polyp identification model used for polyp identification in that detection mode can be determined in advance. The manner of determining the polyp identification model has been described in detail above and is not repeated here.
Thus, through the above example, the detection mode of the endoscope can be determined automatically by classifying the tissue images collected while the endoscope moves, and the polyp identification model corresponding to that detection mode is used for image recognition. This improves the level of automation in using the endoscope, improves its applicability to real application scenarios, improves the accuracy of polyp identification to a certain extent, and provides reliable data support for the physician's analysis of the endoscopic results.
In a possible embodiment, the system further includes:
a speed determination module, configured to acquire, when the mode of the endoscopic image is the withdrawal mode, a plurality of historical tissue images corresponding to the current tissue image, to calculate the similarity between each of the historical tissue images and the current tissue image, and to determine the withdrawal speed according to the plurality of similarities.
The display module is further configured to display the withdrawal speed corresponding to the endoscope.
During the movement of the endoscope, if the endoscope moves slowly, adjacent tissue images should be similar. Therefore, in this embodiment the withdrawal speed can be evaluated based on the similarity between adjacent tissue images. The similarity between images may be calculated using a calculation method commonly used in this field, which is not limited by the present disclosure.
For example, a mapping between similarity intervals and withdrawal speeds may be set in advance, where the higher the similarity, the slower the withdrawal speed. For example, the average of the determined similarities may be taken as the average similarity for the movement over these tissue images, and based on the similarity interval to which this average similarity belongs, the speed corresponding to that interval is taken as the withdrawal speed for this process. Further, the withdrawal speed may be displayed in the image display area of the display interface during withdrawal. Thus, by detecting the similarity between adjacent tissue images during withdrawal, the withdrawal speed can be determined and displayed, so that the user is prompted, which avoids, to a certain extent, polyps being missed because the withdrawal is too fast and improves convenience for the user.
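A sketch of this evaluation follows; the histogram-correlation similarity and the interval-to-speed mapping are illustrative assumptions, since the text leaves both choices open.

```python
# Sketch of estimating withdrawal speed from the average similarity between
# the current frame and recent historical frames.
import numpy as np
import cv2

def frame_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Grayscale histogram correlation in [-1, 1] used as a cheap similarity proxy."""
    ha = cv2.calcHist([cv2.cvtColor(a, cv2.COLOR_BGR2GRAY)], [0], None, [64], [0, 256])
    hb = cv2.calcHist([cv2.cvtColor(b, cv2.COLOR_BGR2GRAY)], [0], None, [64], [0, 256])
    return float(cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL))

def withdrawal_speed(current: np.ndarray, history: list) -> str:
    """Higher average similarity to recent frames is read as a slower withdrawal."""
    mean_sim = float(np.mean([frame_similarity(current, h) for h in history]))
    if mean_sim > 0.9:
        return "slow"
    if mean_sim > 0.7:
        return "moderate"
    return "fast"
```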
During the use of the endoscope, the cleanliness of the tissue cavity is one of the important indicators of examination quality, and the Boston scoring method is currently the commonly used measure. However, since the physician's attention during the endoscopic examination is mainly focused on the insertion process and the discovery of lesions, limited attention is left for cleanliness assessment, and different physicians are subjective when assessing cleanliness.
Based on this, the present disclosure further provides the following embodiments. In some embodiments, the system further includes:
a cleanliness determination module, configured to classify the cleanliness of the tissue images of the endoscope in the withdrawal mode, to determine the number of tissue images under each cleanliness classification, and to determine the cleanliness of the tissue corresponding to the tissue images according to the number of tissue images under each cleanliness classification.
As an example, cleanliness detection may be started automatically after it is determined that the endoscope is in the withdrawal mode, or it may be started in response to the user's selection of a cleanliness detection control in the control area of the display interface. In a practical application scenario, flushing is performed during the insertion stage of the endoscope, so it is not appropriate to include the cleanliness of the tissue cavity before flushing in the overall assessment. Therefore, in this embodiment the cleanliness classification is performed on the tissue images in the withdrawal mode. For a single tissue image, the cleanliness classification may be performed by a cleanliness classification model implemented based on the Vision Transformer network. Endoscopic images are annotated in advance, and according to the Boston scoring method they may be annotated as 0, 1, 2 or 3. The model is then trained with the endoscopic images as input and the annotations corresponding to the endoscopic images as target output. In this way, the determined cleanliness of the tissue fits the actual application scenario and the accuracy of the determined cleanliness is improved.
In a possible embodiment, the cleanliness determination module is further configured to:
obtain, in ascending order of the scores corresponding to the cleanliness classifications, the number of tissue images under a cleanliness classification;
determine the relationship between the ratio of the number of tissue images under the current cleanliness classification to a target total number and the threshold corresponding to the current cleanliness classification, where the target total number is the sum of the numbers of tissue images under all the cleanliness classifications;
if the ratio of the number of tissue images under the current cleanliness classification to the target total number is greater than or equal to the threshold corresponding to that cleanliness classification, take the score corresponding to that cleanliness classification as the cleanliness of the tissue;
if the ratio of the number of tissue images under the current cleanliness classification to the target total number is less than the threshold corresponding to that cleanliness classification, then, when the next cleanliness classification is not the cleanliness classification with the largest score, take the next cleanliness classification as the new current cleanliness classification and re-execute the step of determining the relationship between the ratio of the number of tissue images under the current cleanliness classification to the target total number and the threshold corresponding to the current cleanliness classification; and when the next cleanliness classification is the cleanliness classification with the largest score, determine the score of that next cleanliness classification as the cleanliness of the tissue.
Following the example above, the scores corresponding to the cleanliness classifications are 0, 1, 2 and 3 in ascending order, and the number of tissue images under each cleanliness classification is obtained in that order. For example, the number of tissue images with a cleanliness score of 0 is obtained first, say S0, and then the ratio of the number of tissue images under the current cleanliness classification to the target total number is compared with the threshold corresponding to the current cleanliness classification. The target total number is Sum, and the threshold corresponding to each cleanliness classification may be set according to the actual application scenario. For example, the threshold corresponding to the score 0 is N0; when S0/Sum is greater than or equal to N0, the cleanliness of the tissue is determined to be 0. Otherwise, the number of tissue images with a cleanliness score of 1 is obtained, say S1, and the threshold corresponding to the score 1 is N1; when S1/Sum is greater than or equal to N1, the cleanliness of the tissue is determined to be 1. Otherwise, the number of tissue images with a cleanliness score of 2 is obtained, say S2, and the threshold corresponding to the score 2 is N2; when S2/Sum is greater than or equal to N2, the cleanliness of the tissue is determined to be 2. Otherwise, the next cleanliness classification, i.e. the classification with the score 3, which is the cleanliness classification with the largest score, is obtained, and the score of this next cleanliness classification is directly determined as the cleanliness of the tissue, i.e. the cleanliness of the tissue is determined to be 3.
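A compact sketch of this cascade is shown below; the per-score thresholds in the example call are placeholders, only the 0-to-3 Boston scores follow the text.

```python
# Sketch of the cleanliness aggregation: walk the scores from low to high and
# return the first score whose share of withdrawal-phase frames reaches its
# threshold, otherwise the highest score.
from typing import Dict

def overall_cleanliness(counts: Dict[int, int], thresholds: Dict[int, float]) -> int:
    """counts maps a Boston score (0..3) to the number of tissue images with that score."""
    total = sum(counts.values())
    scores = sorted(counts)
    for score in scores[:-1]:
        if counts.get(score, 0) / total >= thresholds[score]:
            return score
    return scores[-1]   # the largest score is returned directly when no earlier class qualifies

# Example: the score-2 frames dominate, so the tissue cleanliness is reported as 2.
print(overall_cleanliness({0: 5, 1: 10, 2: 70, 3: 15}, {0: 0.2, 1: 0.2, 2: 0.2}))
```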
Thus, through the above technical solution, the cleanliness classification is performed only on the tissue images of the withdrawal process, which reduces the amount of data processing. At the same time, the cleanliness of the withdrawal process can be evaluated as a whole based on the numbers of tissue images under the respective classifications, which improves the accuracy of the determined cleanliness, conforms better to the Boston scoring standard, and is convenient for the user.
In a possible embodiment, when the withdrawal of the endoscope ends and the examination is completed, information such as the polyp identification results, the cleanliness and the withdrawal speed obtained in the above processes may be compiled into a report for output, so as to facilitate unified viewing and management.
Based on the same inventive concept, the present disclosure further provides an endoscopic image detection auxiliary method. As shown in Figure 5, the method includes:
in step 11, performing real-time processing on the endoscopic image collected by the endoscope to obtain a tissue image;
in step 12, in response to determining that the endoscope is in the insertion mode, determining a target point of the tissue cavity corresponding to the tissue image, the target point being used to indicate the next target moving point for the insertion of the endoscope from its current position, where, when a tissue cavity exists in the tissue image, the target point is the center point of the tissue cavity corresponding to the endoscopic image, and when no tissue cavity exists in the tissue image, the target point is the direction point of the tissue cavity corresponding to the endoscopic image;
in step 13, in response to determining that the endoscope is in the withdrawal mode, performing real-time polyp identification on the tissue image obtained by the endoscope in the withdrawal mode to obtain a polyp identification result;
in step 14, displaying the target point and the polyp identification result in the image display area of the display interface.
In some embodiments, the method further includes:
determining a display ratio and a blind-area ratio according to the tissue image obtained by the endoscope in the withdrawal mode, the sum of the display ratio and the blind-area ratio being 1;
displaying the display ratio and/or the blind-area ratio, and displaying first prompt information when the blind-area ratio is greater than a blind-area threshold, the first prompt information being used to indicate that there is a risk of missed detection.
In some embodiments, determining the display ratio and the blind-area ratio according to the tissue image obtained by the endoscope in the withdrawal mode includes:
performing three-dimensional reconstruction according to the tissue image obtained by the endoscope in the withdrawal mode to obtain a three-dimensional tissue image;
determining the display ratio and the blind-area ratio according to the three-dimensional tissue image and a target fitted tissue image corresponding to the three-dimensional tissue image.
In some embodiments, determining the display ratio and the blind-area ratio according to the three-dimensional tissue image and the target fitted tissue image corresponding to the three-dimensional tissue image includes:
determining, as the display ratio, the ratio of the number of points in the point cloud of the three-dimensional tissue image to the number of points in the fitted point cloud of the target fitted tissue image, and determining the blind-area ratio according to the display ratio.
In some embodiments, the method further includes:
performing three-dimensional reconstruction according to the tissue image obtained by the endoscope in the insertion mode to obtain a three-dimensional tissue image;
determining the point on the centerline of the three-dimensional tissue image that is closest to the target point as the three-dimensional target point corresponding to the target point in the three-dimensional tissue image;
determining pose information of the endoscope according to the tissue image corresponding to the current position and the tissue images within a historical period, and generating a navigation trajectory according to the current position, the pose information and the three-dimensional target point;
displaying the three-dimensional tissue image, and displaying the three-dimensional target point and the navigation trajectory in the three-dimensional tissue image.
In some embodiments, the method further includes:
in response to the user's selection of a detection mode in the control area of the display interface, determining the detection mode selected by the user as the detection mode of the endoscope;
the performing real-time polyp identification on the tissue image obtained by the endoscope in the withdrawal mode includes:
performing real-time polyp identification on the tissue image obtained by the endoscope in the withdrawal mode according to the polyp identification model corresponding to the detection mode.
In some embodiments, the method further includes:
classifying the tissue image of the endoscope in the insertion mode to obtain a target classification corresponding to the tissue image, where the target classification includes an abnormal classification and a plurality of endoscope classifications;
updating the counter of each endoscope classification according to the target classification corresponding to the tissue image, stopping the counting operation of each counter when the value of the counter corresponding to any endoscope classification reaches a counting threshold, and determining the detection mode of the endoscope according to the endoscope classification whose counter value reaches the counting threshold, where each endoscope classification has its corresponding counter and the value of each counter is initially zero;
the performing real-time polyp identification on the tissue image obtained by the endoscope in the withdrawal mode includes:
performing real-time polyp identification on the tissue image obtained by the endoscope in the withdrawal mode according to the polyp identification model corresponding to the detection mode.
In some embodiments, updating the counter of each endoscope classification according to the target classification corresponding to the tissue image includes:
when the target classification is an endoscope classification, incrementing by one the counter corresponding to the target classification and decrementing by one the counters corresponding to the endoscope classifications other than the target classification whose counter values are not zero; when the target classification is the abnormal classification, decrementing by one the counters corresponding to the endoscope classifications whose counter values are not zero, so as to update the counter of each endoscope classification.
在一些实施例中,所述方法还包括:In some embodiments, the method also includes:
根据图像识别模型对所述内窥镜的组织图像进行识别;Recognizing the tissue image of the endoscope according to the image recognition model;
若所述图像识别模型输出的对应于体内图像的参数大于第一参数阈值,确定所述内窥镜的图像模式为进镜模式,若所述图像识别模型输出的对应于回盲瓣图像的参数大于第二参数阈值,确定所述内窥镜的图像模式为退镜模式;If the parameter corresponding to the in-vivo image output by the image recognition model is greater than the first parameter threshold, it is determined that the image mode of the endoscope is the endoscope mode; if the parameter corresponding to the ileocecal valve image output by the image recognition model Greater than the second parameter threshold, it is determined that the image mode of the endoscope is the mirror-back mode;
在内窥镜的图像模式为体外模式、且基于所述图像识别模型确定出的所述内窥镜的图像模式为进镜模式的情况下,将所述内窥镜的图像模式切换至进镜模式,在所述内窥镜的图像模式为进镜模式、且基于所述图像识别模型确定出的所述内窥镜的图像模式为退镜模式的情况下,将所述内窥镜的图像模式切换至退镜模式,并输出第二提示信息,所述第二提示信 息用于提示进入退镜模式。When the image mode of the endoscope is an in vitro mode and the image mode of the endoscope determined based on the image recognition model is an endoscope mode, switch the image mode of the endoscope to an endoscope mode mode, when the image mode of the endoscope is the mirror-in mode, and the image mode of the endoscope determined based on the image recognition model is the mirror-back mode, the image of the endoscope The mode is switched to the mirror-back mode, and second prompt information is output, and the second prompt information is used to remind to enter the mirror-back mode.
在一些实施例中,所述方法还包括:In some embodiments, the method also includes:
在所述内窥镜图像的模式为退镜模式时,获取当前组织图像对应的多个历史组织图像,并计算每一所述历史组织图像与当前组织图像之间的相似度,根据多个所述相似度确定退镜速度;When the mode of the endoscopic image is the mirror-back mode, obtain a plurality of historical tissue images corresponding to the current tissue image, and calculate the similarity between each of the historical tissue images and the current tissue image, according to the plurality of historical tissue images The above similarity determines the mirror withdrawal speed;
在所述图像展示区域中显示所述内窥镜对应的所述退镜速度。The mirror withdrawal speed corresponding to the endoscope is displayed in the image display area.
In some embodiments, the method further includes:
performing cleanliness classification on the tissue images of the endoscope in the withdrawal mode, and determining the number of tissue images under each cleanliness category;
determining, according to the number of tissue images under each cleanliness category, the cleanliness of the tissue corresponding to the tissue images.
In some embodiments, the determining the cleanliness of the tissue corresponding to the tissue images according to the number of tissue images under each cleanliness category includes:
acquiring the number of tissue images under a cleanliness category in ascending order of the scores corresponding to the cleanliness categories;
determining the relationship between the ratio of the number of tissue images under the current cleanliness category to a target total number and the threshold corresponding to the current cleanliness category, wherein the target total number is the sum of the numbers of tissue images under all cleanliness categories;
if the ratio of the number of tissue images under the current cleanliness category to the target total number is greater than or equal to the threshold corresponding to that cleanliness category, taking the score corresponding to that cleanliness category as the cleanliness of the tissue;
if the ratio of the number of tissue images under the current cleanliness category to the target total number is less than the threshold corresponding to that cleanliness category, then, in the case where the next cleanliness category is not the cleanliness category with the largest score, taking the next cleanliness category as the new current cleanliness category and re-executing the step of determining the relationship between the ratio of the number of tissue images under the current cleanliness category to the target total number and the threshold corresponding to the current cleanliness category; and, in the case where the next cleanliness category is the cleanliness category with the largest score, determining the score of that next cleanliness category as the cleanliness of the tissue.
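By way of a non-limiting sketch of the decision rule just described, the following assumes that cleanliness categories are scored integers (for example 0-3) and that per-category ratio thresholds are given; the concrete scores and thresholds are illustrative assumptions.

```python
def tissue_cleanliness(counts, thresholds):
    """counts: {score: number of frames}; thresholds: {score: ratio threshold}.
    Walk the categories from the smallest score upward; return the first score whose
    frame ratio meets its threshold, otherwise the largest score."""
    total = sum(counts.values())
    scores = sorted(counts)                 # ascending cleanliness scores
    for score in scores[:-1]:
        if counts[score] / total >= thresholds[score]:
            return score
    return scores[-1]                       # the next category is the maximum score

# Example: 100 withdrawal-mode frames classified into four cleanliness grades.
counts = {0: 5, 1: 10, 2: 30, 3: 55}
thresholds = {0: 0.2, 1: 0.2, 2: 0.25}
print(tissue_cleanliness(counts, thresholds))  # -> 2
```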
In some embodiments, the performing real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode includes:
detecting, based on a detection model, the tissue image obtained by the endoscope in the withdrawal mode, and determining position information of a polyp when the polyp is detected in the tissue image;
extracting, from the tissue image, a detection image corresponding to the position information, and determining a classification of the polyp according to the detection image and a recognition model;
displaying the tissue image in the image display area, and displaying, in the tissue image, an identifier corresponding to the position information of the polyp and the classification.
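As a non-limiting illustration of this two-stage step, a detection model proposes polyp boxes, the cropped region is passed to a recognition model for classification, and the box plus class label are drawn for display. The `detect` and `classify` interfaces are assumptions standing in for the embodiment's models.

```python
import cv2

def detect_and_classify_polyps(tissue_image, detection_model, recognition_model):
    results = []
    for (x, y, w, h), score in detection_model.detect(tissue_image):  # assumed API
        crop = tissue_image[y:y + h, x:x + w]
        polyp_class = recognition_model.classify(crop)                # assumed API
        results.append(((x, y, w, h), polyp_class))
    return results

def draw_polyp_overlays(tissue_image, results):
    shown = tissue_image.copy()
    for (x, y, w, h), polyp_class in results:
        cv2.rectangle(shown, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(shown, str(polyp_class), (x, max(0, y - 5)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return shown
```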
The specific implementations of the above steps have been described in detail above and are not repeated here.
Referring now to FIG. 6, a schematic structural diagram of an electronic device 600 suitable for implementing an embodiment of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and vehicle-mounted terminals (for example, vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 6 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 6, the electronic device 600 may include a processing apparatus (for example, a central processing unit, a graphics processing unit, or the like) 601, which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage apparatus 608 into a random access memory (RAM) 603. Various programs and data required for the operation of the electronic device 600 are also stored in the RAM 603. The processing apparatus 601, the ROM 602 and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following apparatuses may be connected to the I/O interface 605: input apparatuses 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer and a gyroscope; output apparatuses 607 including, for example, a liquid crystal display (LCD), a speaker and a vibrator; storage apparatuses 608 including, for example, a magnetic tape and a hard disk; and a communication apparatus 609. The communication apparatus 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 6 shows the electronic device 600 having various apparatuses, it should be understood that it is not required to implement or provide all of the apparatuses shown. More or fewer apparatuses may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication apparatus 609, installed from the storage apparatus 608, or installed from the ROM 602. When the computer program is executed by the processing apparatus 601, the above functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination thereof. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and the computer-readable signal medium can send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to an electric wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
In some implementations, the client and the server may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication (for example, a communication network) in any form or medium. Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (for example, the Internet) and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be contained in the above electronic device, or may exist independently without being assembled into the electronic device.
The above computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: perform real-time processing on the endoscope images collected by the endoscope to obtain tissue images; in response to determining that the endoscope is in the insertion mode, determine a target point of a tissue cavity corresponding to the tissue image, the target point being used to indicate the next target movement point for advancing the endoscope from its current position, wherein when a tissue cavity exists in the tissue image, the target point is the center point of the tissue cavity corresponding to the endoscope image, and when no tissue cavity exists in the tissue image, the target point is a direction point of the tissue cavity corresponding to the endoscope image; in response to determining that the endoscope is in the withdrawal mode, perform real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode to obtain a polyp identification result; and display the target point and the polyp identification result in an image display area of a display interface.
The computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof. The programming languages include, but are not limited to, object-oriented programming languages such as Java, Smalltalk and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet by using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the architecture, functions and operations of possible implementations of the systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, and the module, program segment, or portion of code contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The modules involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The name of a module does not, in some cases, constitute a limitation on the module itself. For example, the image processing module may also be described as "a module for performing real-time processing on endoscope images collected by an endoscope to obtain tissue images".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and the like.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any suitable combination of the above. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, Example 1 provides an endoscope image detection auxiliary system, the system including:
an image processing module, configured to perform real-time processing on endoscope images collected by an endoscope to obtain tissue images;
a cavity positioning module, configured to determine a target point of a tissue cavity corresponding to the tissue image, the target point being used to indicate the next target movement point for advancing the endoscope from its current position, wherein when a tissue cavity exists in the tissue image, the target point is the center point of the tissue cavity corresponding to the endoscope image, and when no tissue cavity exists in the tissue image, the target point is a direction point of the tissue cavity corresponding to the endoscope image;
a polyp identification module, configured to perform real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode to obtain a polyp identification result; and
a display module, configured to display the target point and the polyp identification result.
According to one or more embodiments of the present disclosure, Example 2 provides the system of Example 1, wherein the system further includes:
a blind area ratio detection module, configured to determine a display ratio and a blind area ratio according to the tissue images obtained by the endoscope in the withdrawal mode, the sum of the display ratio and the blind area ratio being 1;
the display module is further configured to display the display ratio and/or the blind area ratio, and to display first prompt information when the blind area ratio is greater than a blind area threshold, the first prompt information being used to indicate that there is a risk of missed detection.
According to one or more embodiments of the present disclosure, Example 3 provides the system of Example 2, wherein the blind area ratio detection module is further configured to: perform three-dimensional reconstruction according to the tissue images obtained by the endoscope in the withdrawal mode to obtain a three-dimensional tissue image; and determine the display ratio and the blind area ratio according to the three-dimensional tissue image and a target fitted tissue image corresponding to the three-dimensional tissue image.
According to one or more embodiments of the present disclosure, Example 4 provides the system of Example 3, wherein the blind area ratio detection module is further configured to:
determine, as the display ratio, the ratio of the number of point-cloud points in the three-dimensional tissue image to the number of fitted point-cloud points of the target fitted tissue image, and determine the blind area ratio according to the display ratio.
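A minimal, non-limiting sketch of the ratio computation in Example 4 follows: the ratio of reconstructed points to the fitted target surface's points gives the displayed proportion, and the blind-area proportion is its complement. Representing the point clouds as plain arrays of 3D points is an assumption for illustration.

```python
import numpy as np

def display_and_blind_ratio(reconstructed_points: np.ndarray,
                            fitted_points: np.ndarray):
    """Both inputs are (N, 3) point arrays; returns (display_ratio, blind_ratio)."""
    display_ratio = min(1.0, len(reconstructed_points) / max(1, len(fitted_points)))
    return display_ratio, 1.0 - display_ratio
```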
According to one or more embodiments of the present disclosure, Example 5 provides the system of Example 1, wherein the system further includes:
a three-dimensional positioning module, configured to perform three-dimensional reconstruction according to the tissue images obtained by the endoscope in the insertion mode to obtain a three-dimensional tissue image, and to determine, as the three-dimensional target point of the target point in the three-dimensional tissue image, the point on the centerline of the three-dimensional tissue image that is closest to the target point;
a posture determination module, configured to determine posture information of the endoscope according to the tissue image corresponding to the current position and the tissue images within a historical period;
a trajectory determination module, configured to generate a navigation trajectory according to the current position, the posture information and the three-dimensional target point;
the display module is further configured to display the posture information of the endoscope and the three-dimensional tissue image, and to display the three-dimensional target point and the navigation trajectory in the three-dimensional tissue image.
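The selection of the three-dimensional target point in Example 5 can be illustrated, without limitation, by the following sketch, which simply picks the centerline point nearest to the target point; representing the centerline and the target as coordinate arrays is an assumption.

```python
import numpy as np

def nearest_centerline_point(centerline: np.ndarray, target: np.ndarray) -> np.ndarray:
    """centerline: (N, 3) points along the centerline of the 3D tissue image;
    target: (3,) coordinates of the target point."""
    distances = np.linalg.norm(centerline - target, axis=1)
    return centerline[int(np.argmin(distances))]
```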
According to one or more embodiments of the present disclosure, Example 6 provides the system of Example 1, wherein the system further includes:
an image classification module, configured to classify the tissue images of the endoscope in the insertion mode to obtain a target classification corresponding to the tissue image, wherein the target classification includes an abnormal classification and a plurality of endoscope classifications;
a detection mode identification module, configured to update a counter of each endoscope classification according to the target classification corresponding to the tissue image, to stop the counting operation of each counter when the value of the counter corresponding to any endoscope classification reaches a counting threshold, and to determine the detection mode of the endoscope according to the endoscope classification whose counter value reaches the counting threshold, wherein each endoscope classification has its corresponding counter and the value of each counter is initially zero;
the polyp identification module is further configured to: perform real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode according to the polyp identification model corresponding to the detection mode determined by the detection mode identification module.
According to one or more embodiments of the present disclosure, Example 7 provides the system of Example 6, wherein the detection mode identification module updates the counter of each endoscope classification according to the target classification corresponding to the tissue image in the following manner:
when the target classification is an endoscope classification, incrementing the counter corresponding to the target classification by one and decrementing by one the counters corresponding to the endoscope classifications, other than the target classification, whose counter values are not zero; and when the target classification is the abnormal classification, decrementing by one the counter corresponding to each endoscope classification whose counter value is not zero, so as to update the counter of each endoscope classification.
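A non-limiting sketch of the counter bookkeeping in Examples 6-7 is given below: each endoscope class keeps a counter that is incremented when a frame is classified as that class and decremented (if non-zero) otherwise; once any counter reaches the counting threshold, that class fixes the detection mode. The threshold value is an assumption.

```python
COUNT_THRESHOLD = 30  # counting threshold (assumed value)

def update_counters(counters, target_class, endoscope_classes):
    """counters: {class_name: int}; returns the decided class, or None if undecided."""
    for cls in endoscope_classes:
        if target_class == cls:
            counters[cls] += 1
        elif counters[cls] > 0:          # also covers the abnormal-classification case
            counters[cls] -= 1
        if counters[cls] >= COUNT_THRESHOLD:
            return cls                   # detection mode decided; counting stops
    return None
```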
According to one or more embodiments of the present disclosure, Example 8 provides the system of Example 1, wherein the system further includes:
a mode identification module, configured to identify the tissue image of the endoscope according to an image recognition model, to determine that the image mode of the endoscope is the insertion mode when the parameter output by the image recognition model corresponding to the in-vivo image is greater than a first parameter threshold, and to determine that the image mode of the endoscope is the withdrawal mode when the parameter output by the image recognition model corresponding to the ileocecal valve image is greater than a second parameter threshold;
a mode switching module, configured to switch the image mode of the endoscope to the insertion mode when the image mode of the endoscope is the in-vitro mode and the mode identification module determines that the image mode of the endoscope is the insertion mode, and to switch the image mode of the endoscope to the withdrawal mode and output second prompt information when the image mode of the endoscope is the insertion mode and the mode identification module determines that the image mode of the endoscope is the withdrawal mode, the second prompt information being used to prompt entry into the withdrawal mode.
According to one or more embodiments of the present disclosure, Example 9 provides the system of Example 1, wherein the system further includes:
a speed determination module, configured to, when the mode of the endoscope image is the withdrawal mode, acquire a plurality of historical tissue images corresponding to the current tissue image, calculate a similarity between each historical tissue image and the current tissue image, and determine a withdrawal speed according to the plurality of similarities;
the display module is further configured to display the withdrawal speed corresponding to the endoscope.
According to one or more embodiments of the present disclosure, Example 10 provides the system of Example 9, wherein the system further includes:
a cleanliness determination module, configured to perform cleanliness classification on the tissue images of the endoscope in the withdrawal mode, determine the number of tissue images under each cleanliness category, and determine, according to the number of tissue images under each cleanliness category, the cleanliness of the tissue corresponding to the tissue images.
According to one or more embodiments of the present disclosure, Example 11 provides the system of Example 1, wherein the polyp identification module includes:
a polyp detection sub-module, configured to detect, based on a detection model, the tissue image obtained by the endoscope in the withdrawal mode, and to determine position information of a polyp when the polyp is detected in the tissue image;
a polyp identification sub-module, configured to extract, from the tissue image, a detection image corresponding to the position information, and to determine a classification of the polyp according to the detection image and a recognition model;
the display module is further configured to display the tissue image, and to display, in the tissue image, an identifier corresponding to the position information of the polyp and the classification.
According to one or more embodiments of the present disclosure, Example 12 provides an endoscope image detection auxiliary method, the method including:
performing real-time processing on endoscope images collected by an endoscope to obtain tissue images;
in response to determining that the endoscope is in the insertion mode, determining a target point of a tissue cavity corresponding to the tissue image, the target point being used to indicate the next target movement point for advancing the endoscope from its current position, wherein when a tissue cavity exists in the tissue image, the target point is the center point of the tissue cavity corresponding to the endoscope image, and when no tissue cavity exists in the tissue image, the target point is a direction point of the tissue cavity corresponding to the endoscope image;
in response to determining that the endoscope is in the withdrawal mode, performing real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode to obtain a polyp identification result; and
displaying the target point and the polyp identification result in an image display area of a display interface.
According to one or more embodiments of the present disclosure, Example 13 provides the method of Example 12, wherein the method further includes:
determining a display ratio and a blind area ratio according to the tissue images obtained by the endoscope in the withdrawal mode, the sum of the display ratio and the blind area ratio being 1;
displaying the display ratio and/or the blind area ratio, and displaying first prompt information when the blind area ratio is greater than a blind area threshold, the first prompt information being used to indicate that there is a risk of missed detection.
According to one or more embodiments of the present disclosure, Example 14 provides the method of Example 13, wherein the determining a display ratio and a blind area ratio according to the tissue images obtained by the endoscope in the withdrawal mode includes:
performing three-dimensional reconstruction according to the tissue images obtained by the endoscope in the withdrawal mode to obtain a three-dimensional tissue image;
determining the display ratio and the blind area ratio according to the three-dimensional tissue image and a target fitted tissue image corresponding to the three-dimensional tissue image.
According to one or more embodiments of the present disclosure, Example 15 provides the method of Example 14, wherein the determining the display ratio and the blind area ratio according to the three-dimensional tissue image and the target fitted tissue image corresponding to the three-dimensional tissue image includes:
determining, as the display ratio, the ratio of the number of point-cloud points in the three-dimensional tissue image to the number of fitted point-cloud points of the target fitted tissue image, and determining the blind area ratio according to the display ratio.
According to one or more embodiments of the present disclosure, Example 16 provides the method of Example 12, wherein the method further includes:
performing three-dimensional reconstruction according to the tissue images obtained by the endoscope in the insertion mode to obtain a three-dimensional tissue image;
determining, as the three-dimensional target point of the target point in the three-dimensional tissue image, the point on the centerline of the three-dimensional tissue image that is closest to the target point;
determining posture information of the endoscope according to the tissue image corresponding to the current position and the tissue images within a historical period, and generating a navigation trajectory according to the current position, the posture information and the three-dimensional target point;
displaying the three-dimensional tissue image, and displaying the three-dimensional target point and the navigation trajectory in the three-dimensional tissue image.
According to one or more embodiments of the present disclosure, Example 17 provides the method of Example 12, wherein the method further includes:
in response to a user's selection operation on a detection mode in a control area of the display interface, determining the detection mode selected by the user as the detection mode of the endoscope;
the performing real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode includes:
performing real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode according to the polyp identification model corresponding to the detection mode.
According to one or more embodiments of the present disclosure, Example 18 provides the method of Example 12, wherein the method further includes:
classifying the tissue images of the endoscope in the insertion mode to obtain a target classification corresponding to the tissue image, wherein the target classification includes an abnormal classification and a plurality of endoscope classifications;
updating a counter of each endoscope classification according to the target classification corresponding to the tissue image, stopping the counting operation of each counter when the value of the counter corresponding to any endoscope classification reaches a counting threshold, and determining the detection mode of the endoscope according to the endoscope classification whose counter value reaches the counting threshold, wherein each endoscope classification has its corresponding counter and the value of each counter is initially zero;
the performing real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode includes:
performing real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode according to the polyp identification model corresponding to the detection mode.
According to one or more embodiments of the present disclosure, Example 19 provides the method of Example 18, wherein the updating a counter of each endoscope classification according to the target classification corresponding to the tissue image includes:
when the target classification is an endoscope classification, incrementing the counter corresponding to the target classification by one and decrementing by one the counters corresponding to the endoscope classifications, other than the target classification, whose counter values are not zero; and when the target classification is the abnormal classification, decrementing by one the counter corresponding to each endoscope classification whose counter value is not zero, so as to update the counter of each endoscope classification.
According to one or more embodiments of the present disclosure, Example 20 provides the method of Example 12, wherein the method further includes:
identifying the tissue image of the endoscope according to an image recognition model;
if the parameter output by the image recognition model corresponding to the in-vivo image is greater than a first parameter threshold, determining that the image mode of the endoscope is the insertion mode; and if the parameter output by the image recognition model corresponding to the ileocecal valve image is greater than a second parameter threshold, determining that the image mode of the endoscope is the withdrawal mode;
when the image mode of the endoscope is the in-vitro mode and the image mode of the endoscope determined based on the image recognition model is the insertion mode, switching the image mode of the endoscope to the insertion mode; and when the image mode of the endoscope is the insertion mode and the image mode of the endoscope determined based on the image recognition model is the withdrawal mode, switching the image mode of the endoscope to the withdrawal mode and outputting second prompt information, the second prompt information being used to prompt entry into the withdrawal mode.
According to one or more embodiments of the present disclosure, Example 21 provides the method of Example 12, wherein the method further includes:
when the mode of the endoscope image is the withdrawal mode, acquiring a plurality of historical tissue images corresponding to the current tissue image, calculating a similarity between each historical tissue image and the current tissue image, and determining a withdrawal speed according to the plurality of similarities;
displaying, in the image display area, the withdrawal speed corresponding to the endoscope.
According to one or more embodiments of the present disclosure, Example 22 provides the method of Example 21, wherein the method further includes:
performing cleanliness classification on the tissue images of the endoscope in the withdrawal mode, and determining the number of tissue images under each cleanliness category;
determining, according to the number of tissue images under each cleanliness category, the cleanliness of the tissue corresponding to the tissue images.
According to one or more embodiments of the present disclosure, Example 23 provides the method of Example 12, wherein the performing real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode includes:
detecting, based on a detection model, the tissue image obtained by the endoscope in the withdrawal mode, and determining position information of a polyp when the polyp is detected in the tissue image;
extracting, from the tissue image, a detection image corresponding to the position information, and determining a classification of the polyp according to the detection image and a recognition model;
displaying the tissue image in the image display area, and displaying, in the tissue image, an identifier corresponding to the position information of the polyp and the classification.
According to one or more embodiments of the present disclosure, Example 24 provides a computer-readable medium on which a computer program is stored, and when the program is executed by a processing apparatus, the steps of the method of any one of Examples 12-23 are implemented.
According to one or more embodiments of the present disclosure, Example 25 provides an electronic device, including:
a storage apparatus on which a computer program is stored; and
a processing apparatus, configured to execute the computer program in the storage apparatus to implement the steps of the method of any one of Examples 12-23.
The above description is merely a description of the preferred embodiments of the present disclosure and of the technical principles applied. Those skilled in the art should understand that the scope of the disclosure involved herein is not limited to the technical solutions formed by the specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
In addition, although the operations are depicted in a particular order, this should not be understood as requiring that the operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are contained in the above discussion, these should not be construed as limitations on the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. On the contrary, the specific features and acts described above are merely example forms of implementing the claims. With regard to the apparatus in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments related to the method, and will not be elaborated here.

Claims (25)

1. An endoscope image detection auxiliary system, wherein the system comprises:
    an image processing module, configured to perform real-time processing on endoscope images collected by an endoscope to obtain tissue images;
    a cavity positioning module, configured to determine a target point of a tissue cavity corresponding to the tissue image, the target point being used to indicate the next target movement point for advancing the endoscope from its current position, wherein when a tissue cavity exists in the tissue image, the target point is the center point of the tissue cavity corresponding to the endoscope image, and when no tissue cavity exists in the tissue image, the target point is a direction point of the tissue cavity corresponding to the endoscope image;
    a polyp identification module, configured to perform real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode to obtain a polyp identification result; and
    a display module, configured to display the target point and the polyp identification result.
2. The system according to claim 1, wherein the system further comprises:
    a blind area ratio detection module, configured to determine a display ratio and a blind area ratio according to the tissue images obtained by the endoscope in the withdrawal mode, the sum of the display ratio and the blind area ratio being 1;
    the display module is further configured to display the display ratio and/or the blind area ratio, and to display first prompt information when the blind area ratio is greater than a blind area threshold, the first prompt information being used to indicate that there is a risk of missed detection.
3. The system according to claim 2, wherein the blind area ratio detection module is further configured to: perform three-dimensional reconstruction according to the tissue images obtained by the endoscope in the withdrawal mode to obtain a three-dimensional tissue image; and determine the display ratio and the blind area ratio according to the three-dimensional tissue image and a target fitted tissue image corresponding to the three-dimensional tissue image.
4. The system according to claim 3, wherein the blind area ratio detection module is further configured to:
    determine, as the display ratio, the ratio of the number of point-cloud points in the three-dimensional tissue image to the number of fitted point-cloud points of the target fitted tissue image, and determine the blind area ratio according to the display ratio.
5. The system according to claim 1, wherein the system further comprises:
    a three-dimensional positioning module, configured to perform three-dimensional reconstruction according to the tissue images obtained by the endoscope in the insertion mode to obtain a three-dimensional tissue image, and to determine, as the three-dimensional target point of the target point in the three-dimensional tissue image, the point on the centerline of the three-dimensional tissue image that is closest to the target point;
    a posture determination module, configured to determine posture information of the endoscope according to the tissue image corresponding to the current position and the tissue images within a historical period;
    a trajectory determination module, configured to generate a navigation trajectory according to the current position, the posture information and the three-dimensional target point;
    the display module is further configured to display the posture information of the endoscope and the three-dimensional tissue image, and to display the three-dimensional target point and the navigation trajectory in the three-dimensional tissue image.
6. The system according to claim 1, wherein the system further comprises:
    an image classification module, configured to classify the tissue images of the endoscope in the insertion mode to obtain a target classification corresponding to the tissue image, wherein the target classification includes an abnormal classification and a plurality of endoscope classifications;
    a detection mode identification module, configured to update a counter of each endoscope classification according to the target classification corresponding to the tissue image, to stop the counting operation of each counter when the value of the counter corresponding to any endoscope classification reaches a counting threshold, and to determine the detection mode of the endoscope according to the endoscope classification whose counter value reaches the counting threshold, wherein each endoscope classification has its corresponding counter and the value of each counter is initially zero;
    the polyp identification module is further configured to: perform real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode according to the polyp identification model corresponding to the detection mode determined by the detection mode identification module.
7. The system according to claim 6, wherein the detection mode identification module updates the counter of each endoscope classification according to the target classification corresponding to the tissue image in the following manner:
    when the target classification is an endoscope classification, incrementing the counter corresponding to the target classification by one and decrementing by one the counters corresponding to the endoscope classifications, other than the target classification, whose counter values are not zero; and when the target classification is the abnormal classification, decrementing by one the counter corresponding to each endoscope classification whose counter value is not zero, so as to update the counter of each endoscope classification.
8. The system according to claim 1, wherein the system further comprises:
    a mode identification module, configured to identify the tissue image of the endoscope according to an image recognition model, to determine that the image mode of the endoscope is the insertion mode when the parameter output by the image recognition model corresponding to the in-vivo image is greater than a first parameter threshold, and to determine that the image mode of the endoscope is the withdrawal mode when the parameter output by the image recognition model corresponding to the ileocecal valve image is greater than a second parameter threshold;
    a mode switching module, configured to switch the image mode of the endoscope to the insertion mode when the image mode of the endoscope is the in-vitro mode and the mode identification module determines that the image mode of the endoscope is the insertion mode, and to switch the image mode of the endoscope to the withdrawal mode and output second prompt information when the image mode of the endoscope is the insertion mode and the mode identification module determines that the image mode of the endoscope is the withdrawal mode, the second prompt information being used to prompt entry into the withdrawal mode.
9. The system according to claim 1, wherein the system further comprises:
    a speed determination module, configured to, when the mode of the endoscope image is the withdrawal mode, acquire a plurality of historical tissue images corresponding to the current tissue image, calculate a similarity between each historical tissue image and the current tissue image, and determine a withdrawal speed according to the plurality of similarities;
    the display module is further configured to display the withdrawal speed corresponding to the endoscope.
10. The system according to claim 9, wherein the system further comprises:
    a cleanliness determination module, configured to perform cleanliness classification on the tissue images of the endoscope in the withdrawal mode, determine the number of tissue images under each cleanliness category, and determine, according to the number of tissue images under each cleanliness category, the cleanliness of the tissue corresponding to the tissue images.
11. The system according to claim 1, wherein the polyp identification module comprises:
    a polyp detection sub-module, configured to detect, based on a detection model, the tissue image obtained by the endoscope in the withdrawal mode, and to determine position information of a polyp when the polyp is detected in the tissue image;
    a polyp identification sub-module, configured to extract, from the tissue image, a detection image corresponding to the position information, and to determine a classification of the polyp according to the detection image and a recognition model;
    the display module is further configured to display the tissue image, and to display, in the tissue image, an identifier corresponding to the position information of the polyp and the classification.
  12. 一种内窥镜图像检测辅助方法,其中,所述方法包括:An auxiliary method for endoscopic image detection, wherein the method includes:
    对内窥镜采集的内窥镜图像进行实时处理,获得组织图像;Real-time processing of endoscopic images collected by the endoscope to obtain tissue images;
    响应于确定所述内窥镜处于进镜模式,确定所述组织图像对应的组织腔体的目标点,所述目标点用于指示所述内窥镜在其当前所处位置进镜的下一目标移动点;其中,在所述组织图像中存在组织腔体时,所述目标点为所述内窥镜图像对应的组织腔体的中心点,在所述组织图像中不存在组织腔体时,所述目标点为所述内窥镜图像对应的组织腔体的方向点;In response to determining that the endoscope is in the mirror-in mode, determine a target point of the tissue cavity corresponding to the tissue image, and the target point is used to indicate the next time the endoscope is in the mirror-in mode at its current position. Target moving point; wherein, when there is a tissue cavity in the tissue image, the target point is the center point of the tissue cavity corresponding to the endoscopic image, and when there is no tissue cavity in the tissue image , the target point is a direction point of the tissue cavity corresponding to the endoscopic image;
    响应于确定所述内窥镜处于退镜模式,对所述内窥镜在退镜模式下获得的组织图像进行实时的息肉识别,获得息肉识别结果;In response to determining that the endoscope is in the retracting mirror mode, perform real-time polyp identification on the tissue image obtained by the endoscope in the retracting mirror mode, and obtain a polyp identification result;
    在显示界面中的图像展示区域中,对所述目标点和所述息肉识别结果进行显示。In the image display area in the display interface, the target point and the polyp recognition result are displayed.
  13. 根据权利要求12所述的方法,其中,所述方法还包括:The method according to claim 12, wherein said method further comprises:
    根据所述内窥镜在退镜模式下获得的组织图像,确定显示比例和盲区比例,所述显示比例和所述盲区比例之和为1;According to the tissue image obtained by the endoscope in the mirror withdrawal mode, a display ratio and a blind area ratio are determined, and the sum of the display ratio and the blind area ratio is 1;
    显示所述显示比例和/或所述盲区比例,并在所述盲区比例大于盲区阈值的情况下,显示第一提示信息,所述第一提示信息用于指示存在漏检风险。Displaying the display ratio and/or the blind area ratio, and displaying first prompt information when the blind area ratio is greater than a blind area threshold, where the first prompt information is used to indicate that there is a risk of missed detection.
  14. The method according to claim 13, wherein determining the display ratio and the blind area ratio according to the tissue images obtained by the endoscope in the withdrawal mode comprises:
    performing three-dimensional reconstruction according to the tissue images obtained by the endoscope in the withdrawal mode to obtain a three-dimensional tissue image; and
    determining the display ratio and the blind area ratio according to the three-dimensional tissue image and a target fitted tissue image corresponding to the three-dimensional tissue image.
  15. The method according to claim 14, wherein determining the display ratio and the blind area ratio according to the three-dimensional tissue image and the target fitted tissue image corresponding to the three-dimensional tissue image comprises:
    determining, as the display ratio, the ratio of the number of point cloud points in the three-dimensional tissue image to the number of fitted point cloud points in the target fitted tissue image, and determining the blind area ratio according to the display ratio.
  16. The method according to claim 12, wherein the method further comprises:
    performing three-dimensional reconstruction according to the tissue images obtained by the endoscope in the advancing mode to obtain a three-dimensional tissue image;
    determining the point on the centerline of the three-dimensional tissue image that is closest to the target point as the three-dimensional target point of the target point in the three-dimensional tissue image;
    determining posture information of the endoscope according to the tissue image corresponding to the current position and the tissue images within a historical period, and generating a navigation trajectory according to the current position, the posture information and the three-dimensional target point; and
    displaying the three-dimensional tissue image, and displaying the three-dimensional target point and the navigation trajectory in the three-dimensional tissue image.
  17. The method according to claim 12, wherein the method further comprises:
    in response to a user's selection of a detection mode in a control area of the display interface, determining the detection mode selected by the user as the detection mode of the endoscope;
    wherein performing the real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode comprises:
    performing the real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode according to the polyp identification model corresponding to the detection mode.
  18. The method according to claim 12, wherein the method further comprises:
    classifying the tissue images obtained by the endoscope in the advancing mode to obtain a target classification corresponding to each tissue image, wherein the target classification includes an abnormal class and a plurality of endoscope classes;
    updating the counter of each endoscope class according to the target classification corresponding to the tissue image, stopping the counting operation of all counters when the value of the counter corresponding to any endoscope class reaches a counting threshold, and determining the detection mode of the endoscope according to the endoscope class whose counter reached the counting threshold, wherein each endoscope class has its own counter and the value of each counter is initially zero;
    wherein performing the real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode comprises:
    performing the real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode according to the polyp identification model corresponding to the detection mode.
  19. The method according to claim 18, wherein updating the counter of each endoscope class according to the target classification corresponding to the tissue image comprises:
    when the target classification is an endoscope class, incrementing the counter corresponding to the target classification by one, and decrementing by one each counter that corresponds to another endoscope class and has a non-zero value; and when the target classification is the abnormal class, decrementing by one each counter that corresponds to an endoscope class and has a non-zero value, so as to update the counter of each endoscope class.
  20. The method according to claim 12, wherein the method further comprises:
    recognizing the tissue image of the endoscope according to an image recognition model;
    determining that the image mode of the endoscope is the advancing mode if the parameter output by the image recognition model for an in-vivo image is greater than a first parameter threshold, and determining that the image mode of the endoscope is the withdrawal mode if the parameter output by the image recognition model for an ileocecal valve image is greater than a second parameter threshold; and
    switching the image mode of the endoscope to the advancing mode when the image mode of the endoscope is an in-vitro mode and the image mode determined based on the image recognition model is the advancing mode, and switching the image mode of the endoscope to the withdrawal mode and outputting second prompt information when the image mode of the endoscope is the advancing mode and the image mode determined based on the image recognition model is the withdrawal mode, the second prompt information prompting entry into the withdrawal mode.
  21. The method according to claim 12, wherein the method further comprises:
    when the image mode of the endoscope is the withdrawal mode, acquiring a plurality of historical tissue images corresponding to the current tissue image, calculating the similarity between each historical tissue image and the current tissue image, and determining a withdrawal speed according to the plurality of similarities; and
    displaying the withdrawal speed of the endoscope in the image display area.
  22. The method according to claim 21, wherein the method further comprises:
    performing cleanliness classification on the tissue images obtained by the endoscope in the withdrawal mode, and determining the number of tissue images under each cleanliness class; and
    determining the cleanliness of the tissue corresponding to the tissue images according to the number of tissue images under each cleanliness class.
  23. The method according to claim 12, wherein performing the real-time polyp identification on the tissue images obtained by the endoscope in the withdrawal mode comprises:
    detecting, based on a detection model, the tissue image obtained by the endoscope in the withdrawal mode, and determining position information of a polyp when the polyp is detected in the tissue image;
    extracting a detection image corresponding to the position information from the tissue image, and determining a classification of the polyp according to the detection image and an identification model; and
    displaying the tissue image in the image display area, and displaying, in the tissue image, an identifier corresponding to the position information of the polyp together with the classification.
  24. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processing apparatus, implements the steps of the method according to any one of claims 12-23.
  25. An electronic device, comprising:
    a storage apparatus having a computer program stored thereon; and
    a processing apparatus, configured to execute the computer program in the storage apparatus to implement the steps of the method according to any one of claims 12-23.
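
A minimal Python sketch of the target-point selection described in claim 12, assuming a hypothetical cavity-segmentation model that returns a binary NumPy mask (or None) and a hypothetical direction-point regressor; taking the mask centroid as the "center point" is one plausible reading and is not mandated by the claim.

import numpy as np

def cavity_target_point(tissue_image, detect_cavity, predict_direction_point):
    """Return an (x, y) target: the cavity centre if a cavity is visible, else a direction point."""
    cavity_mask = detect_cavity(tissue_image)          # hypothetical model: binary mask or None
    if cavity_mask is not None and cavity_mask.any():
        ys, xs = np.nonzero(cavity_mask)               # pixel coordinates belonging to the cavity
        return float(xs.mean()), float(ys.mean())      # centroid taken as the centre point
    return predict_direction_point(tissue_image)       # hypothetical model: direction point when no cavity is seen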
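A sketch of the display-ratio and blind-area-ratio computation of claims 13-15, assuming the reconstructed and fitted surfaces are available as point clouds; the blind-area threshold value is a made-up example, since the claims only require "a blind area threshold".

def coverage_ratios(reconstructed_points, fitted_points):
    """Return (display_ratio, blind_ratio); by construction the two sum to 1."""
    display_ratio = min(1.0, len(reconstructed_points) / max(1, len(fitted_points)))
    return display_ratio, 1.0 - display_ratio

BLIND_AREA_THRESHOLD = 0.35  # hypothetical value, not specified by the claims

def missed_detection_prompt(blind_ratio):
    """First prompt information of claim 13: warn when too much mucosa was never imaged."""
    if blind_ratio > BLIND_AREA_THRESHOLD:
        return "Blind area ratio %.0f%% exceeds the threshold: risk of missed detection." % (blind_ratio * 100)
    return None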
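A sketch of the counter-based detection-mode decision of claims 18 and 19; the endoscope class names and the counting threshold are illustrative assumptions.

from collections import defaultdict

COUNT_THRESHOLD = 20  # hypothetical; the claims only require "a counting threshold"

def decide_detection_mode(frame_classes, endoscope_classes=("gastroscope", "colonoscope")):
    """Return the endoscope class whose counter first reaches the threshold, or None."""
    counters = defaultdict(int)                            # every counter starts at zero
    for cls in frame_classes:                              # stream of per-frame target classifications
        if cls in endoscope_classes:
            counters[cls] += 1                             # reinforce the observed class
            for other in endoscope_classes:
                if other != cls and counters[other] > 0:
                    counters[other] -= 1                   # weaken the competing classes
            if counters[cls] >= COUNT_THRESHOLD:
                return cls                                 # stop counting once any counter reaches the threshold
        else:                                              # abnormal class: decay all non-zero counters
            for other in endoscope_classes:
                if counters[other] > 0:
                    counters[other] -= 1
    return None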
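A sketch of the mode transitions of claim 20 as a small state machine driven by the recognition model's scores; both parameter thresholds are placeholder values.

IN_VIVO_THRESHOLD = 0.9       # first parameter threshold (in-vivo image score), hypothetical
ILEOCECAL_THRESHOLD = 0.8     # second parameter threshold (ileocecal valve score), hypothetical

def update_image_mode(current_mode, in_vivo_score, ileocecal_score):
    """Advance the image mode: 'in_vitro' -> 'advancing' -> 'withdrawal'; return (mode, prompt)."""
    prompt = None
    if current_mode == "in_vitro" and in_vivo_score > IN_VIVO_THRESHOLD:
        current_mode = "advancing"
    elif current_mode == "advancing" and ileocecal_score > ILEOCECAL_THRESHOLD:
        current_mode = "withdrawal"
        prompt = "Entering withdrawal mode"   # second prompt information of claim 20
    return current_mode, prompt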
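A sketch of the similarity-based withdrawal-speed estimate of claim 21; the claim does not fix a similarity measure, so the normalised correlation of greyscale frames used here is only an assumption.

import numpy as np

def withdrawal_speed(current_frame, history_frames):
    """Return a unitless speed score in [0, 1]; dissimilar consecutive frames imply faster motion."""
    cur = (current_frame - current_frame.mean()) / (current_frame.std() + 1e-8)
    sims = []
    for past in history_frames:                          # frames from the recent historical period
        p = (past - past.mean()) / (past.std() + 1e-8)
        sims.append(float((cur * p).mean()))             # correlation-like similarity in [-1, 1]
    mean_sim = sum(sims) / len(sims) if sims else 1.0
    return min(1.0, max(0.0, 1.0 - mean_sim))            # high similarity -> low speed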
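A sketch of the count-based cleanliness decision of claims 10 and 22; the majority-vote rule and the class labels are assumptions, since the claims only require a decision made from the per-class image counts.

from collections import Counter

def tissue_cleanliness(frame_cleanliness_labels):
    """Return (overall_label, per_class_counts) from per-frame cleanliness classifications."""
    counts = Counter(frame_cleanliness_labels)           # number of tissue images under each cleanliness class
    if not counts:
        return None, {}
    overall_label, _ = counts.most_common(1)[0]          # most frequent class taken as the tissue cleanliness
    return overall_label, dict(counts)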
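A sketch of the two-stage polyp pipeline of claims 11 and 23, with a hypothetical detector returning bounding boxes and a hypothetical classifier applied to each cropped detection image.

def identify_polyps(tissue_image, detector, classifier):
    """Return a list of (bounding_box, polyp_class) pairs for every detected polyp."""
    results = []
    for box in detector(tissue_image):                      # hypothetical detection model: (x0, y0, x1, y1) boxes
        x0, y0, x1, y1 = box
        detection_image = tissue_image[y0:y1, x0:x1]        # crop corresponding to the position information
        results.append((box, classifier(detection_image)))  # hypothetical identification model
    return results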
PCT/CN2022/137565 2021-12-29 2022-12-08 Endoscope image detection auxiliary system and method, medium and electronic device WO2023124876A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111643635.XA CN114332019B (en) 2021-12-29 2021-12-29 Endoscopic image detection assistance system, method, medium, and electronic device
CN202111643635.X 2021-12-29

Publications (1)

Publication Number Publication Date
WO2023124876A1 true WO2023124876A1 (en) 2023-07-06

Family

ID=81017199

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/137565 WO2023124876A1 (en) 2021-12-29 2022-12-08 Endoscope image detection auxiliary system and method, medium and electronic device

Country Status (2)

Country Link
CN (1) CN114332019B (en)
WO (1) WO2023124876A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332019B (en) * 2021-12-29 2023-07-04 小荷医疗器械(海南)有限公司 Endoscopic image detection assistance system, method, medium, and electronic device
CN116523907B (en) * 2023-06-28 2023-10-31 浙江华诺康科技有限公司 Endoscope imaging quality detection method, device, equipment and storage medium
CN117392449A (en) * 2023-10-24 2024-01-12 青岛美迪康数字工程有限公司 Enteroscopy part identification method, device and equipment based on endoscopic image features
CN117137410B (en) * 2023-10-31 2024-01-23 广东实联医疗器械有限公司 Medical endoscope image processing method and system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447973B (en) * 2018-10-31 2021-11-26 腾讯医疗健康(深圳)有限公司 Method, device and system for processing colon polyp image
KR102430946B1 (en) * 2019-11-08 2022-08-10 주식회사 인트로메딕 System and method for diagnosing small bowel preparation scale
CN112070124A (en) * 2020-08-18 2020-12-11 苏州慧维智能医疗科技有限公司 Digestive endoscopy video scene classification method based on convolutional neural network
CN113240662B (en) * 2021-05-31 2022-05-31 萱闱(北京)生物科技有限公司 Endoscope inspection auxiliary system based on artificial intelligence
CN113570592B (en) * 2021-08-05 2022-09-20 印迹信息科技(北京)有限公司 Gastrointestinal disease detection and model training method, device, equipment and medium
CN113470030B (en) * 2021-09-03 2021-11-23 北京字节跳动网络技术有限公司 Method and device for determining cleanliness of tissue cavity, readable medium and electronic equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110495847A (en) * 2019-08-23 2019-11-26 重庆天如生物科技有限公司 Alimentary canal morning cancer assistant diagnosis system and check device based on deep learning
US20210280312A1 (en) * 2020-03-06 2021-09-09 Verily Life Sciences Llc Detecting deficient coverage in gastroenterological procedures
CN113487605A (en) * 2021-09-03 2021-10-08 北京字节跳动网络技术有限公司 Tissue cavity positioning method, device, medium and equipment for endoscope
CN113487608A (en) * 2021-09-06 2021-10-08 北京字节跳动网络技术有限公司 Endoscope image detection method, endoscope image detection device, storage medium, and electronic apparatus
CN113706536A (en) * 2021-10-28 2021-11-26 武汉大学 Sliding mirror risk early warning method and device and computer readable storage medium
CN114332019A (en) * 2021-12-29 2022-04-12 小荷医疗器械(海南)有限公司 Endoscope image detection assistance system, method, medium, and electronic apparatus

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117252927A (en) * 2023-11-20 2023-12-19 华中科技大学同济医学院附属协和医院 Catheter lower intervention target positioning method and system based on small target detection
CN117252927B (en) * 2023-11-20 2024-02-02 华中科技大学同济医学院附属协和医院 Catheter lower intervention target positioning method and system based on small target detection
CN117576097A (en) * 2024-01-16 2024-02-20 华伦医疗用品(深圳)有限公司 Endoscope image processing method and system based on AI auxiliary image processing information
CN117576097B (en) * 2024-01-16 2024-03-22 华伦医疗用品(深圳)有限公司 Endoscope image processing method and system based on AI auxiliary image processing information

Also Published As

Publication number Publication date
CN114332019B (en) 2023-07-04
CN114332019A (en) 2022-04-12

Similar Documents

Publication Publication Date Title
WO2023124876A1 (en) Endoscope image detection auxiliary system and method, medium and electronic device
AU2019431299B2 (en) AI systems for detecting and sizing lesions
US20180263568A1 (en) Systems and Methods for Clinical Image Classification
US20220254017A1 (en) Systems and methods for video-based positioning and navigation in gastroenterological procedures
JP6254053B2 (en) Endoscopic image diagnosis support apparatus, system and program, and operation method of endoscopic image diagnosis support apparatus
US10178941B2 (en) Image processing apparatus, image processing method, and computer-readable recording device
WO2021139672A1 (en) Medical operation assisting method, apparatus, and device, and computer storage medium
WO2023029741A1 (en) Tissue cavity locating method and apparatus for endoscope, medium and device
RU2633320C2 (en) Selection of images for optical study of uterine cervix
US9530205B2 (en) Polyp detection apparatus and method of operating the same
JPWO2020121906A1 (en) Medical support system, medical support device and medical support method
JP2009022446A (en) System and method for combined display in medicine
JP2012024518A (en) Device, method, and program for assisting endoscopic observation
WO2023124877A1 (en) Endoscope image processing method and apparatus, and readable medium and electronic device
EP3254262A1 (en) Method and apparatus for displaying medical image
CN105769109A (en) Endoscope scanning control method and system
WO2023138619A1 (en) Endoscope image processing method and apparatus, readable medium, and electronic device
CN112116575A (en) Image processing method and device, electronic equipment and storage medium
JP2020010735A (en) Inspection support device, method, and program
WO2022251112A1 (en) Phase identification of endoscopy procedures
US20200345291A1 (en) Systems and methods for measuring volumes and dimensions of objects and features during swallowing observation
WO2023165332A1 (en) Tissue cavity positioning method, apparatus, readable medium, and electronic device
WO2023095208A1 (en) Endoscope insertion guide device, endoscope insertion guide method, endoscope information acquisition method, guide server device, and image inference model learning method
Tai et al. Upper gastrointestinal endoscopy: can we cut the cord?
Chang et al. Synchronization and analysis of multimodal bronchoscopic airway exams for early lung cancer detection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22914131

Country of ref document: EP

Kind code of ref document: A1