CN111012285A - Endoscope moving time determining method and device and computer equipment

Info

Publication number
CN111012285A
Authority
CN
China
Prior art keywords
endoscopic
endoscope
time
image
endoscopic image
Prior art date
Legal status
Granted
Application number
CN201911245155.0A
Other languages
Chinese (zh)
Other versions
CN111012285B (en)
Inventor
邱俊文
付星辉
孙钟前
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110551339.0A (CN113288007B)
Priority to CN201911245155.0A (CN111012285B)
Publication of CN111012285A
Application granted
Publication of CN111012285B
Legal status: Active (Current)

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/00131: Accessories for endoscopes
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/00002: Operational features of endoscopes
    • A61B 1/00004: Operational features of endoscopes characterised by electronic signal processing
    • A61B 1/00009: Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Veterinary Medicine (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Signal Processing (AREA)
  • Endoscopes (AREA)

Abstract

The present application relates to an endoscope movement time determination method and apparatus, a computer-readable storage medium, and a computer device, drawing on artificial intelligence computer vision technology. The method includes: acquiring endoscopic images shot by an endoscope during movement, the endoscope passing through a first part and a second part in sequence; performing part identification on the endoscopic images to obtain part identification results; determining the movement time starting point of the endoscope according to the time point corresponding to an endoscopic image whose identification result represents the first part; determining the movement time end point of the endoscope according to the time point corresponding to an endoscopic image whose identification result represents the second part; and determining the movement time of the endoscope according to the movement time starting point and the movement time end point. The scheme provided by the present application can improve the efficiency of determining the movement time of an endoscope.

Description

Endoscope moving time determining method and device and computer equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for determining endoscope movement time, a computer-readable storage medium, and a computer device.
Background
An endoscope is a commonly used medical instrument consisting of a bendable tube, a light source, and a group of lenses. It enters the human body through a natural orifice or through a small surgical incision. In use, the endoscope is introduced into the organ to be inspected, where changes in the relevant part can be observed directly. Because the endoscope enters the patient's body, the patient's comfort during endoscopy must be balanced against the quality of observation, which requires controlling the movement time of the endoscope, such as the withdrawal time when an enteroscope moves from the patient's ileocecal part to the anus.
However, the movement time of an endoscope is currently determined, in most cases, by the user manually reviewing the recording and computing the time after the endoscopic procedure has finished. This process is cumbersome, so the efficiency of determining the movement time of the endoscope is low.
Disclosure of Invention
In view of the above, it is necessary to provide an endoscope movement time determination method and apparatus, a computer-readable storage medium, and a computer device that address the technical problem of inefficient endoscope movement time determination.
An endoscope movement time determination method comprising:
acquiring an endoscopic image shot by an endoscope in the moving process; the endoscope passes through the first part and the second part in sequence when moving;
carrying out part identification on the endoscopic image to obtain a part identification result;
determining a moving time starting point of the endoscope according to a time point corresponding to the endoscopic image of the first part represented by the part identification result;
determining a moving time end point of the endoscope according to a time point corresponding to the endoscopic image of the second part represented by the part identification result;
and determining the moving time of the endoscope according to the moving time starting point and the moving time end point.
An endoscope movement time determination apparatus, the apparatus comprising:
the endoscopic image acquisition module is used for acquiring an endoscopic image shot by the endoscope in the moving process; the endoscope passes through the first part and the second part in sequence when moving;
the part recognition module is used for carrying out part recognition on the endoscopic image to obtain a part recognition result;
a time starting point determining module, which is used for determining the moving time starting point of the endoscope according to the time point corresponding to the endoscopic image of the first part represented by the part identification result;
the time end point determining module is used for determining the moving time end point of the endoscope according to the time point corresponding to the endoscopic image of the second part represented by the part identification result;
and the moving time determining module is used for determining the moving time of the endoscope according to the moving time starting point and the moving time end point.
A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to perform the steps of the endoscope movement time determination method as described above.
A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to carry out the steps of the endoscope movement time determination method as described above.
With the above endoscope movement time determination method, apparatus, computer-readable storage medium, and computer device, part identification is performed on the endoscopic images captured while the endoscope sequentially passes through the first part and the second part. The movement time starting point is determined from the time point corresponding to an endoscopic image whose identification result represents the first part, the movement time end point is determined from the time point corresponding to an endoscopic image whose identification result represents the second part, and the endoscope movement time is determined from the starting point and the end point. In this process, the starting point and end point are obtained directly from the part identification results of the endoscopic images, and the movement time follows from them; no manual backtracking statistics are needed after the endoscopic procedure is completed. The determination process is thereby simplified, and the efficiency of determining the endoscope movement time is improved.
Drawings
FIG. 1 is a diagram of an application environment of a method for determining endoscope movement time in one embodiment;
FIG. 2 is a schematic flow chart diagram of a method for determining endoscope movement time in one embodiment;
FIG. 3 is a diagram illustrating a display device displaying a timing start prompt message in one embodiment;
FIG. 4 is a diagram illustrating a display device displaying a timing end prompt message in one embodiment;
FIG. 5 is a flow diagram illustrating the determination of an identification loss in one embodiment;
FIG. 6 is a schematic diagram of a model structure in one embodiment;
FIG. 7 is a schematic flow chart illustrating the process of determining the withdrawal time of an enteroscope in one embodiment;
FIG. 8 is a diagram of model training in one embodiment;
FIG. 9 is a schematic flowchart of a method for determining endoscope movement time in one embodiment;
FIG. 10 is a block diagram showing the construction of an endoscope movement time determining apparatus according to an embodiment;
FIG. 11 is a block diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The scheme provided by the embodiments of the present application relates to the Computer Vision (CV) technology of Artificial Intelligence (AI). Artificial intelligence is a theory, method, technology, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that machines can perceive, reason, and make decisions.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. AI software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer vision, as involved in the present application, is the science of how to make machines "see": using cameras and computers in place of human eyes to identify, track, and measure targets, and further processing the images so that they become more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies theories and techniques for building artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, and also include common biometric technologies such as face recognition and fingerprint recognition. The endoscope movement time determination method, apparatus, computer-readable storage medium, and computer device provided by the present application use AI computer vision technology to perform image recognition on the endoscopic images shot during the movement of the endoscope, so as to determine the endoscope movement time based on the parts the images represent; the details are described in the following embodiments.
Fig. 1 is a diagram of an application environment of the endoscope movement time determination method in one embodiment. Referring to FIG. 1, the method is applied to an enteroscope system that includes an enteroscope 110 and a server 120 connected via a network. The enteroscope 110 is a type of endoscope, such as a small-bowel enteroscope, a proctoscope, or a colonoscope. The server 120 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in FIG. 2, an endoscope movement time determination method is provided. The embodiment is mainly illustrated by applying the method to the server 120 in fig. 1. Referring to fig. 2, the endoscope movement time determination method specifically includes the steps of:
s202, acquiring an endoscopic image shot by an endoscope in the moving process; the endoscope passes through the first and second locations in sequence as it moves.
An endoscope is a detection instrument that integrates traditional optics, ergonomics, precision machinery, modern electronics, mathematics, and software. It carries an image sensor, optical lenses, illumination, and mechanical components, and can enter the stomach through the mouth or enter the body through other natural orifices. Lesions that cannot be visualized by X-rays can be seen with an endoscope; for example, a physician can observe ulcers or tumors in the stomach and plan treatment accordingly. By application, endoscopes include industrial endoscopes and medical endoscopes. Medical endoscopes include rigid-tube endoscopes, fiber-optic (flexible) endoscopes, and electronic endoscopes; by the position they reach, they can be divided into otorhinolaryngological endoscopes, oral endoscopes, dental endoscopes, neuroendoscopes, urethro-cystoscopes, resectoscopes, laparoscopes, arthroscopes, sinonasal endoscopes, laryngoscopes, enteroscopes, and so on.
Because a medical endoscope must enter the body, it causes discomfort to the subject, so the duration of the endoscope's movement operation should be kept as short as possible; on the other hand, to guarantee the observation effect, the observation should last as long as possible, which lengthens the movement time. To balance these two concerns, the movement time of a medical endoscope is usually constrained by specification, which safeguards the observation effect and reduces the probability of missed lesions. An endoscopic image is an image shot by the endoscope during its movement. The endoscope carries a camera head; after entering the subject, it captures images so that the interior can be observed from outside, for example observing the internal structure of an object through images shot by an industrial endoscope, or observing the stomach through images shot by a gastroscope. In a specific application, the endoscope captures a video stream, and each frame of endoscopic image can be extracted from it.
The first part and the second part are determined by the type of endoscope whose movement time is to be measured and by the operation stage. Different endoscopes correspond to different first and second parts; for example, a gastroscope and an otolaryngological endoscope observe different regions, so the parts used to determine their movement times differ. Different operation stages also correspond to different first and second parts; for an enteroscope, the parts observed during movement include the terminal ileum, ileocecal valve, appendiceal recess, ascending colon, hepatic flexure, transverse colon, splenic flexure, descending colon, sigmoid colon, rectum, and so on, and different operation stages involve different movement times. An enteroscope is a long, flexible instrument whose head may carry an ultrasonic scanning probe. The endoscopist typically inserts the probe through the anus, advances it to the cecum (the deepest point), and then begins to withdraw it, observing during withdrawal whether the colorectum shows lesions. Clear images of the colorectal interior allow direct observation of changes in the colorectal wall structure, which is important for the early diagnosis of digestive tract tumors and of bile duct and pancreatic diseases. In practice, the total time the enteroscope takes from the cecum back to the anus is the withdrawal time; guaranteeing a certain withdrawal time effectively improves the observation effect and the quality of the examination. In a specific implementation, if the endoscope is an enteroscope, the first part is the ileocecal part and the second part is the anus; determining the movement time of the enteroscope as it passes these two parts in sequence yields its withdrawal time.
In one specific application, a camera of the endoscope captures a video stream obtained during movement from a first site to a second site, and endoscopic images are sequentially extracted from the video stream.
And S204, carrying out part identification on the endoscopic image to obtain a part identification result.
During the movement of the endoscope, the endoscopic images shot by the camera head may cover different parts. Part identification is performed on each endoscopic image to determine which part it depicts, yielding a part identification result. For example, part identification of the endoscopic images shot by a laryngoscope determines the specific laryngeal part shown in each image.
And S206, determining the moving time starting point of the endoscope according to the time point corresponding to the endoscopic image of the first part represented by the part identification result.
The part identification result reflects the part imaged in the endoscopic image and may specifically include the name or number of that part. The time point is a temporal attribute of the endoscopic image; it may be determined from the image's timestamp in the video stream, or from a triggered timer, for example taking the time counted by the timer as the time point of the image. Specifically, the starting point of the endoscope's movement time through the first part is determined from the time point of an endoscopic image whose identification result indicates the first part. The movement time starting point is the moment at which timing begins; for example, in determining the withdrawal time of an enteroscope, the first part is the ileocecal part, and the starting point is the time point of an endoscopic image whose identification result represents the ileocecal part.
And S208, determining the moving time end point of the endoscope according to the time point corresponding to the endoscopic image of the second part represented by the part identification result.
Since the endoscope passes through the first part and then the second part during movement, the end point of the movement time is determined from the time point corresponding to an endoscopic image whose identification result indicates the second part. The movement time end point is the moment at which the endoscope reaches the second part. For example, in determining the withdrawal time of an enteroscope, the second part is the anus, and the end point is the time point of an endoscopic image whose identification result represents the anus.
And S210, determining the moving time of the endoscope according to the moving time starting point and the moving time end point.
After the movement time starting point and end point are obtained, the movement time of the endoscope is derived from them. In particular, the endoscope movement time can be determined directly from the time span between the starting point and the end point.
The above endoscope movement time determination method performs part identification on the endoscopic images captured while the endoscope sequentially passes through the first part and the second part, determines the movement time starting point from the time point corresponding to an image representing the first part, determines the movement time end point from the time point corresponding to an image representing the second part, and determines the endoscope movement time from the starting point and the end point. Because the starting point and end point are obtained directly from the part identification results, no manual backtracking statistics are needed after the procedure is completed, which simplifies the determination process and improves the efficiency of determining the endoscope movement time.
In one embodiment, acquiring endoscopic images captured by the endoscope during movement comprises: endoscopic images are sequentially extracted from a video stream obtained by shooting the endoscope in the moving process.
Specifically, a video stream shot by the endoscope during movement is acquired, and endoscopic images are sequentially extracted from it. In a specific application, the video stream can be obtained as the endoscope shoots it, and endoscopic images can be extracted from it in real time for movement time determination, improving the real-time performance of the method.
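As an illustrative sketch only (not part of the claimed method), extracting frames and their time points from a video stream might look like the following Python snippet; the OpenCV-based approach and the sampling interval are assumptions of this sketch.

    import cv2

    def extract_frames(source, sample_every_n=5):
        # Yield (time_point_seconds, frame) pairs from a video stream.
        # `source` may be a file path (stored video) or a device index /
        # stream URL (live capture); `sample_every_n` is an assumed rate.
        cap = cv2.VideoCapture(source)
        fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS unreported
        index = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % sample_every_n == 0:
                yield index / fps, frame  # time point derived from frame index
            index += 1
        cap.release()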
Further, performing the part recognition on the endoscopic image to obtain the part recognition result includes: inputting the extracted endoscopic images to a part recognition model in sequence; and acquiring the part recognition results sequentially output after the part recognition model processes the input endoscopic images.
In this embodiment, part identification of the endoscopic image is based on a part recognition model, which may be a machine learning model such as a Decision Tree (DT) model, a Support Vector Machine (SVM) model, or an Artificial Neural Network (ANN) model. An artificial neural network is a computational model inspired by the biological neural networks of the human brain and is widely used in machine learning applications such as speech recognition and computer image processing.
Specifically, after the endoscopic images shot by the endoscope are obtained, the extracted images are sequentially input to the part recognition model, so that the live video stream can be processed for part identification in real time, and the part identification results output by the model for each input image are collected in sequence. In a specific implementation, the part recognition model is trained in advance on endoscopic training images carrying part labels, which can be obtained by labeling endoscopic images extracted from historical video streams shot by endoscopes.
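A minimal per-frame inference sketch, assuming a trained PyTorch classifier; the preprocessing sizes and the `model` variable are assumptions, and frames taken from OpenCV would first need BGR-to-RGB conversion.

    import torch
    import torchvision.transforms as T

    preprocess = T.Compose([
        T.ToPILImage(),        # expects an RGB uint8 array
        T.Resize((224, 224)),
        T.ToTensor(),
    ])

    @torch.no_grad()
    def recognize_site(model, frame):
        # Return the predicted part index for one endoscopic frame.
        model.eval()
        x = preprocess(frame).unsqueeze(0)  # add a batch dimension
        logits = model(x)
        return int(logits.argmax(dim=1))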
In one embodiment, determining the start point of the movement time of the endoscope based on the time point corresponding to the endoscopic image of the first site indicated by the site recognition result includes: determining a first endoscopic image from the endoscopic image indicating the first site based on the site recognition result; determining a time point corresponding to the first endoscopic image; and taking the time point corresponding to the first endoscopic image as the moving time starting point of the endoscope.
In this embodiment, the movement time starting point is determined from the time point corresponding to the first endoscopic image, which is chosen from among the endoscopic images whose identification results indicate the first part. Specifically, after the part identification results are obtained, the first endoscopic image is determined from the images representing the first part. When images extracted from a live video stream are identified in real time, each identification result can be analyzed as it arrives, and an image determined to represent the first part can be taken as the first endoscopic image. When the movement time is determined offline from a stored video stream, the first endoscopic image is chosen from the images representing the first part according to their time points; in particular, when the endoscope passes through the first part and then the second part and the movement time between them is needed, the image with the earliest time point among those representing the first part can be used as the first endoscopic image.
After the first endoscopic image is determined from the images representing the first part, its corresponding time point is determined. The time point may be taken from the image's timestamp in the video stream. When images extracted from a live video stream are processed in real time, the time point can instead be determined by a timer: when the first endoscopic image is obtained, the timer's current reading is taken as its time point. The timer may start when the endoscope begins moving, or it may start when the first endoscopic image is determined, in which case that image's time point is the time origin. The time point corresponding to the first endoscopic image is then used as the movement time starting point; that is, when the endoscope passes through the first and second parts in sequence, the time point of the first endoscopic image shot at the first part is the starting point of the movement time.
In one embodiment, determining the first endoscopic image based on the endoscopic image representing the first site as a result of the site recognition comprises: taking an endoscopic image with the earliest shooting time in the endoscopic images with the part identification results representing the first part as a first endoscopic image; or determining the first endoscopic image according to the endoscopic images of the first part represented by the part identification results of at least two continuous frames.
In this embodiment, the first endoscopic image is determined from the images whose identification results represent the first part, according to a preset selection method. Specifically, when the images shot live by the endoscope are identified in real time, the image with the earliest shooting time among those representing the first part can be taken as the first endoscopic image; in practice this is the first image whose identification result represents the first part. When a single image representing the first part suffices to conclude that the endoscope has reached the first part, that earliest image is used directly as the first endoscopic image, and the movement time starting point is determined from its time point.
Further, after real-time identification of the live images, the first endoscopic image may instead be determined from at least two consecutive frames whose identification results all represent the first part. For the single earliest image representing the first part, the endoscope may indeed have reached the first part, but the identification result may also be erroneous; determining the first endoscopic image from the results of several consecutive frames guards against such errors. Thus, if the identification results of at least two consecutive frames all represent the first part, the first endoscopic image is determined from those frames. In a specific implementation, one frame among them is chosen: the first frame, the last frame, or an intermediate frame of the consecutive run may serve as the first endoscopic image, as illustrated in the sketch below.
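A sketch of the consecutive-frame confirmation just described, assuming identification results arrive one frame at a time; the window size k and the choice of the run's first frame are assumptions.

    from collections import deque

    class ConsecutiveSiteDetector:
        # Confirm a part only after k consecutive frames agree,
        # guarding against single-frame identification errors.
        def __init__(self, target_part, k=3):
            self.target_part = target_part
            self.recent = deque(maxlen=k)

        def update(self, part, time_point):
            # Feed one identification result; once k consecutive frames
            # show the target part, return the run's first time point.
            self.recent.append((part, time_point))
            if len(self.recent) == self.recent.maxlen and \
                    all(p == self.target_part for p, _ in self.recent):
                return self.recent[0][1]
            return None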
In one embodiment, determining the moving time end point of the endoscope based on the time point corresponding to the endoscopic image of the second site indicated by the site recognition result includes: determining a second endoscopic image from the endoscopic image indicating the second site based on the site recognition result; determining a time point corresponding to the second endoscopic image; and taking the time point corresponding to the second endoscopic image as the moving time end point of the endoscope.
In this embodiment, the movement time end point is determined from the time point corresponding to the second endoscopic image, which is chosen from among the endoscopic images whose identification results indicate the second part. Specifically, after the part identification results are obtained, the second endoscopic image is determined from the images representing the second part. When images extracted from a live video stream are identified in real time, an image determined to represent the second part can be taken as the second endoscopic image. When the movement time is determined offline from a stored video stream, the second endoscopic image is chosen from the images representing the second part according to their time points; in particular, when the endoscope passes through the first part and then the second part and the movement time between them is needed, the image with the earliest time point among those representing the second part can be used as the second endoscopic image.
After the second endoscopic image is determined from the images representing the second part, its corresponding time point is determined, for example from its timestamp in the video stream. When live images are processed in real time, the time point can be determined by a timer: when the second endoscopic image is obtained, the timer's current reading is taken as its time point. The timer may start when the endoscope begins its movement and stop when the second endoscopic image is determined, the elapsed time giving the image's time point. The time point corresponding to the second endoscopic image is then used as the movement time end point; that is, when the endoscope passes through the first and second parts in sequence, the time point of the second endoscopic image shot at the second part is the end point of the movement time.
In one embodiment, determining the second endoscopic image based on the endoscopic image indicative of the second site as a result of the site identification comprises: taking the endoscopic image with the earliest shooting time among the endoscopic images whose part identification results represent the second part as the second endoscopic image; or determining the second endoscopic image according to the endoscopic images whose part identification results of at least two consecutive frames represent the second part.
In this embodiment, the second endoscopic image is determined from the images whose identification results represent the second part, according to a preset selection method. Specifically, when the images shot live by the endoscope are identified in real time, the image with the earliest shooting time among those representing the second part can be taken as the second endoscopic image; in practice this is the first image whose identification result represents the second part. When a single image representing the second part suffices to conclude that the endoscope has reached the second part, that earliest image is used directly as the second endoscopic image, and the movement time end point is determined from its time point.
Further, the second endoscopic image may instead be determined from at least two consecutive frames whose identification results all represent the second part. For the single earliest image representing the second part, the endoscope may indeed have reached the second part, but the identification result may also be erroneous; determining the second endoscopic image from the results of several consecutive frames guards against such errors. Thus, if the identification results of at least two consecutive frames all represent the second part, the second endoscopic image is determined from those frames, for example taking the first, the last, or an intermediate frame of the consecutive run as the second endoscopic image.
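Combining the earlier sketches, the starting point and end point could be detected in one pass over the stream; the class indices and the file name here are hypothetical, and `model` is the trained classifier assumed above.

    ILEOCECAL, ANUS = 1, 2  # hypothetical class indices of the model

    start_detector = ConsecutiveSiteDetector(ILEOCECAL, k=3)
    end_detector = ConsecutiveSiteDetector(ANUS, k=3)
    start_point = end_point = None

    for ts, frame in extract_frames("withdrawal_video.mp4"):
        part = recognize_site(model, frame)
        if start_point is None:
            start_point = start_detector.update(part, ts)
        elif end_point is None:
            end_point = end_detector.update(part, ts)
            if end_point is not None:
                break  # both time points found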
In one embodiment, after determining the start point of the moving time of the endoscope based on the time point corresponding to the endoscopic image of the first site indicated by the site recognition result, the method further includes: generating a timing start prompt message; after the end point of the moving time of the endoscope is determined based on the time point corresponding to the endoscopic image indicating the second site in the site recognition result, the method further includes: and generating a timing end prompt message.
In this embodiment, after the moving time starting point and the moving time ending point of the endoscope are determined, corresponding timing prompt messages, such as a timing starting prompt message corresponding to the moving time starting point and a timing ending prompt message corresponding to the moving time ending point, may also be generated respectively.
Specifically, after the movement time starting point is determined, a timing start prompt message is generated to inform the doctor that timing has begun; the message may also include the starting point itself. For example, when the movement time is determined from a live video stream, the generated message can be played through a speaker while the timer runs, and can also be presented on a display device. Fig. 3 is a schematic diagram of a timing start prompt message displayed on a display device in a specific enteroscope application.
After the movement time end point is determined, a timing end prompt message is generated to inform the doctor that timing has finished. The message may include the end point, and may also include the endoscope movement time determined from the starting point and the end point. It can be played through a speaker or shown on a display device. Fig. 4 is a schematic diagram of a timing end prompt message displayed on a display device in a specific enteroscope application.
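An illustrative sketch of composing the two prompt messages; the wording and formatting are assumptions, not the patent's specification.

    def timing_prompt(event, time_point, movement_time=None):
        # Compose a timing prompt message for display or speech output.
        if event == "start":
            return f"Timing started at {time_point:.1f} s."
        message = f"Timing ended at {time_point:.1f} s."
        if movement_time is not None:
            message += f" Withdrawal time: {movement_time / 60:.1f} min."
        return message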
In one embodiment, determining the endoscope movement time based on the movement time start point and the movement time end point comprises: determining a time difference between a moving time starting point and a moving time ending point; and obtaining the moving time of the endoscope passing through the first part and the second part when the endoscope moves according to the time difference.
In this embodiment, the endoscope movement time is determined from the time difference between the movement time starting point and the end point. Specifically, after the two are obtained, their difference is computed, for example by subtracting the starting point from the end point, and the endoscope movement time through the first and second parts is obtained from this difference; the difference itself can be used directly as the movement time. For example, in an enteroscopy application where the movement time is the withdrawal time, the starting point corresponds to the moment the endoscope reaches the ileocecal part and the end point to the moment it reaches the anus, so the withdrawal time is the difference between the two.
In one embodiment, the endoscope movement time determination method further comprises: inquiring a preset endoscope movement constraint condition; and obtaining an endoscope movement evaluation result according to the comparison result of the endoscope movement time and the endoscope movement constraint condition.
In this embodiment, the obtained endoscope movement time is evaluated to judge the quality of the doctor's current endoscope operation. Specifically, after the movement time is obtained, a preset endoscope movement constraint is queried. The constraint bounds the movement time, for example requiring an enteroscope withdrawal time of no less than 6 minutes, or no less than 8 minutes. Different endoscopes have different movement constraints, set flexibly according to actual requirements. The obtained movement time is compared against the constraint, and a movement evaluation result is produced from the comparison. This result can serve as an index for evaluating the endoscope operation, ensuring that the movement time meets the prescribed operating specification and improving the quality of medical service.
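A sketch of the movement evaluation, using the 6-minute withdrawal constraint mentioned above as an example threshold; this also shows the time-difference computation of the previous embodiment.

    MIN_WITHDRAWAL_SECONDS = 6 * 60  # example constraint from the text

    def evaluate_movement(start_point, end_point,
                          minimum=MIN_WITHDRAWAL_SECONDS):
        # Movement time is the span between the two time points;
        # the evaluation result compares it against the constraint.
        movement_time = end_point - start_point
        meets_spec = movement_time >= minimum
        return movement_time, meets_spec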
In one embodiment, the training of the part recognition model comprises: acquiring an endoscopic training image carrying a position label; carrying out part recognition on the endoscopic training image through a part recognition model to be trained to obtain a part recognition training result to which the endoscopic training image belongs correspondingly; determining the recognition loss according to the difference between the part recognition training result and the part label carried by the endoscopic training image; and adjusting the part recognition model to be trained according to the recognition loss, and continuing training until the training ending condition is met, so as to obtain the trained part recognition model.
In this embodiment, the part recognition model is an artificial neural network model trained with an artificial neural network algorithm. During training, endoscopic training images carrying part labels are acquired and used as training samples; they can be obtained by labeling endoscopic images extracted from historical video streams shot by endoscopes. Part identification is performed on each endoscopic training image by the part recognition model to be trained, producing the part identification training result for that image. In a specific application, the part recognition model can be a DenseNet deep convolutional network, for example a DenseNet-121 model. The recognition loss is determined from the difference between the model's training result and the part label carried by the corresponding input image. The recognition loss may include classification cross-entropy loss, L2 regularization, and center loss; when the training images include data-enhanced endoscopic enhanced images, it may further include a consistency constraint between each enhanced image and its corresponding original image. After the recognition loss is determined, the model is adjusted according to it and training continues until the end condition is met, for example until the recognition loss converges, yielding the trained part recognition model. The trained model can identify the part in an input endoscopic image and output a part identification result, from which the part represented by the image is determined.
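A minimal training skeleton, assuming a PyTorch data loader and some composite loss function (a fuller loss is sketched later in this section); the hyperparameters are assumptions, and L2 regularization enters through the optimizer's weight_decay.

    import torch

    def train(model, loader, loss_fn, epochs=10, lr=1e-3):
        # loader yields (images, labels); loss_fn returns the recognition
        # loss (richer variants also take feature tensors as inputs).
        optimizer = torch.optim.Adam(model.parameters(), lr=lr,
                                     weight_decay=1e-4)  # L2 regularization
        for _ in range(epochs):
            for images, labels in loader:
                optimizer.zero_grad()
                logits = model(images)
                loss = loss_fn(logits, labels)
                loss.backward()
                optimizer.step()
        return model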
In one embodiment, before acquiring the endoscopic training image carrying the site tag, the method further comprises: acquiring an endoscopic original image carrying a position label; carrying out data enhancement processing on the endoscopic original image to obtain an endoscopic enhanced image which is corresponding to the endoscopic original image and carries a position label; and obtaining an endoscopic training image according to the endoscopic original image and the endoscopic enhanced image.
In the embodiment, the data enhancement is performed on the endoscopic original image carrying the position label, so that the model training sample data is expanded, the model training efficiency is improved, and the generalization capability of the trained model is improved.
Specifically, endoscopic original images carrying part labels are acquired; these are obtained by labeling endoscopic images extracted directly from historical video streams shot by endoscopes. Data enhancement such as random cropping, random rotation, and random jittering of brightness, color, and contrast is applied to each original image, producing an endoscopic enhanced image that carries the same part label as its original. Random cropping selects an image patch of fixed size at a random position in the original image as the enhanced image. Random rotation applies a rotation of random angle to the original image. Random jittering of brightness, color, and contrast randomly perturbs the image's color, brightness, and contrast values. Data enhancement effectively expands the scale of the model's sample data, improving training efficiency and effect. The endoscopic training images used as training samples are then obtained from the original images together with the enhanced images.
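One possible torchvision pipeline for the described enhancements, as a sketch; the crop size and jitter strengths are assumptions.

    import torchvision.transforms as T

    augment = T.Compose([
        T.RandomCrop(224),                 # fixed-size patch at a random position
        T.RandomRotation(degrees=180),     # rotation by a random angle
        T.ColorJitter(brightness=0.2,      # random jittering of brightness,
                      contrast=0.2,        # contrast, and color
                      saturation=0.2),
    ])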
In one embodiment, as shown in fig. 5, the step of determining the recognition loss according to the difference between the part recognition training result and the part tag carried by the endoscopic training image includes:
and S502, obtaining a first recognition loss according to the classification cross entropy between the part recognition training result and the part label.
In this embodiment, the recognition loss of the part recognition model is determined from the classification cross entropy, the consistency constraint, and the center loss. The classification cross entropy measures the difference between the part identification training result and the part label and can be computed from the two; it is taken as the first recognition loss.
S504, obtaining a second identification loss according to the difference of the identification characteristics between the endoscopic enhanced image and the endoscopic original image in the endoscopic training image; the recognition features are obtained from a part recognition model to be trained.
When the endoscopic training images include data-enhanced endoscopic enhanced images, the features the model extracts from different transformations of the same picture should be close to each other, so a consistency constraint is introduced. The consistency constraint is computed from the difference between the identification features of an endoscopic enhanced image and those of its corresponding original image, and serves as the second recognition loss of the part recognition model. The second recognition loss reduces the recognition discrepancy between an enhanced image and its original. The identification features are taken from the part recognition model to be trained; for a DenseNet model, for example, they can be taken from the input of the softmax classification layer at the end of the model.
And S506, obtaining a third recognition loss according to the center loss of the endoscopic training image.
The center loss of an endoscopic training image can be determined from the image's feature vector and the center feature vector of the part category the image belongs to. The feature vector can be extracted from the part recognition model after it outputs the identification training result; the center feature vector of the image's category is determined from the feature centers maintained for the different part categories. The third recognition loss is obtained from this center loss; it makes the features the model learns for each part category more stable and cohesive, improving the model's generalization in real environments.
S508, determining the recognition loss according to the first recognition loss, the second recognition loss and the third recognition loss.
The recognition loss is determined from the obtained first recognition loss, second recognition loss and third recognition loss; specifically, the three are combined to obtain the recognition loss. In addition, L2 regularization can be introduced when determining the recognition loss, to prevent the model from over-fitting and ensure the training effect of the model.
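Putting the three losses together, with an L2 regular term to guard against over-fitting, can be sketched as below; this reuses the loss sketches above, and the weights lam and mu (corresponding loosely to the hyperparameters of formula (1) further on) are placeholders, not values from the patent.

def recognition_loss(site_logits, site_labels, feats_orig, feats_enh,
                     centers, model, r=1e-4, lam=0.1, mu=0.01):
    loss = first_recognition_loss(site_logits, site_labels)
    loss = loss + lam * second_recognition_loss(feats_orig, feats_enh)
    loss = loss + mu * third_recognition_loss(feats_orig, site_labels, centers)
    # L2 regular term over the model parameters (often supplied instead
    # through the optimizer's weight_decay setting).
    loss = loss + r * sum(p.pow(2).sum() for p in model.parameters())
    return loss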
In one embodiment, an endoscope movement time determination method is provided, which is implemented based on a pre-trained part recognition model. The part recognition model adopts the DenseNet-121 structure, which comprises 4 dense blocks; the growth rate of the model is set to 24, and the feature compression ratio of the transition layers is set to 0.5. The specific structure of the model is shown in FIG. 6. The part recognition model comprises, connected in sequence, a Convolution layer, a Pooling layer, a first dense block, a first transition layer, a second dense block, a second transition layer, a third dense block, a third transition layer, a fourth dense block, and a Classification layer; the output size (Output Size) and structure of each layer are shown in FIG. 6.
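A hedged sketch of instantiating such a model with torchvision (the patent does not specify an implementation framework). torchvision's DenseNet implementation fixes the transition-layer compression at 0.5, matching the ratio above; the block configuration (6, 12, 24, 16) is the standard DenseNet-121 layout, and num_classes=3 (ileocecal part, anus, other) is an assumption for this example.

from torchvision.models import DenseNet

site_recognition_model = DenseNet(
    growth_rate=24,                 # growth rate set to 24, as above
    block_config=(6, 12, 24, 16),   # the 4 dense blocks of DenseNet-121
    num_classes=3,                  # assumed: ileocecal part / anus / other
)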
In this embodiment, the endoscope is an enteroscope, and the endoscope movement time determination method is used to determine the withdrawal time of the enteroscope. The input of the part recognition model is a white-light RGB endoscopic image shot by the enteroscope, and the output is the part, i.e. the organ type, shown in the input endoscopic image, including the ileocecal part, the anus and other parts. Fig. 7 is a schematic flow chart of determining the withdrawal time of the enteroscope using the part recognition model. The input endoscopic image is processed by a convolution layer, then by each layer of the DenseNet-121 model in sequence, and a part recognition result, specifically the ileocecal part, the anus or another part, is finally output through a linear classification layer (Linear). In this embodiment, using the DenseNet-121 model for part recognition of endoscopic images shot by the enteroscope matches the complexity of the task: some part types are very similar in form and color, so recognizing them requires combining low-level color and texture features with high-level abstract semantic features, such as the hue and smoothness of the whole mucosa, to obtain an accurate part recognition result.
Furthermore, when the part recognition model is trained, account is taken of the differences among endoscopic images shot by enteroscopes that arise from differences in acquisition environment, acquisition equipment and users' shooting habits: images of the same organ can look different, while local appearances of different organs can be very similar. The labeled data is also limited (on the order of tens of thousands of images), and the distribution and data volume of the different categories differ greatly.
To ensure that the DenseNet-121 model can extract sufficiently robust features, a rich and diversified data enhancement technique is adopted during training. Specifically, data enhancement processing is performed on the endoscopic original image, which is obtained by labeling an endoscopic image extracted directly from a historical video stream shot by the endoscope. The processing includes random cropping, random rotation, and random jittering of brightness, color and contrast, and yields an endoscopic enhanced image carrying the same part label as the corresponding endoscopic original image. Random cropping randomly selects an image block (patch) of fixed size from the endoscopic original image as an endoscopic enhanced image for the model. Random rotation applies a rotation transformation of a random angle to the endoscopic original image. Random jittering of brightness, color and contrast randomly changes the color, brightness and contrast values of the endoscopic original image. Fig. 8 is a schematic diagram of model training, in one embodiment, using endoscopic training images obtained from the endoscopic original image and the endoscopic enhanced image after data enhancement processing. In fig. 8, the endoscopic original image undergoes rotation and color transformation, and model training is performed on these together with the endoscopic original image itself. Data enhancement processing of the endoscopic original image effectively expands the scale of the model sample data, thereby improving model training efficiency and effect.
Regarding model generalization ability, in addition to the classification cross entropy loss and L2 regularization, a consistency constraint is introduced in view of the fact that the features the part recognition model extracts from different transformations of the same picture should be very close to each other. Specifically, the consistency constraint is obtained from the difference of the identification features between the endoscopic enhanced image and the endoscopic original image in the endoscopic training image, and serves as the second recognition loss of the part recognition model. The second recognition loss reduces the recognition difference between the endoscopic enhanced image and the corresponding endoscopic original image. In this embodiment, the classification cross entropy, the L2 regularization and the consistency constraint are expressed by formula (1):
L(w) = -\frac{1}{n}\sum_{i=1}^{n} y_i \log f(x_i; w) + r\|w\|_2^2 + \frac{\lambda}{n m}\sum_{i=1}^{n}\sum_{k=1}^{m}\|h_0 - h_k\|_2^2 \qquad (1)

wherein L(w) is the defined first cost function and w denotes the parameters of the part recognition model; n is the number of training samples, i.e. the number of endoscopic training images; x_i is an input endoscopic training image (its pixel matrix) and y_i is the part label carried by that image; r is a hyperparameter taking a value greater than 0 to represent a weight; m is the number of data enhancement processes; h_0 denotes the feature vector extracted from the input endoscopic original image by the layer preceding the softmax of the part recognition model, and h_k denotes the feature vector extracted by the same layer from the endoscopic enhanced image generated after the endoscopic original image undergoes a certain transformation (such as random rotation or random color dithering); λ is a hyperparameter taking a value greater than 0 to represent a weight. The term -\frac{1}{n}\sum_{i=1}^{n} y_i \log f(x_i; w) is the classification cross entropy, r\|w\|_2^2 is the L2 regularization of the parameters, and \sum\|h_0 - h_k\|_2^2 is the consistency constraint.
Further, to ensure that the features the model learns for each part category are more stable and cohesive, a center loss is introduced into the objective function, further improving the generalization ability of the model in the real environment. The center loss is shown in formula (2):
L_C = \frac{1}{2}\sum_{i=1}^{n}\|h_0 - C_{y_i}\|_2^2 \qquad (2)

wherein C_{y_i} is the central feature vector of the part category corresponding to the i-th endoscopic training image, and h_0 is the feature vector of that endoscopic training image as defined in formula (1).
A loss function of the part recognition model is constructed from the first cost function and the center loss to determine the recognition loss of the model, and the parameters of the part recognition model are adjusted according to the recognition loss during training. This improves model training efficiency while ensuring the generalization ability of the model and improving the accuracy of its part recognition.
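A compact training-step sketch under the same PyTorch assumptions, reusing recognition_loss from the sketch above; forward_with_features is a hypothetical helper returning the logits together with the features feeding the softmax layer, and the optimizer settings are illustrative.

import torch

optimizer = torch.optim.SGD(site_recognition_model.parameters(),
                            lr=0.01, weight_decay=1e-4)  # weight_decay = L2 regular term

def training_step(original_batch, enhanced_batch, labels, centers, model):
    logits, feats_orig = forward_with_features(model, original_batch)  # hypothetical helper
    _, feats_enh = forward_with_features(model, enhanced_batch)
    loss = recognition_loss(logits, labels, feats_orig, feats_enh,
                            centers, model, r=0.0)  # L2 already handled by weight_decay
    optimizer.zero_grad()   # adjust the parameters of the part recognition
    loss.backward()         # model according to the recognition loss
    optimizer.step()
    return loss.item()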
In one embodiment, as shown in fig. 9, there is provided an endoscope movement time determination method including:
S901, sequentially extracting endoscopic images from a video stream shot by an endoscope in the moving process;
S902, inputting the extracted endoscopic images into a part recognition model in sequence;
S903, acquiring the part recognition results sequentially output after the part recognition model processes the input endoscopic images.
In this embodiment, endoscopic images are extracted from the video stream currently being shot by the endoscope, and part recognition is performed on them in real time by the part recognition model to obtain part recognition results.
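Steps S901 to S903 can be sketched as follows, assuming OpenCV reads the endoscope's video stream and preprocess converts an RGB frame into the model's expected input tensor; the capture source and helper names are illustrative assumptions.

import cv2
import torch

def recognize_parts(video_source, model, preprocess, device="cpu"):
    capture = cv2.VideoCapture(video_source)    # endoscope video stream
    model.eval()
    while True:
        ok, frame = capture.read()               # next endoscopic image
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        batch = preprocess(rgb).unsqueeze(0).to(device)
        with torch.no_grad():
            part = model(batch).argmax(dim=1).item()    # part recognition result
        timestamp = capture.get(cv2.CAP_PROP_POS_MSEC) / 1000.0
        yield timestamp, part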
S904, taking the endoscopic image with the earliest shooting time among the endoscopic images of the first part represented by the part recognition results as the first endoscopic image;
S905, determining a time point corresponding to the first endoscopic image;
S906, taking the time point corresponding to the first endoscopic image as the moving time starting point of the endoscope, and generating a timing start prompt message;
S907, taking the endoscopic image with the earliest shooting time among the endoscopic images of the second part represented by the part recognition results as the second endoscopic image;
S908, determining a time point corresponding to the second endoscopic image;
S909, taking the time point corresponding to the second endoscopic image as the moving time end point of the endoscope, and generating a timing end prompt message.
In this embodiment, when an endoscopic image whose part recognition result represents the first part is detected, that endoscopic image is determined as the first endoscopic image, its corresponding time point is taken as the moving time starting point of the endoscope, and a timing start prompt message is generated to prompt the operator of the endoscope that timing has started. When an endoscopic image whose part recognition result represents the second part is detected, that endoscopic image is determined as the second endoscopic image, its corresponding time point is taken as the moving time end point of the endoscope, and a timing end prompt message is generated to prompt the operator that timing has ended.
S910, determining the time difference between the moving time starting point and the moving time end point;
S911, obtaining, according to the time difference, the endoscope movement time taken to pass through the first part and the second part during movement;
S912, querying a preset endoscope movement constraint condition;
S913, obtaining an endoscope movement evaluation result according to the comparison between the endoscope movement time and the endoscope movement constraint condition.
The endoscope movement time is determined directly from the time span between the moving time starting point and the moving time end point. The obtained endoscope movement time is then evaluated against the preset endoscope movement constraint condition, thereby assessing the quality of the current doctor's endoscope operation.
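The remaining steps S904 to S913 can be sketched as below; the part indices and the six-minute constraint are assumptions for illustration (a minimum withdrawal time of about six minutes is a commonly cited enteroscopy quality guideline, not a value fixed by the patent).

FIRST_PART, SECOND_PART = 0, 1     # assumed indices, e.g. ileocecal part, anus
MIN_MOVE_SECONDS = 360             # assumed preset endoscope movement constraint

def determine_movement_time(recognitions):
    start = end = None
    for timestamp, part in recognitions:
        if part == FIRST_PART and start is None:
            start = timestamp      # moving time starting point; timing start prompt
        elif part == SECOND_PART and start is not None:
            end = timestamp        # moving time end point; timing end prompt
            break
    if start is None or end is None:
        return None
    move_time = end - start        # time difference between start and end
    meets_constraint = move_time >= MIN_MOVE_SECONDS   # movement evaluation result
    return move_time, meets_constraint

The two sketches compose naturally, e.g. determine_movement_time(recognize_parts("exam.mp4", site_recognition_model, preprocess)).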
Fig. 9 is a flowchart illustrating a method for determining the moving time of an endoscope in one embodiment. It should be understood that, although the steps in the flowchart of fig. 9 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 9 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose order of performance is not necessarily sequential; they may be performed in turn or in alternation with other steps or with at least some of the sub-steps or stages of other steps.
As shown in fig. 10, in one embodiment, there is provided an endoscope movement time determining apparatus 1000 including:
an endoscopic image acquisition module 1002, configured to acquire an endoscopic image captured by an endoscope during a moving process; the endoscope passes through the first part and the second part in sequence when moving;
a part recognition module 1004, configured to perform part recognition on the endoscopic image to obtain a part recognition result;
a time starting point determining module 1006, configured to determine a moving time starting point of the endoscope according to a time point corresponding to the endoscopic image of the first part represented by the part recognition result;
a time end point determining module 1008, configured to determine a moving time end point of the endoscope according to a time point corresponding to the endoscopic image of the second part represented by the part recognition result;
and a moving time determining module 1010 for determining the moving time of the endoscope according to the moving time starting point and the moving time ending point.
In one embodiment, the endoscopic image acquisition module 1002 includes a video stream processing module for sequentially extracting endoscopic images from a video stream captured by the endoscope during movement; the part recognition module 1004 includes a model input module and a recognition result acquisition module; wherein: the model input module is used for sequentially inputting the extracted endoscopic images to the part recognition model; and the recognition result acquisition module is used for acquiring the part recognition results which are sequentially output after the part recognition model processes the input endoscopic images.
In one embodiment, the time origin determination module 1006 includes a first endoscopic image module, a first time point module, and a moving time origin module; wherein: a first endoscopic image module for determining a first endoscopic image according to the endoscopic image of the first part represented by the part identification result; the first time point module is used for determining a time point corresponding to the first endoscopic image; and the moving time starting point module is used for taking a time point corresponding to the first endoscopic image as the moving time starting point of the endoscope.
In one embodiment, the first endoscopic image module is further configured to take the endoscopic image with the earliest shooting time among the endoscopic images of the first part represented by the part recognition results as the first endoscopic image, or to determine the first endoscopic image according to endoscopic images of the first part represented by the part recognition results of at least two consecutive frames.
In one embodiment, the time endpoint determination module 1008 includes a second endoscopic image module, a second time point module, and a movement time endpoint module; wherein: the second endoscopic image module is used for determining a second endoscopic image according to the endoscopic image of the second part represented by the part identification result; the second time point module is used for determining a time point corresponding to the second endoscopic image; and the moving time end point module is used for taking the time point corresponding to the second endoscopic image as the moving time end point of the endoscope.
In one embodiment, the second endoscopic image module is further configured to take the endoscopic image with the earliest shooting time among the endoscopic images of the second part represented by the part recognition results as the second endoscopic image, or to determine the second endoscopic image according to endoscopic images of the second part represented by the part recognition results of at least two consecutive frames.
In one embodiment, the apparatus further comprises a timing start prompt module and a timing end prompt module; wherein: the timing start prompt module is used for generating a timing start prompt message; and the timing end prompt module is used for generating a timing end prompt message.
In one embodiment, the move time determination module 1010 includes a time difference determination module and a move time module; wherein: the time difference determining module is used for determining the time difference between the moving time starting point and the moving time end point; and the moving time module is used for obtaining the moving time of the endoscope passing through the first part and the second part when the endoscope moves according to the time difference.
In one embodiment, the apparatus further comprises a constraint condition module and an evaluation result obtaining module; wherein: the constraint condition module is used for querying a preset endoscope movement constraint condition; and the evaluation result obtaining module is used for obtaining an endoscope movement evaluation result according to the comparison between the endoscope movement time and the endoscope movement constraint condition.
In one embodiment, the apparatus further comprises a training image acquisition module, a training recognition module and a recognition loss determining module; wherein: the training image acquisition module is used for acquiring an endoscopic training image carrying a part label; the training recognition module is used for performing part recognition on the endoscopic training image through the part recognition model to be trained to obtain a part recognition training result corresponding to the endoscopic training image; and the recognition loss determining module is used for determining the recognition loss according to the difference between the part recognition training result and the part label carried by the endoscopic training image, adjusting the part recognition model to be trained according to the recognition loss, and continuing training until a training end condition is met, so as to obtain the trained part recognition model.
In one embodiment, the apparatus further comprises an original image acquisition module, an enhancement processing module and a training image obtaining module; wherein: the original image acquisition module is used for acquiring an endoscopic original image carrying a part label; the enhancement processing module is used for performing data enhancement processing on the endoscopic original image to obtain an endoscopic enhanced image carrying the part label and corresponding to the endoscopic original image; and the training image obtaining module is used for obtaining an endoscopic training image according to the endoscopic original image and the endoscopic enhanced image.
In one embodiment, the identification loss determination module includes a first identification loss module, a second identification loss module, a third identification loss module, and an identification loss module; wherein: the first recognition loss module is used for obtaining first recognition loss according to the classification cross entropy between the part recognition training result and the part label; the second identification loss module is used for obtaining second identification loss according to the difference of the identification characteristics between the endoscopic enhanced image and the endoscopic original image in the endoscopic training image; the identification characteristics are obtained from a part identification model to be trained; the third identification loss module is used for obtaining third identification loss according to the center loss of the endoscopic training image; and the identification loss module is used for determining the identification loss according to the first identification loss, the second identification loss and the third identification loss.
FIG. 11 is a diagram illustrating an internal structure of a computer device in one embodiment. The computer device may specifically be the server 120 in fig. 1. As shown in fig. 11, the computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the endoscope movement time determination method. The internal memory may also have stored therein a computer program that, when executed by the processor, causes the processor to execute a method of determining endoscope movement time.
Those skilled in the art will appreciate that the structure shown in fig. 11 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown, may combine certain components, or may have a different arrangement of components.
In one embodiment, the endoscope movement time determination apparatus provided herein may be implemented in the form of a computer program that is executable on a computer device such as that shown in fig. 11. The memory of the computer device may store various program modules constituting the endoscope movement time determination apparatus, such as an endoscopic image acquisition module 1002, a site recognition module 1004, a time start point determination module 1006, a time end point determination module 1008, and a movement time determination module 1010 shown in fig. 10. The computer program constituted by the respective program modules causes the processor to execute the steps in the endoscope movement time determination method of the respective embodiments of the present application described in the present specification.
For example, the computer device shown in fig. 11 may acquire, through the endoscopic image acquisition module 1002 in the endoscope movement time determining apparatus shown in fig. 10, an endoscopic image shot by the endoscope in the moving process, the endoscope passing through the first part and the second part in sequence when moving. The computer device may perform part recognition on the endoscopic image through the part recognition module 1004 to obtain a part recognition result. The computer device may determine the moving time starting point of the endoscope through the time starting point determining module 1006 according to the time point corresponding to the endoscopic image of the first part represented by the part recognition result, and determine the moving time end point of the endoscope through the time end point determining module 1008 according to the time point corresponding to the endoscopic image of the second part represented by the part recognition result. The computer device may determine the endoscope movement time from the moving time starting point and the moving time end point through the movement time determining module 1010.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the above-described endoscope movement time determination method. Here, the steps of the endoscope movement time determination method may be the steps in the endoscope movement time determination methods of the respective embodiments described above.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, causes the processor to perform the steps of the endoscope movement time determination method described above. Here, the steps of the endoscope movement time determination method may be the steps in the endoscope movement time determination methods of the respective embodiments described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, any combination of these technical features should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (15)

1. An endoscope movement time determination method comprising:
acquiring an endoscopic image shot by an endoscope in the moving process; the endoscope passes through the first part and the second part in sequence when moving;
carrying out part identification on the endoscopic image to obtain a part identification result;
determining a moving time starting point of the endoscope according to a time point corresponding to the endoscopic image of the first part represented by the part identification result;
determining a moving time end point of the endoscope according to a time point corresponding to the endoscopic image of the second part represented by the part identification result;
and determining the moving time of the endoscope according to the moving time starting point and the moving time end point.
2. The method according to claim 1, wherein the acquiring an endoscopic image shot by an endoscope in the moving process comprises:
sequentially extracting endoscopic images from a video stream shot by the endoscope in the moving process;
the performing the site recognition on the endoscopic image to obtain a site recognition result includes:
inputting the extracted endoscopic images to a part recognition model in sequence;
and acquiring the part recognition results which are sequentially output after the part recognition model processes the input endoscopic images.
3. The method according to claim 1, wherein the determining a moving time starting point of the endoscope according to a time point corresponding to the endoscopic image of the first part represented by the part identification result comprises:
determining a first endoscopic image according to the endoscopic image of the first part represented by the part identification result;
determining a time point corresponding to the first endoscopic image;
and taking the time point corresponding to the first endoscopic image as the starting point of the moving time of the endoscope.
4. The method according to claim 3, wherein the determining a first endoscopic image according to the endoscopic image of the first part represented by the part identification result comprises:
taking an endoscopic image with the earliest shooting time among the endoscopic images of the first part represented by the part identification result as the first endoscopic image; or
determining the first endoscopic image according to endoscopic images of the first part represented by the part identification results of at least two consecutive frames.
5. The method according to claim 1, wherein the determining a moving time end point of the endoscope according to a time point corresponding to the endoscopic image of the second part represented by the part identification result comprises:
determining a second endoscopic image according to the endoscopic image of the second part represented by the part identification result;
determining a time point corresponding to the second endoscopic image;
and taking the time point corresponding to the second endoscopic image as the moving time end point of the endoscope.
6. The method according to claim 5, wherein the determining a second endoscopic image according to the endoscopic image of the second part represented by the part identification result comprises:
taking an endoscopic image with the earliest shooting time among the endoscopic images of the second part represented by the part identification result as the second endoscopic image; or
determining the second endoscopic image according to endoscopic images of the second part represented by the part identification results of at least two consecutive frames.
7. The method according to claim 1, wherein after the determining a moving time starting point of the endoscope according to a time point corresponding to the endoscopic image of the first part represented by the part identification result, the method further comprises: generating a timing start prompt message;
and after the determining a moving time end point of the endoscope according to a time point corresponding to the endoscopic image of the second part represented by the part identification result, the method further comprises: generating a timing end prompt message.
8. The method of claim 1, wherein said determining an endoscope movement time from said movement time start point and said movement time end point comprises:
determining a time difference between the start of movement time and the end of movement time;
and obtaining the endoscope moving time when the endoscope passes through the first part and the second part during moving according to the time difference.
9. The method according to any one of claims 1 to 8, further comprising:
inquiring a preset endoscope movement constraint condition;
and obtaining an endoscope movement evaluation result according to the comparison result of the endoscope movement time and the endoscope movement constraint condition.
10. The method according to claim 2, wherein the step of training the part recognition model comprises:
acquiring an endoscopic training image carrying a position label;
performing part recognition on the endoscopic training image through a part recognition model to be trained to obtain a part recognition training result corresponding to the endoscopic training image;
determining recognition loss according to the difference between the part recognition training result and a part label carried by the endoscopic training image;
and adjusting the part recognition model to be trained according to the recognition loss, and continuing training until the training ending condition is met, so as to obtain the trained part recognition model.
11. The method according to claim 10, wherein before the acquiring an endoscopic training image carrying a part label, the method further comprises:
acquiring an endoscopic original image carrying a position label;
performing data enhancement processing on the endoscopic original image to obtain an endoscopic enhanced image which is corresponding to the endoscopic original image and carries a part label;
and obtaining an endoscopic training image according to the endoscopic original image and the endoscopic enhanced image.
12. The method according to claim 11, wherein the determining recognition loss according to the difference between the part recognition training result and the part label carried by the endoscopic training image comprises:
obtaining a first recognition loss according to the classification cross entropy between the part recognition training result and the part label;
obtaining a second recognition loss according to the difference of the identification features between the endoscopic enhanced image and the endoscopic original image in the endoscopic training image; the identification features are obtained from the part recognition model to be trained;
obtaining a third recognition loss according to the center loss of the endoscopic training image;
and determining the recognition loss according to the first recognition loss, the second recognition loss and the third recognition loss.
13. An endoscope movement time determination apparatus, characterized in that the apparatus comprises:
the endoscopic image acquisition module is used for acquiring an endoscopic image shot by the endoscope in the moving process; the endoscope passes through the first part and the second part in sequence when moving;
the part recognition module is used for carrying out part recognition on the endoscopic image to obtain a part recognition result;
a time starting point determining module, which is used for determining the moving time starting point of the endoscope according to the time point corresponding to the endoscopic image of the first part represented by the part identification result;
the time end point determining module is used for determining the moving time end point of the endoscope according to the time point corresponding to the endoscopic image of the second part represented by the part identification result;
and the moving time determining module is used for determining the moving time of the endoscope according to the moving time starting point and the moving time end point.
14. A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 12.
15. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method of any one of claims 1 to 12.
CN201911245155.0A 2019-12-06 2019-12-06 Endoscope moving time determining method and device and computer equipment Active CN111012285B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110551339.0A CN113288007B (en) 2019-12-06 2019-12-06 Endoscope moving time determining method and device and computer equipment
CN201911245155.0A CN111012285B (en) 2019-12-06 2019-12-06 Endoscope moving time determining method and device and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911245155.0A CN111012285B (en) 2019-12-06 2019-12-06 Endoscope moving time determining method and device and computer equipment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202110551339.0A Division CN113288007B (en) 2019-12-06 2019-12-06 Endoscope moving time determining method and device and computer equipment

Publications (2)

Publication Number Publication Date
CN111012285A true CN111012285A (en) 2020-04-17
CN111012285B CN111012285B (en) 2021-06-08

Family

ID=70207527

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110551339.0A Active CN113288007B (en) 2019-12-06 2019-12-06 Endoscope moving time determining method and device and computer equipment
CN201911245155.0A Active CN111012285B (en) 2019-12-06 2019-12-06 Endoscope moving time determining method and device and computer equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110551339.0A Active CN113288007B (en) 2019-12-06 2019-12-06 Endoscope moving time determining method and device and computer equipment

Country Status (1)

Country Link
CN (2) CN113288007B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111767958A (en) * 2020-07-01 2020-10-13 武汉楚精灵医疗科技有限公司 Real-time enteroscopy withdrawal time monitoring method based on random forest algorithm
CN113768452A (en) * 2021-09-16 2021-12-10 重庆金山医疗技术研究院有限公司 Intelligent timing method and device for electronic endoscope
WO2023207564A1 (en) * 2022-04-29 2023-11-02 小荷医疗器械(海南)有限公司 Endoscope advancing and retreating time determining method and device based on image recognition

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332025B (en) * 2021-12-29 2022-07-26 长沙慧维智能医疗科技有限公司 Digestive endoscopy oropharynx passing time automatic detection system and method
CN116681681B (en) * 2023-06-13 2024-04-02 富士胶片(中国)投资有限公司 Endoscopic image processing method, device, user equipment and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1777390A (en) * 2003-04-25 2006-05-24 奥林巴斯株式会社 Device, method and program for image processing
CN101005794A (en) * 2004-08-23 2007-07-25 奥林巴斯株式会社 Image display device, image display method and image display program
CN101257838A (en) * 2005-09-09 2008-09-03 奥林巴斯医疗株式会社 Image display apparatus
CN102361585A (en) * 2009-03-23 2012-02-22 奥林巴斯医疗株式会社 Image processing system, external apparatus and image processing method therefor
CN103004188A (en) * 2010-07-19 2013-03-27 爱普索科技有限公司 Apparatus, system and method
CN107886503A (en) * 2017-10-27 2018-04-06 重庆金山医疗器械有限公司 A kind of alimentary canal anatomical position recognition methods and device
CN109523522A (en) * 2018-10-30 2019-03-26 腾讯科技(深圳)有限公司 Processing method, device, system and the storage medium of endoscopic images

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019049451A1 (en) * 2017-09-05 2019-03-14 オリンパス株式会社 Video processor, endoscope system, display method, and display program
CN110097105A (en) * 2019-04-22 2019-08-06 上海珍灵医疗科技有限公司 A kind of digestive endoscopy based on artificial intelligence is checked on the quality automatic evaluation method and system

Also Published As

Publication number Publication date
CN111012285B (en) 2021-06-08
CN113288007A (en) 2021-08-24
CN113288007B (en) 2022-08-09

Similar Documents

Publication Publication Date Title
CN111012285B (en) Endoscope moving time determining method and device and computer equipment
CN110600122B (en) Digestive tract image processing method and device and medical system
WO2021147429A1 (en) Endoscopic image display method, apparatus, computer device, and storage medium
CN106659362A (en) Image processing device, image processing method, image processing program, and endoscope system
JP2010279539A (en) Diagnosis supporting apparatus, method, and program
CN104869884B (en) Medical image-processing apparatus and medical image processing method
CN103458765B (en) Image processing apparatus
CN105072968A (en) Image processing device, endoscopic device, program and image processing method
CN106793939A (en) For the method and system of the diagnostic mapping of bladder
JP7050817B2 (en) Image processing device, processor device, endoscope system, operation method and program of image processing device
CN113642537B (en) Medical image recognition method and device, computer equipment and storage medium
CN112419295A (en) Medical image processing method, apparatus, computer device and storage medium
CN109241898B (en) Method and system for positioning target of endoscopic video and storage medium
CN114842000A (en) Endoscope image quality evaluation method and system
CN113421231B (en) Bleeding point detection method, device and system
CN114445406B (en) Enteroscopy image analysis method and device and medical image processing equipment
US20220361739A1 (en) Image processing apparatus, image processing method, and endoscope apparatus
JP6710853B2 (en) Probe-type confocal laser microscope endoscopic image diagnosis support device
CN113744266B (en) Method and device for displaying focus detection frame, electronic equipment and storage medium
CN112734707B (en) Auxiliary detection method, system and device for 3D endoscope and storage medium
JP2019118670A (en) Diagnosis support apparatus, image processing method, and program
CN114241565A (en) Facial expression and target object state analysis method, device and equipment
KR20220094791A (en) Method and system for data augmentation
CN112488036A (en) Tongue tremor degree evaluation system based on artificial intelligence
WO2022191058A1 (en) Endoscopic image processing device, method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40021964

Country of ref document: HK

GR01 Patent grant