CN117717305B - Film reading interface display method of capsule endoscope image - Google Patents

Film reading interface display method of capsule endoscope image

Info

Publication number
CN117717305B
CN117717305B (application CN202311349220.0A)
Authority
CN
China
Prior art keywords
image
organ
images
area
displaying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311349220.0A
Other languages
Chinese (zh)
Other versions
CN117717305A (en)
Inventor
曹健
张幸
嵇梦玥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Municipal Hospital
Original Assignee
Suzhou Municipal Hospital
Filing date
Publication date
Application filed by Suzhou Municipal Hospital filed Critical Suzhou Municipal Hospital
Priority to CN202311349220.0A priority Critical patent/CN117717305B/en
Publication of CN117717305A publication Critical patent/CN117717305A/en
Application granted granted Critical
Publication of CN117717305B publication Critical patent/CN117717305B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention provides a film reading interface display method for capsule endoscope images, which comprises the following steps: displaying a film reading interface comprising a first area and a second area; displaying an alimentary tract map including a plurality of organ controls in the first area; acquiring a capsule endoscope image set, classifying the capsule endoscope images, and generating a plurality of organ image main sets, each organ image main set corresponding to one organ control; identifying blurred images and redundant images in the organ image main set, and generating an organ image sub-set from which the blurred images and the redundant images have been removed; in response to a first command triggering an organ control, displaying images of the organ image main set corresponding to that organ control in the second area; and in response to a second command triggering an organ control, displaying images of the organ image sub-set corresponding to that organ control in the second area. Compared with the prior art, this film reading interface display method reduces the number of images a doctor must read, thereby reducing the doctor's workload.

Description

Film reading interface display method of capsule endoscope image
Technical Field
The invention relates to a method for displaying a film reading interface of a capsule endoscope image.
Background
Conventional hand-held endoscopes cannot inspect the entire digestive tract: a gastroscope can only examine the upper digestive tract, and a colonoscope only the colon and rectum. A capsule endoscope, by contrast, can observe the whole human digestive tract. It integrates image acquisition, wireless transmission, and related functions into a capsule small enough to be swallowed. During an examination, the patient swallows the capsule endoscope, which then advances along the alimentary canal by peristalsis. The capsule remains in the body for about 8 hours and captures images at a rate of 2 frames per second. Throughout this period, the image data it generates is transmitted via the wireless communication module and is received and stored by a wireless receiving device worn outside the patient's body. When the examination ends, the image data is exported as a video sequence to an image display device, where professional software presents the images for the clinician to read and diagnose.
Referring to fig. 1, a conventional film reading interface includes an image area 1 and a pathological image display area 2 arranged one above the other. To read a study, the doctor clicks the play button of the image area, and the images captured by the capsule endoscope play frame by frame as a video sequence. When the doctor spots a lesion, he or she manually pauses playback and adds the image containing the lesion to the pathological image display area for later use in the diagnosis report. However, a single capsule endoscopy produces more than 50000 images, making reading an extremely time-consuming and exhausting task; viewing frame after frame, a doctor may miss a lesion through visual fatigue. Second, the capsule endoscope may capture blurred images because of the imaging environment, its auto-focusing behavior, and so on; such images contribute little to the reading yet still consume the doctor's effort. Finally, in clinical practice many patients need only part of the digestive tract examined, such as the stomach, small intestine, or rectum, while the capsule endoscope images the entire tract. The existing film reading interface cannot quickly locate the region where a given organ's images lie, which further increases the difficulty of reading.
In view of the foregoing, it is desirable to provide a new method for displaying a film reading interface to solve the foregoing problems.
Disclosure of Invention
The invention aims to provide a film reading interface display method that not only reduces the number of images a doctor must read, thereby lightening the workload, but also locates organ images quickly and accurately, thereby improving the doctor's efficiency. In addition, when a doctor needs to review the images immediately before and after a lesion, the method conveniently displays the original surrounding images, further improving the doctor's efficiency and reducing the workload.
In order to achieve the above object, the present invention provides a method for displaying a film reading interface of a capsule endoscope image, comprising: displaying a film reading interface comprising a first area and a second area arranged side by side; displaying an alimentary tract map in the first area, the map including a plurality of organ controls; acquiring a capsule endoscope image set, classifying the capsule endoscope images, and generating a plurality of organ image main sets, each organ image main set corresponding to one organ control; identifying blurred images and redundant images in the organ image main set, and generating an organ image sub-set from which the blurred images and the redundant images have been removed; in response to a first command triggering an organ control, displaying images of the organ image main set corresponding to that organ control in the second area; and in response to a second command triggering an organ control, displaying images of the organ image sub-set corresponding to that organ control in the second area.
As a further improvement of the invention, the second area comprises an image area and a progress area, and the image area and the progress area are arranged up and down; the image area is used for displaying images of the organ image main set or the organ image sub-set, and the progress area is used for displaying progress controls; in response to triggering a progress control, the image area displays an image corresponding to a trigger position of the progress control.
As a further improvement of the present invention, the method identifies whether each image in the organ image sub-set contains a lesion; if an image contains a lesion, the lesion is marked in the image with a first mark, and a second mark is placed at the image's position on the progress control of the organ image main set and at its position on the progress control of the organ image sub-set.
As a further improvement of the present invention, the organ control is displayed in an initial state when the organ control is not triggered; when the organ control is triggered by a first command, the organ control is displayed in a first state; when the organ control is triggered by a second command, the organ control is displayed in a second state.
As a further improvement of the present invention, when the organ control is triggered by a third command, the film reading interface displays a pop-up box for displaying an image marking the lesion.
As a further improvement of the present invention, the third command is that the mouse pointer is suspended on the organ control; when the mouse pointer is hovered over the organ control and moved, the pop-up box displays another image of the marked lesion.
As a further improvement of the present invention, the pop-up box displays images marking a lesion in time sequence as the mouse pointer moves on the alimentary canal along the advancing direction of the capsule; when the mouse pointer is moved on the alimentary canal in the opposite direction of the capsule advancement, the pop-up box displays images marking the lesion in reverse order of time.
As a further improvement of the present invention, when the mouse pointer is moved to a pop-up box and the mouse is clicked or double-clicked, the image area displays an image displayed by the pop-up box while the pop-up box disappears.
As a further improvement of the present invention, a third marker is displayed on an organ control corresponding to the lesion-labeled image; the location of the third marker on the organ control corresponds to the location of the lesion on the organ.
As a further improvement of the present invention, the alimentary tract map further includes a lesion image count display area, which displays the number of images marked with lesions and the number of those images that have already been read.
The beneficial effects of the invention are as follows: the method for displaying the film reading interface can reduce the film reading quantity of doctors, thereby reducing the workload of the doctors, and can rapidly and accurately position the organ images, thereby improving the working efficiency of the doctors. In addition, when a doctor needs to review front and rear images of a focus, the film reading interface display method can conveniently display original images before and after the focus, thereby effectively improving the working efficiency of the doctor and reducing the workload of the doctor.
Drawings
FIG. 1 is a schematic illustration of a prior art film reading interface.
FIG. 2 is a schematic illustration of a film reader interface of the present invention.
Fig. 3 is a flow chart for identifying redundant images.
FIG. 4 is a flow chart of a method of displaying a film reading interface of a capsule endoscopic image of the present invention.
FIG. 5 is a schematic view of another state of the film reading interface shown in FIG. 2.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
Fig. 2 is a schematic diagram of a film reading interface 100 provided in accordance with an exemplary embodiment of the present application. The film reading interface 100 is displayed on the screen of a terminal device, which may be a mobile phone, tablet computer, notebook computer, desktop computer, or the like. A film reading application installed on the terminal provides the film reading interface 100 of this embodiment. The application also includes a data processing module, which may be integrated within the application or may run on a remote server. Referring to fig. 2, the film reading interface 100 includes a first area 10 and a second area 20 arranged side by side. The first area 10 is used to display the digestive tract map 11. The digestive tract map 11 includes several organ controls, such as an esophagus control 111, a stomach control 112, a small intestine control 113, and a large intestine control 114; together these organ controls form the digestive tract or a portion of it. The second area 20 is used to display organ images. Preferably, the second area 20 includes an image area 21, a progress area 22, and a pathological image display area 23, arranged in order from top to bottom. The image area 21 displays organ images, the progress area 22 displays progress controls, and the pathological image display area 23 displays pathology images selected by the doctor.
During film reading, the data processing module acquires a capsule endoscope image set and classifies its images to generate a plurality of organ image main sets, each corresponding to one organ control. For example, the esophagus image main set corresponds to the esophagus control 111, the stomach image main set to the stomach control 112, the small intestine image main set to the small intestine control 113, and the large intestine image main set to the large intestine control 114. The data processing module can classify capsule endoscope images automatically by training a convolutional neural network model. Building such an organ classifier involves the following steps. First, collect a dataset of images of different human organs; these may come from hospital medical images, medical textbooks, online databases, and so on, and should span a variety of disease states, ages, sexes, and ethnicities so that the model generalizes. Second, preprocess the collected images (cropping, resizing, grayscale conversion, contrast enhancement, and similar operations) to ensure consistent, high-quality input. Then add a correct label to each image indicating the organ type or disease state it contains, either manually or with an automated deep learning method. Next, construct a convolutional neural network (CNN) model and train it on the training set to learn the characteristics and patterns of the different organs. Finally, use the trained classifier to classify the capsule endoscope images automatically.
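The preprocessing step described above (cropping, resizing, graying, contrast enhancement) can be sketched as follows. This is a minimal Python/NumPy illustration under assumed parameters: the 224x224 target size, the nearest-neighbour resize, and all names are illustrative choices, not specifics from the patent.

```python
import numpy as np

def preprocess(img, size=(224, 224)):
    """Crop to a centered square, resize (nearest-neighbour), convert to
    grayscale, and stretch contrast to the full 0-255 range."""
    h, w = img.shape[:2]
    s = min(h, w)                                # centered square crop
    y0, x0 = (h - s) // 2, (w - s) // 2
    img = img[y0:y0 + s, x0:x0 + s]

    rows = np.arange(size[0]) * s // size[0]     # nearest-neighbour resize
    cols = np.arange(size[1]) * s // size[1]
    img = img[rows][:, cols]

    if img.ndim == 3:                            # RGB -> grayscale (luma weights)
        img = img @ np.array([0.299, 0.587, 0.114])

    lo, hi = img.min(), img.max()                # contrast stretch
    if hi > lo:
        img = (img - lo) / (hi - lo) * 255.0
    return img.astype(np.uint8)
```

A production pipeline would use a library resize with proper interpolation; the point here is only the order of operations described in the text.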
Of course, it will be appreciated that in other embodiments, other models may be used to classify the capsule endoscopic images, and are not described in detail herein.
The data processing module then identifies blurred and redundant images in each organ image main set, thereby generating an organ image sub-set from which the blurred and redundant images have been removed. Each organ image sub-set likewise corresponds to an organ control: the esophagus image sub-set to the esophagus control 111, the stomach image sub-set to the stomach control 112, the small intestine image sub-set to the small intestine control 113, and the large intestine image sub-set to the large intestine control 114. To identify blurred images, the data processing module may train a classification model based on a convolutional neural network (CNN) or recurrent neural network (RNN), or may rely on frequency-domain analysis.
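As one way to realize the frequency-domain analysis mentioned above, a frame can be scored by how much of its spectral energy lies away from the low-frequency center of its Fourier spectrum; blurred frames concentrate energy near DC and score low. This is a hedged sketch: the cutoff radius and threshold below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def high_freq_energy(gray):
    """Fraction of spectral energy outside a low-frequency disc around DC."""
    f = np.fft.fftshift(np.fft.fft2(gray))
    power = np.abs(f) ** 2
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    r = min(h, w) // 8                       # low-frequency cutoff (assumed)
    low = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    return power[~low].sum() / power.sum()

def is_blurred(gray, threshold=0.05):
    """Flag a frame as blurred when its high-frequency ratio falls below
    an empirically chosen threshold (the value here is illustrative)."""
    return high_freq_energy(gray) < threshold
```

A sharp, textured frame keeps a noticeably larger share of its energy at high frequencies than a smooth one, so the ratio separates the two classes.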
Referring to fig. 3, when the data processing module identifies a redundant image, the method includes the following steps:
S101: Calculate the structural similarity SSIM between the n-th image and the (n+1)-th image:

$$\mathrm{SSIM}(n,\,n+1)=\frac{\left(2\mu_n\mu_{n+1}+C_1\right)\left(2\sigma_{n(n+1)}+C_2\right)}{\left(\mu_n^2+\mu_{n+1}^2+C_1\right)\left(\sigma_n^2+\sigma_{n+1}^2+C_2\right)}$$

where n and n+1 denote the two images being compared, μ_n and μ_{n+1} are the means of their luminance components, σ_n and σ_{n+1} are the corresponding standard deviations, σ_{n(n+1)} is the covariance between them, and C_1 and C_2 are constants that stabilize the calculation.
S102: when the SSIM is greater than the preset threshold, then the n+1st image is a redundant image. In this embodiment, the preset threshold is 0.6 to 0.8. By the arrangement, the continuity of the organ image sub-set when presenting the organ image can be ensured, namely: ensuring that no missing part exists between two adjacent images; meanwhile, redundant images can be removed, so that the workload of doctors is reduced, and the working efficiency of the doctors is improved.
Referring to fig. 2, when the user clicks the organ control, an image of a main set of organ images corresponding to the organ control is displayed in the second area 20. When the user double clicks the organ control, an image of a sub-set of organ images corresponding to the organ control is displayed in the second area 20. For example, when the user clicks the stomach control 112, the second area 20 displays an image of the main set of stomach images; when the user double clicks the stomach control 112, the second area 20 displays an image of a sub-set of stomach images.
In connection with the above description, fig. 4 is a flowchart of a method for displaying a film reading interface of a capsule endoscope image according to an exemplary embodiment of the present application. As shown in fig. 4, the method is performed by a terminal and comprises:
S201: The film reading interface 100 is displayed, wherein the film reading interface 100 comprises a first area 10 and a second area 20, and the first area 10 and the second area 20 are arranged left and right.
S202: in said first area 10 is shown a digestive tract map 11, said digestive tract map 11 comprising a number of organ controls.
In the present embodiment, the first area 10 is used to display the digestive tract map 11, but in other embodiments, the second area 20 may also be used to display the digestive tract map 11.
S203: acquiring a capsule endoscope image set, classifying the capsule endoscope images, and generating a plurality of organ image main sets; each organ image main set corresponds to an organ control.
S204: identifying fuzzy images and redundant images in the organ image main set, and generating an organ image sub-set from which the fuzzy images and the redundant images are removed; each organ image sub-set corresponds to an organ control.
S205: and triggering the organ control in response to the first command, and displaying images of a main set of organ images corresponding to the organ control in the second area 20.
In this embodiment, the first command is a single click of the left mouse button. In other embodiments, the first command may take another form.
S206: and triggering the organ control in response to a second command, and displaying images of organ image subsets corresponding to the organ control in the second area 20.
In this embodiment, the second command is a double click of the left mouse button. In other embodiments, the second command may be a single left click or another form of command.
Compared with the prior art, this film reading interface display method reduces the number of images a doctor must read, thereby reducing the doctor's workload, and locates organ images quickly and accurately, thereby improving the doctor's efficiency. For example, when a patient complains of stomach discomfort, the doctor can double-click the stomach control 112, and the second area 20 directly displays images of the stomach image sub-set. When the doctor then needs to review the original images before and after a lesion, a single click on the stomach control 112 makes the second area 20 jump to images of the stomach image main set. This arrangement effectively improves the doctor's efficiency and reduces the workload.
Preferably, in response to triggering the progress control, the image area 21 displays an image corresponding to a trigger position of the progress control. For example, when the image area 21 displays images of a main set of stomach images, each position of the progress control 22 corresponds to each image of the main set of stomach images. When the doctor clicks on any one of the positions of the progress control 22, the image area 21 displays an image in the main set of stomach images corresponding to the clicked position. When the image area 21 displays images of a sub-set of stomach images, each position of the progress control 22 corresponds to each image of the sub-set of stomach images. When the doctor clicks on any one of the positions of the progress control 22, the image area 21 displays an image in the stomach image sub-set corresponding to the clicked position.
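The correspondence between a trigger position on the progress control and the displayed image can be sketched as a simple proportional mapping; the function and parameter names here are illustrative assumptions, not from the patent.

```python
def index_for_click(click_x, bar_width, n_images):
    """Map a horizontal click position on the progress control to the
    index of the image shown in the image area."""
    if n_images == 0:
        raise ValueError("empty image set")
    # Clamp the click to the bar, then scale the fraction to an index.
    frac = min(max(click_x / bar_width, 0.0), 1.0)
    return min(int(frac * n_images), n_images - 1)
```

The same mapping serves both the main set and the sub-set: each position of the progress control corresponds to one image of whichever set is currently displayed.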
Preferably, the method identifies whether each organ image contains a lesion; if an image contains a lesion, the lesion is marked in the image with a first mark 211, and a second mark 221 is placed at the image's position on the progress control of the organ image main set and at its position on the progress control 22 of the organ image sub-set.
Referring to fig. 2, the first mark 211 may be a loop line surrounding the lesion, displayed in a striking color such as red. The second mark 221 may be a red dot, a red triangle, or the like. Of course, the first mark 211 and the second mark 221 may be implemented in other ways, which are not described in detail herein.
Preferably, when the organ control is not triggered, it is displayed in an initial state; when triggered by a first command, it is displayed in a first state; when triggered by a second command, it is displayed in a second state. In this embodiment, the initial state shows the organ control in black-and-white or gray, the first state highlights it (for example, in orange), and the second state highlights it in a color different from the first state (for example, in blue).
Preferably, when the organ control is triggered by a third command, the film reading interface 100 displays a pop-up box 24, and the pop-up box 24 is used to display an image of the marked lesion.
Referring to fig. 5, in this embodiment, the third command is a mouse pointer 25 suspended over the organ control. When the mouse pointer 25 is hovered over the organ control and moved, the pop-up box 24 displays another image of the marked lesion.
Preferably, the pop-up box 24 displays images marking a lesion in time sequence as the mouse pointer 25 moves on the alimentary canal along the advancing direction of the capsule; the pop-up box 24 displays images marking the lesion in reverse order of time as the mouse pointer is moved over the alimentary canal in the opposite direction of capsule advancement. Preferably, when the mouse pointer 25 is hovered over an organ control, the pop-up box 24 displays an organ image marking a lesion and corresponding to the organ control; when the mouse pointer 25 is moved on the organ control along the advancing direction of the capsule, the pop-up box 24 displays organ images marked with lesions and corresponding to the organ control in time sequence; when the mouse pointer 25 is moved over the organ control in the opposite direction of the capsule advancement, the pop-up box 24 displays the organ images marked with lesions and corresponding to the organ control in reverse order of time. For example, when the mouse pointer 25 is hovered over the small intestine control 113, the pop-up box 24 displays a small intestine image marking a lesion. When the mouse pointer 25 is moved on the small intestine control 113 in the opposite direction of the capsule advancement, the pop-up box 24 displays the small intestine images marked with lesions in reverse order of time. The pop-up box 24 displays images of the small intestine marking the lesion in chronological order as the mouse pointer 25 is moved on the small intestine control 113 in the direction in which the capsule is advanced.
Preferably, when the mouse pointer 25 is moved to the pop-up box 24 and the left mouse button is clicked or double-clicked, the image area 21 displays the image displayed by the pop-up box 24 while the pop-up box 24 disappears. By the arrangement, doctors can conveniently and quickly locate and review interested focus images.
Referring to fig. 5, a third mark 115 is preferably displayed on the organ control corresponding to a lesion-marked image, and the location of the third mark 115 on the organ control corresponds to the location of the lesion on the organ. The third mark 115 may be a red dot; of course, in other embodiments it may be implemented differently, for example as a golden triangle, which is not described in detail herein.
Referring to fig. 5, the digestive tract map 11 preferably further includes a lesion count display area 116, which displays the number of images marked with lesions and the number of those images that have already been read. This lets doctors see at a glance how many lesion images exist and how many remain unreviewed, effectively preventing missed reviews.
Preferably, in response to triggering the lesion count display area 116, the image area 21 displays a lesion-marked organ image that has not yet been reviewed. For example, when the mouse pointer hovers over the area 116 and the left mouse button is clicked or double-clicked, the image area 21 displays an unreviewed lesion-marked organ image. This lets doctors quickly locate and review the lesion images they have not yet seen. Once all lesion-marked organ images have been reviewed, clicking or double-clicking the area 116 makes the image area 21 display a prompt box reminding the doctor that every lesion-marked organ image has been reviewed.
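The bookkeeping behind the lesion count display area (total lesion images, how many have been reviewed, and which unreviewed one to show next) might look like the following sketch; all names are illustrative assumptions, not from the patent.

```python
class LesionTracker:
    """Tracks which lesion-marked images a doctor has already reviewed."""

    def __init__(self, lesion_indices):
        self.lesion_indices = list(lesion_indices)  # indices of lesion images
        self.reviewed = set()

    def counts(self):
        """(total lesion images, lesion images already reviewed)."""
        return len(self.lesion_indices), len(self.reviewed)

    def mark_reviewed(self, idx):
        if idx in self.lesion_indices:
            self.reviewed.add(idx)

    def next_unreviewed(self):
        """Next lesion image to show; None means the UI should instead
        display the "all reviewed" prompt box."""
        for idx in self.lesion_indices:
            if idx not in self.reviewed:
                return idx
        return None
```

Clicking the count area would call `next_unreviewed()` and jump the image area to the returned index, or show the prompt when it returns None.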
Referring to fig. 2 and 5, the second region 20 preferably further includes a lesion display area 26. The lesion display area 26 is configured to display an image of the organ in which the lesion is marked. For example, when the image area 21 displays a main set of stomach images or a sub-set of stomach images, the lesion display area 26 displays a stomach image in which lesions are marked. In this embodiment, the lesion display area 26 is located between the progress area 22 and the pathology map display area 23.
The above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made thereto without departing from the spirit and scope of the technical solution of the present invention.

Claims (10)

1. A method for displaying a film reading interface of a capsule endoscope image is characterized by comprising the following steps:
displaying a film reading interface, wherein the film reading interface comprises a first area and a second area, and the first area and the second area are arranged left and right;
Displaying an alimentary tract diagram in the first area, the alimentary tract diagram including a plurality of organ controls;
acquiring a capsule endoscope image set, classifying the capsule endoscope images, and generating a plurality of organ image main sets; each organ image main set corresponds to one organ control;
identifying fuzzy images and redundant images in the organ image main set, and generating an organ image sub-set from which the fuzzy images and the redundant images are removed;
Triggering the organ control in response to a first command, and displaying an image of a main set of organ images corresponding to the organ control in the second area;
and responding to a second command to trigger the organ control, and displaying images of organ image subsets corresponding to the organ control in the second area.
2. The method for displaying a film reading interface of a capsule endoscopic image as defined in claim 1, wherein: the second area comprises an image area and a progress area, and the image area and the progress area are arranged up and down; the image area is used for displaying images of the organ image main set or the organ image sub-set, and the progress area is used for displaying progress controls; in response to triggering a progress control, the image area displays an image corresponding to a trigger position of the progress control.
3. The method for displaying a film reading interface of a capsule endoscopic image as defined in claim 2, wherein: identifying whether each image in the organ image subset has a focus; if the image has a focus, marking the focus in the image by using a first mark; at the same time, the image is marked with a second mark at the position corresponding to the progress control of the organ image main set, and the image is marked with a second mark at the position corresponding to the progress control of the organ image sub-set.
4. The method for displaying a film reading interface of a capsule endoscopic image as defined in claim 3, wherein: when the organ control is not triggered, the organ control is displayed in an initial state; when the organ control is triggered by a first command, the organ control is displayed in a first state; when the organ control is triggered by a second command, the organ control is displayed in a second state.
5. The method for displaying a film reading interface of a capsule endoscopic image as defined in claim 4, wherein: when the organ control is triggered by a third command, the film reading interface displays a pop-up box, and the pop-up box is used for displaying images marked with focuses.
6. The method for displaying a film reading interface of a capsule endoscopic image as defined in claim 5, wherein: the third command is that a mouse pointer is suspended on the organ control; when the mouse pointer is hovered over the organ control and moved, the pop-up box displays another image of the marked lesion.
7. The method for displaying a film reading interface of a capsule endoscopic image as defined in claim 6, wherein: when the mouse pointer moves along the advancing direction of the capsule on the alimentary canal, the pop-up box displays images marked with focus according to time sequence; when the mouse pointer is moved on the alimentary canal in the opposite direction of the capsule advancement, the pop-up box displays images marking the lesion in reverse order of time.
8. The method for displaying a film reading interface of a capsule endoscopic image as defined in claim 7, wherein: when the mouse pointer is moved onto the pop-up box and the mouse is clicked or double-clicked, the image area displays the image currently shown in the pop-up box, and the pop-up box disappears.
9. The method for displaying a film reading interface of a capsule endoscopic image as defined in claim 8, wherein: a third mark is displayed on the organ control corresponding to a lesion-marked image; the position of the third mark on the organ control corresponds to the position of the lesion on the organ.
10. The method for displaying a film reading interface of a capsule endoscopic image as defined in claim 9, wherein: the digestive tract map further comprises a lesion image count display area; the lesion image count display area is used for displaying the number of lesion-marked images and the number of lesion-marked images that have been read.
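Claim 10's count display reduces to tracking a total set of lesion-marked images and the subset the physician has already viewed. A minimal illustrative sketch; the class and method names are assumptions, not taken from the patent:

```python
class LesionCountArea:
    """Counts lesion-marked images and how many the reader has viewed."""

    def __init__(self, lesion_image_ids):
        self._total = set(lesion_image_ids)
        self._read = set()

    def mark_read(self, image_id):
        # Only images actually marked with a lesion count as read here.
        if image_id in self._total:
            self._read.add(image_id)

    def display_text(self):
        # Text rendered in the lesion image count display area.
        return f"{len(self._read)} of {len(self._total)} lesion images read"
```

Because `_read` is a set, re-viewing the same image does not inflate the read count, which keeps the displayed ratio honest during back-and-forth review.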
CN202311349220.0A 2023-10-18 Film reading interface display method of capsule endoscope image Active CN117717305B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311349220.0A CN117717305B (en) 2023-10-18 Film reading interface display method of capsule endoscope image

Publications (2)

Publication Number Publication Date
CN117717305A CN117717305A (en) 2024-03-19
CN117717305B true CN117717305B (en) 2024-06-07

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103140160A (en) * 2011-03-30 2013-06-05 奥林巴斯医疗株式会社 Image management device, method, and program, and capsule type endoscope system
CN112515753A (en) * 2020-12-01 2021-03-19 苏州市立医院 Femoral intramedullary resetting device

Similar Documents

Publication Publication Date Title
US20210406591A1 (en) Medical image processing method and apparatus, and medical image recognition method and apparatus
JP6657480B2 (en) Image diagnosis support apparatus, operation method of image diagnosis support apparatus, and image diagnosis support program
US20210345865A1 (en) Systems and methods for generating and displaying a study of a stream of in-vivo images
WO2020098539A1 (en) Image processing method and apparatus, computer readable medium, and electronic device
US9514556B2 (en) System and method for displaying motility events in an in vivo image stream
US20220172828A1 (en) Endoscopic image display method, apparatus, computer device, and storage medium
CN109544526B (en) Image recognition system, device and method for chronic atrophic gastritis
US20200279373A1 (en) Ai systems for detecting and sizing lesions
CN110097105A (en) A kind of digestive endoscopy based on artificial intelligence is checked on the quality automatic evaluation method and system
JP2015509026A5 (en)
CN110916606A (en) Real-time intestinal cleanliness scoring system and method based on artificial intelligence
CN111091559A (en) Depth learning-based auxiliary diagnosis system for small intestine sub-scope lymphoma
CN110867233B (en) System and method for generating electronic laryngoscope medical test reports
CN111214255A (en) Medical ultrasonic image computer-aided diagnosis method
CN101273916B (en) System and method for evaluating status of patient
JP2007105458A (en) System and method for recognizing image in image database
CN112801958A (en) Ultrasonic endoscope, artificial intelligence auxiliary identification method, system, terminal and medium
CN115082448A (en) Method and device for scoring cleanliness of intestinal tract and computer equipment
CN117717305B (en) Film reading interface display method of capsule endoscope image
Pan et al. BP neural network classification for bleeding detection in wireless capsule endoscopy
CN117717305A (en) Film reading interface display method of capsule endoscope image
Vaidyanathan et al. Using human experts' gaze data to evaluate image processing algorithms
US20230298306A1 (en) Systems and methods for comparing images of event indicators
Arnold et al. Indistinct frame detection in colonoscopy videos
CN114581408A (en) Gastroscope polyp detection method based on YOLOV5

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant