CN112075914B - Capsule endoscopy system - Google Patents


Info

Publication number
CN112075914B
Authority
CN
China
Prior art keywords
target
image
lesion
capsule endoscope
magnet
Prior art date
Legal status (assumed, not a legal conclusion)
Active
Application number
CN202011096890.2A
Other languages
Chinese (zh)
Other versions
CN112075914A (en)
Inventor
吴良信
阚述贤
王建平
孔令松
Current Assignee (the listed assignee may be inaccurate)
Shenzhen Jifu Medical Technology Co., Ltd.
Original Assignee
Shenzhen Jifu Medical Technology Co., Ltd.
Priority date (assumed, not a legal conclusion)
Filing date
Publication date
Application filed by Shenzhen Jifu Medical Technology Co., Ltd.
Priority to CN202011096890.2A
Publication of CN112075914A
Application granted
Publication of CN112075914B
Legal status: Active

Classifications

    • A61B 1/041: Capsule endoscopes for imaging
    • A61B 1/00006: Operational features of endoscopes characterised by electronic signal processing of control signals
    • A61B 1/00009: Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B 1/00016: Operational features of endoscopes characterised by signal transmission using wireless means
    • A61B 1/00039: Operational features of endoscopes provided with input arrangements for the user
    • A61B 1/00045: Operational features of endoscopes provided with output arrangements; display arrangement
    • A61B 1/00158: Holding or positioning arrangements using magnetic field
    • A61B 1/045: Control of endoscopes combined with photographic or television appliances
    • G06N 3/045: Neural network architectures; combinations of networks
    • G06N 3/084: Learning methods; backpropagation, e.g. using gradient descent
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/003: Navigation within 3D models or images
    • G06T 7/11: Image analysis; region-based segmentation
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G16H 30/20: ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H 30/40: ICT specially adapted for processing medical images, e.g. editing
    • Y02D 30/70: Reducing energy consumption in wireless communication networks

Abstract

The invention discloses a capsule endoscopy system comprising a capsule endoscope, an inspection device, a magnetic control device, a wireless transceiver device and an image processing device. The capsule endoscope captures images of a target area in real time; the wireless transceiver device receives the images and forwards them to the inspection device in real time. The image processing device receives the images sent by the inspection device, identifies a target site in each image, and determines the position and size of the target site within the image. The processor of the inspection device determines position and/or posture control information for the capsule endoscope from the target site position information and converts it into position and/or posture control information for the second magnet. The magnetic control device controls its transmission mechanism according to this control information to adjust the position and/or posture of the second magnet, which in turn adjusts the position and/or posture of the capsule endoscope. The system improves both the accuracy of the examination results and the examination efficiency.

Description

Capsule endoscopy system
Technical Field
The invention relates to the technical field of medical instruments, and in particular to a capsule endoscopy system.
Background
The capsule endoscope is convenient to use, causes no wound or pain, does not interfere with the patient's normal life, and is therefore widely used in digestive tract examination.
At present, to overcome the drawback of passive examination, in which the capsule endoscope is moved through the body only by peristalsis of the digestive tract and the risk of missed detections is therefore high, magnetically controlled capsule endoscope systems that allow active control have appeared on the market: a doctor manually controls, based on personal clinical experience, the movement and rotation of a second magnet in a magnetic control device, which in turn drives the movement and rotation of a capsule endoscope containing a first magnet inside the body, so that the digestive tract can be examined actively. However, the clinical experience and operating proficiency of the examining doctor strongly affect the accuracy of the examination results and the examination efficiency; a lack of clinical experience or unskilled operation can lead to inaccurate results and low efficiency.
Disclosure of Invention
To solve the above problems in the prior art, the invention provides a capsule endoscopy system that automatically controls a capsule endoscope to scan inside the body, improving the accuracy of the examination results and the examination efficiency.
A capsule endoscopy system, the system comprising: a capsule endoscope, an inspection device, a magnetic control device, a wireless transceiver device and an image processing device, wherein the wireless transceiver device is connected to the inspection device through a first communication link, the inspection device is connected to the image processing device through a second communication link, and the inspection device is connected to the magnetic control device through a third communication link;
the capsule endoscope has a built-in first magnet and is used to capture images of a target area in real time and transmit them to the wireless transceiver device in real time;
the wireless transceiver device is used to receive the images and send them to the inspection device; the image processing device is used to receive the images sent by the inspection device, identify a target site in each image to obtain a target site name, determine the position of the target site within the image to obtain target site position information, and send the target site name and the target site position information to the inspection device;
the inspection device comprises a processor and a display device:
the processor is used to determine position and/or posture control information of the capsule endoscope according to the target site position information, convert it into position and/or posture control information of the second magnet, and send the position and/or posture control information of the second magnet to the magnetic control device;
the display device is used to display a graphical user interface comprising a first display window for displaying, in real time, the images captured by the capsule endoscope;
the magnetic control device is provided with a transmission mechanism and a second magnet, and is used to control the transmission mechanism to adjust the position and/or posture of the second magnet according to the position and/or posture control information of the second magnet; the second magnet acts on the first magnet to adjust the position and/or posture of the capsule endoscope.

In some embodiments, the processor of the inspection device is used to:
receiving the automatic inspection mode selected by the user through the setting window of the graphical user interface;
receiving the images captured in real time by the capsule endoscope in the target area and sending them to the image processing device in real time;
planning a current cruising path for the capsule endoscope within the target area;
determining the target site to be scanned next on the current cruising path;
determining position and/or posture control information of the capsule endoscope based on the target site position information transmitted by the image processing device;
converting the position and/or posture control information of the capsule endoscope into position and/or posture control information of the second magnet according to a mechanical motion model;
and sending the position and/or posture control information of the second magnet to the magnetic control device.
In some embodiments, the image processing device is used to:
obtaining the target site name, a target site mask and a target site detection frame using an AI model;
and determining the position of the target site in the image according to the target site mask and the target site detection frame to obtain the target site position information, where the target site position information comprises a target site position and a target site size, and the target site size is the number of pixels of the image within the target site detection frame.
In some embodiments, the image processing device is used to:
inputting the image into a site-detection AI model for site feature recognition and site name determination, so as to identify the target site and the target site name in the image;
segmenting the identified target site with a site-segmentation AI model to generate a target site mask and a target site detection frame;
and determining the position of the target site in the image according to the target site mask and the target site detection frame to obtain the target site position information, where the target site position information comprises a target site position and a target site size, and the target site size is the number of pixels of the image within the target site detection frame.

In some embodiments, the processor is configured to create lists of unscanned sites, sites currently being scanned, and scanned sites;
the graphical user interface includes a second display window for displaying the unscanned, currently-scanning and scanned site lists in a first list or in a 3D model of the target area.

In some embodiments, the processor is configured to:
receiving real-time position and/or posture information of the second magnet sent by the magnetic control device through the third communication link;
converting the real-time position and/or posture information of the second magnet into real-time position and/or posture information of the capsule endoscope through the mechanical motion model;
the graphical user interface comprises a fourth display window for displaying the real-time position and/or posture information of the second magnet;
the graphical user interface comprises a fifth display window for presenting the real-time position and/or posture information of the capsule endoscope in the form of 3D simulated graphics.
In some embodiments, the image processing device is used to:
identifying a lesion in the image to obtain a lesion name, and determining the position and size of the lesion in the image to obtain the lesion position and lesion size;
and sending the lesion name, lesion position and lesion size to the inspection device.
In some embodiments, the image processing device is used to:
obtaining the lesion name, a lesion mask and a lesion detection frame using an AI model;
and determining the position and size of the lesion in the image according to the lesion mask and the lesion detection frame to obtain the lesion position and lesion size.
In some embodiments, the image processing device is used to:
inputting the image into a lesion-detection AI model for lesion feature recognition and lesion name determination, so as to identify the lesion and the lesion name in the image;
segmenting the identified lesion with a lesion-segmentation AI model to generate a lesion mask and a lesion detection frame;
and determining the position and size of the lesion in the image according to the lesion mask and the lesion detection frame to obtain the lesion position and lesion size.
In some embodiments, the processor is configured to:
establishing an association list between lesions and target sites according to the target site position, target site size, lesion position and lesion size;
the graphical user interface includes a third display window for displaying the lesion/site association list in a second list or in a 3D model of the target area.
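As an illustrative sketch of how such an association list might be built (the overlap criterion and all names are assumptions; the claim only states that the list is derived from the positions and sizes of both):

```python
def associate_lesions_with_sites(lesions, sites):
    """Pair each lesion with the site whose detection frame overlaps it most.

    lesions, sites: lists of dicts with 'name' and 'box' (x0, y0, x1, y1),
    all in the same image. A hypothetical criterion: the claim does not
    say how positions and sizes are combined into associations.
    """
    def overlap(a, b):
        # Intersection area of two axis-aligned boxes.
        w = max(0, min(a[2], b[2]) - max(a[0], b[0]))
        h = max(0, min(a[3], b[3]) - max(a[1], b[1]))
        return w * h

    associations = []
    for lesion in lesions:
        best = max(sites, key=lambda s: overlap(lesion["box"], s["box"]),
                   default=None)
        if best is not None and overlap(lesion["box"], best["box"]) > 0:
            associations.append((lesion["name"], best["name"]))
    return associations
```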
In some embodiments, the processor of the inspection device is used to: receive, via a mouse or key, the user's selection of at least one image containing important information from those displayed in the graphical user interface, and generate a captured-image list;
the graphical user interface includes a sixth display window for presenting the captured-image list.
In some embodiments, the processor is configured to: store at least one of the images together with its corresponding target site name, target site position, target site size, lesion name, lesion position and lesion size in a corresponding image file.
A capsule endoscopy system, the system comprising: a capsule endoscope, an inspection device, a magnetic control device and a wireless transceiver device, wherein the wireless transceiver device is connected to the inspection device through a first communication link, and the inspection device is connected to the magnetic control device through a third communication link;
the capsule endoscope has a built-in first magnet and is used to capture images of a target area in real time and transmit them to the wireless transceiver device in real time;
the wireless transceiver device is used to receive the images and send them to the inspection device; the inspection device comprises a processor and a display device,
wherein the processor is used to:
identify a target site in the image to obtain a target site name, and determine the position and size of the target site in the image to obtain target site position information;
determine position and/or posture control information of the capsule endoscope according to the target site position information, convert it into position and/or posture control information of the second magnet, and send the position and/or posture control information of the second magnet to the magnetic control device; the display device is used to display a graphical user interface comprising a first display window for displaying, in real time, the images captured by the capsule endoscope;
the magnetic control device is provided with a transmission mechanism and a second magnet, and is used to control the transmission mechanism to adjust the position and/or posture of the second magnet according to the position and/or posture control information of the second magnet; the second magnet acts on the first magnet to adjust the position and/or posture of the capsule endoscope.

The embodiments of the invention provide a capsule endoscopy system comprising a capsule endoscope, an inspection device, a magnetic control device, a wireless transceiver device and an image processing device. The capsule endoscope captures images of a target area in real time and sends them to the wireless transceiver device in real time; the wireless transceiver device forwards the images to the inspection device in real time. The display device of the inspection device renders each image and displays it in real time in a first display window of a graphical user interface, while the inspection device forwards the images to the image processing device in real time. The image processing device identifies the target site in each image to obtain a target site name, determines the position of the target site within the image to obtain target site position information, and sends both to the inspection device. The processor of the inspection device determines position and/or posture control information for the capsule endoscope from the target site position information, converts it into position and/or posture control information for the second magnet, and sends it to the magnetic control device. The magnetic control device, which is provided with a transmission mechanism and a second magnet, controls the transmission mechanism to adjust the position and/or posture of the second magnet accordingly, and the second magnet acts on the first magnet to adjust the position and/or posture of the capsule endoscope. Guided by the capsule endoscope's own vision, the system uses image recognition to actively control the capsule endoscope within the target area, adjusting its position and/or posture without manual control by a user (such as a doctor). This removes the inaccuracy and inefficiency caused by human factors and improves both the accuracy of the examination results and the examination efficiency.
Drawings
The accompanying drawings are included to provide a further understanding of embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain, without limitation, the embodiments of the invention.
FIG. 1 is a schematic illustration of a capsule endoscopy system in accordance with an embodiment of the present invention;
FIG. 2 is a schematic diagram of an inspection device according to an embodiment of the present invention;
FIG. 3 illustrates possible cruising paths between the sites to be scanned in a target area according to an embodiment of the present invention;
FIG. 4 shows an optimal cruising path between the sites to be scanned in a target area in an embodiment of the present invention;
FIG. 5 is a schematic view of the capsule endoscope performing a ring scan of a target site C in an embodiment of the present invention;
FIG. 6 is a schematic diagram showing the target site C and its neighboring region completely scanned according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a cross scan of a target site C by the capsule endoscope in an embodiment of the present invention;
FIG. 8 is a schematic diagram showing the target site C and its four adjacent regions completely scanned according to an embodiment of the present invention;
FIG. 9 shows the mask and detection frame of the current site B and the mask and detection frame of the target site C generated in an embodiment of the present invention;
FIG. 10 is a schematic diagram of scanned, currently-scanning and unscanned sites according to an embodiment of the present invention;
FIG. 11 is an exemplary diagram of a second display window showing the unscanned, currently-scanning and scanned site lists in an embodiment of the invention;
FIG. 12 shows a fourth display window displaying real-time position and posture information of the second magnet according to an embodiment of the present invention;
FIG. 13 is a schematic diagram showing real-time position and posture information of the capsule endoscope in a fifth display window according to an embodiment of the present invention;
FIG. 14 shows a lesion mask and detection frame, and a lesion/site association diagram, according to an embodiment of the present invention;
FIG. 15 is an exemplary schematic diagram of a third display window showing the lesion/site association list according to an embodiment of the present invention;
FIG. 16 is a schematic view of another capsule endoscopy system in accordance with an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
FIG. 1 is a schematic diagram of a capsule endoscopy system according to an embodiment of the present invention, and FIG. 2 is a schematic diagram of an inspection device according to an embodiment of the present invention. As shown in FIGS. 1 and 2, the system comprises a capsule endoscope b1, a wireless transceiver device b2, an image processing device b3, a magnetic control device b4 and an inspection device b5. The wireless transceiver device b2 is connected to the inspection device b5 through a first communication link, the inspection device b5 is connected to the image processing device b3 through a second communication link, and the inspection device b5 is connected to the magnetic control device b4 through a third communication link. The first, second and third communication links may each include, but are not limited to, a wired or wireless network, where wired networks include local area networks, metropolitan area networks and wide area networks, and wireless networks include Bluetooth, Wi-Fi and other networks that enable wireless communication.
The inspection device b5 may be a local server, a cloud server or a terminal device; the terminal device may be, but is not limited to, a smartphone, tablet computer, notebook computer, desktop computer, smart speaker, smart watch, and the like. The same holds for the image processing device b3. In some embodiments, the inspection device b5 and the image processing device b3 may be a single device combining the functions of both, as shown in FIG. 16.
The capsule endoscope b1 comprises a camera module, a first control module, a first radio frequency module and a first magnet. The magnetic control device b4 comprises a transmission mechanism and a second magnet. The first and second magnets may be electromagnets, permanent magnets or other kinds of magnets. The wireless transceiver device b2 comprises a second control module and a second radio frequency module.
The capsule endoscopy system is used to examine the human digestive tract, mainly the stomach. After the patient swallows the capsule endoscope, it reaches a target area such as the stomach, scans the target area (captures images of it in real time), and transmits the captured images to the wireless transceiver device in real time. Specifically, the first control module of the capsule endoscope controls the camera module to capture images of the target area in real time and sends them, in the form of image data packets, to the wireless transceiver device through the first radio frequency module. The second control module of the wireless transceiver device controls the second radio frequency module to receive the image data packets, then verifies and combines them into complete image frames. The wireless transceiver device transmits each image to the inspection device in real time over the first communication link, and the inspection device forwards the received images to the image processing device in real time through the second communication link. The function of the wireless transceiver device may be taken over by other devices with a data transmission function, which is not limited here.
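The patent does not specify the packet format or the verification scheme; the following is a minimal sketch of the verify-and-combine step, assuming a hypothetical packet layout of (frame_id, seq, total, crc32, payload) with packets numbered 0..total-1:

```python
import zlib
from dataclasses import dataclass, field

@dataclass
class FrameAssembler:
    """Collects image data packets from the capsule and rebuilds complete frames."""
    pending: dict = field(default_factory=dict)  # frame_id -> {seq: payload}

    def add_packet(self, frame_id: int, seq: int, total: int,
                   crc32: int, payload: bytes):
        # Verify each packet before accepting it (the 'verification' step
        # performed by the second control module described above).
        if zlib.crc32(payload) != crc32:
            return None  # corrupted packet: discard and wait for more data
        chunks = self.pending.setdefault(frame_id, {})
        chunks[seq] = payload
        if len(chunks) == total:
            # 'Combination': all packets present, join them into one frame.
            frame = b"".join(chunks[i] for i in range(total))
            del self.pending[frame_id]
            return frame
        return None  # frame still incomplete
```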
The image processing device is used to receive the images sent by the inspection device, identify a target site in each image to obtain a target site name, determine the position of the target site within the image to obtain target site position information, and send the target site name and target site position information to the inspection device.
For example, if the target area is the stomach, the sites to be scanned include: the cardia, fundus, body, angle, antrum and pylorus. The sites to be scanned can also be subdivided into: the cardia, anterior wall below the cardia, posterior wall below the cardia, fundus, anterior wall of the upper body, posterior wall of the upper body, greater curvature of the upper body, lesser curvature of the upper body, anterior wall of the middle body, posterior wall of the middle body, greater curvature of the middle body, lesser curvature of the middle body, anterior wall of the lower body, posterior wall of the lower body, greater curvature of the lower body, lesser curvature of the lower body, angle, anterior wall of the angle, posterior wall of the angle, anterior wall of the antrum, posterior wall of the antrum, greater curvature of the antrum, lesser curvature of the antrum, and pylorus. The target site is the next site, among the sites to be scanned, that the capsule endoscope is to scan.
The image processing device recognizes each received image: it identifies the site shown in the image to obtain a site name, determines the position of the site within the image to obtain site position information, and sends the site name and site position information to the inspection device. For an image containing the target site, the image processing device identifies the target site to obtain the target site name, determines its position within the image to obtain the target site position information, and sends the target site name and target site position information to the inspection device.
The inspection device comprises a processor and a display device:
the processor is used to determine position and/or posture control information of the capsule endoscope according to the target site position information, convert it into position and/or posture control information of the second magnet, and send the position and/or posture control information of the second magnet to the magnetic control device.
The display device is used to display a graphical user interface comprising a first display window for displaying, in real time, the images captured by the capsule endoscope. The display device may be a monitor. It receives the images captured by the capsule endoscope in real time, renders them, and displays them in the first display window of the graphical user interface in real time, helping the user (such as a doctor) follow the examination as it happens.
The magnetic control device is provided with a transmission mechanism and a second magnet, and is used to control the transmission mechanism to adjust the position and/or posture of the second magnet according to the position and/or posture control information of the second magnet; the second magnet acts on the first magnet to adjust the position and/or posture of the capsule endoscope.

The embodiments of the invention provide a capsule endoscopy system comprising a capsule endoscope, an inspection device, a magnetic control device, a wireless transceiver device and an image processing device. The capsule endoscope captures images of a target area in real time and sends them to the wireless transceiver device in real time; the wireless transceiver device forwards the images to the inspection device in real time. The display device of the inspection device renders each image and displays it in real time in a first display window of a graphical user interface, while the inspection device forwards the images to the image processing device in real time. The image processing device identifies the target site in each image to obtain a target site name, determines the position of the target site within the image to obtain target site position information, and sends both to the inspection device. The processor of the inspection device determines position and/or posture control information for the capsule endoscope from the target site position information, converts it into position and/or posture control information for the second magnet, and sends it to the magnetic control device. The magnetic control device controls the transmission mechanism to adjust the position and/or posture of the second magnet accordingly, and the second magnet acts on the first magnet to adjust the position and/or posture of the capsule endoscope. Guided by the capsule endoscope's own vision, the system uses image recognition to actively control the capsule endoscope within the target area, adjusting its position and/or posture without manual control by a user (such as a doctor). This removes the inaccuracy and inefficiency caused by human factors and improves both the accuracy of the examination results and the examination efficiency.
In some embodiments, the processor of the inspection device is used to: receive the automatic inspection mode selected by the user through the setting window of the graphical user interface; receive the images captured in real time by the capsule endoscope in the target area and send them to the image processing device in real time; plan a current cruising path for the capsule endoscope within the target area; determine the target site to be scanned next on the current cruising path; determine position and/or posture control information of the capsule endoscope based on the target site position information transmitted by the image processing device; convert this control information into position and/or posture control information of the second magnet according to a mechanical motion model; and send the position and/or posture control information of the second magnet to the magnetic control device.
Specifically, the graphical user interface may further include a setting window and a control window; the setting window is used to select the inspection mode, which includes a manual inspection mode and an automatic inspection mode, and the control window is used to control changes of the position and/or posture of the capsule endoscope.
The processor may receive a manual inspection mode selected by the user (e.g., a doctor) through the setting window of the graphical user interface. In this mode, the user determines the position and/or posture control information of the capsule endoscope in the target area, such as upward or downward movement and rotation angle, based on clinical experience and on the images captured by the capsule endoscope that the graphical user interface displays in real time. The control window of the graphical user interface provides keys (or icons), each corresponding to one position or posture control instruction of the capsule endoscope, and the processor receives the control instructions the user enters through these keys; the user may also enter them through a joystick or handle. The processor then converts the position and/or posture control information of the capsule endoscope into position and/or posture control information of the second magnet according to a preset mechanical motion model. This mechanical motion model describes the mechanics of the capsule endoscope and the magnetic control device together, i.e., the force model formed by the first magnet of the capsule endoscope inside the target area and the second magnet of the magnetic control device outside it. There is a mapping between the position and posture of the second magnet outside the target area and the position and posture of the capsule endoscope inside it; changing the second magnet's position and posture according to this mapping adjusts the capsule endoscope's position and posture. The processor therefore uses this mapping to convert the capsule endoscope's control information into the second magnet's control information, which it sends to the magnetic control device.
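The mechanical motion model itself is not disclosed beyond the existence of such a mapping. Below is a deliberately simplified sketch of the conversion step, assuming the second magnet mirrors the capsule's commanded translation and orientation at a fixed working distance (all gains, names and units are hypothetical):

```python
import numpy as np

def capsule_to_magnet_control(capsule_translation: np.ndarray,
                              capsule_yaw_deg: float,
                              capsule_pitch_deg: float,
                              working_distance_m: float = 0.15) -> dict:
    """Map desired capsule motion to second-magnet motion.

    Stand-in for the 'mechanical motion model': the real mapping depends
    on the magnetic coupling between the first and second magnets. Here
    the magnet is simply assumed to mirror the capsule's translation and
    orientation while keeping a fixed working distance.
    """
    return {
        "magnet_translation": capsule_translation,  # follow translation 1:1
        "magnet_yaw_deg": capsule_yaw_deg,          # align the magnet axis
        "magnet_pitch_deg": capsule_pitch_deg,
        "working_distance_m": working_distance_m,   # held constant here
    }
```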
The processor may instead receive an automatic inspection mode selected by the user through the setting window of the graphical user interface. In this mode the processor performs the following steps:
receiving the images captured in real time by the capsule endoscope in the target area and sending them to the image processing device in real time;
planning a current cruising path for the capsule endoscope within the target area;
determining the target site to be scanned next on the current cruising path;
determining position and/or posture control information of the capsule endoscope based on the target site position information transmitted by the image processing device;
converting the position and/or posture control information of the capsule endoscope into position and/or posture control information of the second magnet according to the mechanical motion model;
and sending the position and/or posture control information of the second magnet to the magnetic control device.
Specifically, taking the human stomach as the target area, a three-dimensional positional-relation model of the stomach sites is established in advance. Each site is represented by a coordinate point in three-dimensional space, so the three-dimensional positional relations between sites can be expressed as position vectors in a given coordinate system. The model can be viewed as a three-dimensional network topology graph: each node represents a site, each edge between nodes represents a feasible cruising path between the corresponding sites (see FIG. 3), and the edge length represents the distance along that path. When the capsule endoscope enters the stomach, the first captured site to be scanned is taken as the starting point (current site). A path-planning algorithm for the three-dimensional network topology graph, such as dynamic programming, divide-and-conquer, or constraint optimization, traverses the whole graph depth-first, finds all cruising paths that visit every site of the graph, and selects the path with the smallest sum of weights, i.e., the optimal cruising path, as the current cruising path; the path includes the scanning order of the sites to be scanned. Using the optimal cruising path as the current cruising path makes the capsule endoscope's scan path as short as possible and thus improves examination efficiency. In some embodiments, after the capsule endoscope enters the target area and the first captured site to be scanned is taken as the starting point (current site), the same depth-first traversal finds all cruising paths that visit every node of the graph, and any one of them may be selected as the current cruising path, which again includes the scanning order of the sites to be scanned.
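A brute-force version of this traversal can be sketched as follows: enumerate, depth-first, every path that visits all sites of the topology graph and keep the one with the smallest total length. This matches the exhaustive traversal described above; for larger graphs, the dynamic-programming or constraint-optimization alternatives named in the text would be preferable. The adjacency map, site names and distances below are illustrative only.

```python
def best_cruise_path(graph: dict[str, dict[str, float]], start: str):
    """Depth-first search over all paths visiting every site once, keeping
    the one with the smallest total length (the optimal cruising path).
    graph[a][b] is the cruise-path distance between sites a and b."""
    sites = set(graph)
    best_len, best_path = float("inf"), None

    def dfs(node, visited, length, path):
        nonlocal best_len, best_path
        if length >= best_len:
            return  # prune: this partial path is already no better
        if visited == sites:
            best_len, best_path = length, path
            return
        for nxt, dist in graph[node].items():
            if nxt not in visited:
                dfs(nxt, visited | {nxt}, length + dist, path + [nxt])

    dfs(start, {start}, 0.0, [start])
    return best_len, best_path

# Illustrative fragment of a stomach topology (distances are made up):
stomach = {
    "cardia": {"fundus": 1.0, "upper body": 1.5},
    "fundus": {"cardia": 1.0, "upper body": 1.2},
    "upper body": {"cardia": 1.5, "fundus": 1.2, "antrum": 2.0},
    "antrum": {"upper body": 2.0, "pylorus": 0.8},
    "pylorus": {"antrum": 0.8},
}
print(best_cruise_path(stomach, "cardia"))
```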
After the capsule endoscope finishes scanning the current site, that site is marked as scanned and the next target site is selected according to the scanning order of the current cruising path. This improves examination efficiency, avoids missed sites, and improves the accuracy of the examination results.
The capsule endoscope then cruises from the current site toward the target site. Its heading toward the target site is adjusted according to the positional relation between the current site and the target site in the three-dimensional positional-relation model, until the target site appears in the images the capsule endoscope captures in real time. The image processing device identifies the target site in the image to obtain the target site name, determines its position within the image, and sends the target site name and target site position information to the inspection device. The target site position information comprises the target site position and the target site size: the target site position is the positional relation between the center-point coordinates of the target site detection frame and the center-point coordinates of the image, expressed as a vector, and the target site size is the number of pixels of the image within the target site detection frame. The processor of the inspection device determines the movement direction and/or deflection angle of the capsule endoscope from the target site position and size, giving the capsule endoscope's position and/or posture control information; it then converts this into the second magnet's position and/or posture control information according to the mechanical motion model and sends it to the magnetic control device.
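A minimal sketch of how the movement direction and deflection angle might be derived from the target site position vector and size; the proportional gains and the size set-point are assumptions, since the patent does not give the control law:

```python
import numpy as np

def control_from_target(frame_center: tuple[float, float],
                        image_size: tuple[int, int],
                        site_size_px: int,
                        target_size_px: int = 40_000,
                        gain_deg_per_px: float = 0.05) -> dict:
    """Derive capsule control info from the target site position and size.

    frame_center: centre of the target site detection frame (pixels).
    image_size:   (width, height) of the real-time image.
    site_size_px: pixel count inside the detection frame.
    """
    w, h = image_size
    # Position vector from the image centre to the detection-frame centre,
    # as defined in the text above.
    offset = np.array([frame_center[0] - w / 2, frame_center[1] - h / 2])
    return {
        # Deflect so the target site drifts toward the image centre.
        "yaw_deg": gain_deg_per_px * offset[0],
        "pitch_deg": -gain_deg_per_px * offset[1],
        # Move closer while the site still looks too small.
        "advance": site_size_px < target_size_px,
    }
```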
Note that in this embodiment the capsule endoscope's scan of the target area, e.g. the stomach, is an automatic cruise-scanning process. Based on the image processing device's recognition and localization of each target site, the inspection device scans the target sites one by one along the planned current cruising path: it determines the capsule endoscope's position and/or posture control information in the stomach and converts it into position and/or posture control information of the second magnet in the magnetic control device; the magnetic control device controls the transmission mechanism to adjust the second magnet's position and/or posture; and the second magnet acts on the first magnet in the capsule endoscope to adjust the capsule endoscope's position and/or posture. The capsule endoscope is thus driven to scan each target site in the planned order, with each site marked as scanned after its scan finishes, until all target sites to be scanned have been scanned.
The capsule endoscope scans a target site as follows:
First, the magnetic control device controls the transmission mechanism to adjust the position and/or posture of the second magnet, the second magnet acts on the first magnet to adjust the position and/or posture of the capsule endoscope, and the target site is brought to the center of the images the capsule endoscope captures in real time. The pixel distances between the target site detection frame and the upper, lower, left and right edges of the real-time image are then computed. Let the resolution of the real-time image be R. If the distance from the detection frame to each of the four edges is at least R/4 pixels, the target site falls entirely within the capsule endoscope's field of view; the capsule endoscope scans the target site directly and no extra scanning (ring scan and/or cross scan) is needed. If the distance from the detection frame to any edge of the real-time image is less than R/4 pixels, the boundary of the target site may extend beyond the capsule endoscope's field of view, and a ring scan and/or cross scan of the target site is required.
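The R/4 margin rule can be expressed compactly; here R is taken as the image resolution along each axis, which is an assumption about how the single resolution value applies to a non-square image:

```python
def fully_in_view(box: tuple[int, int, int, int],
                  width: int, height: int) -> bool:
    """R/4 margin rule: the target site detection frame must keep at least
    a quarter of the image resolution between itself and every edge;
    otherwise a ring and/or cross scan is needed."""
    x0, y0, x1, y1 = box  # (x_min, y_min, x_max, y_max)
    return (x0 >= width / 4 and width - x1 >= width / 4
            and y0 >= height / 4 and height - y1 >= height / 4)
```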
As shown in FIG. 5, in a ring scan the inspection device directs the magnetic control device to change the second magnet's posture through the transmission mechanism, adjusting the capsule endoscope's posture so that it is tilted 15-30 degrees away from the center of the target site; the capsule endoscope then sweeps 360 degrees around the target site, so that the target site and its neighboring region are scanned completely. In FIG. 6, the dotted frame marks the region scanned by the capsule endoscope.
As shown in FIG. 7, in a cross scan the inspection device directs the magnetic control device to change the second magnet's posture through the transmission mechanism to adjust the capsule endoscope's posture, and the capsule endoscope scans upward, downward, leftward and rightward in sequence. In each direction the scan continues until the distance between the target site detection frame and the image boundary in that direction exceeds R/2 pixels, so that the target site and its adjacent regions in the four directions are scanned completely. In FIG. 8, the dotted frame marks the region scanned by the capsule endoscope.
When the target site does not fall entirely within the capsule endoscope's field of view, the inspection device directs the magnetic control device to change the second magnet's posture through the transmission mechanism to adjust the capsule endoscope's posture, and the capsule endoscope performs a ring scan and/or cross scan of the target site. This guarantees complete coverage of the scan and further improves the accuracy of the examination results.
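A sketch of the two supplemental scan patterns as simple waypoint generators; the 20-degree tilt is one value inside the 15-30 degree range given above, and the 30-degree yaw step is an assumption:

```python
def ring_scan_poses(tilt_deg: float = 20.0, yaw_step_deg: int = 30):
    """Ring scan: tilt the capsule 15-30 degrees off the target site centre
    (20 degrees chosen here) and sweep a full 360 degrees around it."""
    return [(tilt_deg, yaw) for yaw in range(0, 360, yaw_step_deg)]

def cross_scan_directions():
    """Cross scan: scan up, down, left and right in sequence; in each
    direction the capsule keeps going until the detection frame clears
    the image border in that direction by more than R/2 pixels."""
    return ["up", "down", "left", "right"]
```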
In this embodiment, "scanning" means the process in which the capsule endoscope captures images of the target area in real time.
In some cases, if during scanning the capsule endoscope is judged to have deviated from the current cruising path, or searching for the target site exceeds a preset time, the current cruising path must be re-planned (using the same planning method as above), and cruise scanning continues as described until all target sites to be scanned have been scanned. Deviation from the current cruising path is typically caused by an unexpected change of the subject's posture that moves the capsule endoscope away from the area scanned so far (e.g., from the upper stomach to the lower stomach). This can be judged from the three-dimensional positional-relation model of the sites to be scanned: if the site currently captured by the capsule endoscope and the target site are not in the same area, the capsule endoscope cannot continue scanning along the current cruising path and a new cruising path must be planned.
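The re-planning trigger can be summarized as a two-condition check; the 30-second timeout is a placeholder, since the text only says "a preset time", and the region map is assumed to come from the three-dimensional positional-relation model:

```python
def needs_replanning(current_site: str, target_site: str,
                     region_of: dict[str, str],
                     search_time_s: float, timeout_s: float = 30.0) -> bool:
    """Re-plan when the capsule has left the planned path or the target
    site cannot be found in time. region_of maps each site to its region
    in the 3D positional-relation model (e.g. upper/lower stomach)."""
    off_path = region_of.get(current_site) != region_of.get(target_site)
    return off_path or search_time_s > timeout_s
```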
In some embodiments, the image processing device identifies the target site in the image, obtains the target site name, determines the position of the target site in the image, and obtains the target site position information as follows:
inputting the image into a site-detection AI model for site feature recognition and site name determination, so as to identify the target site and the target site name in the image;
segmenting the identified target site with a site-segmentation AI model to generate a target site mask and a target site detection frame;
and determining the position of the target site in the image according to the target site mask and the target site detection frame to obtain the target site position information, where the target site position information comprises a target site position and a target site size, and the target site size is the number of pixels of the image within the target site detection frame.

Specifically, the site-detection AI model may be any AI model such as a recurrent network model, a convolutional network model, a deep neural network model, a deep generative model or an auto-encoder model. The selected model is trained on a set of images captured in advance by a capsule endoscope at different positions of the target area, until it satisfies one or a combination of recognition accuracy, sensitivity and specificity requirements; a first image is then input into the site-detection AI model for site feature recognition and site name determination, identifying the target site and its name in the first image.
For the site-segmentation AI model, likewise, any AI model such as a recurrent network model, convolutional network model, deep neural network model, deep generative model or auto-encoder model may be selected and trained on a set of images captured in advance by a capsule endoscope at different positions of the target area, until it satisfies one or a combination of recognition accuracy, sensitivity and specificity requirements; the first image is then input into the site-segmentation AI model for site segmentation, generating a target site mask and a target site detection frame. The embodiment below uses a site-detection deep convolutional neural network model and a site-segmentation deep convolutional neural network model as examples. The specific steps are as follows:
First, select a set of images captured in advance by the capsule endoscope at different positions of the stomach, such that the site corresponding to each image can be identified and at least one site is completely contained in each image. Annotate all sites in the selected image set, marking each site completely, and generate annotation-frame files from the annotated regions. Split the annotated images into a training set and a test set with no overlap between them. Then train the initial site-detection deep convolutional neural network model and the initial site-segmentation deep convolutional neural network model with the training set. The initial site-detection model is based on a natural-scene detection network architecture; its weights are initialized to those of a natural-scene detection pre-trained model and kept fixed during training. The initial site-segmentation model is trained directly with the annotation masks. During training, the feature maps produced by the convolutional layers of the two models are passed to each other in cascade; the detection frames generated by the detection model act on the segmentation model, which finally outputs a site mask, and the site mask output by the segmentation model acts in turn on the detection frames output by the detection model. The parameters of both models are updated by back-propagating the loss-function gradients, yielding the current site-detection and current site-segmentation deep convolutional neural network models.
The current part detection model and the current part segmentation model are then trained iteratively on the training set. After each iteration, the test set is used to measure one or a combination of the recognition accuracy, sensitivity and specificity of each model, and these indices are compared against preset requirements. If both models meet the requirements, training terminates and the models at termination are taken as the final part detection deep convolutional neural network model and the final part segmentation deep convolutional neural network model; otherwise training continues until the requirements are met.
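The training loop just described can be sketched in code. The following is a minimal PyTorch-style skeleton under stated assumptions: detect_net, segment_net and metric_fn are hypothetical stand-ins rather than names from the patent, and the cascaded feature exchange between the two networks is simplified to box-conditioned segmentation.

```python
# Minimal sketch of the joint training loop, assuming PyTorch.
import torch
import torch.nn.functional as F

def train_until_requirement(detect_net, segment_net, train_loader, test_loader,
                            metric_fn, required=0.95, max_epochs=100):
    # jointly optimize both models, as in the cascaded scheme described above
    optimizer = torch.optim.Adam(
        list(detect_net.parameters()) + list(segment_net.parameters()), lr=1e-4)
    for epoch in range(max_epochs):
        detect_net.train(); segment_net.train()
        for images, gt_boxes, gt_masks in train_loader:
            optimizer.zero_grad()
            pred_boxes = detect_net(images)               # detection branch: part frames
            pred_masks = segment_net(images, pred_boxes)  # frames steer the mask branch
            loss = (F.smooth_l1_loss(pred_boxes, gt_boxes)
                    + F.binary_cross_entropy_with_logits(pred_masks, gt_masks))
            loss.backward()                               # back-propagation updates both models
            optimizer.step()
        # single-iteration test: stop once the chosen index meets the preset requirement
        if metric_fn(detect_net, segment_net, test_loader) >= required:
            break
    return detect_net, segment_net
```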
When an examination is performed with the capsule endoscope, the real-time image acquired by the capsule endoscope is input into the part detection deep convolutional neural network model for part feature recognition and part name determination, and the target part in the real-time image and its name are recognized.
The real-time image is then input into the part segmentation deep convolutional neural network model, which segments the recognized target part and generates a target part mask and a target part detection frame. The target part detection frame may be rectangular or polygonal. For example, referring to fig. 9, the mask and detection frame of the current part B and the mask and detection frame of the target part C are generated by the part segmentation deep convolutional neural network model.
Taking the image centre point of the real-time image in which the target part lies as the origin, the position of the target part in the real-time image is obtained from the positional relation between the coordinates of the centre of the target part detection frame and the coordinates of the origin; the number of image pixels inside the target part detection frame gives the size of the target part in the real-time image.
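This position/size computation is straightforward; the sketch below is a minimal illustration, assuming a hypothetical (x_min, y_min, x_max, y_max) detection frame in pixel coordinates.

```python
# Position of a part relative to the image centre, and its size as the
# pixel count inside the detection frame, as described above.
def locate_part(frame, image_width, image_height):
    x_min, y_min, x_max, y_max = frame
    origin_x, origin_y = image_width / 2.0, image_height / 2.0
    centre_x, centre_y = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    position = (centre_x - origin_x, centre_y - origin_y)  # direction vector from origin
    size = (x_max - x_min) * (y_max - y_min)               # pixels inside the frame
    return position, size

# Example: a 100x80 frame whose centre sits up-left of a 640x480 image centre.
print(locate_part((100, 100, 200, 180), 640, 480))  # ((-170.0, -100.0), 8000)
```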
In the capsule endoscopy system provided by the embodiment of the invention, the inspection device plans the optimal cruising path and the graphics processing device recognizes and locates the parts in the image through the part detection AI model and the part segmentation AI model. Active control of the capsule endoscope in the target region is thus achieved under visual guidance: changes of the position and/or posture of the capsule endoscope in the target region are controlled without manual operation by a user (such as a doctor). This avoids the inaccurate examination results and low examination efficiency caused by human factors, improving both the accuracy of the examination results and the examination efficiency.
In some embodiments, that the graphics processing device recognizes the target part in the image to obtain the target part name and determines the position of the target part in the image to obtain the target part position information specifically includes:
obtaining the name of the target part, a target part mask and a target part detection frame according to an AI model;
and determining the position of the target part in the first image according to the target part mask and the target part detection frame to obtain target part position information, wherein the target part position information comprises a target part position and a target part size, and the target part size is the number of pixels of the image in the target part detection frame.
It can be understood that part detection and part segmentation can also be realized by a single AI model. Any AI model such as a recurrent network model, a convolutional network model, a deep neural network model, a deep generative model or an autoencoder model may be selected and trained on a set of images of different parts of the target region captured in advance by the capsule endoscope, until the model satisfies one or a combination of recognition accuracy, sensitivity and specificity requirements. The first image is then input into this AI model for part feature recognition, part name determination and generation of the part mask and part detection frame; the position of the target part in the first image is determined from the target part mask and the target part detection frame to obtain the target part position information, which includes the target part position and the target part size, the target part size being the number of image pixels inside the target part detection frame. For the specific implementation, refer to the detailed description in the above embodiments, which is not repeated here.
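An off-the-shelf instance-segmentation network is one way to realize such a single model; Mask R-CNN is used below purely as an assumed example, not the patent's own model, since one forward pass returns class labels (part names), detection frames and masks together.

```python
# Single-model detection + segmentation sketch, assuming torchvision's Mask R-CNN.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# hypothetical class count: background + 7 stomach parts
model = maskrcnn_resnet50_fpn(num_classes=8)
model.eval()

image = torch.rand(3, 480, 640)          # one RGB image, values in [0, 1]
with torch.no_grad():
    output = model([image])[0]           # one result dict per input image
boxes = output["boxes"]                  # detection frames
labels = output["labels"]                # part-name indices
masks = output["masks"]                  # one soft mask per detected part
```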
In some embodiments, the processor of the inspection device is configured to create lists of unscanned parts, parts being scanned, and scanned parts; the graphical user interface includes a second display window for displaying these lists in a first list or in a 3D model of the target region.
Specifically, before the examination, all parts to be scanned are placed in the unscanned list. During the examination, based on the recognition and positioning results of the part detection and part segmentation deep convolutional neural network models, a part currently appearing in the centre of the field of view of the capsule endoscope (a part under scanning observation) is moved into the list of parts being scanned, and a part that has left the centre of the field of view (a part whose scanning observation is complete) is moved from the list of parts being scanned into the scanned list. As shown in fig. 10, part B is a scanned part, part C is a part being scanned, and part D is an unscanned part. The three lists may be presented as plain lists. Taking the stomach as the target region, the unscanned, being-scanned and scanned parts may also be displayed on a 3D stomach model, for example marked in different colours, or marked with different line styles, to assist the doctor in judging the completeness of the stomach examination and to improve the integrity and completeness of the examination results. Fig. 11 schematically shows the second display window displaying the lists on the stomach model: the different parts of the stomach model are delimited and the unscanned, being-scanned and scanned parts are marked with different line styles, where the unscanned parts are parts D, E, F and G, the part being scanned is part C, and the scanned parts are parts A and B.
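The bookkeeping for the three lists can be sketched as follows — a minimal illustration, assuming the recognition step reports which parts currently appear in the centre of the field of view.

```python
# Move parts between the unscanned / being-scanned / scanned lists as the
# capsule's field of view changes, per the rule described above.
def update_scan_lists(unscanned, scanning, scanned, parts_in_view_centre):
    for part in list(scanning):
        if part not in parts_in_view_centre:  # left the view centre: scan finished
            scanning.remove(part)
            scanned.append(part)
    for part in parts_in_view_centre:
        if part in unscanned:                 # entered the view centre: scan started
            unscanned.remove(part)
            scanning.append(part)
    return unscanned, scanning, scanned

# Example mirroring fig. 10: B has finished, C is in view, D not yet seen.
print(update_scan_lists(["C", "D"], ["B"], ["A"], {"C"}))
# (['D'], ['C'], ['A', 'B'])
```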
In some embodiments, the processor of the inspection device is configured to: receive, through the third communication link, the real-time position and/or posture information of the second magnet sent by the magnetic control device; and convert the real-time position and/or posture information of the second magnet into the real-time position and/or posture information of the capsule endoscope through the mechanical motion model. The graphical user interface includes a fourth display window for displaying the real-time position and/or posture information of the second magnet, and a fifth display window for presenting the real-time position and/or posture information of the capsule endoscope in the form of 3D simulated graphics. Displaying both on the graphical user interface helps the doctor monitor the examination process and improves the accuracy of the examination results. Fig. 12 schematically shows the fourth display window displaying the real-time position and posture of the second magnet as a 3D simulation: the three-dimensional position coordinates of the second magnet are Xn on the X axis, Yn on the Y axis and Zn on the Z axis, and its three-axis posture coordinates are RXn on the RX axis, RYn on the RY axis and RZn on the RZ axis. Fig. 13 schematically shows the fifth display window displaying the real-time position and posture of the capsule endoscope in the same way, with position coordinates Xn, Yn, Zn and posture coordinates RXn, RYn, RZn.
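The mechanical motion model itself is not detailed here; purely as an illustration of the conversion step, the sketch below maps the magnet pose to a capsule pose with a fixed, hypothetical offset and a shared orientation — a deliberate oversimplification of any real magnet–capsule coupling.

```python
# Toy magnet-pose -> capsule-pose conversion; the offset is invented for
# illustration and is NOT the patent's mechanical motion model.
import numpy as np

MAGNET_TO_CAPSULE_OFFSET = np.array([0.0, 0.0, -0.12])  # hypothetical, in metres

def capsule_pose_from_magnet(magnet_xyz, magnet_rxyz):
    # position: magnet position plus a fixed offset; orientation: assumed shared
    capsule_xyz = np.asarray(magnet_xyz, dtype=float) + MAGNET_TO_CAPSULE_OFFSET
    capsule_rxyz = np.asarray(magnet_rxyz, dtype=float)
    return capsule_xyz, capsule_rxyz

# e.g. magnet at (Xn, Yn, Zn) = (0.1, 0.2, 0.5) with posture (RXn, RYn, RZn)
print(capsule_pose_from_magnet((0.1, 0.2, 0.5), (0.0, 15.0, 90.0)))
```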
In some embodiments, the graphics processing device is configured to: recognize a lesion in the image to obtain the lesion name, and determine the position and size of the lesion in the image to obtain the lesion position and the lesion size;
and send the lesion name, the lesion position and the lesion size to the inspection device.
Further, the graphics processing device is specifically configured to: input the image into a lesion detection AI model for lesion feature recognition and lesion name determination, recognizing the lesion in the image and its name; segment the recognized lesion through a lesion segmentation AI model to generate a lesion mask and a lesion detection frame; and determine the position and size of the lesion in the image from the lesion mask and the lesion detection frame to obtain the lesion position and the lesion size.
Specifically, for the lesion detection AI model, any AI model such as a recurrent network model, a convolutional network model, a deep neural network model, a deep generative model or an autoencoder model may be selected. The selected model is trained on a set of images of different parts of the target region captured in advance by the capsule endoscope, until a lesion detection AI model satisfying one or a combination of recognition accuracy, sensitivity and specificity requirements is obtained; the image is then input into the lesion detection AI model for lesion feature recognition and lesion name determination, and the target lesion in the image and its name are recognized.
For the lesion segmentation AI model, likewise any AI model such as a recurrent network model, a convolutional network model, a deep neural network model, a deep generative model or an autoencoder model may be selected and trained on a set of images of different parts of the target region captured in advance by the capsule endoscope, until a lesion segmentation AI model satisfying one or a combination of recognition accuracy, sensitivity and specificity requirements is obtained; the image is then input into the lesion segmentation AI model for lesion segmentation, generating a target lesion mask and a target lesion detection frame. In the embodiment of the invention, a lesion detection deep convolutional neural network model and a lesion segmentation deep convolutional neural network model are taken as examples; the specific steps are as follows:
First, select a set of images of different parts of the stomach captured in advance by the capsule endoscope, in which the lesion corresponding to each image is identifiable and at least one lesion is completely contained in the image. Annotate every lesion in the selected image set, generate an annotation-frame file from the annotated regions, and divide the annotated images into a training set and a test set with no overlap between them. Then train the initial lesion detection deep convolutional neural network model and the initial lesion segmentation deep convolutional neural network model on the training set. The initial lesion detection model is based on a natural-scene detection network architecture; its weights are initialized to those of a natural-scene detection pre-trained model and are kept fixed during training. The initial lesion segmentation model is trained directly with the annotation masks. During training, the feature maps produced by the convolutional layers of the two models are exchanged in a cascaded manner: the detection frames generated by the detection model act on the segmentation model, which finally outputs lesion masks, and the lesion masks output by the segmentation model in turn act on the detection frames output by the detection model. The parameters of both models are updated by back-propagating the gradients of their loss functions, yielding the current lesion detection deep convolutional neural network model and the current lesion segmentation deep convolutional neural network model.
The current lesion detection model and the current lesion segmentation model are then trained iteratively on the training set. After each iteration, the test set is used to measure one or a combination of the recognition accuracy, sensitivity and specificity of each model, and these indices are compared against preset requirements. If both models meet the requirements, training terminates and the models at termination are taken as the final lesion detection deep convolutional neural network model and the final lesion segmentation deep convolutional neural network model; otherwise training continues until the requirements are met.
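As a reference for the stopping test above, the three indices have standard definitions from confusion counts; a minimal sketch, assuming the usual formulas (the threshold values are invented for illustration):

```python
# Accuracy, sensitivity and specificity from true/false positives/negatives.
def recognition_indices(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # overall recognition accuracy
    sensitivity = tp / (tp + fn)                 # true-positive rate
    specificity = tn / (tn + fp)                 # true-negative rate
    return accuracy, sensitivity, specificity

def meets_requirement(tp, fp, tn, fn, thresholds=(0.90, 0.95, 0.95)):
    # training terminates once every chosen index reaches its preset requirement
    return all(v >= t for v, t in
               zip(recognition_indices(tp, fp, tn, fn), thresholds))
```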
When an examination is performed with the capsule endoscope, the real-time image acquired by the capsule endoscope is input into the lesion detection deep convolutional neural network model for lesion feature recognition and lesion name determination, and the lesion in the real-time image and its name are recognized.
The real-time image is then input into the lesion segmentation deep convolutional neural network model, which segments the recognized lesion and generates a lesion mask and a lesion detection frame. The lesion detection frame may be rectangular or polygonal. For example, referring to fig. 14, the mask and detection frame of lesion B and the mask and detection frame of lesion C are generated by the lesion segmentation deep convolutional neural network model.
Taking the image centre point of the real-time image in which the lesion lies as the origin, the position of the lesion in the real-time image is obtained from the positional relation (direction vector) between the coordinates of the centre of the lesion detection frame and the coordinates of the origin; the number of image pixels inside the lesion detection frame gives the size of the lesion in the real-time image.
In some embodiments, the graphics processing device is configured to:
obtaining the name of the lesion, a lesion mask and a lesion detection frame according to an AI model;
and determining the position and the size of the lesion in the image according to the lesion mask and the lesion detection frame to obtain the lesion position and the lesion size.
It can be understood that lesion detection and lesion segmentation can also be realized by a single AI model. Any AI model such as a recurrent network model, a convolutional network model, a deep neural network model, a deep generative model or an autoencoder model may be selected and trained on a set of images of different parts of the target region captured in advance by the capsule endoscope, until the model satisfies one or a combination of recognition accuracy, sensitivity and specificity requirements. The image is then input into this AI model for lesion feature recognition, lesion name determination and generation of the lesion mask and lesion detection frame; the position and size of the lesion in the image are determined from the lesion mask and the lesion detection frame to obtain the lesion position and the lesion size. For the specific implementation, refer to the detailed description in the above embodiments, which is not repeated here.
In some embodiments, the processor of the inspection device is configured to: establish an association list of lesions and target parts according to the target part position, the target part size, the lesion position and the lesion size. The graphical user interface includes a third display window for displaying the association list of target lesions and target parts in a second list or in a 3D model of the target region.
Specifically, according to the recognition and positioning results for the lesion and the target part: when a lesion detection frame overlaps a target part detection frame, the lesion is associated with that target part; when a lesion detection frame overlaps no target part detection frame, the target part detection frame nearest to the lesion detection frame is computed and the lesion is associated with that nearest target part. As shown in fig. 14, lesion 1 is associated with part B and lesion 2 with part C. From these associations, a lesion-to-target-part association list is established and displayed in the third display window of the graphical user interface as a list or on a 3D model of the target region, assisting the doctor in judging the distribution of lesions in the stomach and improving the accuracy of the examination results. When the target region is the stomach, the associations may be displayed on a 3D stomach model. Fig. 15 schematically shows the third display window displaying the association list on the stomach model: the different parts of the stomach model are delimited and each lesion-part association is marked with a line, lesion 1 being associated with part B and lesion 2 with part C; the parts include the target part, which is not marked in the figure.
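The association rule can be sketched directly from the two cases above; the boxes and names below are hypothetical examples, not data from the patent.

```python
# Associate each lesion frame with an overlapping part frame, falling back
# to the nearest part frame when no overlap exists.
def boxes_overlap(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def centre_distance(a, b):
    acx, acy = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    bcx, bcy = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    return ((acx - bcx) ** 2 + (acy - bcy) ** 2) ** 0.5

def associate(lesion_boxes, part_boxes):
    links = {}
    for lesion_id, lb in lesion_boxes.items():
        overlapping = [p for p, pb in part_boxes.items() if boxes_overlap(lb, pb)]
        if overlapping:
            links[lesion_id] = overlapping[0]
        else:  # no overlap: fall back to the nearest part frame
            links[lesion_id] = min(part_boxes,
                                   key=lambda p: centre_distance(lb, part_boxes[p]))
    return links

# Example: lesion 1 overlaps part B; lesion 2 overlaps nothing but is nearest to C.
print(associate({"lesion1": (10, 10, 30, 30), "lesion2": (150, 150, 170, 170)},
                {"B": (0, 0, 100, 100), "C": (180, 180, 300, 300)}))
# {'lesion1': 'B', 'lesion2': 'C'}
```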
In some embodiments, the processor of the inspection device is configured to: receive at least one image containing important information, selected by the user through a mouse or a key from the images displayed by the graphical user interface, and generate a captured-image list; the graphical user interface includes a sixth display window for presenting the captured-image list.
Specifically, a user (such as a doctor) can capture, through a mouse or key event, at least one image containing important information from the images captured in real time by the capsule endoscope and displayed in the first display window of the graphical user interface. The processor receives the captured image or images, generates a captured-image list and sends it to the display device; after rendering by the display device, the list is shown in the sixth display window of the graphical user interface to assist the doctor's examination and improve the accuracy of the examination results. Important information includes, but is not limited to, normal parts, typical parts and lesion parts.
In some embodiments, the processor of the inspection device is configured to: store at least one image together with its corresponding target part name, target part position, target part size, lesion name, lesion position and lesion size in a corresponding image file, to assist the doctor in reading the images and making a diagnosis.
As shown in fig. 16, an embodiment of the present invention provides a capsule endoscopy system, comprising: the device comprises a capsule endoscope, an inspection device, a magnetic control device and a wireless transceiver device, wherein the wireless transceiver device is connected with the inspection device through a first communication link, and the inspection device is connected with the magnetic control device through a third communication link;
the capsule endoscope is internally provided with a first magnet and is used for capturing an image of the target region in real time and transmitting the image to the wireless transceiver device in real time;
the wireless transceiver device is used for receiving the image and sending the image to the inspection device; the inspection device comprises a processor and a display device,
the processor is used for:
identifying a target part in the image to obtain a target part name, and determining the position of the target part in the image to obtain target part position information;
determining the position and/or posture control information of the capsule endoscope according to the target part position information, converting the position and/or posture control information of the capsule endoscope into the position and/or posture control information of the second magnet, and sending the position and/or posture control information of the second magnet to the magnetic control device; the display device is used for displaying a graphical user interface, and the graphical user interface comprises a first display window used for displaying the image captured by the capsule endoscope in real time;
The magnetic control equipment is provided with a transmission mechanism and a second magnet, and is used for controlling the transmission mechanism to adjust the position and/or the posture of the second magnet according to the position and/or the posture control information of the second magnet, and the second magnet acts on the first magnet to adjust the position and/or the posture of the capsule endoscope. In this embodiment, the connection relationship and the executed operations or functions of each device are described in detail in the foregoing embodiments, and are not described herein.
In addition, the specific features described in the above embodiments may be combined in any suitable manner without contradiction. In order to avoid unnecessary repetition, various possible combinations of embodiments of the present invention are not described in detail.
In addition, the various embodiments of the present invention may be combined arbitrarily; as long as a combination does not depart from the idea of the embodiments of the invention, it should likewise be regarded as part of the disclosure of the embodiments of the invention.

Claims (12)

1. A capsule endoscopy system, the system comprising: the system comprises a capsule endoscope, an inspection device, a magnetic control device, a wireless transceiver and a graphic processing device, wherein the wireless transceiver is connected with the inspection device through a first communication link, the inspection device is connected with the graphic processing device through a second communication link, and the inspection device is connected with the magnetic control device through a third communication link;
the capsule endoscope is internally provided with a first magnet and is used for capturing an image of the target region in real time and transmitting the image to the wireless transceiver device in real time;
the magnetic control equipment is provided with a transmission mechanism and a second magnet, and is used for controlling the transmission mechanism to adjust the position and/or the posture of the second magnet according to the position and/or the posture control information of the second magnet, and the second magnet acts on the first magnet to adjust the position and/or the posture of the capsule endoscope;
the wireless transceiver is used for receiving the image and sending the image to the inspection equipment;
the graphics processing device is used for receiving the image sent by the inspection device, recognizing the target part in the image to obtain the target part name, and determining the position of the target part in the image to obtain the target part position information; and for transmitting the target part name and the target part position information to the inspection device;
the inspection apparatus includes a processor and a display device:
the processor is used for: receiving an automatic examination mode selected by the user through a setting window of a graphical user interface;
receiving the image captured in real time by the capsule endoscope in the target region and sending the image in real time to the graphics processing device;
taking the part to be scanned that is first captured by the capsule endoscope upon entering the target region as a starting point, traversing the whole three-dimensional network topology graph using a path planning algorithm for the three-dimensional network topology graph, finding all cruising paths that can traverse all parts of the whole three-dimensional network topology graph, and selecting the path with the smallest weight sum, namely the optimal cruising path, as the current cruising path, wherein the optimal cruising path comprises the scanning order of all parts to be scanned, and the three-dimensional network topology graph is a three-dimensional positional relation model of the parts in the target region;
selecting the target part according to the scanning order of the parts to be scanned in the current cruising path, so that the capsule endoscope cruises from the current part to the target part, first adjusting the orientation of the capsule endoscope towards the target part according to the positional relation between the current part and the target part in the three-dimensional positional relation model, so that the target part appears in the image captured in real time by the capsule endoscope;
determining the position and/or posture control information of the capsule endoscope according to the target part position information, converting the position and/or posture control information of the capsule endoscope into the position and/or posture control information of the second magnet, and sending the position and/or posture control information of the second magnet to the magnetic control device;
re-planning the current cruising path when, during scanning by the capsule endoscope, it is judged that the capsule endoscope has deviated from the current cruising path or that the target part has not been found within a preset time;
the display device is used for displaying a graphical user interface, and the graphical user interface comprises a first display window used for displaying the image shot by the capsule endoscope in real time.
2. The capsule endoscopy system of claim 1, wherein the graphics processing apparatus is configured to:
obtaining the name of the target part, a target part mask and a target part detection frame according to an AI model;
and determining the position of the target part in the image according to the target part mask and the target part detection frame to obtain the target part position information, wherein the target part position information comprises a target part position and a target part size, and the target part size is the number of pixels of the image in the target part detection frame.
3. The capsule endoscopy system of claim 1, wherein the graphics processing apparatus is configured to:
inputting the image into a part detection AI model for part feature recognition and part name determination, so as to recognize the target part and the target part name in the image;
dividing the identified target part through a part division AI model to generate a target part mask and a target part detection frame;
and determining the position of the target part in the image according to the target part mask and the target part detection frame to obtain the target part position information, wherein the target part position information comprises a target part position and a target part size, and the target part size is the number of pixels of the image in the target part detection frame.
4. The capsule endoscopy system of claim 1, wherein the processor is configured to create lists of unscanned parts, parts being scanned, and scanned parts;
the graphical user interface includes a second display window for displaying the lists of unscanned parts, parts being scanned, and scanned parts in a first list or in a 3D model of the target region.
5. The capsule endoscopy system of claim 1, wherein the processor is configured to:
receiving real-time position and/or posture information of the second magnet sent by the magnetic control equipment through the third communication link;
converting the real-time position and/or posture information of the second magnet into real-time position and/or posture information of the capsule endoscope through a mechanical motion model;
the graphical user interface comprises a fourth display window, wherein the fourth display window is used for displaying real-time position and/or gesture information of the second magnet;
the graphical user interface includes a fifth display window for presenting real-time position and/or pose information of the capsule endoscope in the form of 3D simulated graphics.
6. The capsule endoscopy system of claim 1, wherein the graphics processing apparatus is configured to:
identifying a lesion in the image to obtain a lesion name, and determining the position and the size of the lesion in the image to obtain the lesion position and the lesion size;
and sending the lesion name, the lesion position and the lesion size to the inspection device.
7. The capsule endoscopy system of claim 6, wherein the graphics processing apparatus is configured to:
obtaining the name of the lesion, a lesion mask and a lesion detection frame according to an AI model;
and determining the position and the size of the lesion in the image according to the lesion mask and the lesion detection frame to obtain the lesion position and the lesion size.
8. The capsule endoscopy system of claim 6, wherein the graphics processing apparatus is configured to:
inputting the image into a lesion detection AI model for lesion feature recognition and lesion name determination, so as to recognize the lesion and the lesion name in the image;
segmenting the recognized lesion through a lesion segmentation AI model to generate a lesion mask and a lesion detection frame;
and determining the position and the size of the lesion in the image according to the lesion mask and the lesion detection frame to obtain the lesion position and the lesion size.
9. The capsule endoscopy system of claim 6, wherein the processor is configured to:
establishing an association list of the lesion and the target part according to the target part position, the target part size, the lesion position and the lesion size;
The graphical user interface includes a third display window for displaying a list of associations of target lesions with the target site in a second list or a 3D model of the target area.
10. The capsule endoscopy system of claim 6, wherein the processor of the inspection device is configured to: receiving at least one image containing important information, selected by the user through a mouse or a key from the images displayed by the graphical user interface, and generating a captured image list;
the graphical user interface includes a sixth display window for presenting the captured image list.
11. The capsule endoscopy system of claim 6, wherein the processor is configured to:
storing at least one of the images and its corresponding target part name, target part position, target part size, lesion name, lesion position and lesion size in a corresponding image file.
12. A capsule endoscopy system, the system comprising: the device comprises a capsule endoscope, an inspection device, a magnetic control device and a wireless transceiver device, wherein the wireless transceiver device is connected with the inspection device through a first communication link, and the inspection device is connected with the magnetic control device through a third communication link;
the capsule endoscope is internally provided with a first magnet and is used for capturing an image of the target region in real time and transmitting the image to the wireless transceiver device in real time;
the magnetic control equipment is provided with a transmission mechanism and a second magnet, and is used for controlling the transmission mechanism to adjust the position and/or the posture of the second magnet according to the position and/or the posture control information of the second magnet, and the second magnet acts on the first magnet to adjust the position and/or the posture of the capsule endoscope;
the wireless transceiver is used for receiving the image and sending the image to the inspection equipment;
the examination apparatus comprises a processor and a display device,
the processor is used for:
receiving an automatic checking mode selected by a user through a setting window of a graphical user interface;
identifying a target part in the image to obtain a target part name, and determining the position of the target part in the image to obtain target part position information;
taking the part to be scanned that is first captured by the capsule endoscope upon entering the target region as a starting point, traversing the whole three-dimensional network topology graph using a path planning algorithm for the three-dimensional network topology graph, finding all cruising paths that can traverse all parts of the whole three-dimensional network topology graph, and selecting the path with the smallest weight sum as the optimal cruising path, which is taken as the current cruising path, wherein the optimal cruising path comprises the scanning order of all parts to be scanned, and the three-dimensional network topology graph is a three-dimensional positional relation model of the parts in the target region;
selecting the target part according to the scanning order of the parts to be scanned in the current cruising path, so that the capsule endoscope cruises from the current part to the target part, first adjusting the orientation of the capsule endoscope towards the target part according to the positional relation between the current part and the target part in the three-dimensional positional relation model, so that the target part appears in the image captured in real time by the capsule endoscope;
determining the position and/or posture control information of the capsule endoscope according to the target part position information, converting the position and/or posture control information of the capsule endoscope into the position and/or posture control information of the second magnet, and sending the position and/or posture control information of the second magnet to the magnetic control device;
re-planning the current cruising path when, during scanning by the capsule endoscope, it is judged that the capsule endoscope has deviated from the current cruising path or that the target part has not been found within a preset time;
the display device is used for displaying a graphical user interface, and the graphical user interface comprises a first display window used for displaying the image shot by the capsule endoscope in real time.
CN202011096890.2A 2020-10-14 2020-10-14 Capsule endoscopy system Active CN112075914B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011096890.2A CN112075914B (en) 2020-10-14 2020-10-14 Capsule endoscopy system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011096890.2A CN112075914B (en) 2020-10-14 2020-10-14 Capsule endoscopy system

Publications (2)

Publication Number Publication Date
CN112075914A CN112075914A (en) 2020-12-15
CN112075914B true CN112075914B (en) 2023-06-02

Family

ID=73730249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011096890.2A Active CN112075914B (en) 2020-10-14 2020-10-14 Capsule endoscopy system

Country Status (1)

Country Link
CN (1) CN112075914B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052956B (en) * 2021-03-19 2023-03-10 安翰科技(武汉)股份有限公司 Method, device and medium for constructing film reading model based on capsule endoscope
CN113240726B (en) * 2021-05-20 2022-10-14 南开大学 Real-time measurement method for optical target size under endoscope
CN114305297B (en) * 2021-09-08 2022-12-13 深圳市资福医疗技术有限公司 Magnetic control capsule endoscope system
CN113520279A (en) * 2021-09-10 2021-10-22 深圳市资福医疗技术有限公司 Gravity balance supporting equipment and magnetic control capsule endoscope system using same
CN113610847B (en) * 2021-10-08 2022-01-04 武汉楚精灵医疗科技有限公司 Method and system for evaluating stomach markers in white light mode
CN114184354B (en) * 2021-10-29 2023-08-29 深圳市资福医疗技术有限公司 Method, device and storage medium for detecting optical resolution of capsule endoscope
CN114463348A (en) * 2022-01-11 2022-05-10 广州思德医疗科技有限公司 Method for completing capsule endoscope stomach shooting through posture change, capsule endoscope and terminal
CN114259197B (en) * 2022-03-03 2022-05-10 深圳市资福医疗技术有限公司 Capsule endoscope quality control method and system
CN114916898A (en) * 2022-07-20 2022-08-19 广州华友明康光电科技有限公司 Automatic control inspection method, system, equipment and medium for magnetic control capsule

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107007242A (en) * 2017-03-30 2017-08-04 深圳市资福技术有限公司 A kind of capsule endoscopic control method and device
CN108720793A (en) * 2018-03-02 2018-11-02 重庆金山医疗器械有限公司 A kind of control system and method for capsule endoscope
CN209059133U (en) * 2018-09-04 2019-07-05 重庆金山医疗器械有限公司 Controlled capsule type endoscope diagnostic and examination system based on image recognition
CN109480746A (en) * 2019-01-14 2019-03-19 深圳市资福医疗技术有限公司 Intelligent control capsule endoscopic is in alimentary canal different parts working method and device
CN109846444A (en) * 2019-02-26 2019-06-07 重庆金山医疗器械有限公司 A kind of capsule automated navigation system and air navigation aid
CN211511733U (en) * 2019-06-17 2020-09-18 深圳硅基智控科技有限公司 Magnetic control device of capsule endoscope
CN111091536B (en) * 2019-11-25 2023-04-07 腾讯科技(深圳)有限公司 Medical image processing method, apparatus, device, medium, and endoscope
CN111227768A (en) * 2020-01-16 2020-06-05 重庆金山医疗技术研究院有限公司 Navigation control method and device of endoscope
CN112089392A (en) * 2020-10-14 2020-12-18 深圳市资福医疗技术有限公司 Capsule endoscope control method, device, equipment, system and storage medium

Also Published As

Publication number Publication date
CN112075914A (en) 2020-12-15

Similar Documents

Publication Publication Date Title
CN112075914B (en) Capsule endoscopy system
CN107811710B (en) Operation aided positioning system
US11025889B2 (en) Systems and methods for determining three dimensional measurements in telemedicine application
US7922652B2 (en) Endoscope system
US10881353B2 (en) Machine-guided imaging techniques
CN112089392A (en) Capsule endoscope control method, device, equipment, system and storage medium
CN114259197B (en) Capsule endoscope quality control method and system
WO2012014438A1 (en) Device, method, and program for assisting endoscopic observation
US10078906B2 (en) Device and method for image registration, and non-transitory recording medium
EP2929831A1 (en) Endoscope system and operation method of endoscope system
CN113662573B (en) Mammary gland focus positioning method, device, computer equipment and storage medium
CN111588464A (en) Operation navigation method and system
CN106780706A (en) Method for displaying image and device based on laparoscope
CN108814717A (en) surgical robot system
JP7321836B2 (en) Information processing device, inspection system and information processing method
CN110418610A (en) Determine guidance signal and for providing the system of guidance for ultrasonic hand-held energy converter
KR102382544B1 (en) System for providing surgical video and method thereof
US20190388057A1 (en) System and method to guide the positioning of a physiological sensor
JP2012228346A (en) Image display device
CN114831729A (en) Left auricle plugging simulation system for ultrasonic cardiogram and CT multi-mode image fusion
CN114637871A (en) Method and device for establishing digestive tract database and storage medium
CN115919461B (en) SLAM-based surgical navigation method
CN108460820B (en) Micro mobile device control device and method based on image feedback
JP2017063908A (en) Image registration device, method, and program
JP2017136275A (en) Image registration apparatus, method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant