CN114365992A - Endoscope blind area coverage detection method, system, equipment and storage medium - Google Patents


Publication number
CN114365992A
CN114365992A (application CN202111566096.4A)
Authority
CN
China
Prior art keywords
monitoring
endoscope
information
acquiring
contact
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111566096.4A
Other languages
Chinese (zh)
Inventor
徐强
李凌
陈宇桥
辜嘉
李文超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Zhongkehuaying Health Technology Co ltd
Original Assignee
Suzhou Zhongkehuaying Health Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Zhongkehuaying Health Technology Co ltd filed Critical Suzhou Zhongkehuaying Health Technology Co ltd
Priority to CN202111566096.4A priority Critical patent/CN114365992A/en
Publication of CN114365992A publication Critical patent/CN114365992A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/00002 Operational features of endoscopes
    • A61B 1/00004 Operational features of endoscopes characterised by electronic signal processing
    • A61B 1/00009 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B 1/00057 Operational features of endoscopes provided with means for testing or calibration
    • A61B 1/00131 Accessories for endoscopes
    • A61B 1/04 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, combined with photographic or television appliances
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The invention discloses a method, system, equipment and storage medium for detecting endoscope blind area coverage. The method comprises the following steps: acquiring a three-dimensional structure of a monitoring target object; acquiring pose information and view field information of the endoscope; acquiring a monitoring line group according to the pose information and the view field information; acquiring contact information between the monitoring line group and the monitoring target object according to the monitoring line group and the three-dimensional structure; and determining, from the contact information, the contact area detected at the current pose of the endoscope. The beneficial effects are as follows: determining the detected area from the contact information between the monitoring sight lines and the monitoring target object effectively reduces detection blind areas and truthfully reflects the detected area; and because contact tests between monitoring sight lines and the monitoring target object are fast, the detected area can be determined quickly.

Description

Endoscope blind area coverage detection method, system, equipment and storage medium
Technical Field
The invention relates to the medical field, in particular to a method, system, equipment and storage medium for detecting endoscope blind area coverage.
Background
An endoscope is an optical instrument that is introduced into the body through a natural orifice to examine internal disease. It allows direct observation of lesions in internal organ cavities, determination of their position and extent, and photography, biopsy, or brush sampling, which greatly improves the accuracy of cancer diagnosis and also enables certain treatments.
Most existing endoscope blind area monitoring methods combine a pose sensor with deep learning: the deep learning model identifies key parts of the area to be examined, and coverage is judged by whether all key parts have been identified. However, the key parts cannot represent the whole area, so a risk of missed detection remains.
In current endoscopy, less experienced physicians are prone to missing part of the area, so the examination does not fully cover the region of interest. A blind area monitoring method is therefore needed that distinguishes inspected from uninspected regions across the entire area to be examined, helping doctors perform rapid, full-coverage endoscopy.
Disclosure of Invention
To overcome the defects and shortcomings of the prior art, the invention discloses a method for detecting endoscope blind area coverage. It determines the detected area from the contact information between monitoring sight lines and the monitoring target object, which effectively reduces detection blind areas and truthfully reflects the detected area. The method comprises the following steps:
acquiring a three-dimensional structure of a monitoring target object;
acquiring pose information and view field information of the endoscope;
acquiring a monitoring line group according to the pose information and the view field information, wherein the monitoring line group consists of a plurality of monitoring sights;
acquiring contact information of the monitoring line group and the monitoring target object according to the monitoring line group and the three-dimensional structure;
and determining a contact area detected under the current pose of the endoscope according to the contact information.
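The five steps above can be sketched end to end. In the following toy example (a hedged illustration, not the patent's implementation), an analytic sphere stands in for the reconstructed three-dimensional structure, and the monitoring line group is a bundle of rays sampled inside the endoscope's cone of view; all function names are illustrative:

```python
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Distance to the nearest ray/sphere intersection, or None on a miss
    (step 4: contact information of one monitoring sight line)."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

def detect_contact_area(scope_pos, scope_dir, half_angle, view_dist,
                        center, radius, n_rays=64):
    """Steps 2-5: sample a monitoring line group inside the cone of view
    and keep the nearest contact point of every monitoring sight line."""
    rng = np.random.default_rng(0)
    contacts = []
    for _ in range(n_rays):
        # tilt the optical axis by a random angle within the cone
        perturb = rng.normal(size=3)
        perturb -= np.dot(perturb, scope_dir) * scope_dir
        perturb /= np.linalg.norm(perturb)
        theta = rng.uniform(0.0, half_angle)
        d = np.cos(theta) * scope_dir + np.sin(theta) * perturb
        t = ray_sphere_hit(scope_pos, d, center, radius)
        if t is not None and t <= view_dist:    # within the viewing distance
            contacts.append(scope_pos + t * d)  # nearest contact point
    return np.array(contacts)

# endoscope at the origin looking along +z at a spherical "organ wall"
contacts = detect_contact_area(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                               half_angle=np.radians(20), view_dist=10.0,
                               center=np.array([0.0, 0.0, 5.0]), radius=2.0)
```

Each returned point is the nearest contact of one monitoring sight line, which is exactly the per-ray information that the final step consumes.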
Still further, the acquiring of the three-dimensional structure of the monitoring target object comprises the following steps:
acquiring a two-dimensional medical image of the monitoring target object;
acquiring a first characteristic parameter according to the two-dimensional medical image;
scanning the monitoring target object to obtain a basic three-dimensional structure;
acquiring a second characteristic parameter according to the basic three-dimensional structure;
and inputting the first characteristic parameters and the second characteristic parameters into a deep learning model for registration processing to obtain a three-dimensional structure.
Still further, the pose information includes at least a spatial position and a pose matrix.
Further, the view field information includes at least viewing distance information and viewing angle information;
the acquiring the monitoring line group according to the pose information and the view field information comprises:
determining a monitoring plane and a monitoring starting point according to the pose information and the view field information;
and acquiring the monitoring line group according to the monitoring starting point and the monitoring plane.
Further, determining the contact area detected at the current pose of the endoscope according to the contact information includes:
when a monitoring sight line has no contact point on the monitoring target object, the corresponding area is an untouched area;
when a monitoring sight line has one or more contact points on the monitoring target object, selecting the area containing the contact point closest to the endoscope as the contact area.
Still further, the acquiring pose information and view field information of the endoscope includes:
controlling the endoscope to change the pose, and acquiring pose information and view field information of the endoscope under different poses;
the acquiring the contact information of the monitoring line group and the monitoring target object according to the monitoring line group and the three-dimensional structure comprises:
acquiring a plurality of contact information of the monitoring line group and the monitoring target object under different poses;
determining the detected contact area in the current pose of the endoscope according to the contact information comprises:
acquiring a plurality of contact areas corresponding to the plurality of pieces of contact information; and displaying the integrated contact areas on the three-dimensional structure.
Still further, integrating the plurality of contact areas for display on the three-dimensional structure comprises:
recording the plurality of contact areas in real time;
marking each contact area in real time and displaying it on the three-dimensional structure through animation.
In another aspect, the present application further provides an endoscope blind area coverage detection system, comprising:
a three-dimensional structure acquisition module, used for acquiring the three-dimensional structure of the monitoring target object;
a monitoring information acquisition module, used for acquiring the pose information and view field information of the endoscope;
a monitoring sight line acquisition module, used for acquiring the monitoring line group according to the pose information and the view field information;
a contact information acquisition module, used for acquiring the contact information between the monitoring line group and the monitoring target object according to the monitoring line group and the three-dimensional structure;
an area determination module, used for determining the contact area detected at the current pose of the endoscope according to the contact information.
In a third aspect, the present application also provides an electronic device comprising a processor and a memory, the memory storing at least one instruction, program, code set, or instruction set that is loaded and executed by the processor to implement the endoscope blind area coverage detection method described above.
In a fourth aspect, the present application further provides a computer-readable storage medium storing at least one instruction, program, code set, or instruction set that is loaded and executed by a processor to perform the endoscope blind area coverage detection method described above.
The embodiment has the following effects:
1. The detected area is determined from the contact information between monitoring sight lines and the monitoring target object, which effectively reduces detection blind areas and truthfully reflects the detected area; contact tests between monitoring sight lines and the monitoring target object are fast, so the detected area can be determined quickly.
2. The detected area is recorded and marked on the three-dimensional structure in real time; because contact detection between monitoring sight lines and the monitoring target object is fast, contact information is obtained quickly and the real-time record on the three-dimensional structure shows no lag.
Drawings
To illustrate the technical solution of the present invention more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings described here represent only some embodiments of the invention; those skilled in the art can derive further drawings from them without creative effort.
FIG. 1 is a flowchart of a method for detecting coverage of a blind area of an endoscope according to an embodiment of the present invention;
fig. 2 is a flowchart of a three-dimensional structure obtaining method according to an embodiment of the present invention;
FIG. 3 is a schematic view of a cone-shaped field of view provided by an embodiment of the present invention;
FIG. 4 is a schematic view illustrating the contact effect between a cone-shaped field of view and a monitoring target object according to an embodiment of the present invention;
fig. 5 is a schematic view illustrating a contact effect between a monitoring sight line and a monitoring target object according to an embodiment of the present invention;
fig. 6 is a block diagram of a system for detecting coverage of a blind area of an endoscope according to an embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Examples
As an optical instrument, an endoscope is introduced into the patient to directly observe lesions in internal organs. During the examination the doctor must cover the organ thoroughly: a large undetected blind area can seriously affect diagnosis, and less experienced doctors easily miss part of the area. Existing endoscopes cannot distinguish a detected area from an undetected one. This embodiment therefore provides an endoscope blind area coverage detection method that intuitively distinguishes undetected from detected areas by monitoring whether sight lines contact the monitoring target object. As shown in fig. 1, the method comprises the following steps:
s1: acquiring a three-dimensional structure of a monitoring target object;
in order to facilitate a doctor to visually check the current detection condition, the current three-dimensional structure of a monitoring target object needs to be acquired and displayed on a display device, the doctor uses an endoscope to detect and simultaneously display the whole three-dimensional structure on the display device, after the endoscope detects an area, the area is displayed on the display device and is detected, in order to improve the detection accuracy, the three-dimensional structure also needs to truly reflect the structure of the monitoring target object, the conventional three-dimensional structure acquisition method is to acquire a plurality of two-dimensional medical images, input the two-dimensional medical images into a depth learning model to generate the three-dimensional structure, but the three-dimensional structure generated by the monitoring target object in the two-dimensional medical images belongs to two expression forms, and the three-dimensional coordinate systems of the three-dimensional images are different, so that the three-dimensional structures are not completely overlapped in a three-dimensional space and the true three-dimensional form of the monitoring target object cannot be accurately reflected, therefore, the embodiment improves the method for obtaining the three-dimensional structure, as shown in fig. 2, and the specific steps are as follows:
S11: acquiring a two-dimensional medical image of the monitoring target object;
S12: acquiring first characteristic parameters from the two-dimensional medical image;
S13: scanning the monitoring target object to obtain a basic three-dimensional structure;
S14: acquiring second characteristic parameters from the basic three-dimensional structure;
S15: inputting the first and second characteristic parameters into a deep learning model for registration to obtain the three-dimensional structure.
First, a two-dimensional medical image of the affected part is acquired with a two-dimensional acquisition device, for example by X-ray scanning, and the first characteristic parameters are extracted from it. In this embodiment the monitoring target object is the stomach, and the first characteristic parameters include, but are not limited to, the two-dimensional coordinates of parts such as the cardia, the lesser curvature, the fundus, the gastric body, and the pylorus. Next, a basic three-dimensional structure of the monitoring target object is acquired with a three-dimensional acquisition device, for example a CT scan of the patient's stomach, and the second characteristic parameters are extracted from the basic three-dimensional structure; these include, but are not limited to, the three-dimensional coordinates of the same parts. Finally, the first and second characteristic parameters are input into a deep learning model for registration, and the three-dimensional structure is generated once registration finishes. Registration applies a synchronized deformation to the three-dimensional coordinates of each part of the basic structure, adjusting the size, orientation, and position of each part so that the second characteristic parameters fit the first, i.e., so that the two-dimensional medical image becomes a projection of the adjusted three-dimensional structure at some angle. The resulting three-dimensional structure then truthfully reflects the actual stomach, improving the detection accuracy.
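The "synchronized deformation" registration is described only at a high level. One common way to realize a rigid-plus-scale fit between the two landmark sets is a least-squares similarity transform (Umeyama's method); the sketch below is an assumption about how such a registration could be implemented, not the patent's deep learning model, and the toy landmark coordinates are illustrative:

```python
import numpy as np

def similarity_align(src, dst):
    """Least-squares scale s, rotation R, translation t with
    dst ~ s * R @ src + t (Umeyama), fitting the second characteristic
    parameters (3-D landmarks) onto the registration target."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)           # cross-covariance of the sets
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflection
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# toy landmarks standing in for cardia / lesser curvature / fundus / pylorus
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
dst = 2.0 * src + np.array([1.0, -1.0, 0.5])   # scaled and shifted copy
s, R, t = similarity_align(src, dst)
```

On this toy input the recovered transform is exactly the scale 2 and shift used to build `dst`; on real landmark sets the fit is least-squares, and any residual would be handled by the non-rigid part of the registration.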
The size and angle of the monitored area differ with the endoscope's position, orientation, and specification, so before the monitoring target object is examined, the pose information and view field information of the endoscope must be acquired. The specific steps are as follows:
s2: acquiring pose information and view field information of the endoscope;
the current pose information of the endoscope is acquired with a pose sensor built into the front end of the endoscope; the pose information includes, but is not limited to, a spatial position and a pose matrix;
the view field information includes, but is not limited to, viewing distance information and viewing angle information; from the endoscope's specific position, pose matrix, and view field information, the current detection area of the endoscope is determined.
While examining the monitoring target object, the endoscope moves and rotates along a preset track; by continuously changing its pose information, more areas inside the monitoring target object are examined, and the detected areas are recorded and marked on the three-dimensional structure. The specific steps are:
S21: controlling the endoscope to change pose, and acquiring the pose information and view field information of the endoscope;
the endoscope is controlled to change the pose according to a preset track, the preset track comprises an inspection track of a doctor on a monitored target object according to experience of the doctor, or the type of the monitored target object is input into a deep learning model, a set of inspection tracks are automatically generated by combining a first characteristic parameter and a second characteristic parameter in a three-dimensional structure, visceral organs of each person are different, a set of inspection tracks are respectively generated according to the first characteristic parameter and the second characteristic parameter of different patients, and the inspection of the endoscope is more thorough and comprehensive.
S3: acquiring a monitoring line group according to the pose information and the view field information, wherein the monitoring line group consists of a plurality of monitoring sights;
As shown in fig. 3, with the endoscope position as the starting point, the endoscope casts monitoring sight lines outward to form a conical monitoring field of view, where α is the detection viewing angle of the endoscope and L is its monitoring viewing distance. Once the pose information of the endoscope and the three-dimensional structure of the monitoring target object are known, the region the endoscope projects onto the monitoring target object, i.e., the monitoring plane β, can be computed from the known α and L by a parametric equation. Whether an area is detected could be judged by testing whether the cone collides with, i.e., contacts, the monitoring target object, but collision-testing the whole conical field of view easily produces monitoring blind areas. As shown in fig. 4, when the monitoring target object has a small bulge, only the plane γ near the endoscope is actually visible, while the plane δ behind it is occluded; a whole-cone test still collides with the bulge, so the system would wrongly consider the occluded plane δ detected. The conical field of view is therefore divided into a number of monitoring sight lines Y, and the detected area is determined by the collisions of the individual sight lines with the monitoring target object. Colliding sight lines against the monitoring target object is also faster than colliding the whole cone. The monitoring sight lines are acquired as follows:
S31: determining a monitoring plane and a monitoring starting point according to the pose information and the view field information;
First, the current pose information of the endoscope is acquired; from it the monitoring starting point, i.e., the specific position of the endoscope, is obtained. The monitoring plane is then determined from the view field information: once the endoscope's specific position, viewing angle, and viewing distance are known, the plane that the conical field of view projects onto the monitoring target object is fixed.
S32: and acquiring a monitoring line group according to the monitoring starting point and the monitoring plane.
From the determined monitoring starting point and monitoring plane, the monitoring sight lines emitted from the starting point are determined: the monitoring plane is divided uniformly into several regions by coordinates, and one monitoring sight line is emitted from the starting point toward each region. The area detected by the endoscope is judged by the collision of each sight line with its region. After the monitoring line group is acquired, the detected area is determined from the contact information between the monitoring sight lines and the monitoring target object. The contact information is acquired as follows:
S4: acquiring the contact information between the monitoring line group and the monitoring target object according to the monitoring line group and the three-dimensional structure;
S41: acquiring the contact information between the monitoring sight lines and the monitoring target object at the different poses;
Whether an area is detected is judged by whether a monitoring sight line emitted from the endoscope contacts the monitoring target object. As the endoscope changes pose, the region it projects onto the monitoring target object changes, and the contact information at each pose is recorded in real time.
S5: and determining the contact area detected under the current pose of the endoscope according to the contact information.
When a monitoring sight line has no contact point on the monitoring target object, the corresponding area is an untouched area;
when a monitoring sight line has one or more contact points on the monitoring target object, the area containing the contact point closest to the endoscope is taken as the contact area.
As shown in fig. 5, when a monitoring sight line collides with the small bulge of the monitoring target object, there is a first collision point, i.e., a contact point, and the area around it is detectable; that area is the contact area. To detect the plane behind the bulge, a monitoring sight line must actually hit that plane: a single sight line only decides whether the area at its own first collision point is detected, while many sight lines together make up the complete conical monitoring field of view. As the endoscope moves during the examination, the areas it detects are marked on the three-dimensional structure accordingly. If a small region inside a largely examined area remains unmarked, it is probably occluded and undetected; the doctor can re-examine that blind area according to the marks until it too is marked on the three-dimensional structure. In short, the projection plane of the endoscope on the monitoring target object is divided into uniform regions, one monitoring sight line is cast at each region, and the region of the collision point is judged detected; when a sight line has several collision points, the region of the collision point closest to the endoscope is the monitored one. Colliding sight lines with the monitoring target object is faster than colliding the whole cone, and the currently detected area is reflected more truthfully.
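The nearest-collision rule illustrated in fig. 5 is a standard ray-casting test. Below is a minimal sketch using the Möller-Trumbore ray/triangle intersection, with a small "bulge" triangle occluding a larger "wall" triangle behind it (both geometries are illustrative):

```python
import numpy as np

def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: distance t along the ray to the triangle,
    or None on a miss."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:          # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = np.dot(d, q) * inv
    if v < 0 or u + v > 1:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None

def nearest_contact(orig, d, triangles):
    """Rule above: zero hits means untouched; otherwise the face of the
    hit closest to the endoscope is the contact area."""
    hits = [(t, i) for i, tri in enumerate(triangles)
            if (t := ray_triangle(orig, d, *tri)) is not None]
    return min(hits) if hits else None

# a small "bulge" at z=2 occludes the larger "wall" at z=5 along this ray
bump = (np.array([-1.0, -1, 2]), np.array([1.0, -1, 2]), np.array([0.0, 1, 2]))
wall = (np.array([-5.0, -5, 5]), np.array([5.0, -5, 5]), np.array([0.0, 5, 5]))
hit = nearest_contact(np.zeros(3), np.array([0.0, 0, 1]), [bump, wall])
```

The sight line hits both faces, but only the bulge (the nearer hit, at distance 2) counts as the contact area, exactly the occlusion behavior fig. 5 describes.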
S51: acquiring a plurality of contact areas corresponding to the plurality of pieces of contact information; displaying the integrated contact areas on the three-dimensional structure.
The contact area corresponding to each monitoring sight line can be displayed on the three-dimensional structure; integrating the contact areas of all monitoring sight lines makes every currently detected area directly visible on the three-dimensional structure.
S511: recording the plurality of contact areas in real time;
S512: marking each contact area in real time and displaying it on the three-dimensional structure through animation;
the endoscope detects the contact information acquired by the monitoring target object through different poses, the contact information is recorded in real time through the three-dimensional structure and displayed through animation, the contact detection speed of the monitoring sight line and the monitoring target object is high, the time consumption for acquiring the contact information is short, the detected area is recorded in real time on the three-dimensional structure, delay is avoided, a doctor can know that the current detection contains a blind area at the first time, the pose of the endoscope can be adjusted immediately, the blind area is detected, and the diagnosis accuracy of diseases is effectively improved.
As shown in fig. 6, this embodiment further provides an endoscope blind area coverage detection system that implements all the functions of the above method. The system includes:
the three-dimensional structure acquisition module 601, used for acquiring the three-dimensional structure of the monitoring target object;
the monitoring information acquisition module 602, used for acquiring the pose information and view field information of the endoscope;
the monitoring sight line acquisition module 603, used for acquiring the monitoring line group according to the pose information and the view field information;
the contact information acquisition module 604, used for acquiring the contact information between the monitoring line group and the monitoring target object according to the monitoring line group and the three-dimensional structure;
the area determination module 605, used for determining the detected contact area at the current pose of the endoscope according to the contact information.
Embodiments of the present invention also provide an electronic device comprising a processor and a memory, where at least one instruction, at least one program, a code set, or an instruction set is stored in the memory and is loaded and executed by the processor to implement the endoscope blind area coverage detection method of the method embodiments.
Embodiments of the present invention also provide a storage medium that can be disposed in a server to store at least one instruction, at least one program, a code set, or an instruction set related to implementing the endoscope blind area coverage detection method of the method embodiments, where the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the endoscope blind area coverage detection method provided by the above method embodiments.
Alternatively, in this embodiment, the storage medium may be located in at least one network server of a plurality of network servers of a computer network. Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
As can be seen from the above embodiments of the endoscope blind area coverage detection method, system, electronic device, and storage medium provided by the present invention, the detected area of the endoscope is determined according to the contact information between the monitoring sight lines emitted by the endoscope and the monitoring target object, so that the blind area during detection can be reduced; and compared with computing contact information between a panoramic view-field cone and the monitoring target object, collision detection between the monitoring sight lines and the monitoring target object is faster and the detection time is shorter.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system and server embodiments, since they are substantially similar to the method embodiments, the description is simple, and reference may be made to some descriptions of the method embodiments for relevant points.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The embodiment has the following effects:
1. The detected area is determined from the contact information between the monitoring sight lines and the monitoring target object, which effectively reduces detection blind areas and truly reflects the detected area; contact detection between the monitoring sight lines and the monitoring target object is fast and takes little time, so the detected area can be determined quickly.
2. The detected area is recorded and marked on the three-dimensional structure in real time; because contact detection between the monitoring sight lines and the monitoring target object is fast and acquiring the contact information takes little time, the real-time recording of the detected area on the three-dimensional structure does not lag.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. An endoscope blind area coverage detection method is characterized by comprising the following steps:
acquiring a three-dimensional structure of a monitoring target object;
acquiring pose information and view field information of the endoscope;
acquiring a monitoring line group according to the pose information and the view field information, wherein the monitoring line group consists of a plurality of monitoring sights;
acquiring contact information of the monitoring line group and the monitoring target object according to the monitoring line group and the three-dimensional structure;
and determining a contact area detected under the current pose of the endoscope according to the contact information.
2. The method for detecting the coverage of the blind area of the endoscope as claimed in claim 1, wherein the acquiring a three-dimensional structure of a monitoring target object comprises the steps of:
acquiring a two-dimensional medical image of the monitoring target object;
acquiring a first characteristic parameter according to the two-dimensional medical image;
scanning the monitoring target object to obtain a basic three-dimensional structure;
acquiring a second characteristic parameter according to the basic three-dimensional structure;
and inputting the first characteristic parameters and the second characteristic parameters into a deep learning model for registration processing to obtain a three-dimensional structure.
3. The method according to claim 1, wherein the pose information at least comprises a spatial position and a pose matrix.
4. The method for detecting the coverage of the blind area of the endoscope according to claim 1,
the field of view information at least comprises visual range information and visual angle information;
the acquiring the monitoring line group according to the pose information and the view field information comprises:
determining a monitoring plane and a monitoring starting point according to the pose information and the view field information;
and acquiring the monitoring line group according to the monitoring starting point and the monitoring plane.
5. The endoscope blind area coverage detection method according to claim 1, wherein determining the contact area detected in the current pose of the endoscope according to the contact information comprises:
when the number of contact points of the same monitoring sight line on the monitoring target object is less than 1, determining that the corresponding area is an untouched area;
and when the number of the contact points of the same monitoring sight line on the monitoring target object is more than or equal to 1, selecting the area of the contact point closest to the monitoring target object as the contact area.
6. The method for detecting the coverage of the blind area of the endoscope according to claim 1,
the acquiring pose information and view field information of the endoscope comprises:
controlling the endoscope to change the pose, and acquiring pose information and view field information of the endoscope under different poses;
the acquiring the contact information of the monitoring line group and the monitoring target object according to the monitoring line group and the three-dimensional structure comprises:
acquiring a plurality of contact information of the monitoring line group and the monitoring target object under different poses;
determining the detected contact area in the current pose of the endoscope according to the contact information comprises:
acquiring a plurality of contact areas corresponding to the plurality of pieces of contact information; and integrating and displaying the plurality of contact areas on the three-dimensional structure.
7. The method according to claim 6, wherein the integrating and displaying the plurality of contact areas on the three-dimensional structure comprises:
recording a plurality of the contact areas in real time;
the contact area is marked in real time and displayed in the three-dimensional structure through animation.
8. An endoscope blind area coverage detection system, comprising:
a three-dimensional structure acquisition module: configured to acquire a three-dimensional structure of a monitoring target object;
a monitoring information acquisition module: configured to acquire pose information and view field information of the endoscope;
a monitoring sight line acquisition module: configured to acquire a monitoring line group according to the pose information and the view field information;
a contact information acquisition module: configured to acquire contact information of the monitoring line group and the monitoring target object according to the monitoring line group and the three-dimensional structure;
a region determination module: configured to determine a contact area detected in the current pose of the endoscope according to the contact information.
9. An electronic device, comprising a processor and a memory, wherein at least one instruction, at least one program, a code set, or an instruction set is stored in the memory and is loaded and executed by the processor to implement the endoscope blind area coverage detection method according to any one of claims 1 to 7.
10. A computer-readable storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to perform the endoscope blind area coverage detection method according to any one of claims 1 to 7.
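Claims 6 and 7 describe accumulating contact areas over a sequence of endoscope poses and marking them in real time. The following is a minimal sketch of that accumulation, with an injected `detect` callback standing in for the per-pose sight-line casting of claim 1; all names are hypothetical:

```python
def accumulate_coverage(poses, fov_deg, triangles, detect):
    """Union the contact regions detected over a sequence of poses
    (claim 6) and keep a per-pose record for real-time marking and
    animated display (claim 7). Returns the covered set, the per-pose
    timeline, and the remaining blind fraction."""
    covered = set()
    timeline = []
    for origin, rotation in poses:
        # detect(...) returns the set of surface elements hit at this pose
        covered |= detect(origin, rotation, fov_deg, triangles)
        timeline.append(frozenset(covered))  # snapshot for display
    blind = 1.0 - len(covered) / max(len(triangles), 1)
    return covered, timeline, blind
```

A nonzero `blind` fraction is what would prompt the doctor to adjust the endoscope pose and re-scan the untouched region.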
CN202111566096.4A 2021-12-20 2021-12-20 Endoscope blind area coverage detection method, system, equipment and storage medium Pending CN114365992A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111566096.4A CN114365992A (en) 2021-12-20 2021-12-20 Endoscope blind area coverage detection method, system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111566096.4A CN114365992A (en) 2021-12-20 2021-12-20 Endoscope blind area coverage detection method, system, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114365992A true CN114365992A (en) 2022-04-19

Family

ID=81140838

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111566096.4A Pending CN114365992A (en) 2021-12-20 2021-12-20 Endoscope blind area coverage detection method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114365992A (en)

Similar Documents

Publication Publication Date Title
US20220192611A1 (en) Medical device approaches
JP6395995B2 (en) Medical video processing method and apparatus
JP4829217B2 (en) Data set visualization
US20140303499A1 (en) Ultrasound diagnostic apparatus and method for controlling the same
US20090005640A1 (en) Method and device for generating a complete image of an inner surface of a body cavity from multiple individual endoscopic images
EP2390839A1 (en) 3D ultrasound apparatus and method for operating the same
Beigi et al. Needle trajectory and tip localization in real-time 3-D ultrasound using a moving stylus
JP6716853B2 (en) Information processing apparatus, control method, and program
JP2017526467A (en) Quality metrics for multibeat echocardiography for immediate user feedback
US8237784B2 (en) Method of forming virtual endoscope image of uterus
TW201347737A (en) Breast ultrasound scanning and diagnosis aid system
CN107527379B (en) Medical image diagnosis apparatus and medical image processing apparatus
JPH11104072A (en) Medical support system
US20220409030A1 (en) Processing device, endoscope system, and method for processing captured image
CN114532965A (en) Real-time lung cancer focus recognition system under thoracoscope
CN112150543A (en) Imaging positioning method, device and equipment of medical imaging equipment and storage medium
US9123163B2 (en) Medical image display apparatus, method and program
JP2007014483A (en) Medical diagnostic apparatus and diagnostic support apparatus
KR102133695B1 (en) The needle guide system and method for operating the same
WO2006060373A2 (en) Ultrasonic image and visualization aid
CN114365992A (en) Endoscope blind area coverage detection method, system, equipment and storage medium
KR101014562B1 (en) Method of forming virtual endoscope image of uterus
KR20220122312A (en) Artificial intelligence-based gastroscopy diagnosis supporting system and method
JP5121163B2 (en) Cross-sectional image capturing device
CN111658141B (en) Gastrectomy port position navigation system, gastrectomy port position navigation device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination