CN117814912A - Positioning module, positioning method and robot system - Google Patents

Positioning module, positioning method and robot system

Info

Publication number
CN117814912A
Authority
CN
China
Prior art keywords
point cloud
cloud set
frames
signal
period
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410026749.7A
Other languages
Chinese (zh)
Inventor
Name not published at the inventor's request
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Microport Medbot Group Co Ltd
Original Assignee
Shanghai Microport Medbot Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Microport Medbot Group Co Ltd filed Critical Shanghai Microport Medbot Group Co Ltd
Priority to CN202410026749.7A priority Critical patent/CN117814912A/en
Publication of CN117814912A publication Critical patent/CN117814912A/en
Pending legal-status Critical Current


Abstract

The invention provides a positioning module, a positioning method and a robot system. The positioning module is configured to perform the following steps: acquiring a first point cloud set of the space occupied by a target pipeline; acquiring multiple frames of point cloud sets of the space occupied by a serpentine motion object; and, based on these point cloud sets and the first point cloud set, outputting a first signal at the appropriate respiratory phase moments and a second signal at all other moments. The first signal and the second signal provide a basis for logic judgment to other methods or devices working cooperatively. So configured, the serpentine motion object is positioned through the point cloud sets, and the influence of the patient's respiratory fluctuation on the positioning is eliminated through the matching degree, thereby solving the prior-art problem of inaccurate positioning during lung puncture operations.

Description

Positioning module, positioning method and robot system
Technical Field
The invention relates to the technical field of intra-operative navigation algorithms, in particular to a positioning module, a positioning method and a robot system.
Background
There are two types of lung aspiration biopsy procedures: bronchoscopic puncture and percutaneous lung puncture. In a bronchoscopic puncture procedure, a slender bronchoscope is introduced into the patient's lower respiratory tract through the oral or nasal cavity so that lesions of the trachea and bronchi can be observed directly, and puncture biopsy and treatment are performed according to the condition of the lesion. Percutaneous lung aspiration biopsy is currently a common method for diagnosing peripheral parenchymal lesions: the position of the lung lesion is determined by CT or X-ray scanning, and a puncture needle is passed through the chest skin into the lesion to obtain biopsy tissue. Because the two procedures use different surgical approaches and operating rooms and equipment are heavily booked, it is usually impossible to switch to the other procedure immediately when one of them fails. In addition, because the patient's respiratory motion prevents the puncture position and direction planned before the operation from accurately reaching the lesion during the operation, CT, CBCT (cone-beam CT), fluoroscopic imaging or ultrasound imaging must be used repeatedly during the operation to confirm that the puncture has succeeded.
The above approaches have the following defects or shortcomings:
1) The puncture position and direction of a conventional puncture operation are planned from the preoperative CT; because of respiratory motion during the operation, the needle may fail to reach the lesion area when the original plan is followed, so the success rate of the operation is low.
2) A conventional puncture operation requires repeated CT, CBCT, X-ray or fluoroscopic imaging to determine whether the puncture needle has accurately entered the lesion area, which exposes doctors to radiation and prolongs the operation.
3) The electromagnetic sensors used in conventional puncture operations cannot be used at the same time as CT imaging equipment. If a device such as CT is needed to confirm the puncture path or to verify whether the puncture has succeeded, the magnetic field generator must be removed. Once the position of the magnetic field generator changes, the previous registration matrix, which reflects the magnetic field coordinate system relative to the CT coordinate system, is no longer accurate; if navigation and positioning are still required, the procedure must return to the starting point for re-registration, which increases the duration and complexity of the operation.
4) When the lesion cannot be reached through the bronchoscope, the procedure must be switched to percutaneous lung puncture, which often requires additional reservation of operating time, an operating room and CT equipment, increasing both the operation time and the risk.
In summary, the prior art suffers from inaccurate positioning during lung puncture operations. "Inaccurate" here should be understood as follows: to protect the patient's health, the choice of instruments is limited, and this limitation leads to inaccurate positioning; if highly accurate fluoroscopic instruments are used regardless, the problem becomes excessive physical harm to the patient, and that harm is in turn a consequence of the inaccuracy of the low-harm positioning means.
Disclosure of Invention
The invention aims to provide a positioning module, a positioning method and a robot system, which are used for solving the problem of inaccurate positioning in the lung puncture operation process in the prior art.
In order to solve the above technical problem, according to a first aspect of the present invention, there is provided a positioning module configured to perform the following steps: acquiring a first point cloud set of the space occupied by a target pipeline; acquiring, in a first preset time period, multiple frames of a second point cloud set of the space occupied by a serpentine motion object; acquiring a coordinate system conversion matrix based on the pairing relation between the second point cloud set and the first point cloud set; acquiring, in a second preset time period, multiple frames of a third point cloud set of the space occupied by the serpentine motion object; performing coordinate conversion on the third point cloud set based on the coordinate system conversion matrix to obtain a fourth point cloud set; if the pairing relation between the fourth point cloud set of the current frame and the first point cloud set meets the pairing success condition, outputting a first signal and positioning the serpentine motion object with the fourth point cloud set of the current frame; and, if the pairing relation between the fourth point cloud set of the current frame and the first point cloud set does not meet the pairing success condition, outputting a second signal.
The first signal and the second signal are used for providing a basis for logic judgment for other methods or devices working cooperatively.
Optionally, the first point cloud set is acquired based on a perspective imaging device; the first point cloud set acquired by the perspective imaging device has delay, or the working time of the perspective imaging device has limitation.
Optionally, a shape optical fiber is arranged on the serpentine motion object, and the second point cloud set and the third point cloud set are acquired based on the shape optical fiber; or, a plurality of electromagnetic sensing patches are arranged on the serpentine motion object, and the second point cloud set and the third point cloud set are acquired based on the electromagnetic sensing patches.
Optionally, the acquisition time of each frame in the second point cloud set is indicated by an externally input detection signal, and the acquisition time of each frame in the third point cloud set is indicated by the detection signal.
Optionally, the step of obtaining the coordinate system transformation matrix based on the pairing relationship between the second point cloud set and the first point cloud set includes:
acquiring the second point cloud set of the 1 st period; wherein the point cloud set for each period comprises a preset number of frames.
Registering the second point cloud set of the first frame with the first point cloud set to obtain an initial coordinate system conversion matrix.
Based on the initial coordinate system conversion matrix, converting the second point cloud set of the 1 st period to obtain K frames with highest matching degree; wherein K is a preset parameter.
Circularly acquiring the second point cloud set of the ith period; wherein i is an integer greater than 1.
And converting the second point cloud set of the ith period based on the initial coordinate system conversion matrix to obtain K frames with highest matching degree.
If the frame numbers of the K frames with the highest matching degree in the i-th period are adjacent, the frame numbers of the K frames with the highest matching degree in the (i-1)-th period are adjacent, and the deviation between the frame numbers of the K frames with the highest matching degree in the i-th period and those in the (i-1)-th period is within a first threshold, outputting the current initial coordinate system conversion matrix as the coordinate system conversion matrix; at the same time, stopping the cycle and switching to the second preset time period.
Otherwise, iterating the initial coordinate system conversion matrix using the intersection of all the second point cloud sets in the i-th period and starting the cycle of the next period.
Optionally, if the pairing relationship between the fourth point cloud set and the first point cloud set of the current frame meets a pairing success condition, outputting a first signal; if the pairing relation between the fourth point cloud set and the first point cloud set in the current frame does not meet the pairing success condition, the step of outputting a second signal comprises the following steps:
circularly acquiring the fourth point cloud set of the j-th period, and acquiring K frames with highest matching degree; wherein j is an integer greater than n, and n is the total number of cycles in the first preset time period.
If, for L consecutive periods, the frame numbers of the K frames with the highest matching degree in the j-th period are adjacent, the frame numbers of the K frames with the highest matching degree in the (j-1)-th period are adjacent, and the deviation between the frame numbers of the K frames with the highest matching degree in the j-th period and those in the (j-1)-th period is within a second threshold, outputting the first signal at the moment corresponding to the median frame of the K frames with the highest matching degree, and outputting the second signal at the moments corresponding to the other frames.
Otherwise, iterating the initial coordinate system conversion matrix using the K frames with the highest matching degree in the j-th period, starting the cycle of the next period, and outputting the second signal for every frame.
Wherein L is a preset parameter.
In order to solve the above technical problem, according to a second aspect of the present invention, there is provided a positioning method including: acquiring a first point cloud set of the space occupied by a target pipeline; acquiring, in a first preset time period, multiple frames of a second point cloud set of the space occupied by a serpentine motion object; acquiring a coordinate system conversion matrix based on the pairing relation between the second point cloud set and the first point cloud set; acquiring, in a second preset time period, multiple frames of a third point cloud set of the space occupied by the serpentine motion object; performing coordinate conversion on the third point cloud set based on the coordinate system conversion matrix to obtain a fourth point cloud set; if the pairing relation between the fourth point cloud set of the current frame and the first point cloud set meets the pairing success condition, outputting a first signal and positioning the serpentine motion object with the fourth point cloud set of the current frame; and, if the pairing relation between the fourth point cloud set of the current frame and the first point cloud set does not meet the pairing success condition, outputting a second signal. The first signal and the second signal are used for providing a basis for logic judgment for other methods or devices working cooperatively.
In order to solve the above technical problem, according to a third aspect of the present invention, there is provided a robot system including the above positioning module.
Optionally, the target pipeline is a bronchus, and the robot system includes a catheter configured as one serpentine motion object and a puncture needle configured as another serpentine motion object.
Optionally, the robot system further comprises an execution module for driving the catheter and/or the puncture needle; the execution module moves when the positioning module outputs the first signal, and the execution module stops moving when the positioning module outputs the second signal.
Optionally, the robot system includes a respiratory signal detection module, the respiratory signal detection module includes a respiratory patch, the respiratory patch is used for being attached to a chest or an oro-nasal part of a medical object, the respiratory signal detection module is used for inputting a detection signal to the positioning module so that the positioning module obtains a collection time of each frame of the second point cloud set, and a collection time of each frame of the third point cloud set.
Optionally, the robot system includes a visualization module, where the visualization module is configured to display the first point cloud set, and the second point cloud set and the fourth point cloud set converted based on the coordinate system conversion matrix.
When the positioning module outputs the first signal, the second point cloud set and the fourth point cloud set converted based on the coordinate system conversion matrix are displayed in a first mode; when the positioning module outputs the second signal, the second point cloud set and the fourth point cloud set converted based on the coordinate system conversion matrix are displayed in a second mode or hidden.
Compared with the prior art, in the positioning module, the positioning method and the robot system provided by the invention, the positioning module is configured to perform the following steps: acquiring a first point cloud set of the space occupied by a target pipeline; acquiring, in a first preset time period, multiple frames of a second point cloud set of the space occupied by a serpentine motion object; acquiring a coordinate system conversion matrix based on the pairing relation between the second point cloud set and the first point cloud set; acquiring, in a second preset time period, multiple frames of a third point cloud set of the space occupied by the serpentine motion object; performing coordinate conversion on the third point cloud set based on the coordinate system conversion matrix to obtain a fourth point cloud set; if the pairing relation between the fourth point cloud set of the current frame and the first point cloud set meets the pairing success condition, outputting a first signal and positioning the serpentine motion object with the fourth point cloud set of the current frame; and, if the pairing relation between the fourth point cloud set of the current frame and the first point cloud set does not meet the pairing success condition, outputting a second signal. The first signal and the second signal are used for providing a basis for logic judgment for other methods or devices working cooperatively. So configured, the serpentine motion object is positioned through the fourth point cloud set, and the influence of the patient's respiratory fluctuation on the positioning is eliminated through the matching degree, thereby solving the prior-art problem of inaccurate positioning during lung puncture operations.
Drawings
Those of ordinary skill in the art will appreciate that the figures are provided for a better understanding of the present invention and do not constitute any limitation on the scope of the present invention. Wherein:
FIG. 1 is a schematic diagram of a robotic system according to one embodiment of the invention;
FIG. 2 is a block diagram of a robotic system according to one embodiment of the invention;
FIG. 3 is a flow chart of a positioning method according to an embodiment of the invention;
FIG. 4 is a schematic view of a catheter according to an embodiment of the present invention;
FIG. 5 is a schematic view showing the structure of a puncture needle according to an embodiment of the present invention;
FIG. 6 is a diagram of shape fiber sensing positioning according to an embodiment of the present invention;
FIG. 7 is a flow chart of determining respiratory phase according to an embodiment of the present invention;
FIG. 8 is a schematic workflow diagram of an actuator according to an embodiment of the invention;
FIG. 9 is a schematic diagram of a set of point clouds according to an embodiment of the invention;
FIG. 10 is a schematic diagram of the presentation of a visualization module according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of the correspondence between point cloud movement and respiratory cycle according to an embodiment of the present invention;
FIG. 12 is a schematic illustration of a percutaneous pulmonary puncture examination scenario in accordance with an embodiment of the invention;
FIG. 13 is a schematic view of visual contents of a bronchoscopy puncture in accordance with an embodiment of the present invention;
Fig. 14 is a schematic view of the visual contents of percutaneous pulmonary puncture according to an embodiment of the present invention.
In the accompanying drawings:
1-a robotic system; 11-bronchoscopic robot; 12-an image display navigation device; 2-patient trolley; 3-lesions; 4-a puncture path; 5-lung; 6-thorax; 111-a robot trolley; 112-a mechanical arm; 113-a mounting plate; 114-a catheter; 115-puncture needle; 131-a positioning module; 132-an execution module; 133-a respiratory signal detection module; 134-visualization module; 135-a pose sensing module; 136-a puncture path planning module; 137-a data storage module;
1141-a camera; 1142-an illumination lamp; 1143-shaped optical fiber; 1144-a bendable section; 11431-bragg grating.
Detailed Description
The invention will be described in further detail with reference to the drawings and specific embodiments in order to make the objects, advantages and features of the invention more apparent. It should be noted that the drawings are in a very simplified form and are not drawn to precise scale; they are provided merely for convenience and clarity in describing the embodiments of the invention. Furthermore, the structures shown in the drawings are often only parts of the actual structures. In particular, different drawings have different emphases and are sometimes drawn to different scales.
As used in this disclosure, the singular forms "a," "an," and "the" include plural referents; the term "or" is generally used in the sense of "and/or"; the term "several" is generally used in the sense of "at least one"; the term "at least two" is generally used in the sense of "two or more"; and the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or the number of features indicated. Thus, a feature qualified by "first," "second," or "third" may explicitly or implicitly include one or at least two such features. The term "proximal" typically refers to the end close to the operator, and the term "distal" typically refers to the end close to the patient; "one end" and "the other end," like "proximal" and "distal," typically refer to corresponding portions rather than only the endpoints. The terms "mounted," "connected," and "coupled" are to be construed broadly, for example, as a fixed connection, a removable connection, or an integral piece; as a mechanical or electrical connection; as a direct connection or an indirect connection through an intermediate medium; or as communication between the interiors of two elements or an interaction relationship between two elements. Furthermore, as used in this disclosure, an element disposed on another element generally only means that the two elements are connected, coupled, cooperating or transmitting with each other, directly or indirectly through intermediate elements, and should not be construed as indicating or implying any spatial positional relationship between the two elements; that is, an element may be in any orientation relative to the other element, such as inside, outside, above, below or to one side, unless the context clearly indicates otherwise. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
The invention provides a positioning module, a positioning method and a robot system, which are used for solving the problem of inaccurate positioning in the lung puncture operation process in the prior art.
The following description refers to the accompanying drawings.
Referring to fig. 1, the present embodiment provides a robot system 1 that includes a bronchoscope robot 11 and an image display navigation device 12. Also shown in fig. 1 is a patient trolley 2 for supporting the medical object. Specifically, the bronchoscope robot 11 in turn includes a robot trolley 111, a mechanical arm 112, a mounting plate 113 and a catheter 114. The robot trolley 111 carries the mechanical arm 112, and its interior houses devices such as a control host and an image computing platform. The end of the mechanical arm 112 is connected to the mounting plate 113. The mounting plate 113 is provided with a sliding rail, through which a driver built into the mechanical arm controls the advance and retreat of the catheter in the airway. The catheter 114 contains at least one guide wire to control the bending of the catheter. The robot system also includes a puncture needle (not shown) for performing a lung puncture operation.
Referring to fig. 2, from another dimension division, the robotic system includes a positioning module 131, an execution module 132, a respiratory signal detection module 133, a visualization module 134, a pose sensing module 135, a puncture path planning module 136, and a data storage module 137.
The following description focuses on the positioning module 131 and then on the other modules in turn.
The purpose of the positioning module 131 mainly includes determining the position of the catheter 114 in the global coordinate system and determining the respiratory phase moment. Because the shape of the bronchi changes slightly with the patient's breathing, the respiratory phase moment is the specific time position, within one respiratory cycle of the patient, at which the bronchial model was built. For example, if one respiratory cycle of the patient comprises 30 frames, the respiratory phase moment is the frame number corresponding to the time at which the bronchial model was constructed.
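As a toy illustration of this frame-index interpretation, the sketch below maps a CT acquisition time onto a frame index within a 30-frame cycle. The numbers and the helper name are purely hypothetical; the patent does not prescribe any particular formula.

```python
# Toy sketch: express the respiratory phase moment as a frame index within one cycle.
# Numbers and the helper name are illustrative, not taken from the patent.
def phase_frame(ct_time: float, cycle_start: float, cycle_duration: float,
                frames_per_cycle: int = 30) -> int:
    frac = ((ct_time - cycle_start) % cycle_duration) / cycle_duration
    return int(round(frac * frames_per_cycle)) % frames_per_cycle

print(phase_frame(ct_time=101.2, cycle_start=100.0, cycle_duration=4.0))  # -> frame 9
```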
That is, the positioning module 131 is used to perform a positioning method.
Referring to fig. 3, the positioning method includes:
s10, acquiring a first point cloud set of the space occupied by the target pipeline.
S20, acquiring a second point cloud set of multiple frames of the space occupied by the serpentine motion object in a first preset time period.
S30, acquiring a coordinate system conversion matrix based on the pairing relation between the second point cloud set and the first point cloud set.
S40, in a second preset time period, acquiring a third point cloud set of multiple frames of the space occupied by the serpentine motion object.
S50, carrying out coordinate conversion on the third point cloud set based on the coordinate system conversion matrix to obtain a fourth point cloud set; if the pairing relation between the fourth point cloud set of the current frame and the first point cloud set meets the pairing success condition, outputting a first signal and positioning the serpentine motion object with the fourth point cloud set of the current frame; and if the pairing relation between the fourth point cloud set of the current frame and the first point cloud set does not meet the pairing success condition, outputting a second signal.
S60, the first signal and the second signal are used for providing a basis for logic judgment for other methods or devices working cooperatively.
Globally, the target pipeline is a bronchus; for the positioning method itself, however, whether the target pipeline is a bronchus does not affect its operation. The catheter is one serpentine motion object and the puncture needle is another. Outputting the first signal means that the current moment is the respiratory phase moment; outputting the second signal means that the current moment is not the respiratory phase moment, or, more rigorously, that the current moment cannot be judged to be the respiratory phase moment.
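The overall flow of steps S10 to S60 can be summarised in the following Python sketch. The acquisition, registration and pairing details are passed in as stubs, and all names are illustrative assumptions; the patent only fixes the ordering of the steps and the meaning of the two signals.

```python
# Minimal top-level sketch of steps S10-S60. Function names and the "signal" callbacks
# are illustrative, not the patent's interfaces.
import numpy as np

def positioning_loop(acquire_target_cloud, acquire_object_frames, register, transform,
                     pairing_ok, on_first_signal, on_second_signal):
    ct_cloud = acquire_target_cloud()                      # S10: first point cloud set
    second_set = acquire_object_frames("first period")     # S20: multi-frame second point cloud set
    T = register(second_set, ct_cloud)                     # S30: coordinate system conversion matrix
    for frame in acquire_object_frames("second period"):   # S40: third point cloud set, frame by frame
        fourth = transform(frame, T)                       # S50: fourth point cloud set
        if pairing_ok(fourth, ct_cloud):
            on_first_signal(fourth)                        # locate the serpentine object with this frame
        else:
            on_second_signal()                             # S60: cooperating devices react to the signals

# Toy usage with trivial stubs, purely to show the control flow:
positioning_loop(
    acquire_target_cloud=lambda: np.zeros((100, 3)),
    acquire_object_frames=lambda period: [np.zeros((10, 3))] * 3,
    register=lambda frames, ct: np.eye(4),
    transform=lambda frame, T: frame @ T[:3, :3].T + T[:3, 3],
    pairing_ok=lambda fourth, ct: True,
    on_first_signal=lambda cloud: print("first signal"),
    on_second_signal=lambda: print("second signal"),
)
```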
The first point cloud set is acquired based on a perspective imaging device, and the specific acquisition flow can be set according to actual needs. For example, images are captured simultaneously from different angles at the same moment, and the first point cloud set is then reconstructed from the captured images.
It is worth emphasizing that the first point cloud set is acquired based on a perspective imaging device, and that there is a delay in acquiring the first point cloud set with the perspective imaging device, or the working time of the perspective imaging device is limited. The perspective imaging device is, for example, a CT, CBCT or fluoroscopic imaging apparatus, an ultrasonic probe, an X-ray machine, or the like. The drawback of such apparatus is either a delay in acquiring the first point cloud set (e.g. CT), or a limit on how long it can be used, for example because prolonged X-ray exposure causes radiation damage to the patient. Because of these drawbacks, such perspective imaging apparatus cannot provide perspective images throughout the whole course of a lung puncture operation, and medical personnel and researchers have to seek other positioning methods. The second, third and fourth point cloud sets are in fact all point clouds of the serpentine motion object; they are given different designations only for ease of distinction.
The serpentine motion object is provided with a shape optical fiber, and the second point cloud set and the third point cloud set are acquired based on the shape optical fiber; alternatively, a plurality of electromagnetic sensing patches are arranged on the serpentine motion object, and the second point cloud set and the third point cloud set are acquired based on the electromagnetic sensing patches.
In a point cloud set, at least some of the points are obtained by direct sampling; when the amount of data is insufficient, methods such as interpolation can be used to add data. In a multi-frame point cloud set, at least some of the data of the frames is obtained by direct sampling, and interpolation or similar methods can likewise be used to add data when the amount is insufficient. These means can be selected according to actual needs and are not described in detail here.
The process of converting and calculating the output signal of the shaped optical fiber to obtain the point cloud, and the process of converting and calculating the output signal of the electromagnetic sensing patch to obtain the point cloud can be understood according to the common general knowledge in the art, and will not be described herein.
The structure of the catheter is shown in fig. 4, which is a schematic cross-sectional view. The catheter comprises a camera 1141, an illumination lamp 1142 and a shape optical fiber 1143. At least one guide wire (not shown) is threaded inside the catheter and can be extended and shortened so that the catheter tip can be bent in at least one direction. A working channel is reserved in the middle of the catheter so that biopsy, ablation and other instruments can pass through. A monocular camera (i.e., camera 1141) is fitted at the catheter tip to observe the interior of the bronchus and capture real endoscopic images for surgical navigation. At least one illumination lamp 1142 is mounted at the distal end of the catheter to provide a light source.
At least one shape optical fiber 1143 runs through the interior of the catheter and is used to accurately sense the shape of the catheter inside the patient in real time and to locate the position and posture of the catheter tip. In one embodiment, four shape optical fibers 1143 are uniformly and symmetrically distributed in the catheter.
As shown in fig. 5, at least one shape optical fiber is inserted into the puncture needle 115 so that the position and posture of the puncture needle can be acquired in real time. In fig. 5, the upper half is a cross-section and the lower half is a side view.
In this embodiment, one shape optical fiber is arranged along the center of the needle.
Under navigation guidance, the doctor adjusts the direction and depth of the puncture needle according to the real-time position information so as to ensure that the target area is accurately and safely reached.
The shape optical fiber can also be replaced by a plurality of electromagnetic sensing patches. Each electromagnetic sensing patch represents one spatial point in the electromagnetic coordinate system, and the spatial coordinates between the patches can be interpolated. All the shape optical fibers of this embodiment can therefore be replaced by a plurality of equidistantly distributed electromagnetic patches, which serves as a second embodiment of shape sensing.
As shown in fig. 6, the positioning principle of the shape optical fiber is as follows: the catheter contains at least one shape optical fiber, each extending over the entire length of the catheter and measuring the shape and strain changes of the whole catheter. A number of fiber Bragg gratings 11431 are distributed equidistantly along each fiber; they sense the strain of the catheter, which appears as a wavelength change of the reflected light, and the wavelength change is then converted into spatial coordinate information, thereby providing a positioning function. The positions of the discrete points between the sensors can be obtained by interpolation, which describes the three-dimensional shape of the whole fiber section, i.e. the whole catheter section. Fig. 6 also shows a bendable section 1144 of the catheter.
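The interpolation between the discrete grating points can be sketched as follows. The cubic-spline choice and the example coordinates are assumptions for illustration only; the patent only states that the points between the sensors are obtained by interpolation.

```python
# Minimal sketch: interpolate discrete FBG sensor points into a dense 3D fiber shape.
import numpy as np
from scipy.interpolate import CubicSpline

def densify_fiber_shape(sensor_points: np.ndarray, samples_per_segment: int = 10) -> np.ndarray:
    """sensor_points: (N, 3) 3D points measured at the Bragg gratings.
    Returns a denser (M, 3) polyline describing the whole fiber section."""
    # Parameterize by cumulative chord length along the fiber.
    seg = np.linalg.norm(np.diff(sensor_points, axis=0), axis=1)
    s = np.concatenate(([0.0], np.cumsum(seg)))
    spline = CubicSpline(s, sensor_points, axis=0)
    s_dense = np.linspace(0.0, s[-1], samples_per_segment * (len(sensor_points) - 1) + 1)
    return spline(s_dense)

# Example: four grating positions along a gently bending catheter tip (hypothetical values).
gratings = np.array([[0, 0, 0], [0, 0, 10], [1, 0, 20], [3, 1, 29]], dtype=float)
print(densify_fiber_shape(gratings).shape)  # (31, 3)
```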
The shaped optical fiber has the following advantages:
1) The optical fiber has good flexibility, and is a first-choice sensing element for shape measurement of the flexible robot.
2) The optical fiber is not affected by electromagnetic interference, and can be used with preoperative CT and intraoperative CBCT, X-ray, fluoroscopic imaging and the like.
3) The optical fiber can not only locate the end of the catheter, but also describe the shape and posture of the entire catheter.
In one embodiment, each optical fiber has its own coordinate system zero point, which is physically fixed at a fixed position on the mechanical arm. The coordinate systems of the optical fibers can therefore be registered mechanically, each optical fiber can measure its spatial position independently, and the relative positions between the measurement points of the different optical fibers can be calculated.
In this embodiment, the specific implementation of the optical fiber coordinate system is not limited; in different application scenarios, different optical fiber coordinate systems may be selected for different purposes, and they are finally unified into the global coordinate system through coordinate system conversion matrices.
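A minimal sketch of bringing points measured in one fiber coordinate system into the global coordinate system through a constant transform, assuming the fiber zero point is mechanically fixed on the arm as described above; the numerical offset below is a placeholder.

```python
# Sketch: a fixed homogeneous transform from a fiber frame to the global frame.
import numpy as np

T_fiber_to_global = np.eye(4)
T_fiber_to_global[:3, 3] = [120.0, -35.0, 410.0]   # hypothetical fixed offset of the fiber zero point

def to_global(points_fiber: np.ndarray) -> np.ndarray:
    """points_fiber: (N, 3) points in the fiber coordinate system."""
    return points_fiber @ T_fiber_to_global[:3, :3].T + T_fiber_to_global[:3, 3]
```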
Further, the acquisition time of each frame in the second point cloud set is indicated by an externally input detection signal, and the acquisition time of each frame in the third point cloud set is indicated by the detection signal. In an embodiment, the detection signal is sent by the respiratory signal detection module 133, the respiratory signal detection module 133 includes a respiratory patch, the respiratory patch is used for being attached to the chest or the mouth and nose of the medical object, and the respiratory signal detection module is used for inputting the detection signal to the positioning module so that the positioning module obtains the acquisition time of each frame of the second point cloud set and the acquisition time of each frame of the third point cloud set.
It should be understood that only the acquisition timing of each frame is indicated by the detection signal; the detection signal need not be issued at every frame. For example, the detection signal may be sent at the first frame of a complete respiratory cycle, from which the timing of every frame can be derived. As another example, the detection signal may carry information from which the specific acquisition timing of each frame can be decoded, and so on.
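For example, if the detection signal is emitted only at the first frame of each respiratory cycle, the per-frame acquisition times could be derived as in the sketch below; the cycle length and frame count are hypothetical, since the patent leaves the encoding of the signal open.

```python
# Sketch: derive per-frame acquisition times from a once-per-cycle detection signal.
def frame_times(cycle_start_time: float, frames_per_cycle: int, cycle_duration: float) -> list[float]:
    dt = cycle_duration / frames_per_cycle
    return [cycle_start_time + k * dt for k in range(frames_per_cycle)]

# A 4 s respiratory cycle sampled as 30 frames, triggered at t = 12.0 s.
print(frame_times(12.0, 30, 4.0)[:3])  # [12.0, 12.133..., 12.266...]
```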
Specifically, based on the pairing relationship between the second point cloud set and the first point cloud set, the step of obtaining the coordinate system transformation matrix includes:
acquiring a second point cloud set of the 1 st period; wherein the point cloud set for each period comprises a preset number of frames.
Registering the second point cloud set of the first frame with the first point cloud set to obtain an initial coordinate system conversion matrix.
Based on the initial coordinate system conversion matrix, converting a second point cloud set of the 1 st period to obtain K frames with highest matching degree; wherein K is a preset parameter.
Circularly acquiring a second point cloud set of the ith period; wherein i is an integer greater than 1.
And converting the second point cloud set of the ith period based on the initial coordinate system conversion matrix to obtain K frames with highest matching degree.
If the frame numbers of the K frames with the highest matching degree in the i-th period are adjacent, the frame numbers of the K frames with the highest matching degree in the (i-1)-th period are adjacent, and the deviation between the frame numbers of the K frames with the highest matching degree in the i-th period and those in the (i-1)-th period is within a first threshold, outputting the current initial coordinate system conversion matrix as the coordinate system conversion matrix; at the same time, stopping the cycle and switching to the second preset time period.
Otherwise, iterating the initial coordinate system conversion matrix using the intersection of all the second point cloud sets in the i-th period and starting the cycle of the next period.
The calculation of the matching degree can be set according to actual needs; in this embodiment the number of coincident points is used as the matching degree. Whether two points coincide can be judged by whether the distance between their coordinates is smaller than a set distance threshold. In other embodiments, the sum of the absolute values or the sum of the squares of the point-wise differences may also be used to calculate the matching degree, and so on.
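A minimal sketch of this coincident-point matching degree, assuming a KD-tree nearest-neighbour search; the distance threshold is a preset parameter whose value here is illustrative.

```python
# Sketch: matching degree = number of fiber points whose nearest CT point is within a threshold.
import numpy as np
from scipy.spatial import cKDTree

def matching_degree(fiber_points: np.ndarray, ct_points: np.ndarray, dist_threshold: float = 1.0) -> int:
    """fiber_points: (N, 3) transformed fiber frame; ct_points: (M, 3) first (CT) point cloud set."""
    d, _ = cKDTree(ct_points).query(fiber_points)   # nearest-neighbour distance per fiber point
    return int(np.count_nonzero(d < dist_threshold))
```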
If the best-matching frames in the i-th and (i-1)-th periods are not adjacent, the method can directly decide to continue the cycle without judging the deviation of the two. If the best-matching frames in the i-th and (i-1)-th periods are adjacent, the frame-number difference of the first frames of the two groups, or of the last frames, can be used as the deviation value; the results are the same. For example, if in the 3rd period frames {5,6,7} match best and in the 4th period frames {6,7,8} match best, the deviation is (6-5) or (8-7), i.e. 1.
In this embodiment, the first threshold is 1, and in other embodiments, other values may be set.
The process of iterating the initial coordinate system conversion matrix using the intersection of all the second point cloud sets in the i-th period can be set according to actual needs. One specific iterative process is: adjust the values of the initial coordinate system conversion matrix so that the matching degree between the converted second point cloud set and the first point cloud set is highest, and take the result as the new initial coordinate system conversion matrix.
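The first-period loop described above can be sketched as follows. The registration step is implemented here with a simple point-to-point ICP (the patent does not name a specific registration algorithm), and the refinement uses all points of the period rather than their intersection; K, the first threshold and the helper names are illustrative.

```python
# Sketch of acquiring the coordinate system conversion matrix over the first preset time period.
# ICP and the "refine on all period points" step are simplifying assumptions, not the patent's exact iteration.
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, ct_points, T=None, iters=20):
    """Refine a 4x4 homogeneous transform mapping src onto ct_points."""
    T = np.eye(4) if T is None else T.copy()
    tree = cKDTree(ct_points)
    for _ in range(iters):
        moved = src @ T[:3, :3].T + T[:3, 3]
        _, idx = tree.query(moved)
        R, t = rigid_fit(moved, ct_points[idx])
        delta = np.eye(4)
        delta[:3, :3], delta[:3, 3] = R, t
        T = delta @ T
    return T

def matching_degree(points, tree, dist_threshold=1.0):
    """Number of points whose nearest CT point lies within dist_threshold."""
    d, _ = tree.query(points)
    return int(np.count_nonzero(d < dist_threshold))

def top_k_frames(frames, T, tree, K):
    scores = [matching_degree(f @ T[:3, :3].T + T[:3, 3], tree) for f in frames]
    return sorted(np.argsort(scores)[-K:].tolist())        # frame indices, ascending

def adjacent(frame_ids):
    return all(b - a == 1 for a, b in zip(frame_ids, frame_ids[1:]))

def find_conversion_matrix(acquire_period, ct_points, K=3, first_threshold=1, max_periods=50):
    """acquire_period(i) -> list of (Ni, 3) arrays: the second point cloud set of period i."""
    tree = cKDTree(ct_points)
    frames = acquire_period(1)
    T = icp(frames[0], ct_points)                 # initial coordinate system conversion matrix
    prev = top_k_frames(frames, T, tree, K)
    for i in range(2, max_periods + 1):
        frames = acquire_period(i)
        topk = top_k_frames(frames, T, tree, K)
        if adjacent(topk) and adjacent(prev) and abs(topk[0] - prev[0]) <= first_threshold:
            return T, i                           # stable: T becomes the coordinate system conversion matrix
        T = icp(np.vstack(frames), ct_points, T)  # otherwise refine the matrix and continue the cycle
        prev = topk
    raise RuntimeError("no stable respiratory phase found in the first preset time period")
```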
The pairing success condition can be set according to actual needs, for example, if the number of the coincident points exceeds a coincidence threshold, the pairing success condition is judged to be met.
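In code, the pairing success test for one frame of the fourth point cloud set might look like the sketch below; the coincidence and distance thresholds are preset parameters whose values here are illustrative.

```python
# Sketch: pairing success = the coincident-point count exceeds a coincidence threshold.
import numpy as np
from scipy.spatial import cKDTree

def pairing_success(fourth_frame, ct_points, coincidence_threshold=50, dist_threshold=1.0):
    """True if enough points of the frame coincide with the first (CT) point cloud set."""
    d, _ = cKDTree(ct_points).query(fourth_frame)
    return int(np.count_nonzero(d < dist_threshold)) >= coincidence_threshold
```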
In order to achieve higher judgment accuracy, in a preferred embodiment, if the pairing relationship between the fourth point cloud set and the first point cloud set of the current frame meets a pairing success condition, outputting a first signal; if the pairing relation between the fourth point cloud set and the first point cloud set of the current frame does not meet the pairing success condition, the step of outputting the second signal comprises the following steps:
circularly acquiring a fourth point cloud set of the jth period, and acquiring K frames with highest matching degree; wherein j is an integer greater than n, n being the total number of cycles in the first predetermined time period.
If, for L consecutive periods, the frame numbers of the K frames with the highest matching degree in the j-th period are adjacent, the frame numbers of the K frames with the highest matching degree in the (j-1)-th period are adjacent, and the deviation between the frame numbers of the K frames with the highest matching degree in the j-th period and those in the (j-1)-th period is within a second threshold, the first signal is output at the moment corresponding to the median frame of the K frames with the highest matching degree, and the second signal is output at the moments corresponding to the other frames.
Otherwise, the initial coordinate system conversion matrix is iterated using the K frames with the highest matching degree in the j-th period, the cycle of the next period is started, and the second signal is output for every frame.
Wherein L is a preset parameter.
The step of iterating the initial coordinate system conversion matrix using the K frames with the highest matching degree in the j-th period may be implemented, for example, by averaging the point clouds of the K frames and iterating the matrix based on the average, by selecting the single frame with the highest matching degree among the K frames to iterate the matrix, or in other ways. The deviation value may be calculated in the same way as described above. For ease of understanding, the initial coordinate system conversion matrix at this stage may also be renamed the optimized registration matrix, but it is in fact the same matrix.
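The per-period decision in the second preset time period can be sketched as follows, reusing the same top-K scoring idea as in the first-period sketch; L, K, the second threshold and the 30-frame period length are preset parameters with illustrative values.

```python
# Sketch: which frame of the current period gets the first signal and which get the second signal.
import numpy as np

def period_signals(topk_history, K=3, L=3, second_threshold=1, frames_per_period=30):
    """topk_history: list of top-K frame-index lists, one per period, most recent last.
    Returns one 'first'/'second' label per frame of the current period."""
    signals = ["second"] * frames_per_period
    if len(topk_history) < L:
        return signals
    def adjacent(idx):
        return all(b - a == 1 for a, b in zip(sorted(idx), sorted(idx)[1:]))
    recent = topk_history[-L:]
    stable = all(
        adjacent(recent[m]) and adjacent(recent[m - 1])
        and abs(min(recent[m]) - min(recent[m - 1])) <= second_threshold
        for m in range(1, L)
    )
    if stable:
        median_frame = int(np.median(sorted(topk_history[-1])))
        signals[median_frame] = "first"       # respiratory phase moment: safe to act
    return signals

# Example: the best-matching frames have settled around frames 15-17 for three consecutive periods.
print(period_signals([[14, 15, 16], [15, 16, 17], [15, 16, 17]]).index("first"))  # -> 16
```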
Referring to fig. 7, fig. 7 shows a flow chart for determining the respiratory phase. In the figure, one respiratory phase is one frame, N respiratory moments are N frames, the fiber point set is the second or third point cloud set, {P_CT_k} is the transformed second or fourth point cloud set, and the CT model is the first point cloud set. The middle moment is the middle frame when K is odd; when K is even, the middle frame may be taken as the frame just before or just after K/2.
The explanation of fig. 7 is as follows. It describes how the respiratory phase of the preoperative CT imaging is found intraoperatively: while the catheter continuously advances or stays within the bronchi during the operation, the procedure collects, for each moment k, a fiber point set {P_fibre_k} consisting of the position points captured by all optical fiber sensors at moment k and the position points interpolated between the sensors.
The position coordinates of all fiber points (sensor-captured points and interpolated points) and the corresponding respiratory phase moments are labeled and stored, so that the registration matrix can be iteratively updated or initialized as the catheter advances within the bronchi.
The execution module 132 is used to drive the catheter and/or the puncture needle; the execution module 132 moves when the positioning module 131 outputs the first signal, and stops moving when the positioning module 131 outputs the second signal.
The execution module 132 may include the mechanical arm 112 and the mounting plate 113 shown in fig. 1, as well as other actuators. Fig. 8 shows a workflow of the execution module 132, in which "whether the puncture phase is reached" means whether the current frame is the respiratory phase. After the operation starts, the bronchoscope carrying the shape optical fiber advances in the bronchus along the planned navigation path; during the advance, the respiratory phase at which the preoperative CT was taken is found and used as the puncture phase, and the relative positions of the lesion, the bronchus and the puncture needle are displayed in real time (the display-related logic is described below). When the puncture needle reaches the preoperatively planned puncture position, the display interface indicates that the puncture position has been reached, and the puncture position, angle and depth are re-planned and adjusted. When the respiratory moment reaches the puncture phase, a color change on the display interface indicates that the current moment is consistent with the preoperative CT respiratory state and that puncture may proceed, and the doctor quickly performs the puncture. If the puncture succeeds, the operation ends; if it fails, the cause of the failure is analysed to decide whether to switch to percutaneous puncture. If switching is needed, the navigation interface changes to the percutaneous lung puncture navigation interface; otherwise the bronchoscope puncture navigation interface continues to guide the doctor in performing the puncture again.
The visualization module 134 is configured to display the first point cloud set, the second point cloud set transformed based on the coordinate system transformation matrix, and the fourth point cloud set. The visualization module 134 may include the image display navigation device 12.
When the positioning module outputs a first signal, the second point cloud set and the fourth point cloud set converted based on the coordinate system conversion matrix are displayed in a first mode; and when the positioning module outputs the second signal, the second point cloud set and the fourth point cloud set converted based on the coordinate system conversion matrix are displayed in a second mode or are hidden.
The visual presentation can follow one of three options: 1) the first mode is the same as the second mode; 2) the first mode is a highlighted display and the second mode is a faded, semi-transparent display; 3) when the positioning module outputs the second signal, the current point cloud set is not displayed, and the point cloud set corresponding to the respiratory phase closest to the current moment is displayed instead.
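A small sketch of the three presentation options, keyed on which signal is currently being output; the style strings are illustrative, not the visualization module's actual interface.

```python
# Sketch: choose a display style for the fiber point cloud from the signal and the chosen option.
def point_cloud_style(signal: str, option: int) -> str:
    """signal: 'first' or 'second'; option: which of the three presentation schemes is used."""
    if option == 1:                                   # 1) first and second modes identical
        return "normal"
    if option == 2:                                   # 2) highlight vs. faded semi-transparent
        return "highlight" if signal == "first" else "translucent"
    # 3) hide the live cloud on the second signal; show the nearest respiratory-phase cloud instead
    return "show_live_cloud" if signal == "first" else "show_nearest_phase_cloud"
```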
Referring to fig. 9, fig. 9 shows the first point cloud set (right) and the second/fourth point cloud set (left); for ease of understanding, only the outermost contour of the first point cloud set is drawn. The conversion matrix between the optical fiber coordinate system and the CT coordinate system is obtained by registering the fiber point cloud (i.e. the second/fourth point cloud set) with the corresponding CT bronchial segment point cloud (i.e. the first point cloud set). If the fiber point cloud is the point set covering the whole respiratory cycle, the coordinate system conversion matrix is obtained; if the fiber point cloud is the point set of the candidate respiratory phase, the optimized registration matrix is obtained.
Referring to fig. 10, fig. 10 shows the presentation of the visualization module. The second/fourth point cloud set moves slightly with the patient's breathing; in the left and right panels of fig. 10, fewer fiber points fall inside the CT model than in the middle panel, so the respiratory phase corresponding to the middle panel is considered to be the respiratory phase at which the preoperative CT was taken.
Referring to fig. 11, fig. 11 shows the correspondence between point cloud movement and the respiratory cycle. The upper part of fig. 11 is a schematic diagram of the point clouds of three respiratory phases mapped into the CT coordinate system; the lower part of fig. 11 is an example respiratory signal over three respiratory cycles, with one cycle corresponding to one sine wave, the crest corresponding to the end of inspiration and the trough to the end of expiration.
The intermediate phase t shown in fig. 11 is the respiratory phase at which the preoperative CT was taken, because when the corresponding fiber point cloud is mapped into the CT coordinate system at this moment, the number of points falling inside the CT model is highest; the subsequent puncture is performed at the moment of this intermediate phase t.
The pose sensing module 135 includes the catheter and the puncture needle described above, and further includes calculation units for computing coordinates from the sensing signals. The purpose of the pose sensing module 135 is to obtain the second point cloud set and the third point cloud set.
The data storage module 137 is used to label and store the position and respiratory moment of each sensor point on the fiber and of each point interpolated between the sensors.
Table 1 point cloud storage table
Table 1 illustrates the storage format: each row represents one fiber point, with the spatial coordinates x, y, z of the point, the time t at which the point was produced, and the respiratory phase to which the point corresponds. All fiber points collected at all moments are labeled and recorded in this format.
Then, during iterative optimization of the registration, all current and historical fiber points of a given phase can be taken out and registered with the CT coordinate system. Because the points are more widely distributed and all belong to one specific phase, the resulting registration matrix is more accurate.
Likewise, when the fiber point cloud is mapped into the CT coordinate system based on the registration matrix, the fiber points are selected according to the specific phase.
The time t at which each fiber point is produced is used when there are too many points: repeated points and older points can then be deleted according to their time and position of occurrence, which reduces memory usage and keeps the registration and the search for the respiratory phase accurate.
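The storage format of Table 1, together with the phase selection and pruning just described, can be sketched as below; the field and function names are illustrative.

```python
# Sketch of the per-point storage record and the phase-selection / pruning operations.
from dataclasses import dataclass

@dataclass
class FiberPoint:
    x: float
    y: float
    z: float
    t: float      # time at which the point was produced
    phase: int    # respiratory phase (frame index within the cycle)

def points_of_phase(points: list[FiberPoint], phase: int) -> list[FiberPoint]:
    """All current and historical fiber points of one phase, for registration with the CT cloud."""
    return [p for p in points if p.phase == phase]

def prune(points: list[FiberPoint], min_dist: float = 0.5) -> list[FiberPoint]:
    """Drop points that repeat an earlier position within min_dist, keeping the newest ones."""
    kept: list[FiberPoint] = []
    for p in sorted(points, key=lambda q: q.t, reverse=True):   # newest first
        if all((p.x - k.x) ** 2 + (p.y - k.y) ** 2 + (p.z - k.z) ** 2 > min_dist ** 2
               for k in kept):
            kept.append(p)
    return kept
```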
This embodiment supports the following two application scenarios and can switch seamlessly between them when needed. One is puncture via the bronchoscope (when the lesion lies within the reach of the bronchoscope); the other is percutaneous lung puncture (when the lesion lies outside the reach of the bronchoscope or its surroundings are complex). Fig. 1 shows a bronchoscopic lung puncture examination scenario, and fig. 12 shows a percutaneous lung puncture examination scenario. The main difference between fig. 12 and fig. 1 is that the puncture needle carrying the optical fiber is inserted not through the catheter but from outside the body. The zero point of the puncture needle's optical fiber is physically fixed at a fixed position on the mechanical arm, so that the multiple fiber coordinate systems are unified.
Fig. 13 shows the visualized content for puncture through the bronchoscope, and fig. 14 shows the visualized content for percutaneous lung puncture. In the figures, 3 denotes the lesion, 4 the puncture path, 5 the lung and 6 the thorax. Because both the puncture needle and the catheter contain shape optical fiber sensors, their shapes can be calculated and drawn in real time on the display navigation interface (i.e. the display interface of the visualization module). During puncture, the puncture needle is adjusted so that its posture coincides with the angle of the puncture path, and when the respiratory moment reaches the puncture respiratory phase, the needle is inserted to the planned depth.
The puncture path is planned by the puncture path planning module 136, which specifically provides puncture path planning for the bronchoscope, puncture path planning for percutaneous lung puncture, and a seamless switching function between the two.
The present embodiment also provides a readable storage medium having a program stored thereon, which when executed, performs the positioning method described above.
In summary, the present embodiment provides a positioning module, a positioning method and a robot system. The positioning module is configured to perform the following steps: acquiring a first point cloud set of the space occupied by a target pipeline; acquiring, in a first preset time period, multiple frames of a second point cloud set of the space occupied by a serpentine motion object; acquiring a coordinate system conversion matrix based on the pairing relation between the second point cloud set and the first point cloud set; acquiring, in a second preset time period, multiple frames of a third point cloud set of the space occupied by the serpentine motion object; performing coordinate conversion on the third point cloud set based on the coordinate system conversion matrix to obtain a fourth point cloud set; if the pairing relation between the fourth point cloud set of the current frame and the first point cloud set meets the pairing success condition, outputting a first signal and positioning the serpentine motion object with the fourth point cloud set of the current frame; and, if the pairing relation between the fourth point cloud set of the current frame and the first point cloud set does not meet the pairing success condition, outputting a second signal. The first signal and the second signal are used for providing a basis for logic judgment for other methods or devices working cooperatively. So configured, the serpentine motion object is positioned through the fourth point cloud set, and the influence of the patient's respiratory fluctuation on the positioning is eliminated through the matching degree, thereby solving the prior-art problem of inaccurate positioning during lung puncture operations.
The foregoing description is only illustrative of the preferred embodiments of the present invention, and is not intended to limit the scope of the present invention in any way, and any changes and modifications made by those skilled in the art in light of the foregoing disclosure will be deemed to fall within the scope and spirit of the present invention.

Claims (12)

1. A positioning module, characterized in that the positioning module is adapted to perform the steps of:
acquiring a first point cloud set of a space occupied by a target pipeline;
acquiring a second point cloud set of multiple frames of space occupied by the serpentine moving object in a first preset time period;
acquiring a coordinate system conversion matrix based on the pairing relation between the second point cloud set and the first point cloud set;
acquiring a third point cloud set of multiple frames of the space occupied by the serpentine motion object in a second preset time period;
performing coordinate conversion on the third point cloud set based on the coordinate system conversion matrix to obtain a fourth point cloud set; and,
if the pairing relation between the fourth point cloud set of the current frame and the first point cloud set meets the pairing success condition, outputting a first signal and positioning the serpentine motion object with the fourth point cloud set of the current frame; if the pairing relation between the fourth point cloud set of the current frame and the first point cloud set does not meet the pairing success condition, outputting a second signal;
The first signal and the second signal are used for providing a basis for logic judgment for other methods or devices working cooperatively.
2. The positioning module of claim 1, wherein the first set of point clouds is acquired based on a perspective imaging device;
the first point cloud set acquired by the perspective imaging device has delay, or the working time of the perspective imaging device has limitation.
3. The positioning module of claim 1, wherein the serpentine motion object has a shape fiber disposed thereon, the second set of point clouds and the third set of point clouds being acquired based on the shape fiber; or, a plurality of electromagnetic sensing patches are arranged on the serpentine motion object, and the second point cloud set and the third point cloud set are acquired based on the electromagnetic sensing patches.
4. The positioning module of claim 1, wherein the acquisition timing of each frame in the second set of point clouds is indicated by an externally input detection signal, and the acquisition timing of each frame in the third set of point clouds is indicated by the detection signal.
5. The positioning module of claim 1, wherein the step of obtaining a coordinate system transformation matrix based on the pairing relationship of the second set of point clouds and the first set of point clouds comprises:
Acquiring the second point cloud set of the 1 st period; wherein the point cloud set for each period comprises a preset number of frames;
registering the second point cloud set of the first frame with the first point cloud set to obtain an initial coordinate system conversion matrix;
based on the initial coordinate system conversion matrix, converting the second point cloud set of the 1 st period to obtain K frames with highest matching degree; wherein K is a preset parameter;
circularly acquiring the second point cloud set of the ith period; wherein i is an integer greater than 1;
based on the initial coordinate system conversion matrix, converting the second point cloud set of the ith period to obtain K frames with highest matching degree;
if the frames of the K frames with the highest matching degree in the ith period are adjacent, the frames of the K frames with the highest matching degree in the ith-1 period are adjacent, and the deviation value of the number of the frames of the K frames with the highest matching degree in the ith period and the number of the frames of the K frames with the highest matching degree in the ith-1 period is within a first threshold, outputting the current initial coordinate system conversion matrix as the coordinate system conversion matrix; meanwhile, stopping circulation and switching to the second preset time period; the method comprises the steps of,
Otherwise, iterating the initial coordinate system transformation matrix using the intersection of all the second point cloud sets in the ith period and starting the cycle of the next period.
6. The positioning module of claim 5, wherein the step of outputting the first signal if the pairing relationship between the fourth point cloud set of the current frame and the first point cloud set meets the pairing success condition, and outputting the second signal if the pairing relationship between the fourth point cloud set of the current frame and the first point cloud set does not meet the pairing success condition, comprises:
cyclically acquiring the fourth point cloud set of the j-th period and obtaining the K frames with the highest matching degree, wherein j is an integer greater than n, and n is the total number of periods in the first preset time period;
if the K frames with the highest matching degree in the j-th period are adjacent to one another, the K frames with the highest matching degree in the (j-1)-th period are adjacent to one another, and the deviation between the frame numbers of the K frames with the highest matching degree in the j-th period and those in the (j-1)-th period is within a second threshold, outputting the first signal at the moments corresponding to the middle L frames of the K frames with the highest matching degree, and outputting the second signal at the moments corresponding to the other frames; and
otherwise, iterating the initial coordinate system conversion matrix using the K frames with the highest matching degree in the j-th period, starting the cycle of the next period, and outputting the second signal for each frame;
wherein L is a preset parameter.
7. A positioning method, characterized in that the positioning method comprises:
acquiring a first point cloud set of a space occupied by a target pipeline;
acquiring multiple frames of a second point cloud set of a space occupied by a serpentine motion object within a first preset time period;
acquiring a coordinate system conversion matrix based on the pairing relationship between the second point cloud set and the first point cloud set;
acquiring multiple frames of a third point cloud set of the space occupied by the serpentine motion object within a second preset time period;
performing coordinate conversion on the third point cloud set based on the coordinate system conversion matrix to obtain a fourth point cloud set; and
if the pairing relationship between the fourth point cloud set of the current frame and the first point cloud set meets a pairing success condition, outputting a first signal and positioning the serpentine motion object using the fourth point cloud set of the current frame; and if the pairing relationship between the fourth point cloud set of the current frame and the first point cloud set does not meet the pairing success condition, outputting a second signal;
wherein the first signal and the second signal are used to provide a basis for logic decisions for other methods or devices working cooperatively.
8. A robotic system, characterized in that it comprises a positioning module according to any one of claims 1-6.
9. The robotic system of claim 8, wherein the target pipeline is a bronchus, and the robotic system comprises a catheter configured as one serpentine motion object and a puncture needle configured as another serpentine motion object.
10. The robotic system of claim 9, further comprising an execution module for driving the catheter and/or the puncture needle; wherein the execution module moves when the positioning module outputs the first signal, and stops moving when the positioning module outputs the second signal.
11. The robotic system of claim 8, wherein the robotic system comprises a respiratory signal detection module comprising a respiratory patch to be applied to the chest or the oronasal region of a medical subject, the respiratory signal detection module being configured to input a detection signal to the positioning module so that the positioning module obtains the acquisition timing of each frame of the second point cloud set and of each frame of the third point cloud set.
12. The robotic system of claim 8, wherein the robotic system comprises a visualization module for displaying the first point cloud set, as well as the second point cloud set and the fourth point cloud set converted based on the coordinate system conversion matrix;
wherein, when the positioning module outputs the first signal, the second point cloud set and the fourth point cloud set converted based on the coordinate system conversion matrix are displayed in a first mode; and when the positioning module outputs the second signal, the second point cloud set and the fourth point cloud set converted based on the coordinate system conversion matrix are displayed in a second mode or hidden.
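As a non-authoritative sketch of how the cooperating modules of claims 10 and 12 above might consume the two signals, the module interfaces below (`resume`, `halt`, `show`, `hide`) are hypothetical and introduced only for illustration.

```python
# Hypothetical consumer of the first/second signal (cf. claims 10 and 12);
# the execution/visualization interfaces are assumptions, not the patent's API.
def on_positioning_output(signal, execution_module, visualization_module, clouds):
    if signal == "FIRST_SIGNAL":
        execution_module.resume()                      # catheter/needle may move
        visualization_module.show(clouds, mode="first")
    else:  # "SECOND_SIGNAL"
        execution_module.halt()                        # pause motion while pairing fails
        visualization_module.show(clouds, mode="second")  # or visualization_module.hide(clouds)
```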

Priority Applications (1)

Application Number: CN202410026749.7A (published as CN117814912A)
Priority Date: 2024-01-08
Filing Date: 2024-01-08
Title: Positioning module, positioning method and robot system

Applications Claiming Priority (1)

Application Number: CN202410026749.7A (published as CN117814912A)
Priority Date: 2024-01-08
Filing Date: 2024-01-08
Title: Positioning module, positioning method and robot system

Publications (1)

Publication Number: CN117814912A (en)
Publication Date: 2024-04-05

Family

ID=90507707

Family Applications (1)

Application Number: CN202410026749.7A (status: Pending; published as CN117814912A)
Title: Positioning module, positioning method and robot system
Priority Date: 2024-01-08
Filing Date: 2024-01-08

Country Status (1)

Country Link
CN (1) CN117814912A (en)

Similar Documents

Publication Publication Date Title
JP7154832B2 (en) Improving registration by orbital information with shape estimation
US20220346886A1 (en) Systems and methods of pose estimation and calibration of perspective imaging system in image guided surgery
US20230061771A1 (en) Systems and methods of registration for image-guided procedures
US9554729B2 (en) Catheterscope 3D guidance and interface system
CN108024838B (en) System and method for using registered fluoroscopic images in image-guided surgery
JP2023171877A (en) Biopsy apparatus and system
KR20210018858A (en) Route-based navigation of coronary networks
KR20210016566A (en) Robot system and method for navigation of lumen network detecting physiological noise
KR20170038012A (en) Systems and methods for intraoperative segmentation
JP2009254837A (en) Structure of endoscope and technique for navigating to target in branched structure
CN112294436A (en) Cone beam and 3D fluoroscopic lung navigation
US20230162380A1 (en) Mitigation of registration data oversampling
US20230372024A1 (en) Synthetic position in space of an endoluminal instrument
CN117814912A (en) Positioning module, positioning method and robot system
US20240099776A1 (en) Systems and methods for integrating intraoperative image data with minimally invasive medical techniques
US20230360212A1 (en) Systems and methods for updating a graphical user interface based upon intraoperative imaging
EP4171421A1 (en) Systems for evaluating registerability of anatomic models and associated methods

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination