CN114159157A - Method, device and equipment for assisting moving of instrument and storage medium - Google Patents
- Publication number
- CN114159157A (application number CN202111478462.0A)
- Authority
- CN
- China
- Prior art keywords
- instrument
- dimensional model
- channel
- determining
- moved
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/361—Image-producing devices, e.g. surgical cameras
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/102—Modelling of surgical devices, implants or prosthesis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2055—Optical tracking systems
Abstract
The disclosure relates to a method, apparatus, device, and storage medium for assisting in moving an instrument. The method comprises: optically matching a three-dimensional model of the instrument with the instrument to be moved; optically matching a three-dimensional model of an object with the physical object; determining a target channel in the object three-dimensional model in response to a channel planning operation; and determining movement information for the instrument to be moved based on the target channel and the current pose of the instrument three-dimensional model. Because the target channel is determined in the object three-dimensional model and the movement information is derived from that channel and the instrument model's current pose, the position and angle deviations of a surgeon's operation can be prompted intuitively. This enables surgeons to fully understand the structure of the affected part and accurately locate the lesion before an operation, and allows less experienced surgeons to rehearse the procedure beforehand, reducing surgical risk.
Description
Technical Field
The present disclosure relates to the field of virtual reality, and in particular to a method, apparatus, device, and storage medium for assisting in moving an instrument.
Background
Clinical surgery is an important means of treatment in modern medicine. In actual clinical practice, some procedures, such as cerebral surgery, have a low success rate and high risk, and demand advanced surgical skill and extensive clinical experience. When a less experienced surgeon needs surgical training, the prior art generally remains at the stage of an experienced surgeon passing on clinical experience verbally, or the trainee watching the experienced surgeon operate.
However, when operating directly on a patient, the surgical field of view is limited and the surgeon cannot see through the patient's diseased tissue. In addition, the lesion often changes shape during the operation, and the vascular structure is complexly distorted and difficult to identify visually. The medical imaging equipment in an operating room is complex to use and usually requires dedicated medical staff to operate it, which is inconvenient; surgeons and interns also find it hard to associate intraoperative images with preoperative ones. The resulting information asymmetry makes anatomical structures difficult to identify, which affects intraoperative decision making as well as the training of interns.
Disclosure of Invention
To solve the above technical problems, or at least partially solve them, the present disclosure provides a method, apparatus, device, and storage medium for assisting in moving an instrument, so that a surgeon can fully understand the structure of the affected part and accurately locate the lesion before an operation, and a less experienced surgeon can rehearse the procedure beforehand, reducing surgical risk.
In a first aspect, an embodiment of the present disclosure provides a method for assisting in moving an instrument, where the instrument to be moved is an instrument provided with a first optical rigid body, and a three-dimensional instrument model of the instrument to be moved is pre-constructed, where the method includes:
optically matching the instrument three-dimensional model with the instrument to be moved;
optically matching an object three-dimensional model with an object entity, wherein the object entity is an object entity provided with a second optical rigid body, and the object three-dimensional model is a three-dimensional model generated based on the object entity;
determining a target channel in the three-dimensional model of the object in response to a channel planning operation;
and determining movement information for the instrument to be moved based on the target channel and the current pose of the instrument three-dimensional model, so that after the instrument to be moved is moved, the pose of the moved instrument three-dimensional model matches the target channel.
In a second aspect, embodiments of the present disclosure provide an apparatus for assisting in moving an instrument, including:
the matching module is used for optically matching the instrument three-dimensional model with the instrument to be moved;
the matching module is further configured to: optically matching an object three-dimensional model with an object entity, wherein the object entity is an object entity provided with a second optical rigid body, and the object three-dimensional model is a three-dimensional model generated based on the object entity;
a response module for determining a target channel in the three-dimensional model of the object in response to a channel planning operation;
and a determining module for determining movement information for the instrument to be moved based on the target channel and the current pose of the instrument three-dimensional model, so that after the instrument to be moved is moved, the pose of the moved instrument three-dimensional model matches the target channel.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of the first aspect.
In a fourth aspect, the disclosed embodiments provide a computer-readable storage medium having a computer program stored thereon, the computer program being executed by a processor to implement the method according to the first aspect.
In a fifth aspect, the embodiments of the present disclosure also provide a computer program product, which includes a computer program or instructions, when executed by a processor, implement the method for assisting moving an instrument as described above.
According to the method, apparatus, device, and storage medium for assisting in moving an instrument, the instrument three-dimensional model is optically matched with the instrument to be moved, and the object three-dimensional model with the object entity; a target channel is then determined in the object three-dimensional model in response to a channel planning operation, and movement information for the instrument to be moved is determined from the target channel and the current pose of the instrument three-dimensional model. Because the target channel is determined in the object three-dimensional model and the movement information is derived from that channel and the instrument model's current pose, the position and angle deviations of the surgeon's operation can be prompted intuitively. This enables surgeons to fully understand the structure of the affected part and accurately locate the lesion before an operation, and allows less experienced surgeons to rehearse the procedure beforehand, reducing surgical risk.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
To illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below; those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a flow chart of a method of assisting in moving an instrument provided by an embodiment of the present disclosure;
fig. 2 is a flow chart of a method of assisting in moving an instrument according to another embodiment of the present disclosure;
FIG. 3 is a schematic diagram of the positional relationship between a three-dimensional model of an instrument and a target channel at a first viewing angle according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of the positional relationship between the three-dimensional model of the instrument and the target channel at a second viewing angle according to an embodiment of the present disclosure;
FIG. 5 is a flow chart of a method of assisting in moving an instrument according to another embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an apparatus for assisting in moving an instrument provided in an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
In the following description, numerous specific details are set forth to facilitate a thorough understanding of the present disclosure; the disclosure may, however, also be implemented in ways other than those described here. It should be understood that the embodiments disclosed in this specification are only some, not all, of the possible embodiments of the present disclosure.
Clinical surgery is an important means of treatment in modern medicine. In actual clinical practice, some procedures, such as cerebral surgery, have a low success rate and high risk, and demand advanced surgical skill and extensive clinical experience. When a less experienced surgeon needs surgical training, the prior art generally remains at the stage of an experienced surgeon passing on clinical experience verbally, or the trainee watching the experienced surgeon operate.
However, when operating directly on a patient, the surgical field of view is limited and the surgeon cannot see through the patient's diseased tissue. In addition, the lesion often changes shape during the operation, and the vascular structure is complexly distorted and difficult to identify visually. The medical imaging equipment in an operating room is complex to use and usually requires dedicated medical staff to operate it, which is inconvenient; surgeons and interns also find it hard to associate intraoperative images with preoperative ones. The resulting information asymmetry makes anatomical structures difficult to identify, which affects intraoperative decision making as well as the training of interns.
In recent years, to address the difficulty of effectively carrying out complex operations, and drawing on the latest developments in virtual reality and augmented reality, the main approach has been to build an undistorted three-dimensional image from preoperative and intraoperative information, capture the intraoperative field of view or actions, detect and fuse the physiological conditions at each stage of the operation, and feed them back to the surgeon as guidance.
The prior art includes medical smart glasses that use augmented reality to display the patient's current pulse, blood pressure, body temperature, and other supplementary data while the surgeon operates. The device connects to other monitoring equipment in the operating room via Bluetooth; this portable scheme coordinates well with existing equipment and avoids the problem of too many devices with miscellaneous functions when upgrading an operating room, so the surgeon can keep attention on the patient and adjust surgical decisions promptly in an emergency. However, such a device can only present the simplest parameters, such as blood pressure and blood oxygen, and cannot effectively visualize complex information such as brain structures.
The prior art also includes a method of surgical navigation based on laparoscopic video. The laparoscopic camera is first calibrated to determine its parameters, which are used to set the projection matrix of a three-dimensional (3D) graphics rendering engine. A laparoscopic image is then acquired during the operation, the rendering engine generates a corresponding undistorted view, and a distortion model deforms that image into a virtual view with the same distortion as the actual laparoscope. Finally, by generating a depth value for each pixel of the virtual view, the virtual view and the actual laparoscopic image are fused into an accurate virtual-real video with a correct positional mapping for surgical navigation. However, this method relies on real-time intraoperative imaging and offers no help for preoperative planning. To address this, embodiments of the present disclosure provide a method for assisting in moving an instrument, described below in conjunction with specific embodiments.
Fig. 1 is a flowchart of a method for assisting in moving an instrument according to an embodiment of the present disclosure. As shown in fig. 1, the method comprises the following steps:
s101, optically matching the instrument three-dimensional model with the instrument to be moved.
The instrument to be moved is an instrument provided with a first optical rigid body, and a three-dimensional model of the instrument is constructed in advance. For example, the instrument to be moved may be a cannula through which a needle can be passed. The first optical rigid body comprises at least three optical reflective points, and the instrument three-dimensional model is a three-dimensional model generated from the instrument to be moved.
In some embodiments, the instrument three-dimensional model is generated by tomographic scanning of the instrument to be moved. For example, a computed tomography device scans the instrument and transmits the generated three-dimensional model to a terminal. On receiving the model, the terminal identifies the positions of at least three model reflective points in it and records them as first positions. The positions of the at least three physical optical reflective points are then collected by a binocular camera, recorded as second positions, and sent to the terminal. Based on the first positions of the model reflective points and the second positions of the optical reflective points, the terminal matches the pose of the instrument three-dimensional model to that of the instrument to be moved. A pose includes position and orientation, i.e., 6 degrees of freedom.
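The matching described above is a rigid point-set registration between the model reflective points (first positions) and the camera-observed reflective points (second positions). The patent does not name an algorithm; a common closed-form choice for three or more corresponded points is the Kabsch/SVD method, sketched below with hypothetical function names:

```python
import numpy as np

def register_rigid(model_pts, observed_pts):
    """Closed-form rigid registration (Kabsch/SVD): find R, t such that
    R @ model_pts[i] + t approximates observed_pts[i] for >= 3 markers."""
    model_pts = np.asarray(model_pts, dtype=float)
    observed_pts = np.asarray(observed_pts, dtype=float)
    cm = model_pts.mean(axis=0)               # centroid of model markers
    co = observed_pts.mean(axis=0)            # centroid of observed markers
    H = (model_pts - cm).T @ (observed_pts - co)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cm
    return R, t
```

The returned rotation R and translation t give the pose of the instrument three-dimensional model relative to the tracked instrument; with noise-free, non-degenerate markers the recovery is exact.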
And S102, carrying out optical matching on the three-dimensional model of the object and the object entity.
The object entity is a physical object provided with a second optical rigid body, for example the skeleton of a diseased part. The second optical rigid body comprises at least three optical reflective points, and the object three-dimensional model is a three-dimensional model generated from the object entity.
In some embodiments, the object three-dimensional model is generated by tomographic scanning of the object entity. For example, a computed tomography device scans the object entity and transmits the generated three-dimensional model to the terminal. On receiving the model, the terminal identifies the positions of at least three model reflective points in it and records them as first positions. The positions of the at least three physical optical reflective points are then collected by the binocular camera, recorded as second positions, and sent to the terminal. Based on the first positions of the model reflective points and the second positions of the optical reflective points, the terminal matches the pose of the object three-dimensional model to that of the object entity.
S103, determining a target channel in the object three-dimensional model in response to a channel planning operation.
For example, a channel planning control (a channel planning button) is provided on the user interface. When the user clicks it, the terminal displays the interface shown in fig. 3: the positional relationship between the instrument three-dimensional model and the target channel at a first viewing angle, the left pattern being the target channel determined in the object three-dimensional model and the right pattern the instrument model. The user may also click a view-switching button, in response to which the terminal displays the interface shown in fig. 4: the same positional relationship at a second viewing angle.
And S104, determining the movement information of the to-be-moved instrument based on the target channel and the current pose of the instrument three-dimensional model.
The terminal determines the movement information of the instrument to be moved from the target channel and the current pose of the instrument three-dimensional model. As the user moves the instrument according to this information, the terminal displays the pose of the instrument model, and the movement information, in real time until the instrument model completely coincides with the target channel; in this way, after the instrument to be moved is moved, the pose of the moved instrument three-dimensional model matches the target channel.
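The patent does not specify how the movement information is represented. A minimal sketch, assuming the instrument and the planned channel are each described by a tip/entry point and an axis direction (both assumptions of this illustration), would report a position offset and an angular deviation:

```python
import numpy as np

def movement_info(instr_tip, instr_axis, channel_entry, channel_axis):
    """Position deviation (instrument tip vs. channel entry) and angular
    deviation (instrument axis vs. planned channel axis), suitable as a
    surgeon-facing prompt. The tip + axis representation is an assumption."""
    instr_axis = np.asarray(instr_axis, dtype=float)
    channel_axis = np.asarray(channel_axis, dtype=float)
    # Translation the instrument tip still has to cover.
    offset = np.asarray(channel_entry, dtype=float) - np.asarray(instr_tip, dtype=float)
    # Angle between the two axes, clipped for numerical safety.
    cosang = np.dot(instr_axis, channel_axis) / (
        np.linalg.norm(instr_axis) * np.linalg.norm(channel_axis))
    angle_deg = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return {"position_offset_mm": offset, "angle_deviation_deg": angle_deg}
```

Both quantities go to zero exactly when the instrument model coincides with the target channel, matching the real-time display loop described above.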
In the disclosed embodiments, the instrument three-dimensional model is optically matched with the instrument to be moved, and the object three-dimensional model with the object entity; a target channel is then determined in the object three-dimensional model in response to a channel planning operation, and movement information for the instrument to be moved is determined from the target channel and the current pose of the instrument three-dimensional model. Because the target channel is determined in the object three-dimensional model and the movement information is derived from that channel and the instrument model's current pose, the position and angle deviations of the surgeon's operation can be prompted intuitively. This enables surgeons to fully understand the structure of the affected part and accurately locate the lesion before an operation, and allows less experienced surgeons to rehearse the procedure beforehand, reducing surgical risk.
Fig. 2 is a flowchart of a method for assisting in moving an instrument according to another embodiment of the present disclosure, as shown in fig. 2, the method includes the following steps:
s201, establishing a world coordinate system based on a preset landmark scale rigid body, wherein the landmark scale rigid body comprises at least three optical reflection points.
A landmark scale rigid body is preset as the reference, i.e., the origin (0, 0, 0); it is a single body composed of at least three optical reflective points and the connecting structure between them. The terminal establishes a world coordinate system based on this rigid body.
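One way to realize such a world frame, assuming the three reflective points are non-collinear, is to take one point as the origin and orthonormalize directions toward the other two. The axis convention below is purely an illustrative choice, not one stated in the patent:

```python
import numpy as np

def frame_from_markers(p0, p1, p2):
    """Build an orthonormal world frame from three non-collinear marker
    points: p0 is the origin, x points toward p1, z is normal to the
    marker plane, and y completes the right-handed frame."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    x = p1 - p0
    x /= np.linalg.norm(x)
    z = np.cross(x, p2 - p0)      # normal to the plane of the markers
    z /= np.linalg.norm(z)
    y = np.cross(z, x)            # already unit length
    R = np.column_stack([x, y, z])  # world axes expressed in camera coords
    return R, p0                    # rotation and origin of the world frame
```

Any point observed by the camera can then be expressed in this world frame, which is what makes the landmark scale rigid body usable as a shared origin.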
S202, optically positioning the at least three optical reflection points of the landmark scale rigid body, and determining a conversion relation between a three-dimensional model coordinate system and the world coordinate system.
According to the three-dimensional model of the landmark scale rigid body, the terminal identifies the first positions of at least three model reflective points and the second positions of at least three optical reflective points received from the binocular camera. The model reflective points correspond one-to-one with the optical reflective points; establishing this correspondence determines the conversion relation between the three-dimensional model coordinate system and the world coordinate system.
S203, optically positioning the first optical rigid body and the second optical rigid body, and converting the first optical rigid body and the second optical rigid body into the three-dimensional model coordinate system.
The first optical rigid body is optically positioned, so that the pose of the instrument three-dimensional model can be determined; the pose of the object three-dimensional model can be determined by optically positioning the second optical rigid body.
For example, the binocular camera collects the positions of the at least three optical reflective points of each of the first and second optical rigid bodies and transmits them to the terminal, which converts the first and second optical rigid bodies into the three-dimensional model coordinate system according to the conversion relation between that coordinate system and the world coordinate system.
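Once the conversion relation is known, moving tracked positions into the three-dimensional model coordinate system is an application of a homogeneous transform. A sketch, assuming the conversion is available as a rotation R and translation t (an assumption of this illustration):

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a 3x3 rotation R and translation t into a 4x4 homogeneous matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def world_to_model(points_world, T_model_from_world):
    """Map world-space marker positions into the model coordinate system.
    T_model_from_world is the conversion relation determined from the
    landmark scale rigid body."""
    pts = np.asarray(points_world, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])   # append w = 1
    return (homo @ T_model_from_world.T)[:, :3]
```

Applying the same transform to both optical rigid bodies places the instrument and the object in one shared coordinate system, which is the precondition for the deviation prompts in S207.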
S204, optically matching the instrument three-dimensional model with the instrument to be moved.
Specifically, the implementation process and principle of S204 and S101 are consistent, and are not described herein again.
S205, carrying out optical matching on the object three-dimensional model and the object entity.
Specifically, the implementation process and principle of S205 and S102 are consistent, and are not described herein again.
S206, responding to the channel planning operation, and determining a target channel in the object three-dimensional model.
Specifically, the implementation process and principle of S206 and S103 are consistent, and are not described herein again.
S207, determining the movement information of the to-be-moved instrument based on the target channel and the current pose of the instrument three-dimensional model.
Specifically, the implementation process and principle of S207 and S104 are consistent, and are not described herein again.
The disclosed embodiments establish a world coordinate system based on a preset landmark scale rigid body comprising at least three optical reflective points. The at least three optical reflective points of the landmark scale rigid body are optically positioned to determine the conversion relation between the three-dimensional model coordinate system and the world coordinate system, and the first and second optical rigid bodies are then optically positioned and converted into the three-dimensional model coordinate system. Once the landmark scale rigid body is fixed, the world coordinate system and the conversion relation are fixed as well; with the first and second optical rigid bodies expressed in the same coordinate system, the position and angle deviations of the surgeon's operation can be prompted even more intuitively, which facilitates the operation itself and also allows it to be rehearsed beforehand.
On the basis of the above embodiment, determining a target channel in the three-dimensional model of the object in response to the channel planning operation includes: determining a first channel in a first x-ray image; determining a second channel in a second x-ray image; and determining the target channel from the first channel and the second channel based on the conversion relationship between the x-ray image coordinate system and the three-dimensional model coordinate system, wherein the first x-ray image and the second x-ray image are any two of the plurality of x-ray images used to construct the three-dimensional model of the object.
In some embodiments, in response to the user clicking a channel planning button, the terminal acquires a first x-ray image and determines a first channel in it; the first channel is a two-dimensional channel. The terminal then acquires a second x-ray image and determines a second channel in it; the second channel is also a two-dimensional channel. Two different x-ray images uniquely determine a three-dimensional space coordinate system that corresponds to the three-dimensional model coordinate system; that is, the conversion relationship between the x-ray image coordinate system and the three-dimensional model coordinate system is determined. Based on this conversion relationship, the terminal determines the target channel, a three-dimensional channel, from the first channel and the second channel. The first x-ray image and the second x-ray image are any two of the plurality of x-ray images used to construct the three-dimensional model of the object.
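One way to realize this two-view construction (a sketch under assumed geometry, not the patent's stated algorithm): each two-dimensional channel, together with its x-ray source position, spans a plane in the model frame, and the three-dimensional target channel is the line where the two planes intersect. The source positions and back-projected channel endpoints below are hypothetical, and the calibration that expresses them in the model frame is assumed already done.

```python
import numpy as np

def plane_through_line(source, p0, p1):
    """Plane spanned by the x-ray source position and two points of the
    two-dimensional channel after back-projection into the model frame.
    Returns (unit normal, a point on the plane)."""
    n = np.cross(p0 - source, p1 - source)
    return n / np.linalg.norm(n), source

def intersect_planes(n1, q1, n2, q2):
    """Line (point, unit direction) where two non-parallel planes meet."""
    d = np.cross(n1, n2)
    d = d / np.linalg.norm(d)
    # Solve for one point on both planes; the third row pins the solution
    # along the line's own direction so the system is well-posed.
    A = np.array([n1, n2, d])
    b = np.array([n1 @ q1, n2 @ q2, 0.0])
    return np.linalg.solve(A, b), d

# Hypothetical setup: two roughly orthogonal views of the same channel.
s1 = np.array([0.0, -1000.0, 0.0])   # first x-ray source
s2 = np.array([-1000.0, 0.0, 0.0])   # second x-ray source
p0 = np.array([0.0, 0.0, 0.0])       # back-projected channel endpoints
p1 = np.array([0.0, 0.0, 100.0])
n1, q1 = plane_through_line(s1, p0, p1)
n2, q2 = plane_through_line(s2, p0, p1)
point, direction = intersect_planes(n1, q1, n2, q2)  # the 3-D target channel
```

The two 2-D channels are deliberately chosen to be consistent here; with real annotations the planes still intersect in a line as long as the views are not parallel.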
Optionally, determining a target channel in the three-dimensional model of the object in response to the channel planning operation includes: determining a first channel at a first viewing angle of the three-dimensional model of the object; determining a second channel at a second viewing angle of the three-dimensional model of the object; and determining the target channel based on the first channel and the second channel.
In some embodiments, in response to the user clicking the channel planning button, the terminal determines a first channel at a first viewing angle of the three-dimensional model of the object, as shown in fig. 3; the first channel is a two-dimensional channel. It then determines a second channel at a second viewing angle of the three-dimensional model of the object, as shown in fig. 4; the second channel is also a two-dimensional channel. The terminal then determines the target channel, a three-dimensional channel, based on the first channel and the second channel.
Optionally, determining the movement information of the instrument to be moved based on the target channel and the current pose of the instrument three-dimensional model includes: determining a distance deviation and an orientation deviation based on the pose of the target channel in the three-dimensional model coordinate system and the current pose of the instrument three-dimensional model; and determining the movement information of the instrument to be moved based on the distance deviation and the orientation deviation.
The terminal determines the distance deviation and the orientation deviation based on the pose of the target channel in the three-dimensional model coordinate system and the current pose of the instrument three-dimensional model, and determines the movement information of the instrument to be moved from these deviations. While the user moves the instrument, the terminal displays the movement information in real time, so that after the move the pose of the instrument three-dimensional model matches the target channel.
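A minimal sketch of this deviation computation, assuming the channel pose is given as a point plus a unit axis and the instrument pose as a tip position plus a unit axis, all in the model coordinate system (the representation is an assumption; the disclosure does not fix one):

```python
import numpy as np

def pose_deviation(channel_point, channel_dir, tip, instrument_dir):
    """Distance deviation: perpendicular offset of the instrument tip
    from the channel axis. Orientation deviation: angle in degrees
    between the instrument axis and the channel axis."""
    channel_dir = channel_dir / np.linalg.norm(channel_dir)
    instrument_dir = instrument_dir / np.linalg.norm(instrument_dir)
    v = tip - channel_point
    # Subtract the component along the axis to get the perpendicular part.
    dist = np.linalg.norm(v - (v @ channel_dir) * channel_dir)
    cos_a = np.clip(channel_dir @ instrument_dir, -1.0, 1.0)
    return dist, np.degrees(np.arccos(cos_a))

# Made-up poses: channel along +z through the origin; instrument tip
# offset by (3, 4) and tilted 5 degrees about the x-axis.
dist, angle = pose_deviation(
    channel_point=np.zeros(3),
    channel_dir=np.array([0.0, 0.0, 1.0]),
    tip=np.array([3.0, 4.0, 10.0]),
    instrument_dir=np.array([0.0, np.sin(np.radians(5.0)), np.cos(np.radians(5.0))]))
```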
In the embodiments of the present disclosure, the terminal determines a first channel in a first x-ray image and a second channel in a second x-ray image, and determines the target channel from them based on the conversion relationship between the x-ray image coordinate system and the three-dimensional model coordinate system; alternatively, it determines a first channel at a first viewing angle of the three-dimensional model of the object and a second channel at a second viewing angle, and determines the target channel from those two channels. The terminal then determines a distance deviation and an orientation deviation based on the pose of the target channel in the three-dimensional model coordinate system and the current pose of the instrument three-dimensional model, and determines the movement information of the instrument to be moved from these deviations. Because the target channel can be planned by either of the two methods, the doctor can perform the surgery more conveniently and can also practice it before the operation.
Fig. 5 is a flowchart of a method for assisting in moving an instrument according to another embodiment of the present disclosure, as shown in fig. 5, the method includes the following steps:
S501, optically match the instrument three-dimensional model with the instrument to be moved.
Specifically, S501 is implemented in the same way as S101 and is not described again here.
S502, optically match the object three-dimensional model with the object entity.
Specifically, S502 is implemented in the same way as S102 and is not described again here.
S503, in response to a channel planning operation, determine a target channel in the object three-dimensional model.
Specifically, S503 is implemented in the same way as S103 and is not described again here.
S504, take the intersection point of the target channel and the surface of the object three-dimensional model as the target position of the instrument three-dimensional model.
The terminal calculates the intersection point of the target channel and the surface of the object three-dimensional model and takes this intersection point as the target position of the instrument three-dimensional model, that is, the needle insertion position.
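When the model surface is a triangle mesh (a common but here assumed representation), the intersection in S504 can be computed per triangle with the standard Moeller-Trumbore test; this is a sketch of one possible implementation, not the disclosure's own.

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore test: intersection point of a ray (the target
    channel) with one triangle of the surface mesh, or None on a miss."""
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = e1 @ h
    if abs(a) < eps:            # ray parallel to the triangle plane
        return None
    f = 1.0 / a
    s = origin - v0
    u = f * (s @ h)             # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * (direction @ q)     # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * (e2 @ q)            # distance along the ray
    return origin + t * direction if t > eps else None

# Made-up data: the channel as a ray along +z, one mesh triangle at z = 5.
entry = ray_triangle(np.array([1.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                     np.array([0.0, 0.0, 5.0]), np.array([10.0, 0.0, 5.0]),
                     np.array([0.0, 10.0, 5.0]))
```

Looping this test over all mesh triangles and keeping the hit with the smallest positive `t` yields the needle insertion position on the surface.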
S505, take the orientation of the target channel as the target orientation of the instrument three-dimensional model.
The terminal takes the orientation of the target channel as the target orientation of the instrument three-dimensional model, that is, the needle insertion orientation.
S506, determine the distance deviation and the orientation deviation based on the target position, the target orientation, and the current pose of the instrument three-dimensional model.
Based on the target position, the target orientation, and the current pose of the instrument three-dimensional model, the terminal renders the distance deviation and the orientation deviation between the instrument three-dimensional model and the target channel on the user interface.
S507, determine the movement information of the instrument to be moved based on the distance deviation and the orientation deviation.
The terminal determines the movement information of the instrument to be moved from the distance deviation and the orientation deviation between the instrument three-dimensional model and the target channel; the movement information includes a movement direction, an angle, and a distance. While the user moves the instrument, the terminal displays the movement information in real time, so that after the move the pose of the instrument three-dimensional model matches the target channel.
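The movement direction, angle, and distance mentioned above can be packaged as a translation (unit direction plus distance for the tip) and an axis-angle rotation that aligns the instrument axis with the channel. A sketch under those assumptions, with made-up poses:

```python
import numpy as np

def movement_info(tip, instrument_dir, entry_point, channel_dir):
    """Movement information: the translation that brings the instrument
    tip onto the entry point (unit direction + distance) and the
    axis-angle rotation (degrees) aligning the instrument axis with the
    target channel."""
    instrument_dir = instrument_dir / np.linalg.norm(instrument_dir)
    channel_dir = channel_dir / np.linalg.norm(channel_dir)
    delta = entry_point - tip
    distance = np.linalg.norm(delta)
    direction = delta / distance if distance > 0 else np.zeros(3)
    axis = np.cross(instrument_dir, channel_dir)  # zero if already aligned
    cos_a = np.clip(instrument_dir @ channel_dir, -1.0, 1.0)
    return {"direction": direction, "distance": distance,
            "axis": axis, "angle": np.degrees(np.arccos(cos_a))}

# Illustration only: tip at the origin pointing along +x; entry point at
# (3, 0, 4); channel pointing along +y.
info = movement_info(tip=np.zeros(3),
                     instrument_dir=np.array([1.0, 0.0, 0.0]),
                     entry_point=np.array([3.0, 0.0, 4.0]),
                     channel_dir=np.array([0.0, 1.0, 0.0]))
```

Recomputing this dictionary on every tracker update gives exactly the kind of real-time guidance the step describes.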
The embodiments of the present disclosure optically match the instrument three-dimensional model with the instrument to be moved, optically match the object three-dimensional model with the object entity, and determine a target channel in the object three-dimensional model in response to a channel planning operation. The intersection point of the target channel and the surface of the object three-dimensional model is taken as the target position of the instrument three-dimensional model, and the orientation of the target channel is taken as its target orientation. A distance deviation and an orientation deviation are then determined from the target position, the target orientation, and the current pose of the instrument three-dimensional model, and the movement information of the instrument to be moved is determined from these deviations. In this way, the position deviation and angle deviation of the doctor's operation can be prompted intuitively; before the operation, the doctor can fully understand the anatomy of the affected part and accurately locate the structure and position of the lesion, and a less experienced doctor can practice the procedure in advance, which reduces the surgical risk.
Fig. 6 is a schematic structural diagram of a device for assisting in moving an instrument according to an embodiment of the present disclosure. The device may be the terminal described in the above embodiments, or a component or assembly within that terminal, and it can perform the processing procedure provided in the embodiments of the method for assisting in moving an instrument. As shown in fig. 6, the device 60 includes a matching module 61, a response module 62, and a first determining module 63. The matching module 61 is used to optically match the instrument three-dimensional model with the instrument to be moved, and further to optically match the object three-dimensional model with the object entity, where the object entity is provided with a second optical rigid body and the object three-dimensional model is generated based on the object entity. The response module 62 is configured to determine a target channel in the three-dimensional model of the object in response to a channel planning operation. The first determining module 63 is configured to determine the movement information of the instrument to be moved based on the target channel and the current pose of the instrument three-dimensional model, so that after the instrument is moved, the pose of the instrument three-dimensional model matches the target channel.
Optionally, the device further includes an establishing module 64, a second determining module 65, and a conversion module 66. The establishing module 64 is configured to establish a world coordinate system based on a preset landmark scale rigid body, where the landmark scale rigid body includes at least three optical reflection points. The second determining module 65 is configured to optically locate the at least three optical reflection points of the landmark scale rigid body and determine the conversion relationship between the three-dimensional model coordinate system and the world coordinate system. The conversion module 66 is configured to optically locate the first optical rigid body and the second optical rigid body and convert them into the three-dimensional model coordinate system.
Optionally, when determining the target channel in the three-dimensional object model in response to the channel planning operation, the response module 62 is specifically configured to: determine a first channel in a first x-ray image; determine a second channel in a second x-ray image; and determine the target channel from the first channel and the second channel based on the conversion relationship between the x-ray image coordinate system and the three-dimensional model coordinate system, where the first x-ray image and the second x-ray image are any two of the plurality of x-ray images used to construct the three-dimensional model of the object.
Optionally, when determining the target channel in the three-dimensional object model in response to the channel planning operation, the response module 62 is specifically configured to: determine a first channel at a first viewing angle of the three-dimensional model of the object; determine a second channel at a second viewing angle of the three-dimensional model of the object; and determine the target channel based on the first channel and the second channel.
Optionally, when determining the movement information of the instrument to be moved based on the target channel and the current pose of the instrument three-dimensional model, the first determining module 63 is specifically configured to: determine a distance deviation and an orientation deviation based on the pose of the target channel in the three-dimensional model coordinate system and the current pose of the instrument three-dimensional model; and determine the movement information of the instrument to be moved based on the distance deviation and the orientation deviation.
Optionally, when determining the movement information of the instrument to be moved based on the target channel and the current pose of the instrument three-dimensional model, the first determining module 63 is specifically configured to: take the intersection point of the target channel and the surface of the object three-dimensional model as the target position of the instrument three-dimensional model; take the orientation of the target channel as the target orientation of the instrument three-dimensional model; and determine the movement information of the instrument to be moved based on the target position, the target orientation, and the current pose of the instrument three-dimensional model, so that after the instrument is moved, the pose of the instrument three-dimensional model conforms to the target position and the target orientation.
Optionally, when determining the movement information of the instrument to be moved based on the target position, the target orientation, and the current pose of the instrument three-dimensional model, the first determining module 63 is specifically configured to: determine a distance deviation and an orientation deviation based on the target position, the target orientation, and the current pose of the instrument three-dimensional model; and determine the movement information of the instrument to be moved based on the distance deviation and the orientation deviation.
The device for assisting in moving an instrument according to the embodiment shown in fig. 6 can be used for implementing the technical solutions of the above method embodiments, and the implementation principle and technical effects are similar, and are not described herein again.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device may be the terminal described in the above embodiments, and it can execute the processing procedure provided in the embodiments of the method for assisting in moving an instrument. As shown in fig. 7, the electronic device 70 includes a memory 71, a processor 72, a computer program, and a communication interface 73, where the computer program is stored in the memory 71 and is configured to be executed by the processor 72 to implement the method for assisting in moving an instrument described above.
In addition, the embodiment of the disclosure also provides a computer readable storage medium, on which a computer program is stored, the computer program being executed by a processor to implement the method for assisting in moving an instrument according to the above embodiment.
Furthermore, the embodiments of the present disclosure also provide a computer program product comprising a computer program or instructions which, when executed by a processor, implement the method of assisting movement of an instrument as described above.
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Also, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present disclosure, which enable those skilled in the art to understand or practice the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. A method for assisting in moving an instrument, wherein the instrument to be moved is an instrument provided with a first optical rigid body, and a three-dimensional model of the instrument to be moved is pre-constructed, the method comprising:
optically matching the instrument three-dimensional model with the instrument to be moved;
optically matching an object three-dimensional model with an object entity, wherein the object entity is an object entity provided with a second optical rigid body, and the object three-dimensional model is a three-dimensional model generated based on the object entity;
determining a target channel in the three-dimensional model of the object in response to a channel planning operation;
and determining the movement information of the instrument to be moved based on the target channel and the current pose of the instrument three-dimensional model, so that after the instrument to be moved is moved, the pose of the instrument three-dimensional model matches the target channel.
2. The method of claim 1, wherein prior to optically matching the instrument three-dimensional model to the instrument to be moved, the method further comprises:
establishing a world coordinate system based on a preset landmark scale rigid body, wherein the landmark scale rigid body comprises at least three optical reflection points;
optically positioning the at least three optical reflection points of the landmark scale rigid body, and determining a conversion relationship between a three-dimensional model coordinate system and the world coordinate system;
and optically positioning the first optical rigid body and the second optical rigid body, and converting them into the three-dimensional model coordinate system.
3. The method of claim 1, wherein the determining a target channel in the three-dimensional model of the object in response to a channel planning operation comprises:
determining a first channel in a first x-ray image;
determining a second channel in a second x-ray image;
determining the target channel from the first channel and the second channel based on a conversion relationship between an x-ray image coordinate system and a three-dimensional model coordinate system;
wherein the first x-ray image and the second x-ray image are any two of a plurality of x-ray images used to construct the three-dimensional model of the object.
4. The method of claim 1, wherein the determining a target channel in the three-dimensional model of the object in response to a channel planning operation comprises:
determining a first channel at a first perspective of the three-dimensional model of the object;
determining a second channel at a second perspective of the three-dimensional model of the object;
determining the target channel based on the first channel and the second channel.
5. The method of claim 3, wherein the determining movement information of the instrument to be moved based on the target channel and the current pose of the three-dimensional model of the instrument comprises:
determining a distance deviation and an orientation deviation based on the pose of the target channel in the three-dimensional model coordinate system and the current pose of the instrument three-dimensional model;
and determining the movement information of the instrument to be moved based on the distance deviation and the orientation deviation.
6. The method of claim 4, wherein the determining movement information of the instrument to be moved based on the target channel and the current pose of the three-dimensional model of the instrument comprises:
taking the intersection point of the target channel and the surface of the object three-dimensional model as the target position of the instrument three-dimensional model;
taking the orientation of the target channel as the target orientation of the instrument three-dimensional model;
and determining the movement information of the instrument to be moved based on the target position, the target orientation, and the current pose of the instrument three-dimensional model, so that after the instrument to be moved is moved, the pose of the instrument three-dimensional model conforms to the target position and the target orientation.
7. The method of claim 6, wherein the determining movement information of the instrument to be moved based on the target position, the target orientation, and the current pose of the three-dimensional model of the instrument comprises:
determining a distance deviation and an orientation deviation based on the target position, the target orientation, and the current pose of the instrument three-dimensional model;
and determining the movement information of the instrument to be moved based on the distance deviation and the orientation deviation.
8. An apparatus for assisting in moving an implement, the apparatus comprising:
the matching module is used for optically matching the instrument three-dimensional model with the instrument to be moved;
the matching module is further configured to: optically matching an object three-dimensional model with an object entity, wherein the object entity is an object entity provided with a second optical rigid body, and the object three-dimensional model is a three-dimensional model generated based on the object entity;
a response module for determining a target channel in the three-dimensional model of the object in response to a channel planning operation;
and the determining module is used for determining the movement information of the instrument to be moved based on the target channel and the current pose of the instrument three-dimensional model, so that after the instrument to be moved is moved, the pose of the instrument three-dimensional model matches the target channel.
9. An electronic device, comprising:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any one of claims 1-7.
10. A storage medium on which a computer program is stored, which computer program, when being executed by a processor, carries out the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111478462.0A CN114159157A (en) | 2021-12-06 | 2021-12-06 | Method, device and equipment for assisting moving of instrument and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114159157A true CN114159157A (en) | 2022-03-11 |
Family
ID=80483617
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111478462.0A Pending CN114159157A (en) | 2021-12-06 | 2021-12-06 | Method, device and equipment for assisting moving of instrument and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114159157A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102961187A (en) * | 2012-10-26 | 2013-03-13 | 深圳市旭东数字医学影像技术有限公司 | Surgical planning method and system for percutaneous puncture |
CN110025378A (en) * | 2018-01-12 | 2019-07-19 | 中国科学院沈阳自动化研究所 | A kind of operation auxiliary navigation method based on optical alignment method |
CN110037768A (en) * | 2019-04-23 | 2019-07-23 | 雅客智慧(北京)科技有限公司 | Joint replacement surgery assisted location method, positioning device and system |
CN111388087A (en) * | 2020-04-26 | 2020-07-10 | 深圳市鑫君特智能医疗器械有限公司 | Surgical navigation system, computer and storage medium for performing surgical navigation method |
CN112155727A (en) * | 2020-08-31 | 2021-01-01 | 上海市第一人民医院 | Surgical navigation systems, methods, devices, and media based on three-dimensional models |
CN113648057A (en) * | 2021-08-18 | 2021-11-16 | 上海电气集团股份有限公司 | Surgical navigation system and surgical navigation method |
2021-12-06: Application CN202111478462.0A filed in China; published as CN114159157A; status Pending.
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20220311 |