WO2022206435A1 - A surgical navigation system, method, electronic device and readable storage medium - Google Patents
A surgical navigation system, method, electronic device and readable storage medium
- Publication number
- WO2022206435A1 (PCT/CN2022/081728; priority application CN2022081728W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- surgical
- image
- recognition
- navigation system
- surgical navigation
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2055—Optical tracking systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2065—Tracking using image or pattern recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2068—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis using pointers, e.g. pointers having reference marks for determining coordinates of body points
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Definitions
- the present application relates to the medical field, and in particular, to a surgical navigation system, method, electronic device and readable storage medium.
- the surgical navigation system accurately corresponds the preoperative or intraoperative image data of the patient with the patient's anatomical structure on the operating bed, tracks the surgical instruments during the operation, and updates and displays the position of the surgical instruments on the patient image in real time in the form of a virtual probe, allowing doctors to see at a glance the position of surgical instruments relative to the patient's anatomy and making surgery faster, more precise, and safer.
- augmented reality equipment can significantly improve the work efficiency of the wearer; it mainly uses gestures, voice and other methods to achieve human-computer interaction.
- however, when augmented reality equipment that uses gestures, voice and other methods for human-computer interaction is applied to a surgical navigation system, it has disadvantages: if gestures are used to realize the human-computer interaction, the system may misjudge because the doctor's gloves are contaminated with blood or multiple hands appear in the camera's field of view; if voice is used, the necessary communication during the operation may cause false triggering.
- the present application provides a surgical navigation system, method, electronic device and readable storage medium.
- a first aspect of the present application provides a surgical navigation system, including:
- an image acquisition module used to acquire an image of a surgical scene
- an image recognition module configured to perform image recognition on the surgical scene image to obtain a first recognition result, where the first recognition result is used to represent the identifier contained in the surgical scene image;
- an instruction acquisition module configured to acquire a corresponding interactive instruction according to the first identification result
- the instruction execution module is configured to control the surgical navigation system to execute corresponding surgical navigation steps according to the interactive instruction.
- when the image recognition module performs image recognition on the surgical scene image to obtain the first recognition result, it is specifically configured to:
- the first recognition result is determined according to the similarity between the image feature of the surgical scene image and the image feature of the marker.
- when the instruction acquisition module acquires the corresponding interaction instruction according to the first recognition result, it is specifically configured to:
- Corresponding interactive instructions are acquired according to the first identification result and the surgical navigation stage.
- a second aspect of the present application provides an information interaction method for a surgical navigation system, including:
- performing image recognition on the surgical scene image to obtain a first recognition result including:
- a first recognition result is obtained according to the image feature of the surgical scene image and the image feature of the marker.
- obtaining the corresponding interaction instruction according to the first identification result includes:
- Corresponding interaction instructions are acquired according to the first identification result and the operation stage information.
- a third aspect of the present application provides an electronic device including a memory and a processor, wherein the memory is used to store computer instructions, and the computer instructions are executed by the processor to implement the method described in any one of the second aspect of the present application.
- a fourth aspect of the present application provides a readable storage medium having computer instructions stored thereon, characterized in that, when the computer instructions are executed by a processor, the method described in any one of the second aspect of the present application is implemented.
- the technical solutions in the embodiments of the present application can automatically identify the markers contained in surgical scene images, obtain the corresponding interactive instructions according to those markers, and then control the surgical navigation system to perform the corresponding surgical navigation steps according to the interactive instructions; the operator can thus control the surgical navigation system to perform the corresponding surgical navigation steps by photographing the surgical scene with markers, without using voice, gestures, etc.
- implementing the technical solution of the present application can reduce the probability of misjudgment when the surgical navigation system is controlled.
- FIG. 1 is a structural block diagram of a surgical navigation system disclosed in an embodiment of the present application.
- FIG. 2 is a schematic diagram of a surgical scene disclosed in an embodiment of the present application.
- FIG. 3 is a schematic diagram of another surgical scene disclosed in an embodiment of the present application.
- FIG. 4 is a schematic diagram of another operation scene disclosed in the embodiment of the present application.
- FIG. 5 is a schematic diagram of another operation scene disclosed in an embodiment of the present application.
- FIG. 6 is a schematic diagram of another operation scene disclosed in an embodiment of the present application.
- FIG. 7 is a schematic diagram of another operation scene disclosed in an embodiment of the present application.
- FIG. 8 is a flowchart of an information interaction method of a surgical navigation system disclosed in an embodiment of the present application.
- FIG. 9 is a structural block diagram of another electronic device disclosed in an embodiment of the present application.
- FIG. 10 is a schematic structural diagram of a computer system disclosed in an embodiment of the present application.
- referring to FIG. 1, the surgical navigation system includes:
- an image acquisition module 101 configured to acquire an image of a surgical scene
- the image recognition module 102 is configured to perform image recognition on the surgical scene image to obtain a first recognition result, where the first recognition result is used to represent the identifier contained in the surgical scene image;
- an instruction acquisition module 103 configured to acquire a corresponding interactive instruction according to the first identification result
- the instruction execution module 104 is configured to control the surgical navigation system to execute corresponding surgical navigation steps according to the interactive instructions.
- the surgical navigation system in the embodiment of the present application can automatically identify the markers contained in the surgical scene image captured by the camera, obtain corresponding interactive instructions according to those markers, and then control the surgical navigation system according to the interactive instructions to perform the appropriate surgical navigation steps. The operator can thus control the surgical navigation system to perform the corresponding surgical navigation steps by photographing the surgical scene with markers, without using voice, gestures, etc.; implementing the technical solution of the present application can therefore reduce the chance of misjudgment when the surgical navigation system is controlled. At the same time, operating the surgical navigation system becomes more convenient, which reduces the interference with the operator's normal work when operating the surgical system.
- the markers in the embodiments of the present application may have at least one of specific optical features, pattern features, and geometric features, so that the images obtained by photographing the markers have specific image features.
- the markers may be an information board, a plane positioning board, a two-dimensional code, etc.
- the surgical navigation system in this embodiment of the present application triggers the surgical navigation system to perform a surgical navigation step corresponding to the specific identifier by identifying the specific marker.
- the identifier includes a plane positioning plate set on the operating table. When the plane positioning plate is recognized, an interactive instruction of "trigger surgical area initialization" is obtained, and the surgical navigation step of "surgical area initialization" is performed according to the interactive instruction.
- the identifier includes a puncture handle, and when the puncture handle is recognized, an interactive instruction of "trigger puncture navigation" is obtained, and the surgical navigation step of "puncture navigation" is performed according to the interactive instruction.
- the identifier includes a two-dimensional code on the operating table. When the two-dimensional code on the operating table is recognized, an interactive instruction of "trigger the surgical navigation system to enter calibration" is obtained, and the surgical navigation step of "surgical navigation system calibration" is executed according to the interactive instruction.
- the surgical navigation step may include the step of selecting a surgical instrument model.
- the system pre-stores a surgical instrument model library, and the surgical instrument model library includes surgical instrument models of different types and specifications.
- the operator can point the camera of the head-mounted device at the marker set on the target surgical instrument (such as a QR code set on the surgical instrument) to select the surgical instrument model, so that the surgical instrument model in the navigation system is consistent with the instrument actually used in the surgery, and then proceed to the next registration step.
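The model-selection step above can be pictured as a lookup from the decoded instrument marker into the pre-stored model library. This is a minimal sketch: the library entries, key strings, and model attributes below are invented for illustration and are not values from the patent.

```python
# Hypothetical pre-stored surgical instrument model library, keyed by the
# payload decoded from the QR code on the instrument (all entries are
# illustrative assumptions).
INSTRUMENT_MODEL_LIBRARY = {
    "needle-18g": {"type": "puncture_needle", "length_mm": 150},
    "needle-22g": {"type": "puncture_needle", "length_mm": 90},
}

def select_instrument_model(decoded_qr_payload):
    """Pick the navigation-system model matching the real instrument."""
    model = INSTRUMENT_MODEL_LIBRARY.get(decoded_qr_payload)
    if model is None:
        raise KeyError(f"unknown instrument marker: {decoded_qr_payload}")
    return model
```

Keying the library on the decoded marker payload keeps the navigation model and the physical instrument consistent without any manual menu interaction.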
- the surgical navigation step may include the step of selecting the surgical navigation process. For example, multiple identifiers are set in the scene: a first identifier (an information board) is set on the operating table, and a second identifier (a two-dimensional code) is set elsewhere in the scene.
- when the camera of the doctor's head-mounted device is facing the first identification object, the system enters the first stage (e.g., the registration stage); when the camera is facing the second identification object, it enters another stage (e.g., the guided puncture stage).
- This embodiment does not limit this.
- the identifier in the embodiment of the present application is preferably an identifiable pattern integrated with a disposable surgical instrument, such as a two-dimensional code provided on the puncture needle, so that the interactive identifier can meet one of two conditions: tolerating repeated sterilization, or single-use sterility.
- the image recognition module in this embodiment of the present application may use existing image recognition algorithms to perform image recognition, such as a blob detection algorithm, a corner detection algorithm, and the like.
- a corresponding appropriate algorithm can be selected according to the form of the marker. For example, when the marker is a two-dimensional code set on an operating table or a surgical instrument, a corresponding two-dimensional code identification algorithm can be directly used.
- when the image recognition module performs image recognition on the surgical scene image to obtain the first recognition result, it is specifically configured to:
- the first recognition result is determined according to the similarity between the image feature of the surgical scene image and the image feature of the marker.
- the image recognition module is preset with a similarity threshold, and when the similarity between the image features of the surgical scene image and the image features of the marker is greater than the similarity threshold, it is determined that the surgical scene image contains the corresponding marker.
- the image features include one or more of color features, texture features, shape features, and spatial relationship features.
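As a rough sketch of the similarity test described above, the snippet below compares a feature vector extracted from the scene against stored marker features and applies a preset similarity threshold. The feature vectors and the 0.9 threshold are assumptions for illustration; the patent does not fix the feature type, the similarity measure, or the threshold value.

```python
import math

SIMILARITY_THRESHOLD = 0.9  # assumed value; the patent leaves it unspecified

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recognize_marker(scene_features, marker_library):
    """Return the best-matching marker name (first recognition result), or None."""
    best_name, best_sim = None, 0.0
    for name, marker_features in marker_library.items():
        sim = cosine_similarity(scene_features, marker_features)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim > SIMILARITY_THRESHOLD else None
```

A real system would extract the color/texture/shape/spatial features listed above; here plain numeric vectors stand in for them.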
- when the instruction acquisition module acquires the corresponding interactive instruction according to the first recognition result, it is specifically configured to:
- corresponding interactive instructions are obtained according to the first recognition result and the surgical navigation stage, so that the pattern of the same marker can correspond to different interactive instructions in different surgical navigation stages, reducing the number of markers that must be set. That is, when performing image recognition on the surgical scene image, if the same marker pattern is recognized but the surgical navigation system is in a different surgical navigation stage, the corresponding interactive instruction is also different.
- when the surgical navigation system is not in the navigation stage, if a QR code is recognized in the image of the surgical scene captured by the camera, an interactive instruction of "trigger the surgical navigation system to enter registration" is generated, and the registration scene is triggered.
- when the surgical navigation system is in the registration stage, if a QR code is recognized in the surgical scene image captured by the camera, an interactive instruction of "re-registration" is generated; when an accident occurs during the registration process and registration needs to be restarted, it is only necessary to let the camera recognize the QR code at this position again, and the QR code resets the entire process.
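The stage-dependent behaviour can be pictured as a lookup keyed by the pair (recognized marker, surgical navigation stage). The stage names and instruction strings below are invented for illustration, loosely following the QR-code example above.

```python
# Hypothetical (marker, stage) -> interactive instruction table; the same
# marker yields different instructions in different navigation stages.
INSTRUCTION_TABLE = {
    ("qr_code", "idle"): "enter_registration",
    ("qr_code", "registration"): "re_register",
    ("puncture_handle", "navigation"): "puncture_navigation",
    ("positioning_plate", "idle"): "surgical_area_initialization",
}

def get_instruction(first_recognition_result, stage):
    """Return the interactive instruction for a marker in a given stage."""
    return INSTRUCTION_TABLE.get((first_recognition_result, stage))
```

Because the stage is part of the key, one physical marker pattern can drive several distinct navigation steps, which is exactly why fewer markers need to be placed in the scene.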
- the instruction acquisition module is used to acquire the corresponding interactive instruction according to the first recognition result, and is specifically used for:
- the second recognition result is used to represent the relative position of the marker in the preset space, or the relative distance between the marker and the preset target;
- the corresponding interaction instruction is acquired.
- the preset space can be set according to specific application requirements; for example, the preset space can be set as the space corresponding to the surgical scene image.
- the preset target can be set according to specific application requirements; for example, the preset target can be set as a registration point, the patient, etc.
- different interactive instructions can be generated based on the position of the marker.
- for example, the scope of the registration reset can depend on the position of the marker: if the marker is placed next to the patient, the entire process is reset; if the marker is placed near a certain registration point, only the registration data for that position is reset.
- take the marker being a two-dimensional code as an example; see Figures 3 and 4.
- the difference between Figures 3 and 4 is that the same marker is in different positions: the marker 4 in Figure 3 is located next to the patient, while the marker 4 in Figure 4 is located next to the registration point. The area in the box in Figure 3 is the photographed surgical scene image; the first recognition result obtained by performing image recognition on it is that the marker is included in the surgical scene image, and the second recognition result is that the relative distance between the two-dimensional code and the patient (specifically, the patient's head) is less than the first preset distance threshold. From the first recognition result and the second recognition result, an interactive instruction of "trigger reset of the entire process" is generated.
- the marker in Figure 4 is moved to the vicinity of the registration point, and the area in the box in Figure 4 is the captured surgical scene image; the first recognition result obtained by performing image recognition on it is that the marker is included in the surgical scene image, and the second recognition result is that the relative distance between the two-dimensional code and the registration point is less than the second preset distance threshold. From the first recognition result and the second recognition result, an interactive instruction of "trigger reset of the registration data for the current position only" is generated.
- the first preset distance threshold and the second preset distance threshold may be set to 90% or the like.
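A minimal sketch of the position-based branching above: the generated instruction depends on whether the marker is close to a registration point or close to the patient. The 2D coordinates and the concrete threshold values are illustrative assumptions; the patent only speaks of preset distance thresholds.

```python
import math

def position_instruction(marker_pos, patient_pos, reg_point_pos,
                         patient_threshold=0.15, reg_threshold=0.05):
    """Second-recognition-result branching by marker position.

    Positions are (x, y) tuples; thresholds are assumed values.
    """
    # Closer to a registration point: reset only that point's data.
    if math.dist(marker_pos, reg_point_pos) < reg_threshold:
        return "reset_current_registration_point"
    # Next to the patient: reset the entire registration process.
    if math.dist(marker_pos, patient_pos) < patient_threshold:
        return "reset_all_registration"
    return None
```

The two thresholds play the role of the first and second preset distance thresholds in the text; checking the registration point first gives the more specific instruction priority.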
- the relative distance between the marker and the preset target may be the relative distance between the extension line of the marker and the preset target, for example, the relative distance between the extension line of the puncture needle and the ribs. When the relative distance between the extension line of the puncture needle and a rib is less than a set value, it means that the extension line of the puncture needle risks touching the rib; a corresponding interactive instruction of "trigger prompt information" is then obtained to give a prompt. The set value can be 0.
- when the instruction acquisition module acquires the corresponding interactive instruction according to the first recognition result, it is specifically configured to:
- the corresponding interaction instruction is acquired.
- the orientation and/or angle of the marker can be identified by using an existing related algorithm; since the marker has corresponding characteristics, its orientation and/or angle can be obtained after the marker is recognized in the image.
- the operator can trigger the corresponding interactive command by adjusting the orientation and/or angle of the marker to improve the convenience of control.
- referring to Figure 5, when the direction of the puncture needle 6 correctly points to the target site 7, the first recognition result of the surgical navigation system is that the surgical scene image contains the puncture needle, and the third recognition result is that the direction of the puncture needle correctly points to the target site; an interactive instruction of "trigger distance measurement, display the distance between the tip of the puncture needle and the target site" is generated.
- referring to Figure 6, when the direction of the puncture needle 6 deviates from the target site 7, the first recognition result of the surgical navigation system is that the surgical scene image contains the puncture needle, and the third recognition result is that the direction of the puncture needle deviates from the target site; an interactive instruction of "trigger angle measurement and display prompt information" is generated.
- when the operator finds that there is an error in the registration of the surgical navigation system and re-registration is required, the operator can perform a specific action on the identification object, such as moving the position of the identification object or changing the pose (orientation and/or angle) of the identification object; based on this change, the system enters the re-registration process.
- when the instruction acquisition module acquires the corresponding interactive instruction according to the first recognition result, it is specifically configured to:
- the corresponding interaction instruction is acquired.
- the operator can control the surgical navigation system by blocking the marker, which improves the convenience of control.
- when the operator's hand partially blocks the QR code on the surgical instrument, it means that the final purpose of the puncture operation (liquid injection or instrument implantation) is in progress or has been completed; at this time, the operation end procedure of the surgical navigation system needs to be triggered. Referring to FIG. 7, taking the marker as a two-dimensional code provided on the puncture needle 6 as an example, the two-dimensional code on the puncture needle 6 in the figure is partially blocked by the operator's hand. The first recognition result of the surgical navigation system is that the QR code is contained in the surgical scene image, and the fourth recognition result is that the QR code is partially occluded; an interactive instruction of "trigger the operation end process of the surgical navigation system" is generated, and the corresponding surgical navigation process is executed. Specifically, when more than a preset proportion of the marker is blocked, the marker is considered partially blocked.
- the preset scale value can be set to 10% etc.
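A sketch of the occlusion test: if more than the preset proportion of the marker is blocked, the surgery-end process is triggered. The 10% value follows the example above; how the visible fraction of the marker is actually measured is left abstract here.

```python
def occlusion_instruction(visible_fraction, occlusion_threshold=0.10):
    """Fourth-recognition-result branching by marker occlusion.

    visible_fraction: fraction of the marker area still visible (0.0-1.0).
    occlusion_threshold: preset proportion (10% per the example above).
    """
    occluded = 1.0 - visible_fraction
    if occluded > occlusion_threshold:
        # Marker partially blocked (e.g. by the operator's hand):
        # trigger the operation end process.
        return "trigger_surgery_end_process"
    return None
```

In practice the visible fraction might come from counting detected fiducial modules of the QR code, but that detail is an implementation choice outside the patent text.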
- when the instruction acquisition module acquires corresponding interactive instructions according to the first recognition result, it is specifically configured to:
- the fifth recognition result obtained by performing image recognition on the surgical scene image is used to represent the absolute motion track or relative motion track of the marker, wherein the absolute motion track is the motion track of the marker relative to a stationary object, and the relative motion track is the motion track of the marker relative to a set person;
- the corresponding interaction instruction is acquired.
- the motion trajectory may be an absolute motion trajectory or a relative motion trajectory, wherein the absolute motion trajectory is the motion trajectory relative to a stationary object, for example, the ground or the operating table, and the relative motion trajectory is the motion trajectory relative to a set person, such as the operator.
- the operator can perform a specific action on the identification object, such as moving the position of the identification object or changing its shape; according to this change process, the system enters the re-registration process.
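The absolute/relative trajectory distinction can be sketched by expressing the marker's track relative to a chosen reference (a stationary object for the absolute track, the operator for the relative track) and checking the net displacement. The tracks, reference choices, and the displacement threshold are illustrative assumptions.

```python
import math

def relative_track(marker_track, reference_track):
    """Express the marker's track relative to a reference track.

    Against a stationary reference this yields the absolute motion track;
    against the operator's track it yields the relative motion track.
    """
    return [(mx - rx, my - ry)
            for (mx, my), (rx, ry) in zip(marker_track, reference_track)]

def track_instruction(track, min_displacement=0.1):
    """Fifth-recognition-result branching: large net motion triggers re-registration."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    if math.hypot(x1 - x0, y1 - y0) >= min_displacement:
        return "re_register"
    return None
```

A marker carried along with the operator shows no relative motion, while the same marker moved over a stationary operating table shows a large absolute displacement, so the two reference choices trigger different instructions.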
- when the instruction acquisition module acquires the corresponding interactive instruction according to the first recognition result, it is specifically configured to:
- Corresponding interaction instructions are acquired according to at least two of the surgical navigation stage, the second identification result, the third identification result, the fourth identification result, and the fifth identification result, and the first identification result.
- the corresponding interaction instruction is acquired according to at least three of the surgical navigation stage, the second identification result, the third identification result, the fourth identification result, and the fifth identification result, and the first identification result.
- the corresponding interactive instructions are acquired according to the surgical navigation stage, the first identification result, the second identification result, the third identification result, the fourth identification result and the fifth identification result.
- the current process is decided according to the different needs for navigation information.
- Multiple identification objects can be set in the operation scene.
- when the camera is facing the first identification object, it means that the operation is in the preparation and registration stages.
- when the camera is facing the second identification object, it means that the puncture needle is already entering the human body.
- once the puncture needle enters the human body, the doctor needs to concentrate; at this time, the system avoids displaying too much interfering information and provides only the most critical information.
- the surgical navigation system includes a navigation information display module, which superimposes, displays, or hides the corresponding surgical navigation information at the corresponding position of the real scene in an augmented reality manner; for example, in response to a "trigger hiding of the rib pattern" instruction, it displays the navigation information with the rib pattern hidden.
- when the surgical navigation system of the present application is in different surgical navigation stages, recognizing the same marker can trigger different surgical navigation steps. For example, during human-body registration, recognizing the plane positioning plate again resets the current registration process. If the puncture needle is recognized during registration, it is defined as an identification needle and used to determine the positions of marker points on the body surface; if it is recognized during puncture, the puncture navigation task is performed.
- the surgical navigation system of the present application can recognize different degrees of occlusion of the same marker and trigger different surgical navigation steps accordingly. For example, during puncture navigation, when the puncture needle is partially blocked by the thumb for a certain period of time, the system concludes that the instrument inside the puncture needle has been released; the previous needle-tip position is then recorded as the instrument release point in the surgical record, for subsequent surgical analysis.
- the surgical navigation system of the present application can recognize the relative position of the same marker in a preset space, or its relative distance from a preset target, and trigger different surgical navigation steps accordingly. For example, during registration, if the identification plate is placed near a registration point whose position has already been recorded, only the position information of that point is reset, improving registration efficiency.
- the information interaction method of the surgical navigation system includes:
- the information interaction method of the surgical navigation system in the embodiment of the present application can automatically identify the markers included in the surgical scene image captured by the camera, and obtain corresponding interaction instructions according to the markers included in the surgical scene image.
- a surgical navigation system implementing the information interaction method of this embodiment can obtain corresponding interactive instructions from surgical scene images of markers captured by the operator; these instructions control the system to perform the corresponding surgical navigation steps without relying on voice, gestures, or the like, so that implementing the technical solution of the present application reduces the probability of misjudgment when the surgical navigation system is controlled.
- the operator can photograph the surgical scene with the camera of a worn head-mounted device to acquire the surgical scene image.
- as an optional implementation of step S802, performing image recognition on the surgical scene image to obtain the first recognition result includes: extracting image features of the surgical scene image;
- determining the first recognition result according to the similarity between the image features of the surgical scene image and the image features of the marker.
- as an optional implementation of step S803, acquiring the corresponding interactive instruction according to the first recognition result includes: acquiring the surgical navigation stage of the surgical navigation system;
- acquiring the corresponding interactive instruction according to the first recognition result and the surgical navigation stage.
- as an optional implementation of step S803, acquiring the corresponding interactive instruction according to the first recognition result includes: acquiring a second recognition result obtained by performing image recognition on the surgical scene image;
- the second recognition result represents the relative position of the marker in a preset space, or the relative distance between the marker and a preset target;
- the corresponding interactive instruction is acquired according to the first recognition result and the second recognition result.
- as an optional implementation of step S803, acquiring the corresponding interactive instruction according to the first recognition result includes: acquiring a third recognition result obtained by performing image recognition on the surgical scene image, the third recognition result representing the orientation and/or angle of the marker;
- the corresponding interactive instruction is acquired according to the first recognition result and the third recognition result.
- as an optional implementation of step S803, acquiring the corresponding interactive instruction includes: acquiring a fourth recognition result obtained by performing image recognition on the surgical scene image, the fourth recognition result representing the degree to which the marker is occluded;
- the corresponding interactive instruction is acquired according to the first recognition result and the fourth recognition result.
- as an optional implementation of step S803, acquiring the corresponding interactive instruction according to the first recognition result includes: acquiring a fifth recognition result obtained by performing image recognition on the surgical scene image;
- the fifth recognition result represents the absolute motion trajectory or relative motion trajectory of the marker, where the absolute motion trajectory is the trajectory of the marker relative to a stationary object, and the relative motion trajectory is the trajectory of the marker relative to a designated person;
- the corresponding interactive instruction is acquired according to the first recognition result and the fifth recognition result.
- an electronic device 900 includes a memory 901 and a processor 902.
- the memory 901 is used for storing computer instructions, and the computer instructions are executed by the processor 902 to implement any information interaction method in the embodiments of this application.
- the present application also provides a readable storage medium on which computer instructions are stored, and when the computer instructions are executed by a processor, implement any of the information interaction methods in the embodiments of the present application.
- FIG. 10 is a schematic structural diagram of a computer system suitable for implementing the method of an embodiment of the present application.
- the computer system includes a processing unit 1001, which can execute the various processes of the embodiments shown in the figures above according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage section 1008 into a random access memory (RAM) 1003.
- the RAM 1003 also stores the various programs and data necessary for system operation.
- the processing unit 1001 , the ROM 1002 and the RAM 1003 are connected to each other through a bus 1004 .
- An input/output (I/O) interface 1005 is also connected to the bus 1004 .
- the following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a cathode ray tube (CRT) or liquid crystal display (LCD) and a speaker; a storage section 1008 including a hard disk; and a communication section 1009 including a network interface card such as a LAN card or a modem.
- the communication section 1009 performs communication processing via a network such as the Internet.
- a drive 1010 is also connected to the I/O interface 1005 as needed.
- a removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1010 as needed, so that a computer program read from it can be installed into the storage section 1008.
- the processing unit 1001 may be implemented as a processing unit such as a CPU, a GPU, a TPU, an FPGA, and an NPU.
- the method described above may be implemented as a computer software program.
- embodiments of the present application include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program containing program code for performing the methods shown in the figures.
- the computer program may be downloaded and installed from the network through the communication section 1009, and/or installed from the removable medium 1011.
- references to the terms "one embodiment/mode", "some embodiments/modes", "example", "specific example", or "some examples" mean that a particular feature, structure, material, or characteristic described in connection with that embodiment/mode or example is included in at least one embodiment/mode or example of the present application.
- schematic representations of the above terms do not necessarily refer to the same embodiment/mode or example.
- the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments/modes or examples.
- provided they do not conflict, those skilled in the art may combine the different embodiments/modes or examples described in this specification and the features thereof.
- the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature.
- "plurality" means at least two, such as two or three, unless expressly and specifically defined otherwise.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Surgery (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Biomedical Technology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Veterinary Medicine (AREA)
- Animal Behavior & Ethology (AREA)
- Molecular Biology (AREA)
- Heart & Thoracic Surgery (AREA)
- Robotics (AREA)
- Public Health (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
This solution provides a surgical navigation system and method, an electronic device, and a readable storage medium. The surgical navigation system includes: an image acquisition module for acquiring a surgical scene image; an image recognition module for performing image recognition on the surgical scene image to obtain a first recognition result, the first recognition result representing the markers contained in the surgical scene image; an instruction acquisition module for acquiring a corresponding interactive instruction according to the first recognition result; and an instruction execution module for controlling the surgical navigation system to perform the corresponding surgical navigation step according to the interactive instruction. Implementing the technical solution of the present application can reduce the probability of misjudgment when the surgical navigation system is controlled.
Description
The present application relates to the medical field, and in particular to a surgical navigation system and method, an electronic device, and a readable storage medium.

A surgical navigation system accurately maps a patient's preoperative or intraoperative image data to the patient's anatomy on the operating table, tracks the surgical instruments during the operation, and updates and displays the instrument positions on the patient images in real time as virtual probes, so that the doctor can see at a glance where the instruments are relative to the patient's anatomy, making surgery faster, more accurate, and safer.

Augmented reality devices can significantly improve the wearer's work efficiency, and they mainly rely on gestures, voice, and similar means for human-computer interaction. When such devices are applied to a surgical navigation system, they have the following shortcomings: if gestures are used for human-computer interaction, the system may misjudge because the doctor's gloves are contaminated with blood or multiple hands appear in the camera's field of view; if voice is used, necessary intraoperative communication may cause false triggering.
Summary of the Invention
To solve at least one of the above technical problems, the present application provides a surgical navigation system and method, an electronic device, and a readable storage medium.

In a first aspect of the present application, a surgical navigation system includes:

an image acquisition module for acquiring a surgical scene image;

an image recognition module for performing image recognition on the surgical scene image to obtain a first recognition result, the first recognition result representing the markers contained in the surgical scene image;

an instruction acquisition module for acquiring a corresponding interactive instruction according to the first recognition result;

an instruction execution module for controlling the surgical navigation system to perform the corresponding surgical navigation step according to the interactive instruction.
Optionally, when the image recognition module performs image recognition on the surgical scene image to obtain the first recognition result, it is specifically configured to:

extract image features of the surgical scene image;

determine the first recognition result according to the similarity between the image features of the surgical scene image and the image features of the markers.

Optionally, when the instruction acquisition module acquires the corresponding interactive instruction according to the first recognition result, it is specifically configured to:

acquire the surgical navigation stage of the surgical navigation system;

acquire the corresponding interactive instruction according to the first recognition result and the surgical navigation stage.

Optionally, when the instruction acquisition module acquires the corresponding interactive instruction according to the first recognition result, it is specifically configured to:

acquire a second recognition result obtained by performing image recognition on the surgical scene image, where the second recognition result represents the relative position of the marker in a preset space, or the relative distance between the marker and a preset target;

acquire the corresponding interactive instruction according to the first recognition result and the second recognition result.

Optionally, when the instruction acquisition module acquires the corresponding interactive instruction according to the first recognition result, it is specifically configured to:

acquire a third recognition result obtained by performing image recognition on the surgical scene image, the third recognition result representing the orientation and/or angle of the marker;

acquire the corresponding interactive instruction according to the first recognition result and the third recognition result.

Optionally, when the instruction acquisition module acquires the corresponding interactive instruction according to the first recognition result, it is specifically configured to:

acquire a fourth recognition result obtained by performing image recognition on the surgical scene image, the fourth recognition result representing the degree to which the marker is occluded;

acquire the corresponding interactive instruction according to the first recognition result and the fourth recognition result.

Optionally, when the instruction acquisition module acquires the corresponding interactive instruction according to the first recognition result, it is specifically configured to:

acquire a fifth recognition result obtained by performing image recognition on the surgical scene image, the fifth recognition result representing the absolute motion trajectory or relative motion trajectory of the marker, where the absolute motion trajectory is the trajectory of the marker relative to a stationary object and the relative motion trajectory is the trajectory of the marker relative to a designated person;

acquire the corresponding interactive instruction according to the first recognition result and the fifth recognition result.
In a second aspect of the present application, an information interaction method for a surgical navigation system includes:

acquiring a surgical scene image;

performing image recognition on the surgical scene image to obtain a first recognition result, the first recognition result representing the markers recognized in the surgical scene image;

acquiring a corresponding interactive instruction according to the first recognition result.

Optionally, performing image recognition on the surgical scene image to obtain the first recognition result includes:

extracting image features of the surgical scene image;

obtaining the first recognition result according to the image features of the surgical scene image and the image features of the markers.

Optionally, acquiring the corresponding interactive instruction according to the first recognition result includes:

acquiring surgical stage information;

acquiring the corresponding interactive instruction according to the first recognition result and the surgical stage information.

Optionally, acquiring the corresponding interactive instruction according to the first recognition result includes:

acquiring a second recognition result obtained by performing image recognition on the surgical scene image, the second recognition result representing the relative position of the marker in a preset space, or the relative distance between the marker and a preset target;

acquiring the corresponding interactive instruction according to the first recognition result and the second recognition result.

Optionally, acquiring the corresponding interactive instruction according to the first recognition result includes:

acquiring a third recognition result obtained by performing image recognition on the surgical scene image, the third recognition result representing the orientation and/or angle of the marker;

acquiring the corresponding interactive instruction according to the first recognition result and the third recognition result.

Optionally, acquiring the corresponding interactive instruction according to the first recognition result includes:

acquiring a fourth recognition result obtained by performing image recognition on the surgical scene image, the fourth recognition result representing the degree to which the marker is occluded;

acquiring the corresponding interactive instruction according to the first recognition result and the fourth recognition result.

Optionally, acquiring the corresponding interactive instruction according to the first recognition result includes:

acquiring a fifth recognition result obtained by performing image recognition on the surgical scene image, the fifth recognition result representing the absolute motion trajectory or relative motion trajectory of the marker, where the absolute motion trajectory is the trajectory of the marker relative to a stationary object and the relative motion trajectory is the trajectory of the marker relative to a designated person;

acquiring the corresponding interactive instruction according to the first recognition result and the fifth recognition result.

In a third aspect of the present application, an electronic device includes a memory and a processor, the memory storing computer instructions that are executed by the processor to implement any of the methods of the second aspect of the present application.

In a fourth aspect of the present application, a readable storage medium has computer instructions stored on it that, when executed by a processor, implement any of the methods of the second aspect of the present application.

Implementing the technical solution of the present application yields the following beneficial technical effects: the technical solution in the embodiments of the present application can automatically recognize the markers contained in a surgical scene image, obtain a corresponding interactive instruction according to those markers, and then control the surgical navigation system to perform the corresponding surgical navigation step according to the interactive instruction, so that the operator can control the surgical navigation system by photographing a surgical scene containing markers, without needing voice, gestures, or similar means of operation. Compared with the prior art, implementing the technical solution of the present application reduces the probability of misjudgment when the surgical navigation system is controlled.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the disclosure and to provide a further understanding of it.
FIG. 1 is a structural block diagram of a surgical navigation system disclosed in an embodiment of the present application;

FIG. 2 is a schematic diagram of a surgical scene disclosed in an embodiment of the present application;

FIG. 3 is a schematic diagram of another surgical scene disclosed in an embodiment of the present application;

FIG. 4 is a schematic diagram of another surgical scene disclosed in an embodiment of the present application;

FIG. 5 is a schematic diagram of another surgical scene disclosed in an embodiment of the present application;

FIG. 6 is a schematic diagram of another surgical scene disclosed in an embodiment of the present application;

FIG. 7 is a schematic diagram of another surgical scene disclosed in an embodiment of the present application;

FIG. 8 is a flowchart of an information interaction method of a surgical navigation system disclosed in an embodiment of the present application;

FIG. 9 is a structural block diagram of an electronic device disclosed in an embodiment of the present application;

FIG. 10 is a schematic structural diagram of a computer system disclosed in an embodiment of the present application.

The present disclosure is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the relevant content and do not limit the disclosure. It should also be noted that, for ease of description, the drawings show only the parts relevant to the disclosure.

It should be noted that, where no conflict arises, the embodiments of the present disclosure and the features in the embodiments may be combined with one another. The disclosure is described in detail below with reference to the drawings and in combination with the embodiments.
Referring to FIG. 1, a surgical navigation system includes:

an image acquisition module 101 for acquiring a surgical scene image;

an image recognition module 102 for performing image recognition on the surgical scene image to obtain a first recognition result, the first recognition result representing the markers contained in the surgical scene image;

an instruction acquisition module 103 for acquiring a corresponding interactive instruction according to the first recognition result;

an instruction execution module 104 for controlling the surgical navigation system to perform the corresponding surgical navigation step according to the interactive instruction.

The surgical navigation system in the embodiments of the present application can automatically recognize the markers contained in a surgical scene image captured by a camera, obtain a corresponding interactive instruction according to those markers, and then control the surgical navigation system to perform the corresponding surgical navigation step according to the instruction. The operator can thus control the system by photographing a surgical scene containing markers, without needing voice, gestures, or similar means of operation, which reduces the probability of misjudgment when the surgical navigation system is controlled. At the same time, operating the surgical navigation system becomes more convenient, reducing the impact on the operator's normal work.

It can be understood that the operator can photograph the surgical scene with the camera of a worn head-mounted device to acquire the surgical scene image. Referring to FIG. 2, operator 1 captures the surgical scene image of box region 3 through the camera of head-mounted device 2.

The markers in the embodiments of the present application may have at least one of specific optical features, pattern features, and geometric features, so that images of the markers have specific image features. For example, a marker may be an information board, a plane positioning plate, a QR code, or the like.

The surgical navigation system in the embodiments of the present application triggers, by recognizing a specific marker, the surgical navigation step corresponding to that marker. For example, if the markers include a plane positioning plate placed on the operating table, recognizing the plate yields a "trigger surgical region initialization" interactive instruction, and the "surgical region initialization" navigation step is performed according to that instruction. If the markers include a puncture handle, recognizing the handle yields a "trigger puncture navigation" instruction, and the "puncture navigation" step is performed. If the markers include a QR code on the operating table, recognizing it yields a "trigger surgical navigation system calibration" instruction, and the "surgical navigation system calibration" step is performed.

The surgical navigation steps may include a step of selecting a surgical instrument model. For example, the system stores a library of surgical instrument models of different types and specifications in advance; the operator points the camera of the head-mounted device at a marker on a surgical instrument (such as a QR code on the instrument), thereby selecting the instrument model so that the model in the navigation system matches the instrument actually used in surgery, after which registration proceeds. The surgical navigation steps may also include a step of selecting the surgical navigation workflow. For example, several markers are placed in the scene: a first marker (an information board) on the operating table and a second marker (a QR code) on a surgical instrument. When the camera of the doctor's head-mounted device faces the first marker, the system enters a first stage (such as registration); when the camera faces the second marker, it enters another stage (such as guided puncture). This embodiment is not limited in this respect.

The markers in the embodiments of the present application are preferably recognizable patterns integrated with disposable surgical instruments, such as a QR code placed on a puncture needle, so that the interaction markers satisfy one of two conditions: repeated sterilization or single sterile use.

The image recognition module in the embodiments of the present application may use existing image recognition algorithms, such as blob detection or corner detection algorithms. Specifically, a suitable algorithm can be chosen according to the form of the marker; for example, when the marker is a QR code on the operating table or on a surgical instrument, a corresponding QR code recognition algorithm can be used directly.
As an optional embodiment of the image recognition module, when the module performs image recognition on the surgical scene image to obtain the first recognition result, it is specifically configured to:

extract image features of the surgical scene image;

determine the first recognition result according to the similarity between the image features of the surgical scene image and the image features of the markers.

Specifically, the image recognition module is preset with a similarity threshold; when the similarity between the image features of the surgical scene image and the image features of a marker exceeds the threshold, the surgical scene image is judged to contain that marker.

The image features include one or more of color features, texture features, shape features, and spatial relationship features.
As an optional embodiment of the instruction acquisition module, when the module acquires the corresponding interactive instruction according to the first recognition result, it is specifically configured to:

acquire the surgical navigation stage of the surgical navigation system;

acquire the corresponding interactive instruction according to the first recognition result and the surgical navigation stage.

In this embodiment, the corresponding interactive instruction is obtained from both the first recognition result and the surgical navigation stage, so that the pattern of the same marker can correspond to different interactive instructions in different navigation stages, reducing the number of markers that must be set up. In other words, if image recognition finds the same marker pattern in the surgical scene image but the surgical navigation system is in a different navigation stage, the corresponding interactive instruction differs as well.

Taking a QR code placed beside the patient as an example: when the surgical navigation system has not yet started navigation, recognizing this QR code in the surgical scene image captured by the camera generates a "trigger entry into the registration stage" interactive instruction; when the system is in the registration stage, recognizing the same QR code generates a "re-register" instruction. In practice, the first time the camera recognizes the QR code beside the patient, the registration scene is started; if something goes wrong during registration and it must be restarted, recognizing the QR code at this position again resets the whole workflow.
As an optional embodiment of the instruction acquisition module, when the module acquires the corresponding interactive instruction according to the first recognition result, it is specifically configured to:

acquire a second recognition result obtained by performing image recognition on the surgical scene image, where the second recognition result represents the relative position of the marker in a preset space, or the relative distance between the marker and a preset target;

acquire the corresponding interactive instruction according to the first recognition result and the second recognition result.

In this embodiment, the preset space can be set according to the specific application; for example, it can be the space corresponding to the surgical scene image. The preset target can likewise be set as needed, for example as a registration point or the patient.

In this embodiment, different interactive instructions can be generated based on the marker's position. For example, both placements reset the registration workflow, but placing the marker beside the patient resets the whole workflow, while placing it near a particular registration point resets only the registration data of that position.

Taking a QR code as the marker, see FIG. 3 and FIG. 4, which differ only in the position of the same marker: in FIG. 3, marker 4 is beside the patient, while in FIG. 4, marker 4 is beside a registration point. For the surgical scene image in the box region of FIG. 3, the first recognition result is that the image contains the marker, and the second recognition result is that the relative distance between the QR code and the patient (specifically, for example, the patient's head) is less than a first preset distance threshold; together, the two results generate a "trigger reset of the whole workflow" interactive instruction. In FIG. 4, the marker has been moved near a registration point; for the surgical scene image in the box region of FIG. 4, the first recognition result is that the image contains the marker, and the second recognition result is that the relative distance between the QR code and the registration point is less than a second preset distance threshold; together, the two results generate a "trigger reset of only the registration data at the current position" instruction. The first and second preset distance thresholds can be set to 90%, for example.

Specifically, in one embodiment, the relative distance between the marker and the preset target is the distance between the marker's extension line and the preset target, for example the distance between the extension line of the puncture needle and a rib. When the distance between the needle's extension line and the rib is less than a set value, the extension line is at risk of touching the rib, and a corresponding "trigger prompt information" interactive instruction is acquired to issue a warning; the set value may be 0.
As an optional embodiment of the instruction acquisition module, when the module acquires the corresponding interactive instruction according to the first recognition result, it is specifically configured to:

acquire a third recognition result obtained by performing image recognition on the surgical scene image, the third recognition result representing the orientation and/or angle of the marker;

acquire the corresponding interactive instruction according to the first recognition result and the third recognition result.

In this embodiment, the orientation and/or angle of the marker can be recognized with existing algorithms; the marker has corresponding features so that its orientation and/or angle can be obtained after image recognition.

In this embodiment, the operator can trigger corresponding interactive instructions by adjusting the marker's orientation and/or angle, improving the convenience of control.

Taking the puncture needle as the marker, see FIG. 5: during registration, puncture needle 6 points correctly toward target site 7; the first recognition result of the surgical navigation system is that the surgical scene image contains the puncture needle, and the third recognition result is that the needle points correctly at the target site, generating a "trigger distance measurement, display the distance between the needle tip and the target site" interactive instruction. See FIG. 6: during registration, puncture needle 6 deviates from target site 7; the first recognition result is that the image contains the puncture needle, and the third recognition result is that the needle's direction deviates from the target site, generating a "trigger angle measurement, display prompt information" instruction.

When the operator finds that the system's registration contains errors and re-registration is needed, the operator can perform a specific action on the marker, such as moving its position or changing its form (orientation and/or angle); based on this change, the system enters the re-registration workflow.
As an optional embodiment of the instruction acquisition module, when the module acquires the corresponding interactive instruction according to the first recognition result, it is specifically configured to:

acquire a fourth recognition result obtained by performing image recognition on the surgical scene image, the fourth recognition result representing the degree to which the marker is occluded;

acquire the corresponding interactive instruction according to the first recognition result and the fourth recognition result.

In this embodiment, the operator can control the surgical navigation system by occluding the marker, improving the convenience of control.

When the operator's hand partially occludes the QR code on a surgical instrument, it indicates that the final purpose of the puncture operation, liquid injection or instrument implantation, is in progress or has been completed, and the end-of-operation workflow of the surgical navigation system should be triggered. See FIG. 7, taking the QR code on puncture needle 6 as the marker: the QR code on puncture needle 6 is partially occluded by the operator's hand. The first recognition result of the surgical navigation system is that the surgical scene image contains the QR code, and the fourth recognition result is that the QR code is partially occluded, generating a "trigger the end-of-operation workflow of the surgical navigation system" interactive instruction, according to which the "end-of-operation workflow" is executed. Specifically, the marker is considered partially occluded when more than a preset proportion of it is occluded; the preset proportion can be set to 10%, for example.
As an optional embodiment of the instruction acquisition module, when the module acquires the corresponding interactive instruction according to the first recognition result, it is specifically configured to:

acquire a fifth recognition result obtained by performing image recognition on the surgical scene image, the fifth recognition result representing the absolute motion trajectory or relative motion trajectory of the marker, where the absolute motion trajectory is the trajectory of the marker relative to a stationary object and the relative motion trajectory is the trajectory of the marker relative to a designated person;

acquire the corresponding interactive instruction according to the first recognition result and the fifth recognition result.

In this technical solution, corresponding interactive instructions are generated automatically from the marker's motion trajectory. Specifically, the trajectory may be absolute or relative: the absolute motion trajectory is measured relative to a stationary object, such as the ground or the operating table, while the relative motion trajectory is measured relative to a designated person, such as the operator.

Taking the QR code on the puncture needle as the marker: when the operator rotates the needle, the QR code moves, and a corresponding interactive instruction is generated from the QR code's absolute motion trajectory; for example, when the QR code is recognized as having rotated one full turn, a "trigger hiding of the rib pattern" instruction is generated.

When the operator finds that the registration of the surgical navigation system contains errors and re-registration is needed, the operator can perform a specific action on the marker, such as moving its position or changing its form; based on this change process, the system enters the re-registration workflow.
As an optional embodiment of the instruction acquisition module, when the module acquires the corresponding interactive instruction according to the first recognition result, it is specifically configured to:

acquire the corresponding interactive instruction according to the first recognition result together with at least two of the surgical navigation stage, the second recognition result, the third recognition result, the fourth recognition result, and the fifth recognition result.

More specifically, the corresponding interactive instruction may be acquired according to the first recognition result together with at least three of the surgical navigation stage and the second through fifth recognition results.

More specifically, the corresponding interactive instruction may be acquired according to the surgical navigation stage and all of the first through fifth recognition results.

Because the surgical navigation system needs different navigation information at different stages, the current workflow is decided according to those needs. Multiple markers can be placed in the surgical scene: when the camera faces the first marker, the operation is in the preparation and registration stages; when the camera faces the second marker, the puncture needle has already entered the insertion stage. While the needle is entering the body, the doctor must concentrate, so the system avoids displaying excessive distracting information and provides only the most critical information.

The surgical navigation system includes a navigation information display module that superimposes, displays, or hides the corresponding surgical navigation information at the corresponding position of the real scene in an augmented reality manner; for example, in response to a "trigger hiding of the rib pattern" interactive instruction, it displays the navigation information with the rib pattern hidden.

In summary, when the surgical navigation system of the present application is in different surgical navigation stages, recognizing the same marker can trigger different surgical navigation steps. For example, during human-body registration, recognizing the plane positioning plate again resets the current registration process; recognizing the puncture needle during registration defines it as an identification needle used to determine the positions of marker points on the body surface, while recognizing the puncture needle during puncture executes the puncture navigation task.

In the same surgical navigation stage, recognizing different angles or different motion trajectories of the same marker can trigger different surgical navigation steps. For example, during puncture navigation, when the operator rotates the puncture needle clockwise through one full turn, the rib pattern is hidden so that the operator can see the surgical region behind the ribs more clearly.

The surgical navigation system of the present application can recognize different degrees of occlusion of the same marker and trigger different surgical navigation steps accordingly. For example, during puncture navigation, when the puncture needle is partially occluded by the thumb for a certain period of time, the system concludes that the instrument inside the needle has been released and records the previous needle-tip position as the instrument release point in the surgical record, for subsequent surgical analysis.

The surgical navigation system of the present application can recognize the relative position of the same marker in the preset space, or its relative distance from the preset target, and trigger different surgical navigation steps accordingly. For example, during registration, placing the identification plate near a registration point whose position has already been recorded resets only the position information of that point, improving registration efficiency.
Referring to FIG. 8, the information interaction method of the surgical navigation system includes:

S801: acquiring a surgical scene image;

S802: performing image recognition on the surgical scene image to obtain a first recognition result, the first recognition result representing the markers contained in the surgical scene image;

S803: acquiring a corresponding interactive instruction according to the first recognition result.

The information interaction method of the surgical navigation system in the embodiments of the present application can automatically recognize the markers contained in a surgical scene image captured by a camera and obtain a corresponding interactive instruction according to those markers. A surgical navigation system executing the information interaction method of this embodiment can thus obtain interactive instructions from surgical scene images of markers photographed by the operator, and those instructions control the system to perform the corresponding surgical navigation steps without voice, gesture, or similar input, so that implementing the technical solution of the present application reduces the probability of misjudgment when the surgical navigation system is controlled.

It can be understood that the operator can photograph the surgical scene with the camera of a worn head-mounted device to acquire the surgical scene image.

As an optional implementation of step S802, performing image recognition on the surgical scene image to obtain the first recognition result includes:

extracting image features of the surgical scene image;

determining the first recognition result according to the similarity between the image features of the surgical scene image and the image features of the markers.

As an optional implementation of step S803, acquiring the corresponding interactive instruction according to the first recognition result includes:

acquiring the surgical navigation stage of the surgical navigation system;

acquiring the corresponding interactive instruction according to the first recognition result and the surgical navigation stage.

As an optional implementation of step S803, acquiring the corresponding interactive instruction according to the first recognition result includes:

acquiring a second recognition result obtained by performing image recognition on the surgical scene image, where the second recognition result represents the relative position of the marker in a preset space, or the relative distance between the marker and a preset target;

acquiring the corresponding interactive instruction according to the first recognition result and the second recognition result.

As an optional implementation of step S803, acquiring the corresponding interactive instruction according to the first recognition result includes:

acquiring a third recognition result obtained by performing image recognition on the surgical scene image, the third recognition result representing the orientation and/or angle of the marker;

acquiring the corresponding interactive instruction according to the first recognition result and the third recognition result.

As an optional implementation of step S803, acquiring the corresponding interactive instruction according to the first recognition result includes:

acquiring a fourth recognition result obtained by performing image recognition on the surgical scene image, the fourth recognition result representing the degree to which the marker is occluded;

acquiring the corresponding interactive instruction according to the first recognition result and the fourth recognition result.

As an optional implementation of step S803, acquiring the corresponding interactive instruction according to the first recognition result includes:

acquiring a fifth recognition result obtained by performing image recognition on the surgical scene image, the fifth recognition result representing the absolute motion trajectory or relative motion trajectory of the marker, where the absolute motion trajectory is the trajectory of the marker relative to a stationary object and the relative motion trajectory is the trajectory of the marker relative to a designated person;

acquiring the corresponding interactive instruction according to the first recognition result and the fifth recognition result.

For the specific technical solutions, principles, and effects of the information interaction method in the above embodiments, reference may be made to the corresponding technical solutions, principles, and effects of the surgical navigation system described above.
Referring to FIG. 9, an electronic device 900 includes a memory 901 and a processor 902; the memory 901 stores computer instructions, which are executed by the processor 902 to implement any of the information interaction methods in the embodiments of the present application.

The present application further provides a readable storage medium storing computer instructions that, when executed by a processor, implement any of the information interaction methods in the embodiments of the present application.

FIG. 10 is a schematic structural diagram of a computer system suitable for implementing the method of an embodiment of the present application.

Referring to FIG. 10, the computer system includes a processing unit 1001, which can execute the various processes of the embodiments shown in the figures above according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage section 1008 into a random access memory (RAM) 1003. The RAM 1003 also stores the various programs and data necessary for system operation. The processing unit 1001, the ROM 1002, and the RAM 1003 are connected to one another through a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.

The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a cathode ray tube (CRT) or liquid crystal display (LCD) and a speaker; a storage section 1008 including a hard disk; and a communication section 1009 including a network interface card such as a LAN card or a modem. The communication section 1009 performs communication processing via a network such as the Internet. A drive 1010 is also connected to the I/O interface 1005 as needed. A removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1010 as needed, so that a computer program read from it can be installed into the storage section 1008. The processing unit 1001 may be implemented as a CPU, GPU, TPU, FPGA, NPU, or other processing unit.

In particular, according to embodiments of the present application, the method described above may be implemented as a computer software program. For example, embodiments of the present application include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program containing program code for performing the methods shown in the figures. In such embodiments, the computer program may be downloaded and installed from a network through the communication section 1009, and/or installed from the removable medium 1011.

In the description of this specification, references to the terms "one embodiment/mode", "some embodiments/modes", "example", "specific example", or "some examples" mean that a particular feature, structure, material, or characteristic described in connection with that embodiment/mode or example is included in at least one embodiment/mode or example of the present application. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment/mode or example. Moreover, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments/modes or examples. In addition, provided they do not conflict, those skilled in the art may combine the different embodiments/modes or examples described in this specification and the features thereof.

Furthermore, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, such as two or three, unless expressly and specifically defined otherwise.

Those skilled in the art should understand that the above embodiments are intended only to illustrate the present disclosure clearly and do not limit its scope. Other changes or modifications may be made on the basis of the above disclosure, and such changes or modifications remain within the scope of the present disclosure.
Claims (10)
- A surgical navigation system, characterized by comprising: an image acquisition module for acquiring a surgical scene image; an image recognition module for performing image recognition on the surgical scene image to obtain a first recognition result, the first recognition result representing the markers contained in the surgical scene image; an instruction acquisition module for acquiring a corresponding interactive instruction according to the first recognition result; and an instruction execution module for controlling the surgical navigation system to perform the corresponding surgical navigation step according to the interactive instruction.
- The surgical navigation system according to claim 1, characterized in that the image recognition module, when performing image recognition on the surgical scene image to obtain the first recognition result, is specifically configured to: extract image features of the surgical scene image; and determine the first recognition result according to the similarity between the image features of the surgical scene image and the image features of the markers.
- The surgical navigation system according to claim 1, characterized in that the instruction acquisition module, when acquiring the corresponding interactive instruction according to the first recognition result, is specifically configured to: acquire the surgical navigation stage of the surgical navigation system; and acquire the corresponding interactive instruction according to the first recognition result and the surgical navigation stage.
- The surgical navigation system according to claim 1, characterized in that the instruction acquisition module, when acquiring the corresponding interactive instruction according to the first recognition result, is specifically configured to: acquire a second recognition result obtained by performing image recognition on the surgical scene image, where the second recognition result represents the relative position of the marker in a preset space, or the relative distance between the marker and a preset target; and acquire the corresponding interactive instruction according to the first recognition result and the second recognition result.
- The surgical navigation system according to claim 1, characterized in that the instruction acquisition module, when acquiring the corresponding interactive instruction according to the first recognition result, is specifically configured to: acquire a third recognition result obtained by performing image recognition on the surgical scene image, the third recognition result representing the orientation and/or angle of the marker; and acquire the corresponding interactive instruction according to the first recognition result and the third recognition result.
- The surgical navigation system according to claim 1, characterized in that the instruction acquisition module, when acquiring the corresponding interactive instruction according to the first recognition result, is specifically configured to: acquire a fourth recognition result obtained by performing image recognition on the surgical scene image, the fourth recognition result representing the degree to which the marker is occluded; and acquire the corresponding interactive instruction according to the first recognition result and the fourth recognition result.
- The surgical navigation system according to any one of claims 1 to 6, wherein the instruction acquisition module, when acquiring the corresponding interactive instruction according to the first recognition result, is specifically configured to: acquire a fifth recognition result obtained by performing image recognition on the surgical scene image, the fifth recognition result representing the absolute motion trajectory or relative motion trajectory of the marker, where the absolute motion trajectory is the trajectory of the marker relative to a stationary object and the relative motion trajectory is the trajectory of the marker relative to a designated person; and acquire the corresponding interactive instruction according to the first recognition result and the fifth recognition result.
- An information interaction method for a surgical navigation system, characterized by comprising: acquiring a surgical scene image; performing image recognition on the surgical scene image to obtain a first recognition result, the first recognition result representing the markers recognized in the surgical scene image; and acquiring a corresponding interactive instruction according to the first recognition result.
- An electronic device comprising a memory and a processor, the memory storing computer instructions, characterized in that the computer instructions are executed by the processor to implement the method according to claim 8.
- A readable storage medium having computer instructions stored on it, characterized in that, when executed by a processor, the computer instructions implement the method according to claim 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/552,077 US20240189043A1 (en) | 2021-04-01 | 2022-03-18 | Surgical navigation system and method, and electronic device and readable storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110358153.3 | 2021-04-01 | ||
CN202110358153.3A CN113133829B (zh) | 2021-04-01 | 2021-04-01 | 一种手术导航系统、方法、电子设备和可读存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022206435A1 true WO2022206435A1 (zh) | 2022-10-06 |
Family
ID=76810332
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/081728 WO2022206435A1 (zh) | 2021-04-01 | 2022-03-18 | 一种手术导航系统、方法、电子设备和可读存储介质 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20240189043A1 (zh) |
CN (1) | CN113133829B (zh) |
WO (1) | WO2022206435A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113133829B (zh) * | 2021-04-01 | 2022-11-01 | 上海复拓知达医疗科技有限公司 | 一种手术导航系统、方法、电子设备和可读存储介质 |
CN114840110B (zh) * | 2022-03-17 | 2023-06-20 | 杭州未名信科科技有限公司 | 一种基于混合现实的穿刺导航交互辅助方法及装置 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017098505A1 (en) * | 2015-12-07 | 2017-06-15 | M.S.T. Medical Surgery Technologies Ltd. | Autonomic system for determining critical points during laparoscopic surgery |
US20180110571A1 (en) * | 2016-10-25 | 2018-04-26 | Novartis Ag | Medical spatial orientation system |
CN110169822A (zh) * | 2018-02-19 | 2019-08-27 | 格罗伯斯医疗有限公司 | 用于与机器人外科手术系统一起使用的增强现实导航系统及其使用方法 |
CN111821025A (zh) * | 2020-07-21 | 2020-10-27 | 腾讯科技(深圳)有限公司 | 空间定位方法、装置、设备、存储介质以及导航棒 |
CN113133829A (zh) * | 2021-04-01 | 2021-07-20 | 上海复拓知达医疗科技有限公司 | 一种手术导航系统、方法、电子设备和可读存储介质 |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7295220B2 (en) * | 2004-05-28 | 2007-11-13 | National University Of Singapore | Interactive system and method |
EP2622518A1 (en) * | 2010-09-29 | 2013-08-07 | BrainLAB AG | Method and device for controlling appartus |
CN105208958B (zh) * | 2013-03-15 | 2018-02-02 | 圣纳普医疗(巴巴多斯)公司 | 用于微创治疗的导航和模拟的系统和方法 |
US11412951B2 (en) * | 2013-03-15 | 2022-08-16 | Syanptive Medical Inc. | Systems and methods for navigation and simulation of minimally invasive therapy |
US10413366B2 (en) * | 2016-03-16 | 2019-09-17 | Synaptive Medical (Bardbados) Inc. | Trajectory guidance alignment system and methods |
CN106096857A (zh) * | 2016-06-23 | 2016-11-09 | 中国人民解放军63908部队 | 增强现实版交互式电子技术手册、内容构建及辅助维修/辅助操作流程的构建 |
US11497417B2 (en) * | 2016-10-04 | 2022-11-15 | The Johns Hopkins University | Measuring patient mobility in the ICU using a novel non-invasive sensor |
CN109674534A (zh) * | 2017-10-18 | 2019-04-26 | 深圳市掌网科技股份有限公司 | 一种基于增强现实的手术导航图像显示方法和系统 |
DE102018201612A1 (de) * | 2018-02-02 | 2019-08-08 | Carl Zeiss Industrielle Messtechnik Gmbh | Verfahren und Vorrichtung zur Erzeugung eines Steuersignals, Markeranordnung und steuerbares System |
US10869727B2 (en) * | 2018-05-07 | 2020-12-22 | The Cleveland Clinic Foundation | Live 3D holographic guidance and navigation for performing interventional procedures |
CN110478039A (zh) * | 2019-07-24 | 2019-11-22 | 常州锦瑟医疗信息科技有限公司 | 一种基于混合现实技术的医用器械跟踪系统 |
CN111966212A (zh) * | 2020-06-29 | 2020-11-20 | 百度在线网络技术(北京)有限公司 | 基于多模态的交互方法、装置、存储介质及智能屏设备 |
-
2021
- 2021-04-01 CN CN202110358153.3A patent/CN113133829B/zh active Active
-
2022
- 2022-03-18 US US18/552,077 patent/US20240189043A1/en active Pending
- 2022-03-18 WO PCT/CN2022/081728 patent/WO2022206435A1/zh active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017098505A1 (en) * | 2015-12-07 | 2017-06-15 | M.S.T. Medical Surgery Technologies Ltd. | Autonomic system for determining critical points during laparoscopic surgery |
US20180110571A1 (en) * | 2016-10-25 | 2018-04-26 | Novartis Ag | Medical spatial orientation system |
CN110169822A (zh) * | 2018-02-19 | 2019-08-27 | 格罗伯斯医疗有限公司 | 用于与机器人外科手术系统一起使用的增强现实导航系统及其使用方法 |
CN111821025A (zh) * | 2020-07-21 | 2020-10-27 | 腾讯科技(深圳)有限公司 | 空间定位方法、装置、设备、存储介质以及导航棒 |
CN113133829A (zh) * | 2021-04-01 | 2021-07-20 | 上海复拓知达医疗科技有限公司 | 一种手术导航系统、方法、电子设备和可读存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN113133829A (zh) | 2021-07-20 |
US20240189043A1 (en) | 2024-06-13 |
CN113133829B (zh) | 2022-11-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102013866B1 (ko) | 실제수술영상을 이용한 카메라 위치 산출 방법 및 장치 | |
EP3138526B1 (en) | Augmented surgical reality environment system | |
WO2022206435A1 (zh) | 一种手术导航系统、方法、电子设备和可读存储介质 | |
JP6730938B2 (ja) | 医学画像表示装置、医学画像表示システム、医学画像表示装置を動作させる方法 | |
US10674891B2 (en) | Method for assisting navigation of an endoscopic device | |
JP6404713B2 (ja) | 内視鏡手術におけるガイド下注入のためのシステム及び方法 | |
WO2017211225A1 (zh) | 一种基于实时反馈的增强现实人体定位导航方法及装置 | |
JP2022507622A (ja) | 拡張現実ディスプレイでの光学コードの使用 | |
CN110494921A (zh) | 利用三维数据增强患者的实时视图 | |
CN110537980A (zh) | 一种基于运动捕捉和混合现实技术的穿刺手术导航方法 | |
US12064280B2 (en) | System and method for identifying and marking a target in a fluoroscopic three-dimensional reconstruction | |
CN112294436A (zh) | 锥形束和3d荧光镜肺导航 | |
CN112386336A (zh) | 用于进行初始配准的荧光-ct成像的系统和方法 | |
CN111839727A (zh) | 基于增强现实的前列腺粒子植入路径可视化方法及系统 | |
JP2002534204A (ja) | 座標化蛍光透視法を用いた解剖学的対象の測定装置および方法 | |
JP2023149127A (ja) | 画像処理装置、方法およびプログラム | |
CN115998429A (zh) | 用于规划和导航管腔网络的系统和方法 | |
CN115843232A (zh) | 用于目标覆盖的缩放检测和荧光镜移动检测 | |
CN113940756B (zh) | 一种基于移动dr影像的手术导航系统 | |
US20220370147A1 (en) | Technique of Providing User Guidance For Obtaining A Registration Between Patient Image Data And A Surgical Tracking System | |
CN113317874B (zh) | 一种医学图像处理装置及介质 | |
US20220409300A1 (en) | Systems and methods for providing surgical assistance based on operational context | |
US10049480B2 (en) | Image alignment device, method, and program | |
US20230215059A1 (en) | Three-dimensional model reconstruction | |
JP7495216B2 (ja) | 鏡視下手術支援装置、鏡視下手術支援方法、及びプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22778618 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18552077 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 22778618 Country of ref document: EP Kind code of ref document: A1 |