WO2017211225A1 - Augmented reality human body positioning and navigation method and device based on real-time feedback - Google Patents


Info

Publication number
WO2017211225A1
WO2017211225A1 (PCT/CN2017/086892)
Authority
WO
WIPO (PCT)
Prior art keywords
target object
reconstructed image
image
dimensional reconstructed
component
Prior art date
Application number
PCT/CN2017/086892
Other languages
English (en)
French (fr)
Inventor
叶健
高寒
邱凌凌
万里
Original Assignee
叶健
高寒
邱凌凌
Priority date
Filing date
Publication date
Application filed by 叶健, 高寒, 邱凌凌
Publication of WO2017211225A1 publication Critical patent/WO2017211225A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06F19/321
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10104Positron emission tomography [PET]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/41Medical

Definitions

  • the present application relates to the field of computer technology, and in particular, to an augmented reality human body positioning navigation method and apparatus based on real-time feedback.
  • Three-dimensional visualization software such as 3DSlicer and ImageJ, as well as three-dimensional reconstruction software specially configured for navigation systems, has gradually been applied in the clinic. Such software performs three-dimensional reconstruction of the patient's body part for preoperative observation, so that the doctor can judge the condition of the patient's body part.
  • However, the medical three-dimensional reconstructed image in the prior art does not establish a direct connection with the real human body, and the doctor's operation plan cannot correspond to the patient's real tissue, so the plan cannot be adjusted in real time according to the real human body at the surgical site.
  • In view of this, the present invention provides an augmented reality human body positioning navigation method and device based on real-time feedback, which solves the technical problem in the prior art that the medical three-dimensional reconstructed image is not directly associated with the real human body, so that the doctor's operation plan cannot correspond to the patient's real tissue and cannot be adjusted in real time according to the real human body at the surgical site.
  • the present application provides an augmented reality human body positioning navigation method based on real-time feedback, including:
  • the AR device is transparent and allows a user to view the target object through the AR device;
  • A first three-dimensional reconstructed image of the target object is obtained; then image transformation parameters are generated according to the feature point information of the target object collected by the AR device and the first three-dimensional reconstructed image of the target object; the first three-dimensional reconstructed image is adjusted according to the image transformation parameters to obtain a second three-dimensional reconstructed image, so that the feature points in the second three-dimensional reconstructed image coincide with the feature points of the target object viewed by the user through the AR device; the second three-dimensional reconstructed image is used for display on the AR device.
  • The application can thus display a three-dimensional reconstructed image on the AR device, and the displayed image is adjusted in real time according to the collected feature point information of the target object, so that the three-dimensional reconstructed image the doctor sees on the AR device is consistent with the target object viewed through the AR device. Even if the AR device is moved, the three-dimensional reconstructed image on it can be adjusted in real time, which greatly improves the accuracy and efficiency of the operation.
  • the generating of the image transformation parameters includes:
  • the rotation angle, the rotation direction, the translation distance, and the scaling ratio are used as the image transformation parameters.
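As a concrete illustration of how such parameters act on a reconstructed image, the sketch below applies a rotation (expressed as an angle about a rotation axis), a translation, and a uniform scale to a set of 3-D model points. The function name and the NumPy-based point representation are illustrative assumptions, not part of the patent.

```python
import numpy as np

def apply_transform(points, angle_deg, axis, translation, scale):
    """Apply rotation (angle about an axis), uniform scaling, and
    translation to an (N, 3) array of 3-D model points."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    theta = np.radians(angle_deg)
    # Rodrigues' formula builds the rotation matrix from angle + axis
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    return scale * (points @ R.T) + np.asarray(translation, dtype=float)

pts = np.array([[1.0, 0.0, 0.0]])
# rotate 90 degrees about z, scale by 2: (1, 0, 0) -> (0, 2, 0)
out = apply_transform(pts, 90, (0, 0, 1), (0, 0, 0), 2.0)
```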
  • before the generating of the image transformation parameters according to the feature point information of the target object and the first three-dimensional reconstructed image of the target object, the method further includes:
  • the adjusting the first three-dimensional reconstructed image according to the image transformation parameter to obtain the second three-dimensional reconstructed image further includes:
  • the AR device collects feature point information of the target object by scanning the target object with a sensor on the AR device, where the feature point information is information corresponding to a feature tag; or
  • the AR device acquires feature point information of the target object by capturing the target object by using a camera on the AR device, where the feature point information is information corresponding to a preset position on the target object.
  • the method further includes:
  • the third three-dimensional reconstructed image is displayed on the AR device.
  • the initial three-dimensional reconstruction image is adjusted according to an instruction of the medical database system and the user input, to obtain a first three-dimensional reconstructed image of the target object, including:
  • generating the initial three-dimensional reconstructed image of the target object according to the medical image data of the target object including:
  • adjusting the initial three-dimensional reconstructed image according to the medical database system and an instruction input by the user to obtain a first three-dimensional reconstructed image of the target object includes:
  • the present application provides a human body positioning navigation device, including:
  • an image generating unit configured to generate an initial three-dimensional reconstructed image of the target object according to the medical image data of the target object, and to adjust the initial three-dimensional reconstructed image according to the medical database system and an instruction input by the user, to obtain a first three-dimensional reconstructed image of the target object;
  • an image transformation parameter generating unit configured to generate image transformation parameters according to the feature point information of the target object collected by the augmented reality AR device and the first three-dimensional reconstructed image of the target object, where the first three-dimensional reconstructed image is generated according to the medical image data of the target object, and the AR device is transparent and allows a user to view the target object through the AR device;
  • an adjusting unit configured to adjust the first three-dimensional reconstructed image according to the image transformation parameters to obtain a second three-dimensional reconstructed image, where the feature points in the second three-dimensional reconstructed image displayed on the AR device match the feature points of the target object viewed by the user through the AR device.
  • the image transformation parameter generating unit is specifically configured to:
  • the rotation angle, the rotation direction, the translation distance, and the scaling ratio are used as the image transformation parameters.
  • the human body positioning navigation device further includes a receiving unit, configured to:
  • the human body positioning navigation device further includes a sending unit, configured to:
  • the AR device collects feature point information of the target object by scanning the target object with a sensor on the AR device, where the feature point information is information corresponding to a feature tag; or
  • the AR device acquires feature point information of the target object by capturing the target object by using a camera on the AR device, where the feature point information is information corresponding to a preset position on the target object.
  • the image transformation parameter generating unit is further configured to generate an image adjustment parameter according to the received user instruction information
  • the adjusting unit is further configured to adjust the second three-dimensional reconstructed image according to the image adjustment parameter to obtain a third three-dimensional reconstructed image;
  • the human body positioning navigation device further includes a display unit for displaying the third three-dimensional reconstructed image on the AR device.
  • the image generating unit is specifically configured to:
  • the image generating unit is specifically configured to:
  • the image generating unit is specifically configured to:
  • the present application provides a human body positioning navigation device, including:
  • at least one processor, and a memory connected to the at least one processor;
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to:
  • the AR device is transparent and allows a user to view the target object through the AR device;
  • the at least one processor is specifically configured to:
  • the rotation angle, the rotation direction, the translation distance, and the scaling ratio are used as the image transformation parameters.
  • the at least one processor is configured to:
  • the at least one processor scans the target object by using a sensor on the AR device to collect feature point information of the target object, where the feature point information is information corresponding to the feature tag, or
  • the at least one processor acquires feature point information of the target object by capturing the target object by using a camera on the AR device, where the feature point information is information corresponding to a preset position on the target object.
  • the at least one processor is further configured to generate an image adjustment parameter according to the received user instruction information, and adjust the second three-dimensional reconstructed image according to the image adjustment parameter to obtain a third three-dimensional reconstructed image;
  • the third three-dimensional reconstructed image is displayed on the AR device.
  • the at least one processor is specifically configured to:
  • the at least one processor is specifically configured to:
  • determining a second parameter adjustment instruction according to the function modules selected by the user from a pre-established function module library and the manner in which the user connects the selected function modules, where each function module in the pre-established function module library represents one image processing mode or a combination of multiple image processing modes, and all function modules in the pre-established function module library can be connected under certain rules; and adjusting the initial three-dimensional reconstructed image according to the second parameter adjustment instruction and the medical database system to obtain a first three-dimensional reconstructed image of the target object.
  • the at least one processor is specifically configured to:
  • each function module in the pre-established function module library represents one image processing mode or a combination of multiple image processing modes, and all function modules in the pre-established function module library can be connected under certain rules;
  • the present application provides a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the real-time feedback-based augmented reality human body positioning navigation method of any one of the above first aspects.
  • the present application provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the real-time feedback-based augmented reality human body positioning navigation method of any one of the above aspects.
  • the present application provides a puncture trajectory guiding device, which includes a first component, a second component, and a third component;
  • the third component includes a base and a groove on the base, the axial direction of the groove is perpendicular to the base, and the inner side of the groove includes a first fixing structure;
  • the first component is a catheter structure for carrying a puncture tube passing through the first component, the puncture tube entering a puncture object through the first component; a first end of the first component is inserted into the groove of the third component, and the outer diameter of the first end of the first component is smaller than the inner diameter of the groove;
  • the second component includes a second fixing structure for engaging with the first fixing structure to fix the first component inserted into the third component to a first angle.
  • the second end of the first component includes a tracker for detecting the first angle.
  • the apparatus further includes a fourth component, the fourth component being a catheter structure; the first component carries the fourth component passing through the first component, and the fourth component carries the puncture tube passing through the fourth component; the difference between the inner diameter of the first component and the outer diameter of the fourth component is less than a first threshold, and the difference between the inner diameter of the fourth component and the outer diameter of the puncture tube is less than a second threshold.
  • the fourth component has an inner diameter of 2.5 mm, or 4 mm, or 4.67 mm.
  • part or all of the first component, the second component, and the third component are a half-fitting structure.
  • the first fixing structure is a thread structure
  • the second fixing structure is a thread structure
  • the present application provides a puncture trajectory guiding device applied to a puncture scene, comprising a catheter and two marking members fixed at different positions on the catheter; the catheter is used to carry a puncture tube passing through the catheter, through which the puncture tube enters the puncture object; during puncture, the two marking members are adapted to coincide respectively with the marks on the puncture positioning line displayed on the augmented reality AR device.
  • a first one of the two marking members is 5 cm from a first end of the catheter, a second one of the two marking members is 10 cm from the first end of the catheter, and the first end of the catheter is the end near the puncture object.
  • the inner diameter of the catheter may be 2.5 mm, or 4 mm, or 4.67 mm.
  • the inner side of the catheter comprises at least one elastic piece for fixing the puncture tube.
  • FIG. 1 is a flowchart of a real-time feedback-based augmented reality human body positioning navigation method provided by the present application.
  • FIG. 2 is a schematic diagram of a parameter adjustment manner based on a parameter adjustment system provided by the present application.
  • FIG. 3 is a schematic diagram of a parameter adjustment manner based on a function module library provided by the present application.
  • FIG. 4 is a schematic diagram of collecting information using a sensor provided by the present application.
  • FIG. 5 is a schematic diagram of collecting information by using a camera provided by the present application.
  • FIG. 6 is a detailed flowchart of a real-time feedback-based augmented reality human body positioning navigation method provided by the present application.
  • FIG. 7 is a schematic diagram of a human body positioning navigation device provided by the present application.
  • FIG. 8 is a schematic diagram of a human body positioning navigation device provided by the present application.
  • FIG. 9 shows a puncture trajectory guiding device provided by the present application.
  • FIG. 10 is a cross-sectional view of the third component provided by the present application.
  • FIG. 11 shows another puncture trajectory guiding device provided by the present application.
  • the real-time feedback-based augmented reality human body positioning navigation method includes:
  • Step 101: Generate an initial three-dimensional reconstructed image of the target object according to the medical image data of the target object;
  • Step 102: Adjust the initial three-dimensional reconstructed image according to the medical database system and an instruction input by the user, to obtain a first three-dimensional reconstructed image of the target object;
  • Step 103: Generate image transformation parameters according to the feature point information of the target object collected by the augmented reality AR device and the first three-dimensional reconstructed image of the target object, where the first three-dimensional reconstructed image is generated according to the medical image data of the target object, and the AR device is transparent and allows a user to view the target object through the AR device;
  • Step 104: Adjust the first three-dimensional reconstructed image according to the image transformation parameters to obtain a second three-dimensional reconstructed image, so that the feature points in the second three-dimensional reconstructed image displayed on the AR device coincide with the feature points of the target object viewed by the user through the AR device.
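Steps 101 to 104 above can be sketched as a short sequence in code. Everything below — the function names, the dictionary-based image representation, and the scalar "offset" standing in for a full 3-D transform — is a hypothetical illustration, not the patent's actual implementation.

```python
# Hypothetical stand-ins for steps 101-104; names and data structures
# are illustrative only.

def generate_initial_image(medical_image_data):            # step 101
    return {"source": medical_image_data, "offset": 0.0}

def adjust_with_database(image, medical_db):                # step 102
    # optimize the reconstruction using statistics from the database
    return {**image, "offset": medical_db.get("optimal_offset", 0.0)}

def generate_transform_params(feature_points, image):       # step 103
    # translation needed so the image tracks the collected feature points
    return {"translation": feature_points["x"] - image["offset"]}

def apply_transform_params(image, params):                  # step 104
    return {**image, "offset": image["offset"] + params["translation"]}

first = adjust_with_database(generate_initial_image("CT"),
                             {"optimal_offset": 1.0})
params = generate_transform_params({"x": 4.0}, first)
second = apply_transform_params(first, params)  # aligned with the feature point
```

In the real method the "translation" would be the full set of transformation parameters (rotation angle, rotation direction, translation distance, scaling ratio), recomputed each time the AR device collects new feature point information.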
  • the information may be collected based on an AR (Augmented Reality) device.
  • Information may also be collected based on other general-purpose display devices, for example any display device that displays images in 2D (two-dimensional) or 3D (three-dimensional) mode, such as VR (Virtual Reality)/3D/2D glasses, VR/3D/2D display devices, or VR/2D/3D wearable devices.
  • The target object refers to the patient or a certain body part of the patient (such as the skull, an arm, or the upper body). The patient can lie on the operating table, and the doctor can then view a three-dimensional reconstructed image of the target object displayed on the AR device. For example, when the patient's head needs to be observed, a three-dimensional reconstructed image of the patient's skull can be displayed on the AR device; the patient can be viewed through a camera mounted on the AR device, and, of course, if the AR device is transparent and wearable (for example, AR glasses), the patient can also be observed directly through the AR device while wearing it.
  • In this way, the doctor can see the patient through the AR device and can also see the three-dimensional reconstructed image on the AR device.
  • The position of the AR device can be adjusted to find a suitable position, so that the three-dimensional reconstructed image displayed on the AR device coincides with the target object viewed by the user through the AR device. Taking the skull as an example, the doctor can see the three-dimensional reconstructed image on the AR device and then move the AR device to find a suitable position, so that the patient's skull viewed through the AR device and the three-dimensional reconstructed image of the skull on the AR device coincide with each other; this allows the doctor to observe the internal structure of the patient's skull by viewing the three-dimensional reconstructed image on the AR device.
  • Steps 101 to 104 above in the present application can automatically align the three-dimensional reconstructed image on the AR device with the target object. That is, when the position of the AR device changes, the three-dimensional reconstructed image displayed on the AR device can automatically change the angle and size, so that the target object viewed by the user through the AR device matches the transformed three-dimensional reconstructed image.
  • Specifically, an initial three-dimensional reconstructed image of the target object is first generated according to the medical image data of the target object. The medical image data of the target object may be CT (computed tomography), MRI (magnetic resonance imaging), or PET (positron emission tomography) image data, or image data obtained by fusing one or more of the above; an initial three-dimensional reconstructed image can then be obtained by means of three-dimensional modeling.
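A minimal sketch of the modeling step, under the assumption that bone can be segmented from a CT volume by a Hounsfield-unit threshold. This is a crude stand-in for a real surface-extraction method such as marching cubes; the function name and threshold are illustrative, not from the patent.

```python
import numpy as np

def reconstruct_bone_voxels(ct_volume, hu_threshold=300):
    """Return the (N, 3) voxel coordinates whose Hounsfield value meets
    the threshold -- a crude stand-in for the surface extraction used
    to build an initial three-dimensional reconstructed image."""
    return np.argwhere(ct_volume >= hu_threshold)

# synthetic 8x8x8 "CT" volume containing a 2x2x2 block of bone-like HU
vol = np.zeros((8, 8, 8))
vol[3:5, 3:5, 3:5] = 1000.0
voxels = reconstruct_bone_voxels(vol)  # 8 voxel coordinates
```

A production pipeline would instead run a proper isosurface algorithm over the (possibly fused) CT/MRI/PET volume to produce a mesh for display.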
  • the initial three-dimensional reconstructed image is adjusted according to the medical database system and an instruction input by the user, to obtain a first three-dimensional reconstructed image of the target object.
  • the medical database system refers to a medical information database based on statistical significance, which can be updated according to medical data of the current patient, and can optimize the three-dimensional reconstruction data according to historical optimal results, historical average values, variance and other statistical information. And obtaining an optimized initial reconstructed image as the first three-dimensional reconstructed image.
  • Method 1: Adjusting the initial three-dimensional reconstructed image based on a preset parameter adjustment system.
  • Step 1: Receive a first parameter adjustment instruction input by the user through a preset parameter adjustment system, where the parameter adjustment system is configured to display the three-dimensional reconstructed image information in a visual manner;
  • Step 2: Adjust the initial three-dimensional reconstructed image according to the first parameter adjustment instruction and the medical database system to obtain a first three-dimensional reconstructed image of the target object.
  • FIG. 2 is a schematic diagram of the parameter adjustment manner based on a parameter adjustment system provided by the present application.
  • the first three-dimensional image on the upper left is the initial three-dimensional reconstructed image.
  • The three-dimensional reconstructed image can be modified directly through the parameter adjustment system shown in FIG. 2, for example by using the parameter adjustment box on the right border. For instance, if a head puncture path is required, the user (such as a doctor) can calibrate some reference points by experience on the upper right image and the two lower images, and then click the "generate" button in the lower right corner of FIG. 2 to obtain an adjusted first three-dimensional reconstructed image that includes information such as the puncture path.
  • A user (such as a doctor) can finely adjust parameters in the parameter adjustment system shown in FIG. 2 according to ordinary experience to obtain the desired result. This realizes human-computer interaction and greatly facilitates the doctor's preoperative observation and intraoperative operation.
  • However, the parameter adjustment system shown in FIG. 2 is pre-designed; once the system is established, adjustments can only be made through the functions it provides, which cannot satisfy the different requirements of different users. For example, if the doctor wants a blood vessel image to appear on the finally displayed three-dimensional reconstructed image, but the parameter adjustment system does not provide that function, the doctor cannot adjust the three-dimensional reconstructed image through the system in FIG. 2 to obtain the desired display effect.
  • the present application also provides another method for adjusting an initial three-dimensional reconstructed image, as follows.
  • Method 2: Adjusting the initial three-dimensional reconstructed image based on function modules selected by the user from a library of pre-established function modules.
  • Step 1: Determine a second parameter adjustment instruction according to the function modules selected by the user from a pre-established function module library and the manner in which the user connects the selected function modules, where each function module in the pre-established function module library represents one image processing mode or a combination of multiple image processing modes, and all function modules in the pre-established function module library can be connected under certain rules;
  • Step 2: Adjust the initial three-dimensional reconstructed image according to the second parameter adjustment instruction and the medical database system to obtain a first three-dimensional reconstructed image of the target object.
  • FIG. 3 is a schematic diagram of the parameter adjustment manner based on a function module library provided by the present application.
  • the second method is to combine the function module library to adjust the initial three-dimensional reconstructed image to obtain the desired first three-dimensional reconstructed image.
  • The function module library includes a plurality of function modules; each function module represents one image processing mode or a combination of multiple image processing modes, and all function modules in the function module library can be connected under certain rules.
  • the specific connection manner may include, but is not limited to, series connection, parallel connection, feedback, and the like.
  • For example, a "display ventricle function module" may be added; if it is desired to further display the skull, a "display skull function module" may be added; if it is desired to further display the skin, a "display skin function module" may be added; and if it is desired to further specify the puncture trajectory, a "specify puncture trajectory function module" may be added.
  • Multiple function modules can be connected and arranged in combination (the arrangement determines the order of execution), and after the corresponding function modules are added they can also be deleted, which is very flexible and convenient. Each function module is editable: for example, right-clicking a function module pops up a property editing dialog box, through which the parameters of the function module can be modified and adjusted, and the display effect can be previewed immediately after adjustment. The second method is therefore more convenient than the first, especially for a user such as a doctor, making the method easier to understand and use.
  • the combination can also be stored in a medical database and saved as a processing template for reference in future use.
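The module-library idea — independent image-processing steps that the user selects and connects, with the connection order fixing the execution order — can be sketched as simple function composition. The module names and the dictionary-based "scene" below are illustrative assumptions, not the patent's actual modules.

```python
from functools import reduce

# Each "function module" is a callable that takes and returns a scene;
# connecting modules in series is function composition.
def show_skull(scene):
    return {**scene, "layers": scene["layers"] + ["skull"]}

def show_ventricle(scene):
    return {**scene, "layers": scene["layers"] + ["ventricle"]}

def specify_puncture_trajectory(scene):
    # hypothetical entry point / target point of a puncture path
    return {**scene, "trajectory": ((0, 0, 0), (10, 20, 30))}

def connect_in_series(*modules):
    """Chain the selected modules; the connection order is the execution order."""
    return lambda scene: reduce(lambda s, m: m(s), modules, scene)

pipeline = connect_in_series(show_skull, show_ventricle,
                             specify_puncture_trajectory)
result = pipeline({"layers": []})
```

A saved `pipeline` of this kind plays the role of the processing template stored in the medical database for reuse; parallel connection and feedback would need a richer graph structure than plain composition.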
  • the present application further provides another method for generating an initial three-dimensional reconstructed image, which specifically includes:
  • Step 1: Generate an initial three-dimensional reconstructed image of the target object according to the medical image data of the target object, the function modules selected by the user from the pre-established function module library, and the manner in which the user connects the selected function modules, where each function module in the pre-established function module library represents one image processing mode or a combination of multiple image processing modes, and all function modules in the pre-established function module library can be connected under certain rules;
  • Step 2: Obtain a third parameter adjustment instruction according to the instructions input by the user to the selected function modules;
  • Step 3: Adjust the initial three-dimensional reconstructed image according to the medical database system and the third parameter adjustment instruction to obtain a first three-dimensional reconstructed image of the target object.
  • That is, the initial three-dimensional reconstructed image may be obtained based on the function modules selected by the user from the pre-established function module library and the manner in which the user connects them, combined with the input medical image data of the target object.
• the adjustment of the three-dimensional reconstructed image may be completed according to steps 2 and 3: according to the instruction input by the user to the selected function modules (for example, a parameter adjustment dialog box may be popped up by right-clicking each selected function module, and the adjusted parameters are then entered), the third parameter adjustment instruction is obtained, and the initial three-dimensional reconstructed image is then adjusted according to the medical database system and the third parameter adjustment instruction to obtain the first three-dimensional reconstructed image of the target object.
• that is, in combination with the function module library, the medical image data can be input into the connected function modules, and the final three-dimensional reconstructed image is then output.
• After the first three-dimensional reconstructed image is obtained, in order to display it on the AR device in an appropriate manner, the angle and direction of the first three-dimensional reconstructed image need to be adjusted, so that when the adjusted first three-dimensional reconstructed image is displayed on the AR device, it is aligned with the target object of the patient observed by the doctor through the AR device.
• the image transformation parameter is generated according to the feature point information of the target object collected by the augmented reality AR device and the first three-dimensional reconstructed image of the target object, where the first three-dimensional reconstructed image is generated according to the medical image data of the target object, and the AR device is transparent and allows a user to view the target object through the AR device.
• the collected information may include feature point information, brightness, contrast, depth of field, distance, hue, chroma, edges, and the like.
• in the present application, how to adjust the three-dimensional reconstructed image displayed on the AR device is described by taking the collected feature point information of the target object as an example.
  • the image transformation parameter is generated according to the feature point information of the collected target object and the first three-dimensional reconstruction image of the target object.
• the AR device may collect the feature point information of the target object by using at least the following two methods:
  • the AR device scans the target object by using a sensor on the AR device to collect feature point information of the target object, where the feature point information is information corresponding to the feature tag.
• FIG. 4 is a schematic diagram, provided in the present application, of collecting information by using a sensor.
  • the sensor 401 is installed on the AR device in a built-in or external manner, and the feature tag is mounted on the target object 402.
• the sensor 401 can actively acquire or passively receive sensing information on the target object 402, so that the feature point information of the target object can be obtained based on the sensor on the AR device; this information includes the current positional relationship information (including angle, distance, direction, and the like) between the doctor and the target object 402. That is, the feature point information of the target object can be acquired by the sensor.
  • the AR device captures the target object by using a camera on the AR device to acquire feature point information of the target object, where the feature point information is information corresponding to a preset position on the target object.
• FIG. 5 is a schematic diagram, provided by the present application, of collecting information by using a camera.
• a camera 501 is installed on the AR device in a built-in or external manner, and the camera 501 can take a photograph of the target object 502, so that pattern recognition analysis can be performed on the captured photograph to obtain information about the preset positions.
• for example, the target object is a skull and the preset positions are the eyes and the nose. The target object is photographed by the camera on the AR device to obtain an image, and the position information of feature points such as the eyes and the nose in the image is then obtained based on pattern recognition, so that the positional relationship (including the angular relationship and the distance relationship) between the AR device and the target object can be known.
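• As an illustration of how such feature points can yield the positional relationship, the following sketch estimates the camera-to-object distance and horizontal bearing from the pixel positions of the two recognized eyes using a simple pinhole camera model. The focal length and eye spacing are assumed calibration constants, not values from the application:

```python
import math

# Hypothetical sketch: positional relationship (distance, bearing) from
# the pixel positions of two recognized landmarks (the eyes).
FOCAL_PX = 800.0       # assumed camera focal length, in pixels
EYE_SPACING_M = 0.063  # assumed inter-pupillary distance, in meters

def estimate_pose(left_eye_px, right_eye_px, image_width=640):
    # Pinhole model: distance = focal_length * real_size / pixel_size.
    pixel_span = abs(right_eye_px[0] - left_eye_px[0])
    distance_m = FOCAL_PX * EYE_SPACING_M / pixel_span
    # Horizontal bearing of the landmark midpoint off the optical axis.
    mid_x = (left_eye_px[0] + right_eye_px[0]) / 2.0
    bearing_rad = math.atan2(mid_x - image_width / 2.0, FOCAL_PX)
    return distance_m, bearing_rad

# Eyes 100 px apart, centered in a 640 px wide image:
dist_m, bearing_rad = estimate_pose((270, 240), (370, 240))
```

A real system would recover the full angular and distance relationship from more feature points; this sketch shows only the principle.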
• the feature point information of the target object, which includes the information about the positional relationship between the doctor and the target object, can finally be obtained, and the image transformation parameter can then be generated according to the feature point information of the target object collected by the AR device and the first three-dimensional reconstructed image of the target object. Optionally, the generating of the image transformation parameter according to the feature point information of the target object collected by the AR device and the first three-dimensional reconstructed image of the target object includes: determining a feature mode of the target object according to the feature point information; determining a rotation angle, a rotation direction, a translation distance, and a scaling ratio of the first three-dimensional reconstructed image according to the feature mode of the target object and the first three-dimensional reconstructed image of the target object; and using the rotation angle, the rotation direction, the translation distance, and the scaling ratio as the image transformation parameters.
• the feature pattern of the target object is determined according to the feature point information of the target object; a plurality of feature patterns are pre-stored, and each feature pattern represents a positional relationship between the doctor and the target object.
• a feature pattern may be matched from the plurality of feature patterns stored in advance, and then, based on that feature pattern and the first three-dimensional reconstructed image of the target object, the rotation angle, rotation direction, translation distance, and scaling ratio required for the first three-dimensional reconstructed image can be obtained; the rotation angle, the rotation direction, the translation distance, and the scaling ratio are used as the image transformation parameters.
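• The transformation parameters named above can be sketched as a two-dimensional similarity fit between stored feature points and collected feature points, treating points as complex numbers (z' = s·e^(iθ)·z + t). This is an illustrative least-squares method chosen for brevity, not the specific algorithm of the application:

```python
import cmath

# Sketch: recover scaling ratio, rotation angle, and translation that map
# model feature points onto observed feature points (2D similarity fit).

def fit_similarity(model_pts, observed_pts):
    a = [complex(x, y) for x, y in model_pts]
    b = [complex(x, y) for x, y in observed_pts]
    ca = sum(a) / len(a)  # centroid of the model points
    cb = sum(b) / len(b)  # centroid of the observed points
    # Least-squares estimate of m = s * e^(i*theta) from centered points.
    num = sum((bi - cb) * (ai - ca).conjugate() for ai, bi in zip(a, b))
    den = sum(abs(ai - ca) ** 2 for ai in a)
    m = num / den
    scale = abs(m)              # scaling ratio
    angle = cmath.phase(m)      # rotation angle in radians (sign = direction)
    translation = cb - m * ca   # translation as a complex offset
    return scale, angle, translation

# Observed points are the model rotated 90 degrees, scaled 2x, shifted by (1, 0):
s, ang, t = fit_similarity([(0, 0), (1, 0)], [(1, 0), (1, 2)])
```

The rotation direction is carried by the sign of the angle, and the translation distance by the magnitude of the offset, matching the four parameters listed above.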
• the first three-dimensional reconstructed image is adjusted according to the image transformation parameters to obtain a second three-dimensional reconstructed image, and the feature points in the second three-dimensional reconstructed image displayed on the AR device match the feature points of the target object viewed by the user through the AR device.
• after the feature point information of the target object 402 is acquired by the sensor 401, the image transformation parameters may be obtained based on that feature point information and the first three-dimensional reconstructed image 403 (that is, the current three-dimensional reconstructed image); then, in step 102, the first three-dimensional reconstructed image 403 is adjusted according to the image transformation parameters to obtain a second three-dimensional reconstructed image 404, in which the feature points are consistent with the feature points of the target object viewed by the user through the AR device.
• the second three-dimensional reconstructed image 404 is finally displayed on the AR device and coincides with the target object 402 seen by the doctor through the AR device. If the position of the AR device moves, so that the target object seen by the doctor changes (mainly a change of the angle and the distance relative to the AR device), the methods of steps 101 to 104 are reused to adjust the three-dimensional reconstructed image on the AR device, so that the adjusted three-dimensional reconstructed image again coincides with the observed target object.
• the method of the present application therefore ensures that when the doctor moves the AR device freely at the operation site, the three-dimensional reconstructed image on the AR device is updated in real time and kept coincident with the target object, so that the doctor can view the internal structure of the target object through the three-dimensional reconstructed image on the AR device, improving the accuracy and efficiency of the surgery.
• the manner in which the target object information is acquired by the camera in FIG. 5 is similar to that in FIG. 4 above: the image transformation parameters can be obtained according to the acquired information of the target object 502 and the first three-dimensional reconstructed image 503, and the first three-dimensional reconstructed image 503 is then adjusted according to the image transformation parameters to obtain a second three-dimensional reconstructed image 504, where the second three-dimensional reconstructed image 504 coincides with the target object 502.
• the specific implementation of the foregoing steps 101 to 104 in the present application may be performed by a processor in the AR device, that is, a processor is integrated in the AR device; alternatively, the method of steps 101 to 104 may be implemented by a third-party PC (personal computer), that is, the AR device is only responsible for collecting the feature information of the target object and sending it to the PC, and the PC transforms the first three-dimensional reconstructed image to obtain the second three-dimensional reconstructed image, which is then sent to the AR device for display.
• when the method of steps 101 to 104 is performed by the PC, the feature point information of the target object collected by the AR device needs to be received; the first three-dimensional reconstructed image is then adjusted according to the image transformation parameters to obtain the second three-dimensional reconstructed image, and the second three-dimensional reconstructed image is sent to the AR device so that it is displayed on the AR device.
  • the three-dimensional reconstructed image on the AR device can be adjusted in real time regardless of how the doctor moves the AR device, so that the three-dimensional reconstructed image displayed on the AR device coincides with the target object viewed by the user through the AR device.
  • the present application also provides the following methods to adjust the three-dimensional reconstructed image on the AR device:
  • the current AR device displays a second three-dimensional reconstructed image that has been aligned with the target object.
• the doctor can turn off the automatic alignment function so that no real-time calibration is performed, and can then send an instruction to the AR device, for example by voice, head movement, gesture, or by manually adjusting the buttons on the AR device. For example, the doctor informs the AR device of the desired operation by saying "magnify twice" and "rotate 30 degrees counterclockwise".
• the AR device adjusts the second three-dimensional reconstructed image to obtain a third three-dimensional reconstructed image and displays it on the AR device; alternatively, the AR device sends the received voice command to the PC, the PC adjusts the second three-dimensional reconstructed image accordingly to obtain the third three-dimensional reconstructed image, and sends it to the AR device for display. This method therefore allows the doctor to control the display of the three-dimensional reconstructed image on the AR device.
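• A minimal sketch of turning such a spoken instruction into image adjustment parameters follows. The phrase patterns mirror the examples above ("magnify" with a factor, "rotate 30 degrees counterclockwise"), and the parameter names are hypothetical:

```python
import re

# Hypothetical sketch: parse a recognized voice command into image
# adjustment parameters (scale factor and signed rotation in degrees).

def parse_command(text):
    params = {"scale": 1.0, "rotation_deg": 0.0}
    m = re.search(r"magnify (\d+(?:\.\d+)?)", text)
    if m:
        params["scale"] = float(m.group(1))
    m = re.search(r"rotate (\d+(?:\.\d+)?) degrees (clockwise|counterclockwise)",
                  text)
    if m:
        # Counterclockwise rotation is taken as positive by convention here.
        sign = -1.0 if m.group(2) == "clockwise" else 1.0
        params["rotation_deg"] = sign * float(m.group(1))
    return params

cmd = parse_command("magnify 2 and rotate 30 degrees counterclockwise")
```

The resulting parameters would then drive the adjustment of the second three-dimensional reconstructed image into the third.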
• the first three-dimensional reconstructed image of the target object is obtained; the image transformation parameters are then generated according to the feature point information of the target object collected by the AR device and the first three-dimensional reconstructed image of the target object; the first three-dimensional reconstructed image is adjusted according to the image transformation parameters to obtain a second three-dimensional reconstructed image, in which the feature points match the feature points of the target object viewed by the user through the AR device; and the second three-dimensional reconstructed image is used for display on the AR device.
• the application can thereby display a three-dimensional reconstructed image on the AR device, and the displayed three-dimensional reconstructed image is adjusted in real time according to the collected feature point information of the target object, so that the three-dimensional reconstructed image the doctor sees on the AR device is consistent with the target object viewed by the user through the AR device. Even if the position of the AR device moves, the three-dimensional reconstructed image on the AR device can be adjusted in real time, which greatly improves the accuracy and efficiency of the operation.
  • a detailed flowchart of a real-time feedback-based augmented reality human body positioning navigation method includes:
  • Step 601 Generate an initial three-dimensional reconstructed image of the target object according to the medical image data of the target object.
  • Step 602 Adjust an initial three-dimensional reconstructed image according to the medical database system to obtain a first three-dimensional reconstructed image of the target object.
  • Step 603 Determine a feature mode of the target object according to the feature point information of the target object collected by the AR device, where the AR device is transparent and allows the user to view the target object through the AR device.
  • Step 604 Determine, according to the feature mode of the target object and the first three-dimensional reconstructed image of the target object, a rotation angle, a rotation direction, a translation distance, and a scaling ratio of the first three-dimensional reconstructed image;
• Step 605 Use the rotation angle, rotation direction, translation distance, and scaling ratio as the image transformation parameters.
  • Step 606 Adjust the first three-dimensional reconstructed image according to the image transformation parameter to obtain a second three-dimensional reconstructed image.
  • Step 607 Send the second three-dimensional reconstructed image to the AR device, so that the second three-dimensional reconstructed image is displayed on the AR device.
  • the second three-dimensional reconstructed image displayed on the AR device coincides with the target object viewed by the user through the AR device.
  • Step 608 Generate image adjustment parameters according to the received user instruction information.
  • Step 609 Adjust the second three-dimensional reconstructed image according to the image adjustment parameter to obtain a third three-dimensional reconstructed image.
  • Step 610 Display the third three-dimensional reconstructed image on the AR device.
  • the third three-dimensional reconstructed image displayed on the AR device coincides with the target object viewed by the user through the AR device.
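• The control flow of steps 601 to 610 can be sketched with each step reduced to a stub, so that only the ordering and data flow are shown; all function bodies below are placeholders, not the application's algorithms:

```python
# Sketch of the step 601-610 control flow with placeholder stubs.

def generate_initial(data): return {"data": data}                 # step 601
def adjust_with_database(img): return dict(img, db=True)          # step 602
def determine_feature_mode(info): return info                     # step 603
def compute_transform(mode, img): return {"mode": mode}           # steps 604-605
def apply_transform(img, t): return dict(img, **t)                # step 606 / 609
def parse_adjust(instr): return {"user": instr}                   # step 608

def run_navigation(medical_data, feature_info, user_instruction=None):
    first = adjust_with_database(generate_initial(medical_data))
    mode = determine_feature_mode(feature_info)
    second = apply_transform(first, compute_transform(mode, first))
    # Step 607: the second image is sent to the AR device for display.
    if user_instruction is not None:
        # Steps 608-610: user-directed adjustment yields the third image.
        return apply_transform(second, parse_adjust(user_instruction))
    return second

displayed = run_navigation("CT", "frontal", user_instruction="zoom")
```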
• the first three-dimensional reconstructed image of the target object is obtained; the image transformation parameters are then generated according to the feature point information of the target object collected by the AR device and the first three-dimensional reconstructed image of the target object; the first three-dimensional reconstructed image is adjusted according to the image transformation parameters to obtain a second three-dimensional reconstructed image, in which the feature points match the feature points of the target object viewed by the user through the AR device; and the second three-dimensional reconstructed image is used for display on the AR device.
• the application can thereby display a three-dimensional reconstructed image on the AR device, and the displayed three-dimensional reconstructed image is adjusted in real time according to the collected feature point information of the target object, so that the three-dimensional reconstructed image the doctor sees on the AR device is consistent with the target object viewed by the user through the AR device. Even if the position of the AR device moves, the three-dimensional reconstructed image on the AR device can be adjusted in real time, which greatly improves the accuracy and efficiency of the operation.
  • the present application further provides a human body positioning navigation device, including:
  • the image generating unit 701 is configured to generate an initial three-dimensional reconstructed image of the target object according to the medical image data of the target object; and adjust the initial three-dimensional reconstructed image according to an instruction input by the medical database system and the user, to obtain the target object First three-dimensional reconstruction image;
• the image transformation parameter generation unit 702 is configured to generate the image transformation parameters according to the feature point information of the target object collected by the augmented reality AR device and the first three-dimensional reconstructed image of the target object, where the first three-dimensional reconstructed image is generated according to the medical image data of the target object, and the AR device is transparent and allows a user to view the target object through the AR device;
• the adjusting unit 703 is configured to adjust the first three-dimensional reconstructed image according to the image transformation parameters to obtain a second three-dimensional reconstructed image, and the feature points in the second three-dimensional reconstructed image displayed on the AR device match the feature points of the target object viewed by the user through the AR device.
  • the image transformation parameter generating unit 702 is specifically configured to:
  • the rotation angle, the rotation direction, the translation distance, and the scaling ratio are used as the image transformation parameters.
  • the human body positioning navigation device further includes a receiving unit 704, configured to:
  • the human body positioning navigation device further includes a sending unit 705, configured to:
  • the AR device collects feature point information of the target object by scanning the target object by using a sensor on the AR device, where the feature point information is information corresponding to the feature tag, or
  • the AR device acquires feature point information of the target object by capturing the target object by using a camera on the AR device, where the feature point information is information corresponding to a preset position on the target object.
  • the image transformation parameter generating unit 702 is further configured to generate an image adjustment parameter according to the received user instruction information
  • the adjusting unit is further configured to adjust the second three-dimensional reconstructed image according to the image adjustment parameter to obtain a third three-dimensional reconstructed image;
  • the human body positioning navigation device further includes a display unit 706 for displaying the third three-dimensional reconstructed image on the AR device.
  • the image generating unit 701 is specifically configured to:
  • the image generating unit 701 is specifically configured to:
  • the image generating unit 701 is specifically configured to:
• the first three-dimensional reconstructed image of the target object is obtained; the image transformation parameters are then generated according to the feature point information of the target object collected by the AR device and the first three-dimensional reconstructed image of the target object; the first three-dimensional reconstructed image is adjusted according to the image transformation parameters to obtain a second three-dimensional reconstructed image, in which the feature points coincide with the feature points of the target object viewed by the user through the AR device; and the second three-dimensional reconstructed image is used for display on the AR device.
• the application can thereby display a three-dimensional reconstructed image on the AR device, and the displayed three-dimensional reconstructed image is adjusted in real time according to the collected feature point information of the target object, so that the three-dimensional reconstructed image the doctor sees on the AR device is consistent with the target object viewed by the user through the AR device. Even if the position of the AR device moves, the three-dimensional reconstructed image on the AR device can be adjusted in real time, which greatly improves the accuracy and efficiency of the operation.
• the present application further provides a human body positioning navigation device 800, which is shown in FIG. and includes one or more processors 810 and a memory 820; one processor 810 is taken as an example in the figure.
  • the human body positioning navigation device that performs the augmented reality human body positioning navigation method based on real-time feedback may further include: an input device 830 and an output device 840.
  • the processor 810, the memory 820, the input device 830, and the output device 840 may be connected by a bus or other means, as exemplified by a bus connection in FIG.
• the memory 820, as a non-volatile computer readable storage medium, can be used for storing non-volatile software programs, non-volatile computer executable programs, and modules, such as the program instructions/modules corresponding to the real-time feedback-based augmented reality human body positioning navigation method in the present application (for example, the receiving unit 704, the image generating unit 701, the image transformation parameter generating unit 702, the adjusting unit 703, the sending unit 705, and the display unit 706 shown in FIG. 7).
• the processor 810 executes the various functional applications and data processing of the server by running the non-volatile software programs, instructions, and modules stored in the memory 820, that is, implements the real-time feedback-based augmented reality human body positioning navigation method in the present application.
• the memory 820 may include a storage program area and a storage data area, where the storage program area may store an operating system and an application required for at least one function, and the storage data area may store data created according to the use of the human body positioning navigation device, and the like.
  • memory 820 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
• memory 820 can optionally include memory remotely located relative to processor 810, and such remote memory can be connected to the human body positioning navigation device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • Input device 830 can receive input numeric or character information and generate key signal inputs related to user settings and function control of the body positioning navigation device.
  • the output device 840 can include a display device such as a display screen.
  • the one or more modules are stored in the memory 820, and when executed by the one or more processors 810, perform a real-time feedback-based augmented reality human body positioning navigation method in any of the above method embodiments.
  • the at least one processor 810 is configured to: generate an initial three-dimensional reconstructed image of the target object according to the medical image data of the target object; adjust the initial three-dimensional reconstructed image according to an instruction input by the medical database system and the user, to obtain the a first three-dimensional reconstructed image of the target object;
  • the AR device is transparent and allows a user to view the target object through the AR device;
  • the at least one processor 810 is specifically configured to:
  • the rotation angle, the rotation direction, the translation distance, and the scaling ratio are used as the image transformation parameters.
  • the at least one processor 810 is configured to:
  • the at least one processor 810 scans the target object by using a sensor on the AR device to collect feature point information of the target object, where the feature point information is information corresponding to the feature tag, or
  • the at least one processor 810 captures the target object by using a camera on the AR device to acquire feature point information of the target object, where the feature point information is information corresponding to a preset position on the target object.
  • the at least one processor 810 is further configured to generate an image adjustment parameter according to the received user instruction information, and adjust the second three-dimensional reconstructed image according to the image adjustment parameter to obtain a third three-dimensional reconstructed image. Displaying the third three-dimensional reconstructed image on the AR device.
  • the at least one processor 810 is specifically configured to:
  • the at least one processor 810 is specifically configured to:
• obtain a second parameter adjustment instruction according to the function modules selected by the user from the pre-established function module library and the manner in which the user connects the selected function modules, where each function module in the pre-established function module library represents one image processing mode or a combination of multiple image processing modes, and all function modules in the pre-established function module library can be connected under certain rules; and adjust the initial three-dimensional reconstructed image according to the second parameter adjustment instruction and the medical database system to obtain the first three-dimensional reconstructed image of the target object.
  • the at least one processor 810 is specifically configured to:
  • the present application also provides a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform real-time feedback based on any of the above Augmented reality human body positioning navigation method.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
• the application also provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the real-time feedback-based augmented reality human body positioning navigation method according to any one of the above.
  • the storage medium may be a magnetic disk, an optical disk, a read only memory (ROM), or a random access memory (RAM).
  • the present application also provides a puncture trajectory guiding device 900, as shown in FIG. 9, comprising a first component 901, a second component 902, a third component 903;
  • the third component 903 includes a groove on the base and the base, the axial direction of the groove is perpendicular to the base, and the inner side of the groove includes a first fixing structure;
  • the first component 901 is a catheter structure for carrying a puncture tube through the first component 901, the puncture tube enters a puncture object through the first component, and the first end of the first component 901 is inserted a groove of the third component 903, and an outer diameter of the first end of the first component 901 is smaller than an inner diameter of the groove, so that the first component 901 can rotate within the groove;
  • the second component 902 includes a second fixing structure for engaging with the first fixing structure, so that the first component 901 inserted into the third component 903 is fixed at a first angle .
• optionally, the first fixing structure is a threaded structure and the second fixing structure is a threaded structure; alternatively, the first fixing structure is a bolt structure and the second fixing structure is a bolt structure.
• the first component 901, the second component 902, and the third component 903 are three independent components, and a complete puncture trajectory guiding device 900 can be obtained by assembling them: the first component 901 is inserted into the groove of the third component 903 and can be rotated within the groove, so that the first component 901 and the base of the third component 903 form various angles; after the required first angle is reached, the second fixing structure of the second component 902 is screwed together with the first fixing structure in the third component 903, so that the first component 901 is fixed at the first angle.
• FIG. 10 is a cross-sectional view of the third component 903.
  • the first angle is determined according to the puncture direction and angle required for the puncture, that is, after the first angle is formed, the first component is consistent with the puncture path for the puncture object.
• the puncture trajectory guiding device 900 shown in FIG. 9 can be applied to a human body puncture operation. For example, when a puncture of the skull is required, the puncture point on the skull is first determined and a perforation is made at the puncture point with a puncture tool; the base of the third component 903 of the puncture trajectory guiding device 900 is then placed at the position of the skull perforation and fixed on the skull, for example with titanium screws, so that the puncture trajectory guiding device 900 is fixed to the skull and the first component 901 is aligned with the cranial perforation position.
• the device 900 further includes a fourth component (not shown), which is also a catheter structure. The fourth component is placed into the first component 901 (that is, the first component 901 carries the fourth component passing through it), and the fourth component carries the puncture tube passing through it; in other words, the fourth component is placed into the first component 901, and the puncture tube is placed into the fourth component, so that the puncture tube can move inside the fourth component and enter the puncture object. To facilitate the puncture, the difference between the inner diameter of the first component and the outer diameter of the fourth component is less than a first threshold, and the difference between the inner diameter of the fourth component and the outer diameter of the puncture tube is less than a second threshold, where the first threshold and the second threshold are both small values, so that the inner diameter of the first component is slightly larger than the outer diameter of the fourth component.
  • optionally, the length of the first component 901 is 6 centimeters (cm).
  • optionally, the inner diameter of the fourth component is 2.5 millimeters (mm), 4 millimeters, or 4.67 millimeters. Since the fourth component is available in several inner diameters, it can accommodate puncture tubes of various thicknesses, meeting the needs of different application scenarios.
  • the second end of the first component 901 includes a tracker configured to detect the first angle. Optionally, the tracker is a 4-pronged or 5-pronged tracker; optionally, the tracker is fully integrated with the first component 901.
  • after tracking the first angle at which the first component 901 sits, the tracker can send the first angle to the human body positioning navigation device of the present application, which determines from that angle whether the first component 901 lies on the puncture path. If the first component 901 is not on the puncture path, the first angle is adjusted repeatedly until the first component 901 lies on the puncture path, ensuring that the puncture tube can move along the puncture path.
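  • the closed-loop adjustment described above — read the tracker's angle, compare it with the planned path, adjust, repeat — can be sketched as follows. This is an illustrative sketch only: the function names, scalar-angle model, and tolerance are assumptions, not part of the patent.

```python
def on_puncture_path(angle_deg, planned_deg, tol_deg=0.5):
    """True when the tracked angle matches the planned puncture path."""
    return abs(angle_deg - planned_deg) <= tol_deg

def align_first_component(read_angle, adjust_by, planned_deg, max_iter=100):
    """Repeatedly adjust the first angle until the component lies on the path.

    read_angle(): returns the tracker's current angle reading (assumed API).
    adjust_by(delta): nudges the component by delta degrees (assumed API).
    """
    for _ in range(max_iter):
        current = read_angle()
        if on_puncture_path(current, planned_deg):
            return current
        adjust_by(planned_deg - current)  # keep adjusting toward the planned path
    raise RuntimeError("component could not be aligned with the puncture path")
```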
  • part or all of the first component 901, the second component 902, and the third component 903 are split-in-half (half-fit) structures. In practical applications the device may be single-use: after the operation, it is pulled apart to the sides by hand, so that the puncture trajectory guiding device 900 separates into two halves and detaches from the fourth component.
  • the present application also provides a puncture trajectory guiding device, shown in FIG. 11 and applied to a puncture scenario, comprising a catheter and two marking members fixed at different positions on the catheter. The catheter is configured to carry a puncture tube passing through it, through which the puncture tube enters the puncture object. During puncture, the two marking members are respectively made to coincide with the marking marks on the puncture positioning line displayed on the augmented reality (AR) device.
  • the first of the two marking members is at a first distance from the first end of the catheter, and the second is at a second distance from the first end, the first end of the catheter being the end adjacent to the puncture object.
  • optionally, the first distance is 5 centimeters (cm) and the second distance is 10 centimeters.
  • the inner diameter of the catheter may be 2.5 millimeters (mm), 4 millimeters, or 4.67 millimeters.
  • the inside of the catheter contains at least one elastic clip (spring tab) for securing the puncture tube passing through the catheter.
  • in practical applications, the first end of the puncture trajectory guiding device shown in FIG. 11 is fixed to the puncture object, such as a skull, with the first end aligned with the puncture point, so that a puncture tube inserted into the catheter of the device passes through the first end and the puncture point to reach the interior of the skull.
  • optionally, in combination with the aforementioned human body positioning navigation device, that device may generate a puncture path through the puncture object and an extension line of the puncture path outside the puncture object, called the puncture positioning line, which carries two virtual marking marks; the generated puncture positioning line and virtual marks are sent to the AR glasses.
  • when wearing the AR glasses, the user can see the puncture positioning line and the virtual marking marks on the glasses, and can also see, through the glasses, the actual position of the puncture trajectory guiding device of FIG. 11 on the puncture object. By moving themselves or moving the device shown in FIG. 11, the user can make the puncture positioning line displayed on the AR glasses coincide with the catheter of the device, and make the two virtual marking marks on the positioning line coincide with the two marking members on the catheter, completing the virtual-real registration.
  • the device embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment.


Abstract

An augmented reality human-body positioning and navigation method and apparatus based on real-time feedback, comprising: obtaining a first three-dimensional reconstructed image of a target object according to medical image data of the target object, a medical database system, and/or user instructions; generating image transformation parameters according to feature point information of the target object collected by an AR device and the first three-dimensional reconstructed image; and adjusting the first three-dimensional reconstructed image according to the image transformation parameters to obtain a second three-dimensional reconstructed image, which is displayed on the AR device in real time. The method displays a three-dimensional reconstructed image on the AR device and adjusts it in real time according to the collected feature point information of the target object, so that the reconstructed image on the AR device coincides with the target object the user views through the AR device, greatly improving the accuracy and efficiency of surgery.

Description

Augmented reality human-body positioning and navigation method and apparatus based on real-time feedback
This application claims priority to the Chinese patent application No. 201610395355.4, entitled "Display method and apparatus based on a general-purpose display device", filed with the State Intellectual Property Office of the People's Republic of China on June 6, 2016; to the Chinese patent application No. 201610629122.6, entitled "Augmented reality human-body positioning and navigation method and apparatus based on real-time feedback", filed with the State Intellectual Property Office of the People's Republic of China on August 3, 2016; and to the U.S. patent application No. 15/292,947, entitled "Augmented reality human-body positioning and navigation method and apparatus based on real-time feedback", filed with the United States Patent and Trademark Office on October 13, 2016, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer technology, and in particular to an augmented reality human-body positioning and navigation method and apparatus based on real-time feedback.
Background
At present, a large number of clinical procedures rely on accurate anatomical localization, such as various puncture operations, yet many anatomical structures are still localized freehand by the physician. This makes localization inaccurate and introduces operational risk.
To address this, three-dimensional visualization software such as 3DSlicer and ImageJ, or three-dimensional reconstruction software bundled with navigation systems, is gradually being adopted clinically. Such software reconstructs the patient's body part in three dimensions for preoperative observation, so that the physician can assess the condition of the body part.
Although these methods improve the efficiency and accuracy of surgery to some extent, problems remain: 1) the three-dimensional reconstructed images generated by this software can only be used for preoperative observation and cannot establish a direct link between the reconstructed image and the real human body; 2) the final reconstructed image is not adapted to the operating environment, and the generated positioning data is not dynamically adjusted by interactive input/feedback from the physician or the scene; 3) the physician's operation plan cannot be mapped onto the patient's real physical anatomy.
In summary, in the prior art the medical three-dimensional reconstructed image has no direct link to the real human body and the physician's operation plan cannot correspond to the patient's real physical anatomy, so real-time adjustment against the real human body at the surgical scene is impossible.
Summary
This application provides an augmented reality human-body positioning and navigation method and apparatus based on real-time feedback, to solve the prior-art problem that the medical three-dimensional reconstructed image has no direct link to the real human body and the physician's operation plan cannot correspond to the patient's real physical anatomy, making real-time adjustment against the real human body at the surgical scene impossible.
In a first aspect, this application provides an augmented reality human-body positioning and navigation method based on real-time feedback, comprising:
generating an initial three-dimensional reconstructed image of a target object according to medical image data of the target object;
adjusting the initial three-dimensional reconstructed image according to a medical database system and user-input instructions to obtain a first three-dimensional reconstructed image of the target object;
generating image transformation parameters according to feature point information of the target object collected by an augmented reality (AR) device and the first three-dimensional reconstructed image of the target object, the first three-dimensional reconstructed image being generated from the medical image data of the target object, and the AR device being transparent so that a user can view the target object through it;
adjusting the first three-dimensional reconstructed image according to the image transformation parameters to obtain a second three-dimensional reconstructed image, wherein feature points in the second three-dimensional reconstructed image displayed on the AR device coincide with feature points of the target object viewed by the user through the AR device.
In this application, the first three-dimensional reconstructed image of the target object is first obtained from the target object's medical image data, the medical database system, and user-input instructions; image transformation parameters are then generated from the feature point information of the target object collected by the AR device and the first reconstructed image; the first reconstructed image is adjusted according to those parameters to obtain a second reconstructed image whose feature points coincide with the feature points of the target object viewed by the user through the AR device, and the second reconstructed image is displayed on the AR device. The displayed reconstructed image is adjusted in real time according to the collected feature point information, so that the image the physician sees on the AR device coincides with the target object viewed through it; even if the AR device moves, the reconstructed image can be adjusted in real time, greatly improving intraoperative surgical accuracy and efficiency.
Optionally, generating the image transformation parameters according to the feature point information of the target object collected by the AR device and the first three-dimensional reconstructed image of the target object comprises:
determining a feature pattern of the target object according to the feature point information;
determining a rotation angle, rotation direction, translation distance, and scaling ratio of the first three-dimensional reconstructed image according to the feature pattern of the target object and the first three-dimensional reconstructed image;
taking the rotation angle, rotation direction, translation distance, and scaling ratio as the image transformation parameters.
Optionally, before generating the image transformation parameters according to the feature point information of the target object collected by the AR device and the first three-dimensional reconstructed image, the method further comprises:
receiving the feature point information of the target object collected by the AR device;
and, after adjusting the first three-dimensional reconstructed image according to the image transformation parameters to obtain the second three-dimensional reconstructed image, the method further comprises:
sending the second three-dimensional reconstructed image to the AR device for display on the AR device.
Optionally, the AR device collects the feature point information of the target object by scanning the target object with a sensor on the AR device, the feature point information being information corresponding to feature tags; or the AR device obtains the feature point information of the target object by photographing the target object with a camera on the AR device, the feature point information being information corresponding to preset positions on the target object.
Optionally, the method further comprises:
generating image adjustment parameters according to received user instruction information;
adjusting the second three-dimensional reconstructed image according to the image adjustment parameters to obtain a third three-dimensional reconstructed image;
displaying the third three-dimensional reconstructed image on the AR device.
Optionally, adjusting the initial three-dimensional reconstructed image according to the medical database system and user-input instructions to obtain the first three-dimensional reconstructed image of the target object comprises:
receiving a first parameter adjustment instruction input by the user through a preset parameter adjustment system, the parameter adjustment system being configured to display three-dimensional reconstructed image information in a visualized manner;
adjusting the initial three-dimensional reconstructed image according to the first parameter adjustment instruction and the medical database system to obtain the first three-dimensional reconstructed image of the target object.
Optionally, adjusting the initial three-dimensional reconstructed image according to the medical database system and user-input instructions to obtain the first three-dimensional reconstructed image of the target object comprises:
determining a second parameter adjustment instruction according to function modules selected by the user from a pre-established function module library and the way the user connects the selected modules, each function module in the library representing one image processing method or a combination of several image processing methods, and all function modules in the library being connectable under certain rules;
adjusting the initial three-dimensional reconstructed image according to the second parameter adjustment instruction and the medical database system to obtain the first three-dimensional reconstructed image of the target object.
Optionally, generating the initial three-dimensional reconstructed image of the target object according to its medical image data comprises:
generating the initial three-dimensional reconstructed image according to the medical image data of the target object, the function modules selected by the user from a pre-established function module library, and the way the user connects them, each function module representing one image processing method or a combination of several, and all function modules being connectable under certain rules;
and adjusting the initial three-dimensional reconstructed image according to the medical database system and user-input instructions to obtain the first three-dimensional reconstructed image comprises:
obtaining a third parameter adjustment instruction according to instruction input performed by the user on the selected function modules;
adjusting the initial three-dimensional reconstructed image according to the medical database system and the third parameter adjustment instruction to obtain the first three-dimensional reconstructed image of the target object.
In a second aspect, this application provides a human body positioning navigation device, comprising:
an image generation unit, configured to generate an initial three-dimensional reconstructed image of a target object according to its medical image data, and to adjust the initial reconstructed image according to a medical database system and user-input instructions to obtain a first three-dimensional reconstructed image of the target object;
an image transformation parameter generation unit, configured to generate image transformation parameters according to feature point information of the target object collected by an augmented reality (AR) device and the first three-dimensional reconstructed image, the first reconstructed image being generated from the target object's medical image data, and the AR device being transparent so that a user can view the target object through it;
an adjustment unit, configured to adjust the first reconstructed image according to the image transformation parameters to obtain a second three-dimensional reconstructed image, wherein feature points in the second reconstructed image displayed on the AR device coincide with feature points of the target object viewed by the user through the AR device.
Optionally, the image transformation parameter generation unit is specifically configured to:
determine a feature pattern of the target object according to the feature point information;
determine a rotation angle, rotation direction, translation distance, and scaling ratio of the first reconstructed image according to the feature pattern and the first reconstructed image;
take the rotation angle, rotation direction, translation distance, and scaling ratio as the image transformation parameters.
Optionally, the device further comprises a receiving unit, configured to receive the feature point information of the target object collected by the AR device, and a sending unit, configured to send the second reconstructed image to the AR device for display on the AR device.
Optionally, the AR device collects the feature point information by scanning the target object with a sensor on the AR device, the feature point information being information corresponding to feature tags; or the AR device obtains the feature point information by photographing the target object with a camera on the AR device, the feature point information being information corresponding to preset positions on the target object.
Optionally, the image transformation parameter generation unit is further configured to generate image adjustment parameters according to received user instruction information; the adjustment unit is further configured to adjust the second reconstructed image according to the image adjustment parameters to obtain a third three-dimensional reconstructed image; and the device further comprises a display unit, configured to display the third reconstructed image on the AR device.
Optionally, the image generation unit is specifically configured to:
receive a first parameter adjustment instruction input by the user through a preset parameter adjustment system, the parameter adjustment system being configured to display three-dimensional reconstructed image information in a visualized manner;
adjust the initial reconstructed image according to the first parameter adjustment instruction and the medical database system to obtain the first reconstructed image of the target object.
Optionally, the image generation unit is specifically configured to:
determine a second parameter adjustment instruction according to the function modules selected by the user from a pre-established function module library and the way the user connects them, each module representing one image processing method or a combination of several, and all modules being connectable under certain rules;
adjust the initial reconstructed image according to the second parameter adjustment instruction and the medical database system to obtain the first reconstructed image of the target object.
Optionally, the image generation unit is specifically configured to:
generate the initial reconstructed image according to the medical image data, the function modules selected by the user from the pre-established function module library, and the way they are connected, each module representing one image processing method or a combination of several, and all modules being connectable under certain rules;
obtain a third parameter adjustment instruction according to instruction input performed by the user on the selected modules;
adjust the initial reconstructed image according to the medical database system and the third parameter adjustment instruction to obtain the first reconstructed image of the target object.
In a third aspect, this application provides a human body positioning navigation device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to:
generate an initial three-dimensional reconstructed image of a target object according to its medical image data; adjust the initial reconstructed image according to a medical database system and user-input instructions to obtain a first three-dimensional reconstructed image of the target object;
generate image transformation parameters according to feature point information of the target object collected by an augmented reality (AR) device and the first reconstructed image, the first reconstructed image being generated from the target object's medical image data, and the AR device being transparent so that a user can view the target object through it;
adjust the first reconstructed image according to the image transformation parameters to obtain a second three-dimensional reconstructed image, wherein feature points in the second reconstructed image displayed on the AR device coincide with feature points of the target object viewed by the user through the AR device.
Optionally, the at least one processor is specifically configured to:
determine a feature pattern of the target object according to the feature point information;
determine a rotation angle, rotation direction, translation distance, and scaling ratio of the first reconstructed image according to the feature pattern and the first reconstructed image;
take the rotation angle, rotation direction, translation distance, and scaling ratio as the image transformation parameters.
Optionally, the at least one processor is configured to:
receive the feature point information of the target object collected by the AR device;
send the second reconstructed image to the AR device for display on the AR device.
Optionally, the at least one processor collects the feature point information by scanning the target object with a sensor on the AR device, the feature point information being information corresponding to feature tags; or obtains the feature point information by photographing the target object with a camera on the AR device, the feature point information being information corresponding to preset positions on the target object.
Optionally, the at least one processor is further configured to generate image adjustment parameters according to received user instruction information; adjust the second reconstructed image according to the image adjustment parameters to obtain a third three-dimensional reconstructed image; and display the third reconstructed image on the AR device.
Optionally, the at least one processor is specifically configured to:
receive a first parameter adjustment instruction input by the user through a preset parameter adjustment system, the parameter adjustment system being configured to display three-dimensional reconstructed image information in a visualized manner;
adjust the initial reconstructed image according to the first parameter adjustment instruction and the medical database system to obtain the first reconstructed image of the target object.
Optionally, the at least one processor is specifically configured to:
determine a second parameter adjustment instruction according to the function modules selected by the user from a pre-established function module library and the way the user connects them, each module representing one image processing method or a combination of several, and all modules being connectable under certain rules; adjust the initial reconstructed image according to the second parameter adjustment instruction and the medical database system to obtain the first reconstructed image of the target object.
Optionally, the at least one processor is specifically configured to:
generate the initial reconstructed image according to the medical image data, the function modules selected by the user from the pre-established function module library, and the way they are connected, each module representing one image processing method or a combination of several, and all modules being connectable under certain rules;
obtain a third parameter adjustment instruction according to instruction input performed by the user on the selected modules;
adjust the initial reconstructed image according to the medical database system and the third parameter adjustment instruction to obtain the first reconstructed image of the target object.
In a fourth aspect, this application provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the augmented reality human-body positioning and navigation method based on real-time feedback according to any implementation of the first aspect.
In a fifth aspect, this application provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to execute the augmented reality human-body positioning and navigation method based on real-time feedback according to any implementation of the first aspect.
In a sixth aspect, this application provides a puncture trajectory guiding device,
comprising a first component, a second component, and a third component;
the third component comprises a base and a groove on the base, the central axis of the groove being perpendicular to the base, and the inside of the groove containing a first fixing structure;
the first component is a catheter structure configured to carry a puncture tube passing through it, the puncture tube entering the puncture object through the first component; the first end of the first component is inserted into the groove of the third component, and the outer diameter of the first end of the first component is smaller than the inner diameter of the groove;
the second component contains a second fixing structure configured to engage with the first fixing structure so that the first component inserted into the third component is fixed at a first angle.
Optionally, the second end of the first component includes a tracker configured to detect the first angle.
Optionally, the device further comprises a fourth component, the fourth component being a catheter structure; the first component carries the fourth component passing through it, and the fourth component carries the puncture tube passing through it; the difference between the inner diameter of the first component and the outer diameter of the fourth component is less than a first threshold, and the difference between the inner diameter of the fourth component and the outer diameter of the puncture tube is less than a second threshold.
Optionally, the inner diameter of the fourth component is 2.5 mm, 4 mm, or 4.67 mm.
Optionally, some or all of the first component, the second component, and the third component are split-in-half structures.
Optionally, the first fixing structure is a threaded structure and the second fixing structure is a threaded structure.
In a seventh aspect, this application provides a puncture trajectory guiding device, applied to a puncture scenario, characterized by comprising a catheter and two marking members fixed at different positions on the catheter; the catheter is configured to carry a puncture tube passing through it, the puncture tube entering the puncture object through the catheter; during puncture, the two marking members are respectively made to coincide with the marking marks on the puncture positioning line displayed on an augmented reality (AR) device.
Optionally, the first of the two marking members is at a distance of 5 cm from the first end of the catheter and the second is at a distance of 10 cm from the first end, the first end of the catheter being the end adjacent to the puncture object.
Optionally, the inner diameter of the catheter is 2.5 mm, 4 mm, or 4.67 mm.
Optionally, the inside of the catheter contains at least one elastic clip (spring tab) for securing the puncture tube.
Brief Description of the Drawings
To explain the technical solutions of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of this application, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of the augmented reality human-body positioning and navigation method based on real-time feedback provided by this application;
FIG. 2 is a schematic diagram of the parameter adjustment approach based on a parameter adjustment system provided by this application;
FIG. 3 is a schematic diagram of the parameter adjustment approach based on a function module library provided by this application;
FIG. 4 is a schematic diagram of information collection using a sensor provided by this application;
FIG. 5 is a schematic diagram of information collection using a camera provided by this application;
FIG. 6 is a detailed flowchart of the augmented reality human-body positioning and navigation method based on real-time feedback provided by this application;
FIG. 7 is a schematic diagram of the human body positioning navigation device provided by this application;
FIG. 8 is a schematic diagram of the human body positioning navigation device provided by this application;
FIG. 9 shows a puncture trajectory guiding device provided by this application;
FIG. 10 is a cross-sectional view of the third component provided by this application;
FIG. 11 shows another puncture trajectory guiding device provided by this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, this application is described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without creative effort fall within the protection scope of this application.
This application is described in further detail below with reference to the drawings.
As shown in FIG. 1, the augmented reality human-body positioning and navigation method based on real-time feedback provided by this application comprises:
Step 101: generating an initial three-dimensional reconstructed image of a target object according to medical image data of the target object;
Step 102: adjusting the initial three-dimensional reconstructed image according to a medical database system and user-input instructions to obtain a first three-dimensional reconstructed image of the target object;
Step 103: generating image transformation parameters according to feature point information of the target object collected by an augmented reality (AR) device and the first three-dimensional reconstructed image, the first reconstructed image being generated from the medical image data of the target object, and the AR device being transparent so that a user can view the target object through it;
Step 104: adjusting the first three-dimensional reconstructed image according to the image transformation parameters to obtain a second three-dimensional reconstructed image, wherein feature points in the second reconstructed image displayed on the AR device coincide with feature points of the target object viewed by the user through the AR device.
In this application, information may be collected by an AR (Augmented Reality) device, but in practice other general-purpose display devices may also be used, for example any display device that shows images in 2D (2 dimensions) or 3D (3 dimensions), such as VR (Virtual Reality)/3D/2D glasses, VR/3D/2D display devices, or VR/2D/3D wearable devices. The target object is the patient or a body part of the patient (e.g., the head, an arm, the upper body). The patient may lie on the operating table while the physician views, on the AR device, a three-dimensional reconstructed image of the target object; for example, when the patient's head is to be observed, the AR device may display a reconstructed image of the head. A camera mounted on the AR device lets the physician see the patient; of course, if the AR device is transparent and wearable (e.g., AR glasses), the physician can simply wear it and observe the patient through it, seeing both the patient and the reconstructed image on the device. During the operation, the physician can adjust the position of the AR device to find a suitable position at which the displayed reconstructed image coincides with the target object viewed through the device. Taking the head as an example, the physician sees the reconstructed image on the AR device and moves the device until the patient's head viewed through the device overlaps the reconstructed head image, making it convenient to inspect the internal structure of the patient's head through the reconstructed image on the AR device.
Steps 101 to 104 of this application automatically align the three-dimensional reconstructed image on the AR device with the target object: when the AR device moves, the displayed reconstructed image automatically changes its angle and size so that the target object viewed through the device coincides with the transformed reconstructed image.
In step 101, an initial three-dimensional reconstructed image of the target object is first generated from the target object's medical image data, which may be CT (computed tomography), MRI (magnetic resonance imaging), or PET (positron emission tomography) image data, or data obtained by fusing one or more of these; the initial three-dimensional reconstructed image is obtained by three-dimensional modeling.
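As a minimal illustration of the three-dimensional modeling step, the sketch below thresholds a synthetic CT-like volume (values standing in for Hounsfield units) and extracts its surface voxels. A real implementation would use a surface-reconstruction algorithm such as marching cubes; the threshold value, volume shape, and voxel model here are illustrative assumptions, not part of the original.

```python
import numpy as np

def surface_voxels(volume, threshold=300):
    """Return coordinates of voxels on the boundary of the thresholded region.

    A voxel is 'surface' if it passes the threshold but at least one of its
    six axis neighbors does not (a crude stand-in for marching cubes).
    """
    mask = volume >= threshold
    interior = mask.copy()
    for axis in range(3):
        interior &= np.roll(mask, 1, axis) & np.roll(mask, -1, axis)
    return np.argwhere(mask & ~interior)

# synthetic "CT": a solid 3x3x3 bone block inside a 5x5x5 volume
ct = np.zeros((5, 5, 5))
ct[1:4, 1:4, 1:4] = 1000
print(len(surface_voxels(ct)))  # 26: all block voxels except the single interior one
```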
In step 102, the initial three-dimensional reconstructed image is adjusted according to the medical database system and user-input instructions to obtain the first three-dimensional reconstructed image of the target object.
Here, the medical database system is a medical information database in the statistical sense; it can be updated with the current patient's medical data and can optimize the three-dimensional reconstruction data according to historical optimal results and statistics such as historical means and variances, so that the optimized initial reconstructed image is taken as the first three-dimensional reconstructed image.
Two methods of adjusting the initial three-dimensional reconstructed image according to the medical database system and user-input instructions to obtain the first reconstructed image are given below.
Method 1: adjusting the initial three-dimensional reconstructed image based on a preset parameter adjustment system.
Step 1: receiving a first parameter adjustment instruction input by the user through the preset parameter adjustment system, which displays three-dimensional reconstructed image information in a visualized manner;
Step 2: adjusting the initial reconstructed image according to the first parameter adjustment instruction and the medical database system to obtain the first reconstructed image of the target object.
The implementation of Method 1 is illustrated below with reference to FIG. 2, a schematic diagram of the parameter adjustment approach based on a parameter adjustment system.
In FIG. 2, the upper-left three-dimensional image is the initial reconstructed image. To adjust it, the user can modify it directly through the parameter adjustment system shown in FIG. 2, for example via the parameter adjustment boxes on the right. As another example, to obtain a skull puncture path, the user (e.g., a physician) can mark reference points based on experience on the upper-right image and the two lower images, then click the "Generate" button at the lower right of FIG. 2 to obtain the adjusted first reconstructed image, which now contains added information such as the puncture path.
With Method 1, a user such as a physician can fine-tune parameters in the system of FIG. 2 based on everyday experience to obtain the desired result. This enables human-computer interaction and greatly facilitates preoperative observation and intraoperative operation by the physician.
Although Method 1 already provides convenient surgical guidance, a problem remains: the parameter adjustment system of FIG. 2 is designed in advance, so once built it only supports the functions it provides, which cannot satisfy the differing needs of different users. For example, if a physician wants blood vessels shown in the final reconstructed image but the system was not built with that function, the desired display cannot be obtained through the system of FIG. 2.
To this end, this application provides another method of adjusting the initial three-dimensional reconstructed image, as follows.
Method 2: adjusting the initial three-dimensional reconstructed image based on function modules selected by the user from a pre-established function module library.
Step 1: determining a second parameter adjustment instruction according to the function modules selected by the user from the pre-established function module library and the way the user connects them, each function module representing one image processing method or a combination of several, and all modules in the library being connectable under certain rules;
Step 2: adjusting the initial reconstructed image according to the second parameter adjustment instruction and the medical database system to obtain the first reconstructed image of the target object.
The implementation of Method 2 is illustrated below with reference to FIG. 3, a schematic diagram of the parameter adjustment approach based on a function module library.
Method 2 uses a function module library to adjust the initial reconstructed image and obtain the desired first reconstructed image. Specifically, the library contains multiple function modules, each representing one image processing method or a combination of several, and all modules in the library can be connected under certain rules. The connections may include, but are not limited to, serial, parallel, and feedback arrangements.
For example, referring to FIG. 3, to additionally display the cerebral ventricles on top of the initial reconstructed image, a "display ventricles" module can be added; to additionally display the skull, a "display skull" module; to additionally display the skin, a "display skin" module; and to specify a puncture trajectory, a "specify puncture trajectory" module. Multiple modules can be connected, arranged, and combined (the arrangement determines execution order); after a module is added it can also be deleted, which is flexible and convenient. Each module is editable: for example, right-clicking a module pops up a property-editing dialog in which its parameters can be modified and adjusted, with an immediate preview of the display effect. Compared with Method 1, Method 2 is therefore more convenient, especially for users such as physicians, making the method easier to understand and use.
In practice, after a user has used a modular combination once, the combination can also be stored in the medical database as a processing template for future reference.
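The serial connection of function modules described above can be sketched as composable functions over a shared render state. The module names and state layout below are illustrative assumptions, not the patent's specified data model.

```python
def add_layer(name):
    """A function module that overlays one anatomical layer on the render state."""
    def module(state):
        return {**state, "layers": state.get("layers", []) + [name]}
    return module

def connect_serial(*modules):
    """Connect modules in series: the output of each feeds the next."""
    def pipeline(state):
        for module in modules:
            state = module(state)
        return state
    return pipeline

# e.g. "display skin" -> "display skull" -> "display ventricles"
pipeline = connect_serial(add_layer("skin"), add_layer("skull"), add_layer("ventricles"))
result = pipeline({"source": "ct_volume"})
print(result["layers"])  # ['skin', 'skull', 'ventricles']
```

A saved pipeline like this plays the role of the "processing template" mentioned above: the module list itself can be stored and replayed later.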
In addition, this application provides another method of generating the initial three-dimensional reconstructed image, comprising:
Step 1: generating the initial three-dimensional reconstructed image of the target object according to its medical image data, the function modules selected by the user from the pre-established function module library, and the way the user connects them, each module representing one image processing method or a combination of several, and all modules in the library being connectable under certain rules;
Step 2: obtaining a third parameter adjustment instruction according to instruction input performed by the user on the selected modules;
Step 3: adjusting the initial reconstructed image according to the medical database system and the third parameter adjustment instruction to obtain the first reconstructed image of the target object.
Specifically, the initial reconstructed image can first be obtained from the modules selected from the pre-established library and the way they are connected, combined with the input medical image data of the target object. The adjustment of the initial image can then be completed by steps 2 and 3: the user performs instruction input on the selected modules (e.g., right-clicking each selected module pops up a parameter adjustment dialog in which instructions are entered, yielding adjusted parameters), producing the third parameter adjustment instruction, and the initial image is adjusted according to the medical database system and the third parameter adjustment instruction to obtain the first reconstructed image of the target object.
Of course, the initial image may also be adjusted directly according to the third parameter adjustment instruction to obtain the first reconstructed image of the target object.
In this application, the medical image data and the function module library can be combined: the medical image data is fed into the function module library, which outputs the final three-dimensional reconstructed image.
After the first reconstructed image is obtained, in order to display it appropriately on the AR device, its angle and orientation must still be adjusted so that, when displayed on the AR device, it coincides with the patient's target object that the physician observes through the AR device.
In step 103, image transformation parameters are generated according to the feature point information of the target object collected by the AR device and the first three-dimensional reconstructed image, the first reconstructed image being generated from the target object's medical image data, and the AR device being transparent so that the user can view the target object through it.
When the AR device collects information about the target object, the collected information may include feature points, brightness, contrast, depth of field, distance, hue, chroma, edges, and so on. This application takes the collected feature point information as the example of how the reconstructed image displayed on the AR device is adjusted.
In this application, the image transformation parameters are generated from the collected feature point information of the target object and the first reconstructed image. Optionally, the AR device can collect the feature point information in at least the following two ways.
Mode 1: the AR device scans the target object with a sensor on the AR device to collect the feature point information, the feature point information being information corresponding to feature tags.
In this mode, referring to FIG. 4, a schematic diagram of information collection using a sensor, a sensor 401 is mounted on the AR device (built-in or external) and feature tags are attached to the target object 402. The tags are recognized by the sensor 401, which can actively acquire or passively receive sensing information from the target object 402. The sensor on the AR device thus obtains the feature point information of the target object, which contains the current positional relationship between the physician and the target object 402 (angle, distance, direction, etc.); that is, the feature point information of the target object is obtained through the sensor.
Mode 2: the AR device photographs the target object with a camera on the AR device to obtain the feature point information, the feature point information being information corresponding to preset positions on the target object.
In this mode, referring to FIG. 5, a schematic diagram of information collection using a camera, a camera 501 is mounted on the AR device (built-in or external) and photographs the target object 502. Pattern-recognition analysis of the captured photographs yields information about the preset positions. For example, in FIG. 5 the target object is a head and the preset positions are the eyes and nose: the camera on the AR device photographs the target object, and pattern recognition extracts the positions of feature points such as the eyes and nose in the image, revealing the current positional relationship (angle and distance) between the AR device and the target object.
Whichever mode is used, the feature point information of the target object is finally obtained, containing information such as the positional relationship between the physician and the target object, from which the image transformation parameters can be generated together with the first reconstructed image. Optionally, generating the image transformation parameters according to the feature point information collected by the AR device and the first reconstructed image comprises: determining a feature pattern of the target object according to the feature point information; determining a rotation angle, rotation direction, translation distance, and scaling ratio of the first reconstructed image according to the feature pattern and the first reconstructed image; and taking the rotation angle, rotation direction, translation distance, and scaling ratio as the image transformation parameters.
In this method, the feature pattern of the target object is first determined from the feature point information. Multiple feature patterns are stored in advance, each representing a positional relationship between the physician and the target object; the target object's feature point information is matched against the stored patterns to select one. Based on the selected pattern and the first reconstructed image, the rotation angle, rotation direction, translation distance, and scaling ratio needed for the first reconstructed image are obtained and taken as the image transformation parameters.
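One common concrete way to obtain a rotation, translation, and scaling from matched feature points is a least-squares similarity-transform fit (the Umeyama/Kabsch method). The sketch below is an assumed realization for illustration, not the algorithm specified by the patent; the synthetic points and transform are made up for the check at the end.

```python
import numpy as np

def estimate_similarity_transform(src, dst):
    """Least-squares fit of scale s, rotation R, translation t with dst ≈ s·R·src + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_src, dst - mu_dst              # centered point sets
    U, S, Vt = np.linalg.svd(A.T @ B)              # SVD of the cross-covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum()
    t = mu_dst - s * R @ mu_src
    return s, R, t

# synthetic check: known 30° rotation about z, scale 2, translation (1, 2, 3)
ang = np.pi / 6
Rz = np.array([[np.cos(ang), -np.sin(ang), 0],
               [np.sin(ang),  np.cos(ang), 0],
               [0,            0,           1]])
src = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
dst = 2.0 * src @ Rz.T + np.array([1., 2, 3])
s, R, t = estimate_similarity_transform(src, dst)
```

The recovered scale, rotation matrix, and translation vector map directly onto the patent's scaling ratio, rotation angle/direction, and translation distance.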
In step 104, the first three-dimensional reconstructed image is adjusted according to the image transformation parameters to obtain the second three-dimensional reconstructed image, and the feature points in the second reconstructed image displayed on the AR device coincide with the feature points of the target object viewed by the user through the AR device.
For example, referring to FIG. 4, after the feature point information of the target object 402 is obtained via the sensor 401, the image transformation parameters can be obtained from that information and the first reconstructed image 403 (the current reconstructed image); the first reconstructed image 403 is then adjusted according to the image transformation parameters to obtain the second reconstructed image 404, whose feature points coincide with those of the target object viewed by the user through the AR device.
In this way, the AR device ultimately displays the second reconstructed image 404, which coincides with the target object 402 the physician sees through the AR device. If the AR device moves so that the target object the physician sees changes (mainly in angle and distance relative to the device), steps 101 to 104 are applied again to re-adjust the reconstructed image on the device so that the adjusted image once more coincides with the observed target object. The method thus guarantees that however the physician moves the AR device at the surgical scene, the reconstructed image on the device is updated in real time and stays coincident with the target object, allowing the physician to inspect the internal structure of the target object through the reconstructed image on the AR device, improving surgical accuracy and efficiency.
The camera-based approach of FIG. 5 is similar to that of FIG. 4: the image transformation parameters are obtained from the collected information about the target object 502 and the first reconstructed image 503, the first reconstructed image 503 is adjusted according to those parameters, and the second reconstructed image 504, which coincides with the target object 502, is obtained.
It should be noted that steps 101 to 104 may be executed by a processor in the AR device (i.e., a processor integrated into the AR device), or by a third-party PC (personal computer): the AR device is then only responsible for collecting the feature point information of the target object and sending it to the PC, which transforms the first reconstructed image into the second reconstructed image and sends it back to the AR device for display.
When the PC executes steps 101 to 104, it needs to receive the feature point information collected by the AR device, adjust the first reconstructed image according to the image transformation parameters to obtain the second reconstructed image, and send the second reconstructed image to the AR device for display on the AR device.
With the above method, however the physician moves the AR device, the reconstructed image on it can be adjusted in real time so that the displayed image coincides with the target object viewed by the user through the AR device.
In practical applications, if the physician finds that the automatically adjusted reconstructed image is not perfectly aligned with the target object, or wants to observe the reconstructed image in various ways (e.g., zooming or rotating), the physician may wish to send instructions to the AR device manually for the corresponding adjustment. This application therefore also provides the following method of adjusting the reconstructed image on the AR device:
generating image adjustment parameters according to received user instruction information; adjusting the second reconstructed image according to the image adjustment parameters to obtain a third three-dimensional reconstructed image; and displaying the third reconstructed image on the AR device.
That is, with the second reconstructed image, already aligned with the target object, displayed on the AR device, the physician can turn off the automatic alignment function so that real-time calibration stops, and then send instructions to the AR device, for example by voice, head movement, gestures, or buttons on the device. For instance, the physician tells the AR device the desired operation by voice, such as "zoom in 2x" or "rotate counterclockwise 30 degrees"; the AR device adjusts the second reconstructed image accordingly to obtain the third reconstructed image and displays it, or forwards the received voice instruction to the PC, which performs the adjustment and sends the third reconstructed image back to the AR device for display. This method lets the physician control the display of the reconstructed image on the AR device directly.
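Turning a recognized voice instruction into image adjustment parameters can be sketched as follows; the command grammar and parameter names are illustrative assumptions, not part of the original.

```python
import re

def parse_instruction(text):
    """Map a recognized voice instruction to image adjustment parameters."""
    m = re.fullmatch(r"zoom in (\d+(?:\.\d+)?)x", text)
    if m:
        return {"scale": float(m.group(1))}
    m = re.fullmatch(r"rotate (clockwise|counterclockwise) (\d+) degrees", text)
    if m:
        sign = 1 if m.group(1) == "counterclockwise" else -1
        return {"rotate_deg": sign * int(m.group(2))}
    return {}  # unrecognized instruction: no adjustment

print(parse_instruction("rotate counterclockwise 30 degrees"))  # {'rotate_deg': 30}
print(parse_instruction("zoom in 2x"))                          # {'scale': 2.0}
```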
In this application, the first three-dimensional reconstructed image of the target object is first obtained from the target object's medical image data, the medical database system, and user-input instructions; image transformation parameters are then generated from the feature point information of the target object collected by the AR device and the first reconstructed image; the first reconstructed image is adjusted according to those parameters to obtain a second reconstructed image whose feature points coincide with the feature points of the target object viewed by the user through the AR device, and the second reconstructed image is displayed on the AR device. The displayed reconstructed image is adjusted in real time according to the collected feature point information, so that the image the physician sees on the AR device coincides with the target object viewed through it; even if the AR device moves, the reconstructed image can be adjusted in real time, greatly improving intraoperative surgical accuracy and efficiency.
As shown in FIG. 6, a detailed flowchart of the augmented reality human-body positioning and navigation method based on real-time feedback provided by this application, the method comprises:
Step 601: generating an initial three-dimensional reconstructed image of the target object according to its medical image data;
Step 602: adjusting the initial reconstructed image according to the medical database system to obtain the first three-dimensional reconstructed image of the target object;
Step 603: determining the feature pattern of the target object according to the feature point information of the target object collected by the AR device, the AR device being transparent so that the user can view the target object through it;
Step 604: determining the rotation angle, rotation direction, translation distance, and scaling ratio of the first reconstructed image according to the feature pattern of the target object and the first reconstructed image;
Step 605: taking the rotation angle, rotation direction, translation distance, and scaling ratio as the image transformation parameters;
Step 606: adjusting the first reconstructed image according to the image transformation parameters to obtain the second three-dimensional reconstructed image;
Step 607: sending the second reconstructed image to the AR device for display on the AR device;
wherein the second reconstructed image displayed on the AR device coincides with the target object viewed by the user through the AR device.
Step 608: generating image adjustment parameters according to received user instruction information;
Step 609: adjusting the second reconstructed image according to the image adjustment parameters to obtain the third three-dimensional reconstructed image;
Step 610: displaying the third reconstructed image on the AR device;
wherein the third reconstructed image displayed on the AR device coincides with the target object viewed by the user through the AR device.
In this application, the first reconstructed image of the target object is first obtained from the target object's medical image data, the medical database system, and user-input instructions; image transformation parameters are then generated from the feature point information collected by the AR device and the first reconstructed image; the first reconstructed image is adjusted according to those parameters to obtain a second reconstructed image whose feature points coincide with the feature points of the target object viewed through the AR device, and the second reconstructed image is displayed on the AR device. The displayed reconstructed image is adjusted in real time according to the collected feature point information, so that the image the physician sees on the AR device coincides with the target object viewed through it; even if the AR device moves, the reconstructed image can be adjusted in real time, greatly improving intraoperative surgical accuracy and efficiency.
Based on the same technical concept, as shown in FIG. 7, this application also provides a human body positioning navigation device, comprising:
an image generation unit 701, configured to generate an initial three-dimensional reconstructed image of a target object according to its medical image data, and to adjust the initial reconstructed image according to a medical database system and user-input instructions to obtain a first three-dimensional reconstructed image of the target object;
an image transformation parameter generation unit 702, configured to generate image transformation parameters according to feature point information of the target object collected by an augmented reality (AR) device and the first reconstructed image, the first reconstructed image being generated from the target object's medical image data, and the AR device being transparent so that a user can view the target object through it;
an adjustment unit 703, configured to adjust the first reconstructed image according to the image transformation parameters to obtain a second three-dimensional reconstructed image, wherein feature points in the second reconstructed image displayed on the AR device coincide with feature points of the target object viewed by the user through the AR device.
Optionally, the image transformation parameter generation unit 702 is specifically configured to:
determine a feature pattern of the target object according to the feature point information;
determine a rotation angle, rotation direction, translation distance, and scaling ratio of the first reconstructed image according to the feature pattern and the first reconstructed image;
take the rotation angle, rotation direction, translation distance, and scaling ratio as the image transformation parameters.
Optionally, the device further comprises a receiving unit 704, configured to receive the feature point information of the target object collected by the AR device, and a sending unit 705, configured to send the second reconstructed image to the AR device for display on the AR device.
Optionally, the AR device collects the feature point information by scanning the target object with a sensor on the AR device, the feature point information being information corresponding to feature tags; or the AR device obtains the feature point information by photographing the target object with a camera on the AR device, the feature point information being information corresponding to preset positions on the target object.
Optionally, the image transformation parameter generation unit 702 is further configured to generate image adjustment parameters according to received user instruction information; the adjustment unit is further configured to adjust the second reconstructed image according to the image adjustment parameters to obtain a third three-dimensional reconstructed image; and the device further comprises a display unit 706, configured to display the third reconstructed image on the AR device.
Optionally, the image generation unit 701 is specifically configured to:
receive a first parameter adjustment instruction input by the user through a preset parameter adjustment system, the parameter adjustment system being configured to display three-dimensional reconstructed image information in a visualized manner;
adjust the initial reconstructed image according to the first parameter adjustment instruction and the medical database system to obtain the first reconstructed image of the target object.
Optionally, the image generation unit 701 is specifically configured to:
determine a second parameter adjustment instruction according to the function modules selected by the user from a pre-established function module library and the way the user connects them, each module representing one image processing method or a combination of several, and all modules being connectable under certain rules;
adjust the initial reconstructed image according to the second parameter adjustment instruction and the medical database system to obtain the first reconstructed image of the target object.
Optionally, the image generation unit 701 is specifically configured to:
generate the initial reconstructed image according to the medical image data, the function modules selected by the user from the pre-established function module library, and the way they are connected, each module representing one image processing method or a combination of several, and all modules being connectable under certain rules;
obtain a third parameter adjustment instruction according to instruction input performed by the user on the selected modules;
adjust the initial reconstructed image according to the medical database system and the third parameter adjustment instruction to obtain the first reconstructed image of the target object.
In this application, the first reconstructed image of the target object is first obtained from the target object's medical image data, the medical database system, and user-input instructions; image transformation parameters are then generated from the feature point information collected by the AR device and the first reconstructed image; the first reconstructed image is adjusted according to those parameters to obtain a second reconstructed image whose feature points coincide with the feature points of the target object viewed through the AR device, and the second reconstructed image is displayed on the AR device. The displayed reconstructed image is adjusted in real time according to the collected feature point information, so that the image the physician sees on the AR device coincides with the target object viewed through it; even if the AR device moves, the reconstructed image can be adjusted in real time, greatly improving intraoperative surgical accuracy and efficiency.
Based on the same technical concept, this application also provides a human body positioning navigation device 800, whose hardware structure is shown schematically in FIG. 8, comprising:
one or more processors 810 and a memory 820; FIG. 8 takes one processor 810 as the example.
The human body positioning navigation device executing the augmented reality human-body positioning and navigation method based on real-time feedback may further comprise an input apparatus 830 and an output apparatus 840.
The processor 810, memory 820, input apparatus 830, and output apparatus 840 may be connected by a bus or in other ways; connection by bus is taken as the example in FIG. 8.
The memory 820, as a non-volatile computer-readable storage medium, can store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the method of this application (for example, the receiving unit 704, image generation unit 701, image transformation parameter generation unit 702, adjustment unit 703, sending unit 705, and display unit 706 shown in FIG. 7). By running the non-volatile software programs, instructions, and modules stored in the memory 820, the processor 810 executes the various functional applications and data processing of the server, i.e., implements the augmented reality human-body positioning and navigation method based on real-time feedback of this application.
The memory 820 may include a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required for at least one function, and the data storage area may store data created through use of the processing device. The memory 820 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 820 may optionally include memory set remotely relative to the processor 810; such remote memory may be connected to the processing device over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input apparatus 830 can receive input numeric or character information and generate key signal inputs related to user settings and function control of the human body positioning navigation device. The output apparatus 840 may include a display device such as a display screen.
The one or more modules are stored in the memory 820 and, when executed by the one or more processors 810, perform the augmented reality human-body positioning and navigation method based on real-time feedback of any of the above method embodiments.
The above product can execute the method provided by this application and has the corresponding functional modules and beneficial effects of executing the method; for technical details not described in detail in this embodiment, refer to the method provided by this application.
The at least one processor 810 is configured to: generate an initial three-dimensional reconstructed image of a target object according to its medical image data; adjust the initial reconstructed image according to a medical database system and user-input instructions to obtain a first three-dimensional reconstructed image of the target object;
generate image transformation parameters according to feature point information of the target object collected by an augmented reality (AR) device and the first reconstructed image, the first reconstructed image being generated from the target object's medical image data, and the AR device being transparent so that a user can view the target object through it;
adjust the first reconstructed image according to the image transformation parameters to obtain a second three-dimensional reconstructed image, wherein feature points in the second reconstructed image displayed on the AR device coincide with feature points of the target object viewed by the user through the AR device.
Optionally, the at least one processor 810 is specifically configured to:
determine a feature pattern of the target object according to the feature point information;
determine a rotation angle, rotation direction, translation distance, and scaling ratio of the first reconstructed image according to the feature pattern and the first reconstructed image;
take the rotation angle, rotation direction, translation distance, and scaling ratio as the image transformation parameters.
Optionally, the at least one processor 810 is configured to:
receive the feature point information of the target object collected by the AR device;
send the second reconstructed image to the AR device for display on the AR device.
Optionally, the at least one processor 810 collects the feature point information by scanning the target object with a sensor on the AR device, the feature point information being information corresponding to feature tags; or obtains the feature point information by photographing the target object with a camera on the AR device, the feature point information being information corresponding to preset positions on the target object.
Optionally, the at least one processor 810 is further configured to generate image adjustment parameters according to received user instruction information; adjust the second reconstructed image according to the image adjustment parameters to obtain a third three-dimensional reconstructed image; and display the third reconstructed image on the AR device.
Optionally, the at least one processor 810 is specifically configured to:
receive a first parameter adjustment instruction input by the user through a preset parameter adjustment system, the parameter adjustment system being configured to display three-dimensional reconstructed image information in a visualized manner;
adjust the initial reconstructed image according to the first parameter adjustment instruction and the medical database system to obtain the first reconstructed image of the target object.
Optionally, the at least one processor 810 is specifically configured to:
determine a second parameter adjustment instruction according to the function modules selected by the user from a pre-established function module library and the way the user connects them, each module representing one image processing method or a combination of several, and all modules being connectable under certain rules; adjust the initial reconstructed image according to the second parameter adjustment instruction and the medical database system to obtain the first reconstructed image of the target object.
Optionally, the at least one processor 810 is specifically configured to:
generate the initial reconstructed image according to the medical image data, the function modules selected by the user from the pre-established function module library, and the way they are connected, each module representing one image processing method or a combination of several, and all modules being connectable under certain rules;
obtain a third parameter adjustment instruction according to instruction input performed by the user on the selected modules;
adjust the initial reconstructed image according to the medical database system and the third parameter adjustment instruction to obtain the first reconstructed image of the target object.
This application also provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the augmented reality human-body positioning and navigation method based on real-time feedback described in any of the above.
A person of ordinary skill in the art can understand that all or part of the flows of the above method embodiments can be completed by a computer program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the flows of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
This application also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to execute the augmented reality human-body positioning and navigation method based on real-time feedback described in any of the above.
It should be noted that a person of ordinary skill in the art can understand that all or part of the flows of the above method embodiments can be completed by a computer program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the flows of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a ROM, a RAM, or the like.
This application also provides a puncture trajectory guiding device 900, as shown in FIG. 9, comprising a first component 901, a second component 902, and a third component 903;
the third component 903 comprises a base and a groove on the base, the central axis of the groove being perpendicular to the base, and the inside of the groove containing a first fixing structure;
the first component 901 is a catheter structure configured to carry a puncture tube passing through it, the puncture tube entering the puncture object through the first component; the first end of the first component 901 is inserted into the groove of the third component 903, and the outer diameter of the first end of the first component 901 is smaller than the inner diameter of the groove, so that the first component 901 can rotate in the groove;
the second component 902 contains a second fixing structure configured to engage with the first fixing structure so that the first component 901 inserted into the third component 903 is fixed at a first angle. Optionally, the first and second fixing structures are both threaded structures; or the first and second fixing structures are both latch (pin) structures.
Specifically, referring to FIG. 9, the first component 901, second component 902, and third component 903 are three independent components that can be assembled into a complete puncture trajectory guiding device 900: the first component 901 is inserted into the third component 903 and can rotate in the groove of the third component 903, forming various angles with the base; once adjusted to the required first angle, the second fixing structure of the second component 902 is screwed together with the first fixing structure in the third component 903, fixing the first component 901 at the first angle. FIG. 10 is a cross-sectional view of the third component 903. The first angle is determined according to the puncture direction and angle required for the puncture; that is, once the first angle is formed, the first component coincides with the puncture path for the puncture object.
The puncture trajectory guiding device 900 shown in FIG. 9 can be applied to a puncture operation on a human body. For example, for a skull puncture, a puncture point on the skull is first determined and a hole is made at that point with a puncture tool; the base of the third component 903 of the device 900 is then placed over the skull perforation and fixed to the skull, for example with titanium screws, so that the device 900 is fixed to the skull with the first component 901 aligned with the perforation.
Further, the puncture tube is passed through the first component 901 into the puncture object. Optionally, in practice other components may assist the puncture: for example, the puncture trajectory guiding device 900 further comprises a fourth component (not shown), which is a catheter structure placed into the first component 901, i.e., the first component 901 carries the fourth component passing through it, and the fourth component carries the puncture tube passing through it. With the fourth component placed into the first component 901 and the puncture tube placed into the fourth component, the puncture tube can move inside the fourth component and enter the puncture object. To facilitate puncture, the difference between the inner diameter of the first component and the outer diameter of the fourth component is less than a first threshold, and the difference between the inner diameter of the fourth component and the outer diameter of the puncture tube is less than a second threshold, both thresholds being small values, so that the inner diameter of the first component is slightly larger than the outer diameter of the fourth component and the inner diameter of the fourth component is slightly larger than the outer diameter of the puncture tube, ensuring that the fourth component fits closely against the inner wall of the first component 901 and the puncture tube fits closely against the inner wall of the fourth component.
Optionally, the length of the first component 901 is 6 centimeters (cm); optionally, the inner diameter of the fourth component is 2.5 millimeters (mm), 4 millimeters, or 4.67 millimeters. Since the fourth component is available in several inner diameters, it can accommodate puncture tubes of various thicknesses, meeting the needs of different application scenarios.
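The nested-diameter constraints above (each inner part slightly smaller than the bore it slides in, within a small threshold) can be expressed as a simple clearance check. The concrete bore diameter and threshold below are illustrative assumptions; only the 4.67 mm tube diameter comes from the text.

```python
def fits_snugly(bore_inner_d, part_outer_d, threshold):
    """True when the part slides in the bore with positive clearance below threshold (mm)."""
    clearance = bore_inner_d - part_outer_d
    return 0 < clearance < threshold

# e.g. a 4.67 mm puncture tube inside a fourth component with an assumed 4.70 mm bore
print(fits_snugly(4.70, 4.67, threshold=0.1))  # snug: clearance 0.03 mm
print(fits_snugly(4.00, 4.67, threshold=0.1))  # tube too thick for the bore
```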
Optionally, the second end of the first component 901 includes a tracker configured to detect the first angle; optionally, the tracker is a 4-pronged or 5-pronged tracker; optionally, the tracker is fully integrated with the first component 901. After tracking the first angle at which the first component 901 sits, the tracker can send the first angle to the aforementioned human body positioning navigation device of this application, which determines from that angle whether the first component 901 lies on the puncture path; if not, the first angle is adjusted repeatedly until the first component 901 lies on the puncture path, ensuring that the puncture tube can move along the puncture path.
Optionally, some or all of the first component 901, the second component 902, and the third component 903 are split-in-half structures; in practical applications the device may be single-use: after the operation, it is pulled apart to the sides by hand, so that the puncture trajectory guiding device 900 separates into two halves and detaches from the fourth component.
This application also provides a puncture trajectory guiding device, shown in FIG. 11 and applied to a puncture scenario, comprising a catheter and two marking members fixed at different positions on the catheter; the catheter is configured to carry a puncture tube passing through it, the puncture tube entering the puncture object through the catheter; during puncture, the two marking members are respectively made to coincide with the marking marks on the puncture positioning line displayed on an augmented reality (AR) device.
The first of the two marking members is at a first distance from the first end of the catheter, and the second is at a second distance from the first end, the first end of the catheter being the end adjacent to the puncture object. Optionally, the first distance is 5 centimeters (cm) and the second distance is 10 centimeters.
Optionally, the inner diameter of the catheter is 2.5 millimeters (mm), 4 millimeters, or 4.67 millimeters.
The inside of the catheter contains at least one elastic clip (spring tab) for securing the puncture tube passing through the catheter.
In practical applications, the first end of the puncture trajectory guiding device shown in FIG. 11 is fixed to the puncture object, such as a skull, with the first end aligned with the puncture point, so that a puncture tube inserted into the catheter of the device passes through the first end and the puncture point to reach the interior of the skull.
Optionally, in combination with the aforementioned human body positioning navigation device, that device can generate a puncture path through the puncture object and an extension line of the puncture path outside the puncture object, called the puncture positioning line, which carries two virtual marking marks; the generated puncture positioning line and virtual marks are sent to the AR glasses. When wearing the AR glasses, the user can see the puncture positioning line and virtual marking marks on the glasses, and can also see through the glasses the actual position of the device of FIG. 11 on the puncture object; by moving themselves or moving the device shown in FIG. 11, the user can make the puncture positioning line displayed on the AR glasses coincide with the catheter of the device, and make the two virtual marking marks on the positioning line coincide with the two marking members on the catheter, completing the virtual-real registration.
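Checking that the marking members coincide with the displayed positioning line amounts to measuring each member's distance to that line. The tolerance and coordinates below are illustrative assumptions (the 50 mm and 100 mm positions echo the optional 5 cm/10 cm marker distances).

```python
import numpy as np

def distance_to_line(p, a, b):
    """Distance from point p to the infinite line through points a and b."""
    p, a, b = (np.asarray(v, float) for v in (p, a, b))
    d = b - a
    return np.linalg.norm(np.cross(d, p - a)) / np.linalg.norm(d)

def markers_on_line(markers, a, b, tol_mm=1.0):
    """True when every marking member lies within tol_mm of the positioning line."""
    return all(distance_to_line(m, a, b) <= tol_mm for m in markers)

# positioning line along the z-axis; members at 50 mm and 100 mm from the tip
line_a, line_b = (0, 0, 0), (0, 0, 1)
print(markers_on_line([(0, 0, 50), (0, 0, 100)], line_a, line_b))  # aligned
print(markers_on_line([(5, 0, 50), (0, 0, 100)], line_a, line_b))  # 5 mm off the line
```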
The device embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the description of the above implementations, a person skilled in the art can clearly understand that the implementations can be realized by software plus a general-purpose hardware platform, or of course by hardware. Based on this understanding, the above technical solution, in essence or in the part contributing to the related art, can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium such as a ROM/RAM, magnetic disk, or optical disc, and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application.

Claims (25)

  1. An augmented reality human-body positioning and navigation method based on real-time feedback, characterized by comprising:
    generating an initial three-dimensional reconstructed image of a target object according to medical image data of the target object;
    adjusting the initial three-dimensional reconstructed image according to a medical database system and user-input instructions to obtain a first three-dimensional reconstructed image of the target object;
    generating image transformation parameters according to feature point information of the target object collected by an augmented reality (AR) device and the first three-dimensional reconstructed image of the target object, the first three-dimensional reconstructed image being generated from the medical image data of the target object, and the AR device being transparent so that a user can view the target object through it;
    generating second image transformation parameters according to another set of user instruction information;
    adjusting the first three-dimensional reconstructed image according to the image transformation parameters to obtain a second three-dimensional reconstructed image, wherein feature points in the second three-dimensional reconstructed image displayed on the AR device coincide with feature points of the target object viewed by the user through the AR device.
  2. The method according to claim 1, characterized in that generating the image transformation parameters according to the feature point information of the target object collected by the AR device and the first three-dimensional reconstructed image comprises:
    determining a feature pattern of the target object according to the feature point information;
    determining a rotation angle, rotation direction, translation distance, and scaling ratio of the first three-dimensional reconstructed image according to the feature pattern of the target object and the first three-dimensional reconstructed image;
    taking the rotation angle, rotation direction, translation distance, and scaling ratio as the image transformation parameters.
  3. The method according to claim 1, characterized in that, before generating the image transformation parameters according to the feature point information of the target object collected by the AR device and the first three-dimensional reconstructed image, the method further comprises:
    receiving the feature point information of the target object collected by the AR device;
    and, after adjusting the first three-dimensional reconstructed image according to the image transformation parameters to obtain the second three-dimensional reconstructed image, the method further comprises:
    sending the second three-dimensional reconstructed image to the AR device for display on the AR device.
  4. The method according to any one of claims 1 to 3, characterized in that the AR device collects the feature point information of the target object by scanning the target object with a sensor on the AR device, the feature point information being information corresponding to feature tags; or
    the AR device obtains the feature point information of the target object by photographing the target object with a camera on the AR device, the feature point information being information corresponding to preset positions on the target object.
  5. The method according to any one of claims 1 to 3, characterized in that the method further comprises:
    generating image adjustment parameters according to received user instruction information;
    adjusting the second three-dimensional reconstructed image according to the image adjustment parameters to obtain a third three-dimensional reconstructed image;
    displaying the third three-dimensional reconstructed image on the AR device.
  6. The method according to claim 1, characterized in that adjusting the initial three-dimensional reconstructed image according to the medical database system and user-input instructions to obtain the first three-dimensional reconstructed image of the target object comprises:
    receiving a first parameter adjustment instruction input by the user through a preset parameter adjustment system, the parameter adjustment system being configured to display three-dimensional reconstructed image information in a visualized manner;
    adjusting the initial three-dimensional reconstructed image according to the first parameter adjustment instruction and the medical database system to obtain the first three-dimensional reconstructed image of the target object.
  7. The method according to claim 1, characterized in that adjusting the initial three-dimensional reconstructed image according to the medical database system and user-input instructions to obtain the first three-dimensional reconstructed image of the target object comprises:
    determining a second parameter adjustment instruction according to function modules selected by the user from a pre-established function module library and the way the user connects the selected modules, each function module in the library representing one image processing method or a combination of several image processing methods, and all function modules in the library being connectable under certain rules;
    adjusting the initial three-dimensional reconstructed image according to the second parameter adjustment instruction and the medical database system to obtain the first three-dimensional reconstructed image of the target object.
  8. The method according to claim 1, characterized in that generating the initial three-dimensional reconstructed image of the target object according to its medical image data comprises:
    generating the initial three-dimensional reconstructed image according to the medical image data of the target object, the function modules selected by the user from a pre-established function module library, and the way the user connects them, each function module representing one image processing method or a combination of several, and all function modules being connectable under certain rules;
    and adjusting the initial three-dimensional reconstructed image according to the medical database system and user-input instructions to obtain the first three-dimensional reconstructed image comprises:
    obtaining a third parameter adjustment instruction according to instruction input performed by the user on the selected function modules;
    adjusting the initial three-dimensional reconstructed image according to the medical database system and the third parameter adjustment instruction to obtain the first three-dimensional reconstructed image of the target object.
  9. A human body positioning and navigation apparatus, comprising:
    an image generation unit configured to generate an initial three-dimensional reconstructed image of a target object according to medical image data of the target object, and to adjust the initial three-dimensional reconstructed image according to a medical database system and an instruction input by a user to obtain a first three-dimensional reconstructed image of the target object;
    an image transformation parameter generation unit configured to generate image transformation parameters according to feature point information of the target object collected by an augmented reality (AR) device and the first three-dimensional reconstructed image of the target object, the first three-dimensional reconstructed image being generated according to the medical image data of the target object, and the AR device being transparent so that the user can observe the target object through the AR device;
    an adjustment unit configured to adjust the first three-dimensional reconstructed image according to the image transformation parameters to obtain a second three-dimensional reconstructed image, wherein feature points in the second three-dimensional reconstructed image displayed on the AR device coincide with feature points of the target object observed by the user through the AR device.
  10. The human body positioning and navigation apparatus according to claim 9, wherein the image transformation parameter generation unit is specifically configured to:
    determine a feature pattern of the target object according to the feature point information;
    determine a rotation angle, a rotation direction, a translation distance, and a scaling ratio of the first three-dimensional reconstructed image according to the feature pattern of the target object and the first three-dimensional reconstructed image of the target object;
    use the rotation angle, the rotation direction, the translation distance, and the scaling ratio as the image transformation parameters.
  11. The human body positioning and navigation apparatus according to claim 9, further comprising a receiving unit configured to:
    receive the feature point information of the target object collected by the AR device;
    the human body positioning and navigation apparatus further comprising a sending unit configured to:
    send the second three-dimensional reconstructed image to the AR device, so that the second three-dimensional reconstructed image is displayed on the AR device.
  12. The human body positioning and navigation apparatus according to any one of claims 9-11, wherein the AR device collects the feature point information of the target object by scanning the target object with a sensor on the AR device, the feature point information being information corresponding to the feature tag; or
    the AR device acquires the feature point information of the target object by photographing the target object with a camera on the AR device, the feature point information being information corresponding to a preset position on the target object.
  13. The human body positioning and navigation apparatus according to any one of claims 9-11, wherein the image transformation parameter generation unit is further configured to generate an image adjustment parameter according to received user instruction information;
    the adjustment unit is further configured to adjust the second three-dimensional reconstructed image according to the image adjustment parameter to obtain a third three-dimensional reconstructed image;
    and the human body positioning and navigation apparatus further comprises a display unit configured to display the third three-dimensional reconstructed image on the AR device.
  14. The human body positioning and navigation apparatus according to claim 9, wherein the image generation unit is specifically configured to:
    receive a first parameter adjustment instruction input by the user through a preset parameter adjustment system, the parameter adjustment system being configured to display three-dimensional reconstructed image information in a visualized manner;
    adjust the initial three-dimensional reconstructed image according to the first parameter adjustment instruction and the medical database system to obtain the first three-dimensional reconstructed image of the target object.
  15. The human body positioning and navigation apparatus according to claim 9, wherein the image generation unit is specifically configured to:
    determine a second parameter adjustment instruction according to function modules selected by the user from a pre-established function module library and the manner in which the user connects the selected function modules, each function module in the pre-established function module library representing one image processing method or a combination of multiple image processing methods, and all function modules in the pre-established function module library being connectable under certain rules;
    adjust the initial three-dimensional reconstructed image according to the second parameter adjustment instruction and the medical database system to obtain the first three-dimensional reconstructed image of the target object.
  16. The human body positioning and navigation apparatus according to claim 9, wherein the image generation unit is specifically configured to:
    generate the initial three-dimensional reconstructed image of the target object according to the medical image data of the target object, function modules selected by the user from a pre-established function module library, and the manner in which the user connects the selected function modules, each function module in the pre-established function module library representing one image processing method or a combination of multiple image processing methods, and all function modules in the pre-established function module library being connectable under certain rules;
    obtain a third parameter adjustment instruction according to instructions input by the user on the selected function modules;
    adjust the initial three-dimensional reconstructed image according to the medical database system and the third parameter adjustment instruction to obtain the first three-dimensional reconstructed image of the target object.
  17. A human body positioning and navigation apparatus, comprising:
    at least one processor; and
    a memory communicatively connected with the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-8.
  18. A non-transitory computer-readable storage medium storing computer instructions, the computer instructions being configured to cause a computer to perform the method according to any one of claims 1-8.
  19. A computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the method according to any one of claims 1-8.
  20. A puncture trajectory guiding apparatus, comprising a first component, a second component, and a third component;
    the third component comprising a base and a groove on the base, the central axis of the groove being perpendicular to the base, and the inner side of the groove comprising a first fixing structure;
    the first component being a catheter structure configured to carry a puncture tube passing through the first component, the puncture tube entering a puncture subject through the first component, a first end of the first component being inserted into the groove of the third component, and the outer diameter of the first end of the first component being smaller than the inner diameter of the groove;
    the second component comprising a second fixing structure configured to cooperate with the first fixing structure to fix, at a first angle, the first component inserted into the third component.
  21. The puncture trajectory guiding apparatus according to claim 20, wherein a second end of the first component comprises a tracker configured to detect the first angle.
  22. The puncture trajectory guiding apparatus according to claim 20 or 21, further comprising a fourth component, the fourth component being a catheter structure, the difference between the inner diameter of the first component and the outer diameter of the fourth component being smaller than a first threshold, and the difference between the inner diameter of the fourth component and the outer diameter of the puncture tube being smaller than a second threshold;
    the first component being configured to carry the fourth component passing through the first component, and the fourth component being configured to carry the puncture tube passing through the fourth component.
  23. The puncture trajectory guiding apparatus according to claim 20, 21 or 22, wherein some or all of the first component, the second component, and the third component are structures formed of two mating halves.
  24. A puncture trajectory guiding apparatus for use in a puncture scenario, comprising a catheter and two marker components fixed at different positions on the catheter;
    the catheter being configured to carry a puncture tube passing through the catheter, the puncture tube entering a puncture subject through the catheter;
    the two marker components being respectively configured to coincide, during puncture, with virtual marker signs on a puncture positioning line displayed on an augmented reality (AR) device.
  25. The puncture trajectory guiding apparatus according to claim 24, wherein
    the inner side of the catheter comprises at least one elastic piece configured to fix the puncture tube.
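The image transformation parameters of claims 2 and 10 (rotation angle, rotation direction, translation distance, scaling ratio) together describe a similarity transform of the reconstructed model. As an illustrative sketch only (the axis-angle convention, function names, and the order scale-rotate-translate are assumptions, not taken from the patent), applying such parameters to the model's feature points could look like:

```python
import numpy as np

def rotation_matrix(axis, angle_rad):
    """Rodrigues' formula: rotation about a unit axis by angle_rad."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    # Skew-symmetric cross-product matrix of the axis.
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle_rad) * K + (1 - np.cos(angle_rad)) * (K @ K)

def apply_transform(points, axis, angle_rad, translation, scale):
    """Scale, rotate, then translate an (N, 3) array of model points."""
    R = rotation_matrix(axis, angle_rad)
    return scale * (points @ R.T) + np.asarray(translation, dtype=float)

# Example: rotate a feature point 90 degrees about the z-axis,
# scale by 2, then shift by (1, 0, 0).
pts = np.array([[1.0, 0.0, 0.0]])
out = apply_transform(pts, axis=[0, 0, 1], angle_rad=np.pi / 2,
                      translation=[1, 0, 0], scale=2.0)
```

In a real registration pipeline these parameters would be estimated by matching the collected feature points against the model's feature points; the sketch only shows how the resulting parameters act on the reconstruction.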
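Claims 7, 8, 15, and 16 describe a library of function modules, each representing one image-processing operation or a combination of operations, which the user connects under certain rules. A minimal sketch of such chaining follows; every name here (`connect`, the example modules, the list-of-numbers stand-in for an image) is hypothetical and not taken from the patent:

```python
from typing import Callable, List

# Hypothetical "function module": any callable mapping an image
# (represented here as a flat list of pixel values) to a processed image.
Module = Callable[[list], list]

def connect(modules: List[Module]) -> Module:
    """Chain the selected modules in the order the user connected them."""
    def pipeline(image: list) -> list:
        for m in modules:
            image = m(image)
        return image
    return pipeline

# Illustrative modules: a rounding "denoise" stand-in and a binary threshold.
denoise = lambda img: [round(v) for v in img]
threshold = lambda img: [1 if v > 0.5 else 0 for v in img]

adjusted = connect([denoise, threshold])([0.2, 0.7, 1.4])
```

The "certain rules" of the claims would correspond to validity checks on which modules may be linked (e.g. matching input/output types), which this sketch omits.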
PCT/CN2017/086892 2016-06-06 2017-06-01 Augmented reality human body positioning and navigation method and apparatus based on real-time feedback WO2017211225A1 (zh)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN201610395355 2016-06-06
CN201610395355.4 2016-06-06
CN201610629122.6 2016-08-03
CN201610629122.6A CN106296805B (zh) 2016-06-06 2016-08-03 Augmented reality human body positioning and navigation method and apparatus based on real-time feedback
US15/292,947 US20170053437A1 (en) 2016-06-06 2016-10-13 Method and apparatus for positioning navigation in a human body by means of augmented reality based upon a real-time feedback
US15/292,947 2016-10-13

Publications (1)

Publication Number Publication Date
WO2017211225A1 true WO2017211225A1 (zh) 2017-12-14

Family

ID=57664405

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/086892 WO2017211225A1 (zh) 2016-06-06 2017-06-01 Augmented reality human body positioning and navigation method and apparatus based on real-time feedback

Country Status (3)

Country Link
US (1) US20170053437A1 (zh)
CN (1) CN106296805B (zh)
WO (1) WO2017211225A1 (zh)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10726593B2 (en) * 2015-09-22 2020-07-28 Fyusion, Inc. Artificially rendering images using viewpoint interpolation and extrapolation
US10013808B2 (en) 2015-02-03 2018-07-03 Globus Medical, Inc. Surgeon head-mounted display apparatuses
CN106296805B (zh) 2016-06-06 2019-02-26 厦门铭微科技有限公司 Augmented reality human body positioning and navigation method and apparatus based on real-time feedback
CN107341791A (zh) 2017-06-19 2017-11-10 北京全域医疗技术有限公司 Mixed reality-based target delineation method, apparatus and system
US10861236B2 (en) * 2017-09-08 2020-12-08 Surgical Theater, Inc. Dual mode augmented reality surgical system and method
US20190254753A1 (en) 2018-02-19 2019-08-22 Globus Medical, Inc. Augmented reality navigation systems for use with robotic surgical systems and methods of their use
US10728430B2 (en) 2018-03-07 2020-07-28 Disney Enterprises, Inc. Systems and methods for displaying object features via an AR device
CN108389452A (zh) 2018-03-23 2018-08-10 四川科华天府科技有限公司 Audio-visual situational teaching model and teaching method
WO2019210353A1 (en) * 2018-04-30 2019-11-07 MedVR Pty Ltd Medical virtual reality and mixed reality collaboration platform
TWI741196B (zh) 2018-06-26 2021-10-01 華宇藥品股份有限公司 Surgical navigation method and system integrating augmented reality
CN109068063B (zh) 2018-09-20 2021-01-15 维沃移动通信有限公司 Three-dimensional image data processing and display method, apparatus and mobile terminal
US10872690B2 (en) * 2018-11-28 2020-12-22 General Electric Company System and method for remote visualization of medical images
CN109732606A (zh) 2019-02-13 2019-05-10 深圳大学 Remote control method, apparatus, system and storage medium for a robotic arm
TWI766253B (zh) 2019-03-19 2022-06-01 鈦隼生物科技股份有限公司 Method and system for determining a surgical path based on image matching
CN110522516B (zh) 2019-09-23 2021-02-02 杭州师范大学 Multi-level interactive visualization method for surgical navigation
US11992373B2 (en) 2019-12-10 2024-05-28 Globus Medical, Inc Augmented reality headset with varied opacity for navigated robotic surgery
TWI793390B (zh) 2019-12-25 2023-02-21 財團法人工業技術研究院 Information display method, and processing device and display system therefor
US11464581B2 (en) 2020-01-28 2022-10-11 Globus Medical, Inc. Pose measurement chaining for extended reality surgical navigation in visible and near infrared spectrums
US11382699B2 (en) 2020-02-10 2022-07-12 Globus Medical Inc. Extended reality visualization of optical tool tracking volume for computer assisted navigation in surgery
US11207150B2 (en) 2020-02-19 2021-12-28 Globus Medical, Inc. Displaying a virtual model of a planned instrument attachment to ensure correct selection of physical instrument attachment
US11607277B2 (en) 2020-04-29 2023-03-21 Globus Medical, Inc. Registration of surgical tool with reference array tracked by cameras of an extended reality headset for assisted navigation during surgery
US11382700B2 (en) 2020-05-08 2022-07-12 Globus Medical Inc. Extended reality headset tool tracking and control
US11510750B2 (en) 2020-05-08 2022-11-29 Globus Medical, Inc. Leveraging two-dimensional digital imaging and communication in medicine imagery in three-dimensional extended reality applications
US11153555B1 (en) 2020-05-08 2021-10-19 Globus Medical Inc. Extended reality headset camera system for computer assisted navigation in surgery
CN111930231B (zh) 2020-07-27 2022-02-25 歌尔光学科技有限公司 Interaction control method, terminal device and storage medium
US11737831B2 (en) 2020-09-02 2023-08-29 Globus Medical Inc. Surgical object tracking template generation for computer assisted navigation during surgical procedure
CN112773513A (zh) 2021-03-13 2021-05-11 刘铠瑞 Instrument kit dedicated to preparing pathological specimens in appendectomy of a diseased appendix
TWI818665B (zh) 2021-11-10 2023-10-11 財團法人工業技術研究院 Information display method, and information display system and processing device therefor
CN116107534A (zh) 2021-11-10 2023-05-12 财团法人工业技术研究院 Information display method, and processing device and information display system therefor

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103083065A (zh) * 2011-11-07 2013-05-08 苏州天臣国际医疗科技有限公司 Trocar cannula assembly
CN104013425A (zh) * 2014-06-11 2014-09-03 深圳市开立科技有限公司 Ultrasound device display apparatus and related method
CN105748160A (zh) * 2016-02-04 2016-07-13 厦门铭微科技有限公司 Puncture assistance method, processor and VR glasses
CN106097325A (zh) * 2016-06-06 2016-11-09 厦门铭微科技有限公司 Positioning indication generation method and apparatus based on a three-dimensional reconstructed image
CN106296805A (zh) * 2016-06-06 2017-01-04 厦门铭微科技有限公司 Augmented reality human body positioning and navigation method and apparatus based on real-time feedback

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2402957T3 (es) * 2007-03-05 2013-05-10 Seeing Machines Pty Ltd Efficient and accurate tracking of three-dimensional objects
EP2724294B1 (en) * 2011-06-21 2019-11-20 Koninklijke Philips N.V. Image display apparatus
WO2014013393A2 (en) * 2012-07-17 2014-01-23 Koninklijke Philips N.V. Imaging system and method for enabling instrument guidance
US10013808B2 (en) * 2015-02-03 2018-07-03 Globus Medical, Inc. Surgeon head-mounted display apparatuses
CN105395252A (zh) * 2015-12-10 2016-03-16 哈尔滨工业大学 Wearable three-dimensional stereoscopic image navigation apparatus with human-computer interaction for vascular interventional surgery

Also Published As

Publication number Publication date
US20170053437A1 (en) 2017-02-23
CN106296805A (zh) 2017-01-04
CN106296805B (zh) 2019-02-26

Similar Documents

Publication Publication Date Title
WO2017211225A1 (zh) Augmented reality human body positioning and navigation method and apparatus based on real-time feedback
KR102013866B1 Method and apparatus for calculating camera position using actual surgical images
US11272985B2 (en) Patient-specific preoperative planning simulation techniques
KR102327527B1 Real-time view of a subject with three-dimensional data
US20190142359A1 (en) Surgical positioning system and positioning method
US10105187B2 (en) Systems, apparatus, methods and computer-readable storage media facilitating surgical procedures utilizing augmented reality
CN108210024B Surgical navigation method and system
US9179822B2 (en) Endoscopic observation supporting system, method, device and program
US9918798B2 (en) Accurate three-dimensional instrument positioning
US7774044B2 (en) System and method for augmented reality navigation in a medical intervention procedure
TW201912125A Dual mode augmented reality surgical system and method
EP3753507A1 (en) Surgical navigation method and system
US20160000303A1 (en) Alignment ct
JP2018514352A System and method for fusion image-based guidance with late marker placement
KR102105974B1 Medical imaging system
WO2014120909A1 (en) Apparatus, system and method for surgical navigation
JP5934070B2 Virtual endoscopic image generation apparatus, operating method thereof, and program
CN109313698A Synchronized surface and internal tumor detection
JP5961504B2 Virtual endoscopic image generation apparatus, operating method thereof, and program
EP3855396A2 (en) Orientation detection in fluoroscopic images
US20210353371A1 (en) Surgical planning, surgical navigation and imaging system
TWI679960B Surgical instrument guidance system
CN109106448A Surgical navigation method and apparatus
Ke et al. Minimally Invasive Cochlear Implantation Assisted by Bi-planar Device: An Exploratory Feasibility Study: in vitro
KR101635515B1 Medical navigation apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17809664

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17809664

Country of ref document: EP

Kind code of ref document: A1