CN110660130A - Medical image-oriented mobile augmented reality system construction method

Medical image-oriented mobile augmented reality system construction method

Info

Publication number
CN110660130A
Authority
CN
China
Prior art keywords
gesture
augmented reality
model
arkit
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910901432.2A
Other languages
Chinese (zh)
Inventor
蔡林沁 (Cai Linqin)
陈思维 (Chen Siwei)
代宇涵 (Dai Yuhan)
隆涛 (Long Tao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Posts and Telecommunications
Original Assignee
Chongqing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Posts and Telecommunications
Priority to CN201910901432.2A priority Critical patent/CN110660130A/en
Publication of CN110660130A publication Critical patent/CN110660130A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/00 — Manipulating 3D models or images for computer graphics
    • G06T19/006 — Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a medical image-oriented mobile augmented reality system construction method, and belongs to the technical field of electronics. The method comprises the following steps: S1: performing three-dimensional reconstruction of the medical image with the 3D Slicer software to obtain a three-dimensional reconstruction model, and importing the model into the ARKit framework; S2: starting the camera of the mobile device to recognize the current scene, and running the SLAM algorithm interface provided by ARKit to detect and mark planes in the current scene; S3: superimposing the virtual three-dimensional reconstruction model onto the current scene and placing it on a detected real plane; S4: recognizing the current user's interaction gestures to interact with the virtual object in the real scene. The invention reduces equipment cost while using a mobile terminal to make disease diagnosis more convenient for medical staff.

Description

Medical image-oriented mobile augmented reality system construction method
Technical Field
The invention belongs to the technical field of electronics, relates to the combination of medical three-dimensional reconstruction technology with the field of mobile augmented reality, and particularly relates to a medical image-oriented mobile augmented reality system construction method.
Background
With the development of modern medical imaging technology, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Ultrasound, and other modalities have appeared in succession. However, these medical imaging devices can only provide two-dimensional images of sections of internal tissues or organs of the human body, not continuous three-dimensional images. In current medical diagnosis, lesions are mainly discovered by examining sets of two-dimensional CT and MRI slice images, and their shape and size can only be estimated through the physician's experience in reading such images, which lacks intuitiveness and makes accurate judgment difficult.
3D Slicer is an open-source software platform for medical image informatics, image processing, and three-dimensional visualization. Slicer provides free, powerful, cross-platform processing tools for doctors, researchers, and the public. Augmented Reality (AR) is a technology that computes the position and orientation of the camera image in real time and adds corresponding images, videos, and 3D models; its aim is to overlay the virtual world on the real world on screen and allow interaction with it. To date, however, no case of applying 3D Slicer together with AR technology on mobile terminals has been found.
Therefore, a method for constructing a mobile augmented reality system oriented to medical images is needed to solve the problem of inaccurate disease judgment.
Disclosure of Invention
In view of this, the present invention aims to provide a method for constructing a mobile augmented reality system for medical images, in which a three-dimensional model is obtained from 3D Slicer, the augmented reality presentation of the medical three-dimensional model, based on a visual SLAM algorithm and an IMU, is deployed to an iOS mobile terminal, and interactive functions are provided by recognizing interaction gestures, so that the system can be used for teaching or enable doctors to perform preoperative planning more intuitively.
In order to achieve the purpose, the invention provides the following technical scheme:
A medical image-oriented mobile augmented reality system construction method, which specifically comprises the following steps:
S1: performing three-dimensional reconstruction of the medical image with the 3D Slicer software to obtain a three-dimensional reconstruction model, and importing the model into the ARKit framework;
S2: starting the camera of the mobile device to recognize the current scene, and running the SLAM algorithm interface provided by ARKit to detect and mark planes in the current scene;
S3: superimposing the virtual three-dimensional reconstruction model onto the current scene and placing it on a detected real plane;
S4: recognizing the current user's interaction gestures to interact with the virtual object in the real scene.
Further, the step S1 specifically includes the following steps:
S11: importing patient data (CT or MRI slice images) into the 3D Slicer software to obtain a continuous data field, then selecting the region of interest in the Segment Editor module and performing three-dimensional reconstruction;
S12: performing surgery simulation (e.g., puncture, craniotomy, endoscopy, or pedicle screw implantation) in the Segment Editor module or other plug-ins, obtaining a new three-dimensional reconstruction model for each operation; different tissues of interest can be colored for better observation; if the exported three-dimensional model contains cluttered, isolated fragments, it needs to be cleaned;
S13: converting the three-dimensional reconstruction model obtained in S12 into the required .dae format with the Blender software, then importing it into the mobile augmented reality program based on the ARKit framework, as sketched below.
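For illustration, the following is a minimal Swift sketch of loading such a converted model into SceneKit for use with ARKit. The file name "LiverModel.dae" is a hypothetical placeholder for the exported reconstruction, which is assumed to have been added to the app bundle.

```swift
import SceneKit

// Load the converted medical model from the app bundle.
// "LiverModel.dae" is a hypothetical name for the exported reconstruction.
func loadReconstructedModel() -> SCNNode? {
    guard let scene = SCNScene(named: "LiverModel.dae") else {
        return nil // Model file not found in the bundle.
    }
    // Wrap the imported scene's top-level nodes in one node so the whole
    // model can be placed and transformed as a single object.
    let modelNode = SCNNode()
    for child in scene.rootNode.childNodes {
        modelNode.addChildNode(child)
    }
    return modelNode
}
```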
Further, in step S12, if the exported three-dimensional model contains cluttered, isolated fragments, it needs to be cleaned, specifically: firstly, the MeshLab Remove Isolated Pieces filter is used to clean up the fragments; then the Split in Connected Components filter is used to divide the model into separate parts according to connectivity; finally, all resulting three-dimensional models are saved in obj or stl format.
Further, the step S2 specifically includes: running an ARWorldTrackingConfiguration to track the current scene and detect scene planes via planeDetection, then drawing a square outline in the AR view with the FocusSquare class to give the user a hint about the ARKit world-tracking state. The square changes size and orientation to reflect the estimated scene depth, and switches between open and closed states with a highlight animation to indicate whether ARKit has detected a plane suitable for placing the object; after the user places a virtual object, the focus square disappears and remains hidden until the user points the camera at another surface. A configuration sketch follows below.
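A minimal Swift sketch of this configuration step, assuming `sceneView` is the program's ARSCNView; ARWorldTrackingConfiguration and its planeDetection option are the ARKit APIs named above, and running the session starts the visual-inertial SLAM tracking the method relies on.

```swift
import ARKit

// Start world tracking with horizontal plane detection on the given AR view.
func startPlaneDetection(in sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal] // Detect horizontal surfaces.
    sceneView.session.run(configuration)
}
```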
Further, the step S3 specifically includes:
S31: firstly, adding a gesture recognizer via addGestureRecognizer(_:) in the ARKit program so that the device can detect interaction gestures, then performing the most basic interactive operations, such as tapping a UI menu in the augmented reality program, by configuring a tap-gesture subclass of UIGestureRecognizer;
S32: when the user selects a virtual object to place, calling the setPosition(_:relativeTo:smoothMovement:) method to place the object at an approximately realistic position in the middle of the screen using the simple heuristic of the FocusSquare object, even if ARKit has not yet detected a plane at that position. This position may not be an accurate estimate of the real surface on which the user wants to place the virtual object, but it is close enough to display the object on screen quickly. As tracking proceeds, ARKit detects planes and refines its position estimates, reporting results through the renderer(_:didAdd:for:) and renderer(_:didUpdate:for:) delegate methods; in these methods, the adjustOntoPlaneAnchor(_:using:) method is called to determine whether a previously placed virtual object is close to the detected plane. This method uses a subtle animation to move the virtual object onto the plane, so that the object appears to stay at the user-selected location while benefiting from ARKit's precise estimate of the real-world surface there. A placement sketch follows below.
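The following Swift sketch illustrates S31 and S32 in simplified form: a tap recognizer hit-tests the touched point against detected planes and places the model there. It uses plain ARKit hit testing instead of the FocusSquare heuristic of Apple's sample code, and `modelNode` is assumed to be the reconstruction node loaded earlier.

```swift
import ARKit
import SceneKit
import UIKit

// Registers a tap gesture (S31) and places the model on the tapped plane (S32).
final class PlacementController: NSObject {
    let sceneView: ARSCNView
    let modelNode: SCNNode

    init(sceneView: ARSCNView, modelNode: SCNNode) {
        self.sceneView = sceneView
        self.modelNode = modelNode
        super.init()
        // S31: register a recognizer so taps on the AR view are detected.
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        sceneView.addGestureRecognizer(tap)
    }

    // S32: place the model on the detected plane under the tapped point.
    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)
        guard let hit = sceneView.hitTest(point, types: [.existingPlaneUsingExtent]).first else {
            return // No detected plane under the touch yet.
        }
        let t = hit.worldTransform.columns.3
        modelNode.position = SCNVector3(t.x, t.y, t.z)
        sceneView.scene.rootNode.addChildNode(modelNode)
    }
}
```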
Further, the step S4 specifically includes:
S41: firstly, adding a VirtualObjectInteraction class for managing the standard interaction gestures. The translate(basedOn:infinitePlane:) method is used for dragging the virtual object, and the program restricts the object's movement to the two-dimensional plane on which it is placed; the setPosition(_:relativeTo:smoothMovement:) method is then used to generate smooth dragging motion so that the dragged object does not lag behind the user's gesture. Similarly, because the virtual object lies on a horizontal plane, the rotation gesture is implemented with the didRotate(_:) method, rotating the object only about its vertical axis; finally, scaling of the virtual medical model is implemented by configuring the pinch-gesture subclass UIPinchGestureRecognizer with an appropriate scale factor;
S42: responding to gestures within a reasonable proximity of the interactive virtual object: a hit test is performed using the touch position supplied by the gesture recognizer in the objectInteracting(with:in:) method; by hit-testing against the bounding box of the virtual object, a user touch is more likely to affect the object even if the touch position is not on a point where the object has visible content; for multi-touch gestures, performing the hit test against the center of the gesture likewise makes the touch more likely to affect the object;
S43: avoiding potential gesture conflicts: the ThresholdPanGesture class is a subclass of UIPanGestureRecognizer that delays the effect of the gesture recognizer until the ongoing gesture passes a specified movement threshold. Using the ThresholdPanGesture class in the touchesMoved(_:with:) method allows the user to transition smoothly between dragging and rotating an object within a single two-finger gesture;
S44: adding an additional interaction gesture: in an AR experience, a drag gesture, i.e., moving a finger across the device screen, is not the only natural way to drag virtual content to a new location. A user can also intuitively keep a finger on the screen while moving the device, effectively dragging the touch point through the AR scene. Such a gesture is supported by continuously calling the updateObjectToCurrentTrackingPosition() method while the drag gesture is in progress, even if the gesture's touch position has not changed; if the device moves during the drag, this method calculates a new world position corresponding to the touch position and moves the virtual object accordingly. A gesture-handling sketch follows below.
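A simplified Swift sketch of the three S4 gestures (drag, rotate, scale). It wires the stock UIKit recognizers directly rather than the VirtualObjectInteraction and ThresholdPanGesture classes of Apple's sample code, and `node` is assumed to be the placed medical model.

```swift
import ARKit
import SceneKit
import UIKit

// Handles drag, rotate, and pinch gestures for one placed model node.
final class GestureController: NSObject {
    let sceneView: ARSCNView
    let node: SCNNode

    init(sceneView: ARSCNView, node: SCNNode) {
        self.sceneView = sceneView
        self.node = node
        super.init()
        sceneView.addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(didPan(_:))))
        sceneView.addGestureRecognizer(UIRotationGestureRecognizer(target: self, action: #selector(didRotate(_:))))
        sceneView.addGestureRecognizer(UIPinchGestureRecognizer(target: self, action: #selector(didPinch(_:))))
    }

    // Drag: keep the model on its plane by hit-testing the new touch point.
    @objc func didPan(_ gesture: UIPanGestureRecognizer) {
        let point = gesture.location(in: sceneView)
        if let hit = sceneView.hitTest(point, types: [.existingPlane]).first {
            let t = hit.worldTransform.columns.3
            node.position = SCNVector3(t.x, t.y, t.z)
        }
    }

    // Rotate: only about the vertical axis, since the model lies on a horizontal plane.
    @objc func didRotate(_ gesture: UIRotationGestureRecognizer) {
        node.eulerAngles.y -= Float(gesture.rotation)
        gesture.rotation = 0 // Consume the delta so rotation stays incremental.
    }

    // Pinch: uniform scaling driven by the gesture's scale factor.
    @objc func didPinch(_ gesture: UIPinchGestureRecognizer) {
        let s = Float(gesture.scale)
        node.scale = SCNVector3(node.scale.x * s, node.scale.y * s, node.scale.z * s)
        gesture.scale = 1 // Reset so scaling is incremental per callback.
    }
}
```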
The invention has the beneficial effects that:
1) The ARKit framework adopted by the invention integrates a visual SLAM algorithm with an IMU data-processing algorithm, so developers do not need to interact directly with the complex SLAM algorithm, and mobile terminals with limited computing power still obtain strong functionality. Combined with the iOS development tool Xcode, the invention releases the ARKit-based medical demonstration method to mobile terminals; it is easy to develop, convenient to use, and widely accessible.
2) The invention combines medical three-dimensional reconstruction with mobile augmented reality: data from a real patient are reconstructed in three dimensions with the 3D Slicer medical software and used for surgery simulation, and the resulting medical three-dimensional models are imported into the mobile augmented reality program. During a demonstration, all pre- and post-operative models can be projected into the real environment simultaneously and manipulated interactively by dragging, rotating, and zooming, so that preoperative planning becomes more intuitive for doctors and the material clearer for learners.
3) The invention introduces the new interaction modes of ARKit 3, such as motion capture and face recognition (with simultaneous front and back cameras), together with functions such as people occlusion, multi-face tracking, and collaborative sessions. Real-time collaborative sessions give multiple users a shared AR experience and make communication and learning convenient; interaction modes such as face recognition and motion capture (which convert gestures and movements into joint and bone inputs) make interaction more natural and fluid; and these additions make the application more extensible, so subsequent content can be expanded easily (a configuration sketch follows this list).
4) The invention reduces equipment cost while using a mobile terminal to make disease diagnosis more convenient for medical staff.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a flow chart of an embodiment of the method of the present invention;
FIG. 2 is a diagram of the ARKit architecture used by the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
Referring to FIGS. 1 and 2: FIG. 1 shows a preferred embodiment of the medical image-oriented mobile augmented reality system construction method of the present invention, which includes the following steps:
step 1: obtaining a virtual medical model by three-dimensional reconstruction of medical images
Data from a real patient (CT or MRI slice images) are imported into the 3D Slicer software, and the region of interest is extracted and three-dimensionally reconstructed with the Segment Editor module. The resulting three-dimensional model can be used in 3D Slicer for simulated operations such as puncture, craniotomy, endoscopy, and pedicle screw implantation, each of which yields a new medical three-dimensional model of the corresponding operation. All medical models are saved in obj or stl format, converted with the Blender software into the .dae virtual-model format required by the ARKit framework, and imported into the mobile augmented reality medical demonstration program.
Step 2: building an ARKit framework in Xcode
An ARKit framework is built in Xcode; its structure is shown in FIG. 2. The main class ARSCNView is responsible for integrating virtual and real content; the SCNScene held by the ARSCNView carries the virtual scene information; the ARCamera, accessed through the ARSCNView, captures the real scene information; the ARSession processes the acquired image sequence and finally outputs an ARFrame, which contains all the information about the real-world scene. The ARSessionConfiguration provides tracking information and captures the spatial position of the camera, so that when a 3D object model is added, its true matrix position relative to the camera can be computed. A minimal setup sketch follows below.
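A minimal Swift sketch of this structure: a view controller owning an ARSCNView, whose ARSession turns camera images and IMU data into ARFrames while the SCNScene holds the virtual content.

```swift
import UIKit
import ARKit
import SceneKit

// Minimal AR view controller mirroring the Fig. 2 structure:
// ARSCNView integrates virtual (SCNScene) and real (camera) content,
// and its ARSession produces ARFrames describing the real scene.
final class ARViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.scene = SCNScene() // Container for the virtual content.
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // The session fuses camera images with IMU data into ARFrames.
        sceneView.session.run(ARWorldTrackingConfiguration())
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
```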
Step 3: understanding the current scene detects planes
Point clouds of the current scene are acquired through the SLAM algorithm interface provided by the ARKit framework; the point-cloud information is analyzed to find points belonging to the same plane, the resulting plane is marked, and a specific position on the plane is selected. A delegate sketch follows below.
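A minimal Swift sketch of this plane marking, assuming an instance of the delegate is assigned to the ARSCNView's `delegate` property before the session runs; `PlaneMarkerDelegate` is a hypothetical helper name.

```swift
import ARKit
import SceneKit
import UIKit

// Marks each plane that ARKit extracts from the point cloud with a
// translucent overlay so the user can choose a position on it.
final class PlaneMarkerDelegate: NSObject, ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                             height: CGFloat(planeAnchor.extent.z))
        plane.firstMaterial?.diffuse.contents = UIColor.blue.withAlphaComponent(0.3)
        let planeNode = SCNNode(geometry: plane)
        planeNode.position = SCNVector3(planeAnchor.center.x, 0, planeAnchor.center.z)
        planeNode.eulerAngles.x = -.pi / 2 // SCNPlane is vertical by default; lay it flat.
        node.addChildNode(planeNode)
    }
}
```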
Step 4: placing a virtual medical model onto a real plane
Augmented reality aims to fuse virtual objects with the real world, and the core of advanced mobile augmented reality is visual SLAM. The SLAM algorithm runs by continuously acquiring the image stream captured by the mobile device's camera together with the angular velocity and acceleration measured by the device's Inertial Measurement Unit (IMU), from which it estimates a point cloud of the current environment and locates the device's current position. The virtual-space coordinate system is then bound to the coordinate system of the real environment, i.e., the virtual object model is placed in the real environment to achieve virtual-real fusion; the SLAM algorithm's accurate localization of the environment and of the device's coordinates ensures that the virtual object is superimposed without any sense of incongruity. This module is mainly implemented with SceneKit; an anchoring sketch follows below.
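A minimal Swift sketch of this coordinate binding: an ARAnchor pins the chosen real-world pose in the session, and the SceneKit node is placed at the same pose for rendering; `worldTransform` would typically come from a plane hit-test result, as in the placement sketch in S3.

```swift
import ARKit
import SceneKit

// Bind a virtual node to a real-world pose: the ARAnchor lets ARKit keep
// refining the pose estimate, while the node renders at that pose.
func place(_ modelNode: SCNNode, at worldTransform: simd_float4x4, in sceneView: ARSCNView) {
    let anchor = ARAnchor(transform: worldTransform)
    sceneView.session.add(anchor: anchor)
    modelNode.simdTransform = worldTransform
    sceneView.scene.rootNode.addChildNode(modelNode)
}
```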
Step 5: user manipulation of virtual medical models through interactive gestures
This module mainly implements three interaction gestures: dragging, rotating, and zooming. It also considers and resolves issues such as whether object dragging is smooth, potential conflicts between different interaction gestures, and responding to gestures within a reasonable proximity of the interactive virtual object, so that the user can easily operate the augmented reality medical demonstration. This module is mainly implemented with UIKit.
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (6)

1. A medical image-oriented mobile augmented reality system construction method is characterized by specifically comprising the following steps:
S1: performing three-dimensional reconstruction of the medical image with the 3D Slicer software to obtain a three-dimensional reconstruction model, and importing the model into the ARKit framework;
S2: starting the camera of the mobile device to recognize the current scene, and running the SLAM algorithm interface provided by ARKit to detect and mark planes in the current scene;
S3: superimposing the virtual three-dimensional reconstruction model onto the current scene and placing it on a detected real plane;
S4: recognizing the current user's interaction gestures to interact with the virtual object in the real scene.
2. The method for constructing a mobile augmented reality system for medical images according to claim 1, wherein the step S1 specifically includes the following steps:
S11: importing patient data into the 3D Slicer software to obtain a continuous data field, then selecting the region of interest in the Segment Editor module and performing three-dimensional reconstruction;
S12: performing surgery simulation in the Segment Editor module or other plug-ins, obtaining a new three-dimensional reconstruction model for each operation, and coloring the different tissues of interest; if the exported three-dimensional model contains cluttered, isolated fragments, it needs to be cleaned;
S13: converting the three-dimensional reconstruction model obtained in S12 into the required .dae format with the Blender software, then importing it into the mobile augmented reality program based on the ARKit framework.
3. The method for constructing a mobile augmented reality system for medical images according to claim 2, wherein in step S12, if the exported three-dimensional model contains cluttered, isolated fragments, it needs to be cleaned, specifically: firstly, the MeshLab Remove Isolated Pieces filter is used to clean up the fragments; then the Split in Connected Components filter is used to divide the model into separate parts according to connectivity; finally, all resulting three-dimensional models are saved in obj or stl format.
4. The method for constructing a mobile augmented reality system for medical images according to claim 1, wherein the step S2 specifically includes: running an ARWorldTrackingConfiguration to track the current scene and detect scene planes via planeDetection, then drawing a square outline in the AR view with the FocusSquare class to give the user a hint about the ARKit world-tracking state. The square changes size and orientation to reflect the estimated scene depth, and switches between open and closed states with a highlight animation to indicate whether ARKit has detected a plane suitable for placing the object; after the user places a virtual object, the focus square disappears and remains hidden until the user points the camera at another surface.
5. The method for constructing a mobile augmented reality system for medical images according to claim 1, wherein the step S3 specifically includes:
S31: firstly, adding a gesture recognizer via addGestureRecognizer(_:) in the ARKit program so that the device can detect interaction gestures, then performing the most basic interactive operations by configuring a tap-gesture subclass of UIGestureRecognizer;
S32: when the user selects a virtual object to place, calling the setPosition(_:relativeTo:smoothMovement:) method to place the object at an approximately realistic position in the middle of the screen using the simple heuristic of the FocusSquare object; as tracking proceeds, ARKit detects planes and refines its position estimates, reporting results through the renderer(_:didAdd:for:) and renderer(_:didUpdate:for:) delegate methods.
6. The method for constructing a mobile augmented reality system for medical images according to claim 1, wherein the step S4 specifically includes:
S41: firstly, adding a VirtualObjectInteraction class for managing the standard interaction gestures. The translate(basedOn:infinitePlane:) method is used for dragging the virtual object, and the program restricts the object's movement to the two-dimensional plane on which it is placed; the setPosition(_:relativeTo:smoothMovement:) method is then used to generate smooth dragging motion so that the dragged object does not lag behind the user's gesture; the rotation gesture is implemented with the didRotate(_:) method, rotating the object only about its vertical axis; finally, the virtual medical model is scaled by configuring the pinch-gesture subclass UIPinchGestureRecognizer with an appropriate scale factor;
S42: responding to gestures within a reasonable proximity of the interactive virtual object: a hit test is performed using the touch position supplied by the gesture recognizer in the objectInteracting(with:in:) method; by hit-testing against the bounding box of the virtual object, a user touch is more likely to affect the object even if the touch position is not on a point where the object has visible content; for multi-touch gestures, performing the hit test against the center of the gesture likewise makes the touch more likely to affect the object;
S43: using the ThresholdPanGesture class in the touchesMoved(_:with:) method allows the user to transition smoothly between dragging and rotating an object within a single two-finger gesture;
S44: adding an additional interaction gesture: supporting the device-drag gesture by continuing to invoke the updateObjectToCurrentTrackingPosition() method while such a gesture is in progress; if the device moves during the drag, the method calculates a new world position corresponding to the touch position and moves the virtual object accordingly.
CN201910901432.2A 2019-09-23 2019-09-23 Medical image-oriented mobile augmented reality system construction method Pending CN110660130A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910901432.2A CN110660130A (en) 2019-09-23 2019-09-23 Medical image-oriented mobile augmented reality system construction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910901432.2A CN110660130A (en) 2019-09-23 2019-09-23 Medical image-oriented mobile augmented reality system construction method

Publications (1)

Publication Number Publication Date
CN110660130A true CN110660130A (en) 2020-01-07

Family

ID=69039061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910901432.2A Pending CN110660130A (en) 2019-09-23 2019-09-23 Medical image-oriented mobile augmented reality system construction method

Country Status (1)

Country Link
CN (1) CN110660130A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108682282A (en) * 2018-05-09 2018-10-19 北京航空航天大学青岛研究院 A kind of exchange method of the augmented reality version periodic table of chemical element based on ARKit frames
CN109223121A (en) * 2018-07-31 2019-01-18 广州狄卡视觉科技有限公司 Based on medical image Model Reconstruction, the cerebral hemorrhage puncturing operation navigation system of positioning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WU XUEBIN: "Observation of the efficacy of 3D Slicer combined with Sina software in assisting neuroendoscopic minimally invasive surgery for hypertensive cerebral hemorrhage", Chinese Journal of Cerebrovascular Diseases *
CAO YUFU: "Preliminary exploration of mobile phone positioning and AR applications", Clinical Imaging Practice: Three-Dimensional Visualization (URL: https://slicercn.com/2378.html) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111930240A (en) * 2020-09-17 2020-11-13 平安国际智慧城市科技股份有限公司 Motion video acquisition method and device based on AR interaction, electronic equipment and medium
CN112509153A (en) * 2020-12-22 2021-03-16 上海影谱科技有限公司 AR model display processing method and device based on mobile equipment positioning
CN112509153B (en) * 2020-12-22 2023-11-10 上海影谱科技有限公司 AR model display processing method and device based on mobile equipment positioning
CN113961080A (en) * 2021-11-09 2022-01-21 南京邮电大学 Three-dimensional modeling software framework based on gesture interaction and design method
CN113961080B (en) * 2021-11-09 2023-08-18 南京邮电大学 Three-dimensional modeling software framework based on gesture interaction and design method
CN114711962A (en) * 2022-04-18 2022-07-08 北京恩维世医疗科技有限公司 Augmented reality operation planning navigation system and method
CN116974369A (en) * 2023-06-21 2023-10-31 广东工业大学 Method, system, equipment and storage medium for operating medical image in operation
CN116974369B (en) * 2023-06-21 2024-05-17 广东工业大学 Method, system, equipment and storage medium for operating medical image in operation
CN117055996A (en) * 2023-08-07 2023-11-14 香港科技大学(广州) Virtual scene interface display method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110660130A (en) Medical image-oriented mobile augmented reality system construction method
JP6883177B2 (en) Computerized visualization of anatomical items
US10580325B2 (en) System and method for performing a computerized simulation of a medical procedure
JP6396310B2 (en) Method and apparatus for displaying to a user a transition between a first rendering projection and a second rendering projection
Genest et al. KinectArms: a toolkit for capturing and displaying arm embodiments in distributed tabletop groupware
CN109389669A (en) Human 3d model construction method and system in virtual environment
CN112740285A (en) Overlay and manipulation of medical images in a virtual environment
US20140324400A1 (en) Gesture-Based Visualization System for Biomedical Imaging and Scientific Datasets
Bornik et al. A hybrid user interface for manipulation of volumetric medical data
US20220346888A1 (en) Device and system for multidimensional data visualization and interaction in an augmented reality virtual reality or mixed reality environment
Krapichler et al. VR interaction techniques for medical imaging applications
CN113645896A (en) System for surgical planning, surgical navigation and imaging
CN113197665A (en) Minimally invasive surgery simulation method and system based on virtual reality
Chiang et al. A touchless interaction interface for observing medical imaging
KR101903996B1 (en) Method of simulating medical image and device thereof
CN114391158A (en) Method, computer program, user interface and system for analyzing medical image data in virtual multi-user collaboration
US10854005B2 (en) Visualization of ultrasound images in physical space
Krapichler et al. A human-machine interface for medical image analysis and visualization in virtual environments
CN113842227A (en) Medical auxiliary three-dimensional model positioning matching method, system, equipment and medium
Grandi et al. Spatially aware mobile interface for 3d visualization and interactive surgery planning
Sun Image guided interaction in minimally invasive surgery
CN107329669B (en) Method and device for selecting human body sub-organ model in human body medical three-dimensional model
Kirmizibayrak Interactive volume visualization and editing methods for surgical applications
Harders et al. New paradigms for interactive 3D volume segmentation
Kameyama et al. Virtual surgical operation system using volume scanning display

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200107)