CN110522516A - Multi-level interactive visualization method for surgical navigation - Google Patents
- Publication number
- CN110522516A CN110522516A CN201910899200.8A CN201910899200A CN110522516A CN 110522516 A CN110522516 A CN 110522516A CN 201910899200 A CN201910899200 A CN 201910899200A CN 110522516 A CN110522516 A CN 110522516A
- Authority
- CN
- China
- Prior art keywords
- image
- model
- tissue organ
- instrument tip
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2065—Tracking using image or pattern recognition
Landscapes
- Health & Medical Sciences (AREA)
- Surgery (AREA)
- Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Medical Informatics (AREA)
- Robotics (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Molecular Biology (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Processing Or Creating Images (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
Abstract
The invention discloses a multi-level interactive visualization method for surgical navigation. The steps of the method are as follows: first, the multi-modal two-dimensional image data output by medical imaging devices is acquired and pre-processed to obtain tissue-organ images; then three-dimensional reconstruction is performed on the tissue-organ images to obtain a 3D model file of the tissue organs and lesion locations, which is transmitted to the client; the client parses the 3D model file to obtain a skeleton model; a region of interest in the 3D model is selected and rendered at full clarity, and the high-risk tissue models are defined and colored; finally, through an AR module, the skeleton model and the rendered 3D model are superimposed onto the corresponding position of the real-world patient, and auxiliary information is displayed on a mobile terminal. The method can help doctors track the surgical path in real time during surgery and display the tissue and organs surrounding the lesion in layers, relieving the surgeon's stress and improving the success rate of the operation.
Description
Technical field
The present invention relates to the fields of medical image processing, artificial intelligence, and medical image visualization, and in particular to a multi-level interactive visualization method for surgical navigation.
Background technique
With the development of computer graphics, people's demand for three-dimensional information grows day by day; three-dimensional visualization technology has developed rapidly, and its applications in the field of medical imaging are increasingly broad. In recent years, 3D visualization platforms for the medical field have emerged one after another. However, owing to limitations in transmission speed, file size, rendering quality, and other aspects, existing visualization platforms have many defects, especially on mobile terminals, where insufficient computing power and slow rendering significantly restrict the application scenarios. Moreover, most platforms only provide viewing of preoperative images or preoperative surgical planning, and cannot provide help during the operation itself. The emerging augmented reality (AR) technology makes it possible to present preoperative planning information to doctors in real time during surgery, but visualization platforms for AR surgery are still imperfect. During surgery, existing medical image visualization platforms can rarely provide an assisted presentation of the lesion location; doctors can only infer the tumor position from their preoperative image analysis, which places high demands on the doctor's own experience and skill.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides a multi-level interactive visualization method for surgical navigation, whose application scenarios are broadly divided into a preoperative stage and an intraoperative stage. The preoperative stage provides pre-processing, three-dimensional reconstruction, and rendering of multi-modal images on the PC side, on which basis the surgical path is planned. The intraoperative stage presents the three-dimensional model as an aid by combining AR technology, achieving a more intuitive effect. Considering the practical constraints of the intraoperative scene, display on a mobile device is more convenient. By combining the two stages, the method ultimately provides accurate assistance for the implementation of the operation.
A multi-level interactive visualization method for surgical navigation comprises the following steps:
Step (1): acquire the multi-modal two-dimensional image data output by medical imaging devices; the multi-modal two-dimensional image data include CT, MRI, DR, CTA, and PET images.
Step (2): pre-process the multi-modal two-dimensional image data on the server side to obtain tissue-organ images, and compress the tissue-organ images to obtain compressed tissue-organ images.
Step (3): perform three-dimensional reconstruction on the tissue-organ images and the compressed tissue-organ images respectively to obtain a 3D model file containing the tissue organs and lesion locations, together with a blurred (low-detail) 3D model file, and transmit them to the client.
Step (4): the client recognizes the blurred 3D model file in vtk format through the XTK framework, parses the coordinate data in the blurred 3D model file, and performs surface rendering through WebGL to obtain a skeleton model.
Step (5): according to the preoperatively planned surgical path and the user's interactive operations, select the region of interest in the 3D model file described in step (3) and render it, obtaining the rendered 3D model.
Step (6): through the AR module, superimpose the skeleton model and the rendered 3D model onto the corresponding position of the real-world patient, and display auxiliary information on the mobile terminal.
The specific method of step (2) is as follows:
The image pre-processing operations are completed with 3D Slicer software. The multi-modal two-dimensional image data are imported into 3D Slicer; noise reduction is performed on the images using 3D Slicer's noise-reduction module; the images are segmented using 3D Slicer's segmentation module, either by thresholding or by manual segmentation; the segmented images of different modalities of the same patient are registered using 3D Slicer's registration module; the registered images are fused using 3D Slicer's fusion module to obtain the tissue-organ images; and the tissue-organ images are compressed using 3D Slicer's compression module to obtain the compressed tissue-organ images.
The specific method of step (3) is as follows:
The three-dimensional reconstruction is completed with 3D Slicer software using the reconstruction techniques of the vtk environment, i.e., the volume module in 3D Slicer renders the tissue-organ images and the compressed tissue-organ images respectively, generating usable three-dimensional models in vtk format. The vtk-format three-dimensional models are then manually refined to improve the tissue organs and lesions, obtaining the 3D model file containing the tissue organs and lesion locations, together with the blurred 3D model file.
The specific method of step (5) is as follows:
The user's interactive operations call an external image-processing interface to realize rotation, translation, scaling, transparency adjustment, and coloring of the three-dimensional model. The region of interest is the cylindrical region in front of the surgical-instrument tip, updated in real time as the instrument moves. First, the coordinates of the surgical-instrument tip are obtained through marker recognition using the AR.js framework; then the line connecting the marker coordinates and the instrument tip is extended along the tip direction and, together with a preset depth and radius, determines the cylindrical region in front of the tip, i.e., the region of interest. The volume data of the 3D model inside the region of interest are then extracted, rendered, and displayed through WebGL, and the high-risk tissue models are defined and colored, obtaining the rendered 3D model.
The specific method of step (6) is as follows:
The AR module is built using the Web-based AR.js framework. The skeleton model and the rendered 3D model are converted from vtk format to dae format with Blender software and then uploaded to the visualization platform for display on a mobile device. The lesion coordinate information of the real-world patient is determined through marker identification; the lesion coordinate information, the skeleton model, and the rendered 3D model are mapped into the real-world coordinate system, and by means of coordinate transformation the skeleton model and the rendered 3D model are displayed at the corresponding position of the real-world patient, with the virtual image superimposed on the displayed picture and auxiliary information shown on the mobile terminal.
The auxiliary information includes the offset between the surgical-instrument tip and the surgical path, the distance between the surgical-instrument tip and the high-risk tissue models, and danger warnings.
The offset between the surgical-instrument tip and the surgical path is the minimum distance between the real-time coordinates of the tip and the surgical path. The distance between the surgical-instrument tip and the high-risk tissue models is the minimum distance between the real-time coordinates of the tip and the high-risk tissue models; a threshold T is set, and when this minimum distance is less than or equal to T, the mobile terminal sounds an alarm and displays a danger-warning sign.
The present invention has the following beneficial effects:
The method can assist doctors in multiple scenarios: preoperatively it provides image display, information query, and surgical planning, and intraoperatively it provides superposition of three-dimensional lesion images and real-time tracking of the surgical path through AR equipment, realizing precise guidance of the operation. It is cross-platform and can be adapted to a variety of display terminals such as PCs, tablets, mobile phones, and AR devices. It has the characteristic of adaptive layered display: constrained by device capabilities, when applied on a mobile terminal the local lesion region of the reconstructed image is captured and displayed at full clarity while the edge image is blurred, achieving a layered display. The method can help doctors track the surgical path in real time during surgery and display the tissue and organs surrounding the lesion in layers, giving the operating doctor a comprehensive, multi-level visual understanding of the patient's surgical site, supporting accurate implementation of the operation, relieving the surgeon's stress, and improving the success rate of the operation, ultimately benefiting the patient's medical care.
Brief description of the drawings
Fig. 1 is the technical flow diagram of the method of the present invention.
Fig. 2 illustrates the multi-level interactive visualization and region-of-interest division of the present invention.
Fig. 3 is the system flow chart of the parsing and display performed by the client of the present invention.
Specific embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the following embodiments are only used to clearly illustrate the technical solution of the invention and are not intended to limit its protection scope.
As shown in Fig. 1, the specific steps of the method of the present invention are as follows:
Step (1): acquire the multi-modal two-dimensional image data output by medical imaging devices; the multi-modal two-dimensional image data include CT, MRI, DR, CTA, and PET images.
Step (2): pre-process the multi-modal two-dimensional image data on the server side to obtain tissue-organ images, and compress the tissue-organ images to obtain compressed tissue-organ images.
The image pre-processing operations are completed with 3D Slicer software. The multi-modal two-dimensional image data are imported into 3D Slicer; noise reduction is performed on the images using 3D Slicer's noise-reduction module; the images are segmented using 3D Slicer's segmentation module, by thresholding or manual segmentation; the segmented images of different modalities of the same patient are registered using 3D Slicer's registration module; the registered images are fused using 3D Slicer's fusion module to obtain the tissue-organ images; and the tissue-organ images are compressed using 3D Slicer's compression module to obtain the compressed tissue-organ images.
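The thresholding option above can be sketched in plain Python as a stand-in for 3D Slicer's segmentation module (the intensity band and the 3x3 toy slice are illustrative assumptions, not values from the patent):

```python
def threshold_segment(image, lo, hi):
    """Binary segmentation: keep pixels whose intensity lies in [lo, hi].

    `image` is a 2D list of intensities (a stand-in for one CT/MRI slice);
    returns a mask of the same shape with 1 inside the band, 0 outside.
    """
    return [[1 if lo <= v <= hi else 0 for v in row] for row in image]

# Toy 3x3 "slice": suppose the target tissue lies in the 40..80 band.
slice_ = [[10, 55, 90],
          [45, 60, 75],
          [30, 85, 50]]
mask = threshold_segment(slice_, 40, 80)
print(mask)  # [[0, 1, 0], [1, 1, 1], [0, 0, 1]]
```

In practice 3D Slicer applies this voxel-wise in 3D and offers manual editing for cases a single band cannot separate.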
Step (3): perform three-dimensional reconstruction on the tissue-organ images and the compressed tissue-organ images respectively, obtain the 3D model file containing the tissue organs and lesion locations together with the blurred 3D model file, and transmit them to the client.
The three-dimensional reconstruction is completed with 3D Slicer software using the reconstruction techniques of the vtk environment, i.e., the volume module in 3D Slicer renders the tissue-organ images and the compressed tissue-organ images respectively, generating usable three-dimensional models in vtk format. The vtk-format models are then manually refined to improve the tissue organs and lesions, obtaining the 3D model file containing the tissue organs and lesion locations together with the blurred 3D model file.
Step (4): the client recognizes the blurred 3D model file in vtk format through the XTK framework, parses the coordinate data in the blurred 3D model file, and performs surface rendering through WebGL to obtain the skeleton model.
Fig. 3 is the system flow chart of the client's parsing and display, i.e., the process of parsing and rendering two-dimensional medical images and three-dimensional model files through XTK and WebGL.
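The coordinate data the client pulls out of a vtk model file can be illustrated with a minimal pure-Python reader for the legacy ASCII format (the file snippet is a toy example; in the actual method, the XTK framework performs this parsing in the browser):

```python
def parse_vtk_points(text):
    """Extract point coordinates from a legacy ASCII .vtk file.

    Finds the 'POINTS <n> <dtype>' declaration, reads the next 3*n
    floats, and returns a list of (x, y, z) tuples -- the coordinate
    data that is subsequently handed to WebGL for surface rendering.
    """
    tokens = text.split()
    i = tokens.index("POINTS")
    n = int(tokens[i + 1])                      # number of points
    vals = [float(t) for t in tokens[i + 3 : i + 3 + 3 * n]]
    return [tuple(vals[j : j + 3]) for j in range(0, 3 * n, 3)]

vtk_text = """# vtk DataFile Version 3.0
model
ASCII
DATASET POLYDATA
POINTS 3 float
0 0 0  1 0 0  0 1 0
"""
print(parse_vtk_points(vtk_text))
# [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
```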
Step (5): according to the preoperatively planned surgical path and the user's interactive operations, define and color the high-risk tissue models, select the region of interest in the 3D model, and render it at full clarity.
The user's interactive operations call an external image-processing interface to realize rotation, translation, scaling, transparency adjustment, and coloring of the three-dimensional model. The region of interest is the cylindrical region in front of the surgical-instrument tip, updated in real time as the instrument moves. As shown in Fig. 2, the coordinates of the surgical-instrument tip are first obtained through marker recognition using the AR.js framework; then the line connecting the marker coordinates and the instrument tip is extended along the tip direction and, together with a preset depth and radius, determines the cylindrical region in front of the tip, i.e., the region of interest. The volume data of the 3D model inside the region of interest are then extracted, rendered, and displayed through WebGL, and the high-risk tissue models are defined and colored, obtaining the rendered 3D model.
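The geometric test behind the region of interest, a cylinder of preset depth and radius extending from the instrument tip along the marker-tip line, can be sketched as follows (pure Python; the tip pose and dimensions are illustrative assumptions):

```python
def in_roi(point, tip, direction, depth, radius):
    """Return True if `point` lies inside the cylindrical region of
    interest in front of the instrument tip.

    `direction` is the unit vector of the extended marker-tip line; the
    cylinder starts at the tip, runs `depth` along that direction, and
    has the preset `radius`.
    """
    v = [p - t for p, t in zip(point, tip)]
    axial = sum(a * b for a, b in zip(v, direction))   # distance along the axis
    if not 0.0 <= axial <= depth:
        return False                                   # behind the tip or past the depth
    radial_sq = sum(a * a for a in v) - axial * axial  # squared distance from the axis
    return radial_sq <= radius * radius

tip = (0.0, 0.0, 0.0)
direction = (0.0, 0.0, 1.0)  # instrument pointing along +z
print(in_roi((0.5, 0.0, 2.0), tip, direction, depth=5.0, radius=1.0))  # True
print(in_roi((0.0, 3.0, 2.0), tip, direction, depth=5.0, radius=1.0))  # False
print(in_roi((0.0, 0.0, 9.0), tip, direction, depth=5.0, radius=1.0))  # False
```

Applying this test to every voxel selects the volume data that WebGL then renders at full clarity.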
Step (6): through the AR module, superimpose the skeleton model and the rendered 3D model onto the corresponding position of the real-world patient, and display auxiliary information on the mobile terminal.
The AR module is built using the Web-based AR.js framework. The skeleton model and the rendered 3D model are converted from vtk format to dae format with Blender software and then uploaded to the visualization platform for display on a mobile device. The lesion coordinate information of the real-world patient is determined through marker identification; the lesion coordinate information, the skeleton model, and the rendered 3D model are mapped into the real-world coordinate system, and by means of coordinate transformation the skeleton model and the rendered 3D model are displayed at the corresponding position of the real-world patient, with the virtual image superimposed on the displayed picture and auxiliary information shown on the mobile terminal.
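The coordinate transformation that places the model at the patient's position can be sketched as a rigid transform recovered from the marker pose (a minimal illustration: the rotation and translation values are assumptions, and in the actual method AR.js supplies the pose):

```python
def to_world(model_point, rotation, translation):
    """Map a model-space point into the real-world coordinate system:
    world = R @ p + t, with R and t recovered from the marker pose."""
    return tuple(
        sum(rotation[i][j] * model_point[j] for j in range(3)) + translation[i]
        for i in range(3)
    )

# Toy pose: 90-degree rotation about z plus the marker's world offset.
R = [[0.0, -1.0, 0.0],
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]
t = (10.0, 20.0, 5.0)
print(to_world((1.0, 0.0, 0.0), R, t))  # (10.0, 21.0, 5.0)
```

Applying the same transform to every vertex of the skeleton model and the rendered 3D model aligns them with the lesion coordinates identified from the marker.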
The auxiliary information includes the offset between the surgical-instrument tip and the surgical path, the distance between the surgical-instrument tip and the high-risk tissue models, and danger warnings.
The offset between the surgical-instrument tip and the surgical path is the minimum distance between the real-time coordinates of the tip and the surgical path; the distance between the surgical-instrument tip and the high-risk tissue models is the minimum distance between the real-time coordinates of the tip and the high-risk tissue models. A threshold T is set: when the minimum distance is less than or equal to T, the mobile terminal sounds an alarm and displays a danger-warning sign.
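The offset and alarm computation can be sketched as follows, assuming the planned surgical path is stored as a polyline of waypoints (an assumption for illustration; the patent does not specify the path representation):

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the line segment ab in 3D."""
    ab = [bi - ai for ai, bi in zip(a, b)]
    ap = [pi - ai for ai, pi in zip(a, p)]
    denom = sum(c * c for c in ab)
    # Clamp the projection so the closest point stays on the segment.
    u = 0.0 if denom == 0 else max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / denom))
    closest = [ai + u * c for ai, c in zip(a, ab)]
    return math.dist(p, closest)

def path_offset_and_alarm(tip, path, T):
    """Offset = minimum distance from the tip's real-time coordinates to
    the planned path (a polyline); the alarm fires when offset <= T."""
    offset = min(point_segment_distance(tip, path[i], path[i + 1])
                 for i in range(len(path) - 1))
    return offset, offset <= T

path = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (10.0, 10.0, 0.0)]
offset, alarm = path_offset_and_alarm((5.0, 2.0, 0.0), path, T=1.5)
print(offset, alarm)  # 2.0 False
```

The distance to a high-risk tissue model can be computed the same way, taking the minimum over the model's surface points instead of path segments.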
Considering the surgical scene, in practical applications the mobile terminal of the present invention may also be a tablet, mobile phone, computer, or AR glasses for display.
For the method for the present invention by the multi-level interactive visual system completion for surgical navigational, which includes that image is pre-
Processing module, three-dimensional reconstruction module, parsing display module, user interactive module, AR secondary module.
Image pre-processing module, the multi-modal bidimensional image data for being exported according to medical imaging device, by noise reduction,
Compression, segmentation, registration, the technology merged pre-process image.
Three-dimensional reconstruction module is used for pretreated bidimensional image data, is generated by the technology of three-dimensional reconstruction corresponding
3-dimensional image model.
Display module is parsed, the 3-dimensional image data and bidimensional image data for being generated in client parsing, and carry out
Display.
User interactive module, for the interactive instruction according to medical staff, to two-dimensional medical image data and 3 D medical
Image model interacts operation.
AR secondary module for threedimensional model to be superimposed to the corresponding position of real world patient, while providing auxiliary
Supplementary information shows, the deviant including surgical instrument tip and operation pathway, surgical instrument tip and high-risk tissue model away from
From and danger early warning.
User interactive module, for the interactive instruction according to medical staff, specially to two dimension and 3-D image rotation,
Translation, scaling, transparency are adjusted, are dyed, and the selection to threedimensional model area-of-interest.
The above are only preferred embodiments of the present invention. It should be noted that those skilled in the art can make several improvements and modifications without departing from the technical principles of the invention, and such improvements and modifications shall also be regarded as falling within the protection scope of the present invention.
Claims (6)
1. A multi-level interactive visualization method for surgical navigation, characterized in that the method comprises the following steps:
Step (1): acquiring the multi-modal two-dimensional image data output by medical imaging devices, the multi-modal two-dimensional image data including CT, MRI, DR, CTA, and PET images;
Step (2): pre-processing the multi-modal two-dimensional image data on the server side to obtain tissue-organ images, and compressing the tissue-organ images to obtain compressed tissue-organ images;
Step (3): performing three-dimensional reconstruction on the tissue-organ images and the compressed tissue-organ images respectively, obtaining a 3D model file containing the tissue organs and lesion locations together with a blurred 3D model file, and transmitting them to the client;
Step (4): the client recognizing the blurred 3D model file in vtk format through the XTK framework, parsing the coordinate data in the blurred 3D model file, and performing surface rendering through WebGL to obtain a skeleton model;
Step (5): according to the preoperatively planned surgical path and the user's interactive operations, selecting the region of interest in the 3D model file described in step (3) and rendering the region of interest, obtaining the rendered 3D model;
Step (6): through the AR module, superimposing the skeleton model and the rendered 3D model onto the corresponding position of the real-world patient, and displaying auxiliary information on the mobile terminal.
2. The multi-level interactive visualization method for surgical navigation according to claim 1, characterized in that the specific method of step (2) is as follows:
the image pre-processing operations are completed with 3D Slicer software: the multi-modal two-dimensional image data are imported into 3D Slicer; noise reduction is performed on the images using 3D Slicer's noise-reduction module; the images are segmented using 3D Slicer's segmentation module, by thresholding or manual segmentation; the segmented images of different modalities of the same patient are registered using 3D Slicer's registration module; the registered images are fused using 3D Slicer's fusion module to obtain the tissue-organ images; and the tissue-organ images are compressed using 3D Slicer's compression module to obtain the compressed tissue-organ images.
3. The multi-level interactive visualization method for surgical navigation according to claim 1, characterized in that the specific method of step (3) is as follows:
the three-dimensional reconstruction is completed with 3D Slicer software using the reconstruction techniques of the vtk environment, i.e., the volume module in 3D Slicer renders the tissue-organ images and the compressed tissue-organ images respectively, generating usable three-dimensional models in vtk format; the vtk-format three-dimensional models are then manually refined to improve the tissue organs and lesions, obtaining the 3D model file containing the tissue organs and lesion locations together with the blurred 3D model file.
4. The multi-level interactive visualization method for surgical navigation according to claim 3, characterized in that the specific method of step (5) is as follows:
the user's interactive operations call an external image-processing interface to realize rotation, translation, scaling, transparency adjustment, and coloring of the three-dimensional model; the region of interest is the cylindrical region in front of the surgical-instrument tip and is updated in real time as the surgical instrument moves: first, the coordinates of the surgical-instrument tip are obtained through marker recognition using the AR.js framework; then the line connecting the marker coordinates and the instrument tip is extended along the tip direction and, together with a preset depth and radius, determines the cylindrical region in front of the tip, i.e., the region of interest; the volume data of the 3D model inside the region of interest are then extracted, rendered, and displayed through WebGL, and the high-risk tissue models are defined and colored, obtaining the rendered 3D model.
5. The multi-level interactive visualization method for surgical navigation according to claim 1 or 4, characterized in that the specific method of step (6) is as follows:
the AR module is built using the Web-based AR.js framework; the skeleton model and the rendered 3D model are converted from vtk format to dae format with Blender software and then uploaded to the visualization platform for display on a mobile device; the lesion coordinate information of the real-world patient is determined through marker identification; the lesion coordinate information, the skeleton model, and the rendered 3D model are mapped into the real-world coordinate system, and by means of coordinate transformation the skeleton model and the rendered 3D model are displayed at the corresponding position of the real-world patient, with the virtual image superimposed on the displayed picture and auxiliary information shown on the mobile terminal.
6. The multi-level interactive visualization method for surgical navigation according to claim 5, characterized in that the auxiliary information includes the offset between the surgical-instrument tip and the surgical path, the distance between the surgical-instrument tip and the high-risk tissue models, and danger warnings;
the offset between the surgical-instrument tip and the surgical path is the minimum distance between the real-time coordinates of the surgical-instrument tip and the surgical path; the distance between the surgical-instrument tip and the high-risk tissue models is the minimum distance between the real-time coordinates of the tip and the high-risk tissue models; a threshold T is set, and when the minimum distance is less than or equal to T, the mobile terminal sounds an alarm and displays a danger-warning sign.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910899200.8A CN110522516B (en) | 2019-09-23 | 2019-09-23 | Multi-level interactive visualization method for surgical navigation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910899200.8A CN110522516B (en) | 2019-09-23 | 2019-09-23 | Multi-level interactive visualization method for surgical navigation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110522516A true CN110522516A (en) | 2019-12-03 |
CN110522516B CN110522516B (en) | 2021-02-02 |
Family
ID=68669663
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910899200.8A Active CN110522516B (en) | 2019-09-23 | 2019-09-23 | Multi-level interactive visualization method for surgical navigation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110522516B (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111145190A (en) * | 2019-12-27 | 2020-05-12 | 之江实验室 | Single organ interaction method based on medical image processing and visualization |
CN111403022A (en) * | 2020-03-13 | 2020-07-10 | 北京维卓致远医疗科技发展有限责任公司 | Medical movable split type control system and use method |
CN112331311A (en) * | 2020-11-06 | 2021-02-05 | 青岛海信医疗设备股份有限公司 | Method and device for fusion display of video and preoperative model in laparoscopic surgery |
CN112618026A (en) * | 2020-12-15 | 2021-04-09 | 清华大学 | Remote operation data fusion interactive display system and method |
CN113907883A (en) * | 2021-09-23 | 2022-01-11 | 佛山市第一人民医院(中山大学附属佛山医院) | 3D visualization operation navigation system and method for ear-side skull-base surgery |
CN114795468A (en) * | 2022-04-19 | 2022-07-29 | 首都医科大学附属北京天坛医院 | Intraoperative navigation method and system for intravascular treatment |
CN115115810A (en) * | 2022-06-29 | 2022-09-27 | 广东工业大学 | Multi-person collaborative focus positioning and enhanced display method based on spatial posture capture |
CN115861298A (en) * | 2023-02-15 | 2023-03-28 | 浙江华诺康科技有限公司 | Image processing method and device based on endoscopy visualization |
CN116664580A (en) * | 2023-08-02 | 2023-08-29 | 经智信息科技(山东)有限公司 | Multi-image hierarchical joint imaging method and device for CT images |
CN117059235A (en) * | 2023-08-17 | 2023-11-14 | 经智信息科技(山东)有限公司 | Automatic rendering method and device for CT image |
TWI836492B (en) * | 2021-11-18 | 2024-03-21 | 瑞鈦醫療器材股份有限公司 | Method and navigation system for registering two-dimensional image data set with three-dimensional image data set of body of interest |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4174869A1 (en) * | 2021-10-26 | 2023-05-03 | Koninklijke Philips N.V. | Case-based mixed reality preparation and guidance for medical procedures |
WO2023072685A1 (en) * | 2021-10-26 | 2023-05-04 | Koninklijke Philips N.V. | Case-based mixed reality preparation and guidance for medical procedures |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105748026A (en) * | 2009-06-23 | 2016-07-13 | Intuitive Surgical Operations, Inc. | Medical robotic system providing auxiliary view including range of motion limitations for articulatable instruments extending out of distal end of entry guide |
CN106296805A (en) * | 2016-06-06 | 2017-01-04 | Xiamen Mingwei Technology Co., Ltd. | Augmented-reality human-body positioning and navigation method and device based on real-time feedback |
CN106890024A (en) * | 2016-12-29 | 2017-06-27 | Li Yan | Preoperative auxiliary device for thermal ablation surgery of liver tumors |
CN107296650A (en) * | 2017-06-01 | 2017-10-27 | Xidian University | Intelligent surgical assistance system based on virtual reality and augmented reality |
CN107456278A (en) * | 2016-06-06 | 2017-12-12 | Beijing Institute of Technology | Endoscopic surgical navigation method and system |
CN107567642A (en) * | 2015-03-12 | 2018-01-09 | Happy L-Lord Co., Ltd. | System, method and device for voxel-based three-dimensional modeling |
WO2018140415A1 (en) * | 2017-01-24 | 2018-08-02 | Tietronix Software, Inc. | System and method for three-dimensional augmented reality guidance for use of medical equipment |
WO2018148845A1 (en) * | 2017-02-17 | 2018-08-23 | Nz Technologies Inc. | Methods and systems for touchless control of surgical environment |
CN108766579A (en) * | 2018-05-28 | 2018-11-06 | Yangtze River Delta Research Institute of Beijing Jiaotong University | Virtual neurosurgery simulation method based on highly fused augmented reality |
CN109223121A (en) * | 2018-07-31 | 2019-01-18 | Guangzhou Dika Vision Technology Co., Ltd. | Cerebral hemorrhage puncture surgery navigation system based on medical image model reconstruction and positioning |
CN109350242A (en) * | 2018-12-11 | 2019-02-19 | Airuimaidi Technology Shijiazhuang Co., Ltd. | Distance-based surgical navigation early-warning method, storage medium, and terminal device |
CN109549689A (en) * | 2018-08-21 | 2019-04-02 | Chi Jiachang | Puncture auxiliary guide device, system and method |
CN109758230A (en) * | 2019-02-26 | 2019-05-17 | Information Science Academy of China Electronics Technology Group Corporation | Neurosurgical navigation method and system based on augmented reality |
CN110123453A (en) * | 2019-05-31 | 2019-08-16 | Northeastern University | Surgical navigation system based on markerless augmented reality |
US10390913B2 (en) * | 2018-01-26 | 2019-08-27 | Align Technology, Inc. | Diagnostic intraoral scanning |
CN110215283A (en) * | 2019-02-14 | 2019-09-10 | Tsinghua University | Intracranial surgery navigation system based on magnetic resonance imaging |
- 2019-09-23: CN application CN201910899200.8A filed; granted as patent CN110522516B (legal status: Active)
Patent Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105748026A (en) * | 2009-06-23 | 2016-07-13 | Intuitive Surgical Operations, Inc. | Medical robotic system providing an auxiliary view including range of motion limitations for articulatable instruments extending out of a distal end of an entry guide |
CN107567642A (en) * | 2015-03-12 | 2018-01-09 | Happy L-Lord Co., Ltd. | System, method and apparatus for voxel-based three-dimensional modeling |
CN106296805A (en) * | 2016-06-06 | 2017-01-04 | Xiamen Mingwei Technology Co., Ltd. | An augmented-reality human body positioning and navigation method and device based on real-time feedback |
CN107456278A (en) * | 2016-06-06 | 2017-12-12 | Beijing Institute of Technology | An endoscopic sinus surgery (ESS) navigation method and system |
CN106890024A (en) * | 2016-12-29 | 2017-06-27 | Li Yan | A preoperative auxiliary device for liver tumor thermal ablation surgery |
WO2018140415A1 (en) * | 2017-01-24 | 2018-08-02 | Tietronix Software, Inc. | System and method for three-dimensional augmented reality guidance for use of medical equipment |
WO2018148845A1 (en) * | 2017-02-17 | 2018-08-23 | Nz Technologies Inc. | Methods and systems for touchless control of surgical environment |
CN107296650A (en) * | 2017-06-01 | 2017-10-27 | Xidian University | Intelligent surgical assistance system based on virtual reality and augmented reality |
US10390913B2 (en) * | 2018-01-26 | 2019-08-27 | Align Technology, Inc. | Diagnostic intraoral scanning |
CN108766579A (en) * | 2018-05-28 | 2018-11-06 | Yangtze River Delta Research Institute of Beijing Jiaotong University | A virtual brain surgery simulation method based on highly fused augmented reality |
CN109223121A (en) * | 2018-07-31 | 2019-01-18 | Guangzhou Dika Vision Technology Co., Ltd. | Cerebral hemorrhage puncture surgery navigation system based on medical image model reconstruction and positioning |
CN109549689A (en) * | 2018-08-21 | 2019-04-02 | Chi Jiachang | A puncture-assisting guide device, system and method |
CN109350242A (en) * | 2018-12-11 | 2019-02-19 | Ariemedi Technology Shijiazhuang Co., Ltd. | A distance-based surgical navigation early-warning method, storage medium and terminal device |
CN110215283A (en) * | 2019-02-14 | 2019-09-10 | Tsinghua University | Intracranial surgery navigation system based on magnetic resonance imaging |
CN109758230A (en) * | 2019-02-26 | 2019-05-17 | Information Science Academy of China Electronics Technology Group Corporation | A neurosurgical navigation method and system based on augmented reality |
CN110123453A (en) * | 2019-05-31 | 2019-08-16 | Northeastern University | A surgical navigation system based on markerless augmented reality |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111145190B (en) * | 2019-12-27 | 2022-06-17 | Zhejiang Lab | Single-organ interaction method based on medical image processing and visualization |
CN111145190A (en) * | 2019-12-27 | 2020-05-12 | Zhejiang Lab | Single-organ interaction method based on medical image processing and visualization |
CN111403022A (en) * | 2020-03-13 | 2020-07-10 | Beijing Weizhuo Zhiyuan Medical Technology Development Co., Ltd. | Medical mobile split-type control system and method of use |
CN112331311A (en) * | 2020-11-06 | 2021-02-05 | Qingdao Hisense Medical Equipment Co., Ltd. | Method and device for fusion display of video and preoperative model in laparoscopic surgery |
CN112618026A (en) * | 2020-12-15 | 2021-04-09 | Tsinghua University | Remote surgery data fusion and interactive display system and method |
CN113907883A (en) * | 2021-09-23 | 2022-01-11 | First People's Hospital of Foshan (Foshan Hospital Affiliated to Sun Yat-sen University) | 3D visualization surgical navigation system and method for lateral skull base surgery |
TWI836492B (en) * | 2021-11-18 | 2024-03-21 | Ruitai Medical Devices Co., Ltd. | Method and navigation system for registering two-dimensional image data set with three-dimensional image data set of body of interest |
CN114795468A (en) * | 2022-04-19 | 2022-07-29 | Beijing Tiantan Hospital, Capital Medical University | Intraoperative navigation method and system for endovascular treatment |
CN114795468B (en) * | 2022-04-19 | 2022-11-15 | Beijing Tiantan Hospital, Capital Medical University | Intraoperative navigation method and system for endovascular treatment |
CN115115810A (en) * | 2022-06-29 | 2022-09-27 | Guangdong University of Technology | Multi-user collaborative lesion localization and augmented display method based on spatial pose capture |
CN115861298A (en) * | 2023-02-15 | 2023-03-28 | Zhejiang Huanuokang Technology Co., Ltd. | Image processing method and device based on endoscopic visualization |
CN116664580A (en) * | 2023-08-02 | 2023-08-29 | Jingzhi Information Technology (Shandong) Co., Ltd. | Multi-image hierarchical joint imaging method and device for CT images |
CN116664580B (en) * | 2023-08-02 | 2023-11-28 | Jingzhi Information Technology (Shandong) Co., Ltd. | Multi-image hierarchical joint imaging method and device for CT images |
CN117059235A (en) * | 2023-08-17 | 2023-11-14 | Jingzhi Information Technology (Shandong) Co., Ltd. | Automatic rendering method and device for CT images |
Also Published As
Publication number | Publication date |
---|---|
CN110522516B (en) | 2021-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110522516A (en) | A multi-level interactive visualization method for surgical navigation | |
JP5130529B2 (en) | Information processing apparatus and program | |
US20170035517A1 (en) | Dynamic and interactive navigation in a surgical environment | |
JP2020506452A (en) | HMDS-based medical image forming apparatus | |
CN110490851A (en) | Breast image segmentation method, apparatus and system based on artificial intelligence | |
CN110021445A (en) | A medical system based on a VR model | |
CN107067398A (en) | Completion method and device for missing blood vessels in a 3D medical model | |
Chu et al. | Perception enhancement using importance-driven hybrid rendering for augmented reality based endoscopic surgical navigation | |
EP3929869A1 (en) | Vrds 4d medical image-based vein ai endoscopic analysis method and product | |
Abou El-Seoud et al. | An interactive mixed reality ray tracing rendering mobile application of medical data in minimally invasive surgeries | |
Advincula et al. | Development and future trends in the application of visualization toolkit (VTK): the case for medical image 3D reconstruction | |
WO2021030995A1 (en) | Inferior vena cava image analysis method and product based on vrds ai | |
CN112331311B (en) | Method and device for fusion display of video and preoperative model in laparoscopic surgery | |
KR101657285B1 (en) | Ultrasonography simulation system | |
CN114340496A (en) | Analysis method and related device of heart coronary artery based on VRDS AI medical image | |
CN110189407A (en) | A human body three-dimensional reconstruction model system based on HoloLens | |
CN114723893A (en) | Organ tissue spatial relationship rendering method and system based on medical images | |
WO2021081846A1 (en) | Vein tumor image processing method and related product | |
CN202815837U (en) | Ablation treatment image-guiding device with a two-dimensional image processing apparatus | |
CN202815838U (en) | Ablation treatment image-guiding device with a three-dimensional image processing apparatus | |
CN112950774A (en) | Three-dimensional modeling device, operation planning system and teaching system | |
WO2021081839A1 (en) | Vrds 4d-based method for analysis of condition of patient, and related products | |
Chen et al. | A system design for virtual reality visualization of medical image | |
CN202815839U (en) | Ablation treatment image-guiding device with an image measurement apparatus | |
Zheng et al. | The survey of medical image 3D reconstruction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||